This is one of my vision papers. Its current status is work in progress.

Problematic Wetware

Abstract

Several decades of applying Moore's law to the electronics industry have produced an interesting phenomenon: the people in charge of allocating R&D funds are frequently basing their decisions on some seriously outdated concepts and ideas. ...

Introduction

I attended one of the keynote speeches back at the 4th World Wide Web conference in Boston and was introduced to a new term -- "wetware". I was quite familiar with the terms software, hardware, and firmware, but that was the first time I had heard the term "wetware". Of course, what the speaker ({what was his name? -- he worked for Disney}) was talking about when he used the term "wetware" was people. There are two kinds of wetware -- the wetware that uses computer systems (user wetware) and the wetware that designs computer systems (developer wetware).

Computer systems consist of hardware, software, and wetware. A well designed computer system partitions the tasks among the hardware, software, and wetware just right, so that the total task is accomplished in an optimal fashion. Right! A well designed computer system is a rarity. All too frequently, the developer has never met or interacted with the user. Equally important, the developer frequently uses prior experience to design a computer system, and that prior experience is not really applicable. The resulting system can fall way short of optimal. Indeed, the trade literature is littered with stories of computer systems that have been dismal failures.

All too frequently, in a hardware/software/wetware computer system, it is the wetware that is the bottleneck. How often has a plane flown into the ground because the pilot and copilot were so busy trying to figure out what the instruments of the plane were trying to `say' that they forgot to look out the window? And while everybody likes to blame the user wetware for not using the computer system properly, in the end, the real problem is almost always a mistake made by the developer wetware. It is the premise of this document that the developer wetware needs to view itself as part of the problem to be solved, not just part of the solution.

Past Teachings

How does developer wetware come into existence? Well, it all starts with student wetware going to school. Do you remember what it was like to be a student? Students always complain about the fact that the professors (teacher wetware) are always teaching them stuff that is out-of-date. In an industry that has been experiencing exponential growth, like the computer/electronics industry, it would be pretty hard for professors not to be teaching out-of-date material. After a period of time, the student wetware is pronounced `trained' and is ejected into the workplace, where it becomes employee wetware. The employee wetware undergoes additional training from other, more experienced employee wetware until, after a number of additional years, the employee wetware is fully certified developer wetware qualified to successfully build any computer system. Right!

How often have students been told by their professors that what they are being taught is likely to be of little use in their professional careers ten years hence? All too frequently, just the reverse occurs, where the students are encouraged not to question the appropriateness of the material being taught at all. In the computer industry, 50% of what you know now is likely to be considered arcane trivia ten years down the road. (E.g., who cares how to write JCL, or how to program an 029 keypunch drum card?)

The rest of this paper is basically about trying to uncover ideas and techniques that are taken as "given" in the computer industry and expose them as the old tired ideas that they really are. What is just as important is to suggest what these old tired ideas should be replaced with, so that we can get on with the wild ride forward. I'll start by trying to shoot some holes into the sacred cows of computer hardware design and the associated operating system design. Next, I'll work on programming in general. Finally, I'll ...

Hardware and Software Underpinnings

A long, long time ago (in a galaxy all too familiar) computers were very expensive. Computers were so expensive, you considered yourself very lucky if you could share one computer with hundreds/thousands of other people. Back in those dark days, the correct engineering trade-off was to design a computer system with lots of dumb peripherals (e.g. card readers and printers) that hooked up to your smart computer. The purpose of the computer operating system was to share that smart computer amongst all of those users and dumb peripherals and extract the most useful computer work you could. The alternative of assigning a dedicated computer to each peripheral and user was far too uneconomical to contemplate.

Now let's zoom forward to the present and take a look at what we have got. Figure 1 below shows a typical personal computer configuration:

[Figure 1: Typical PC Configuration]
It is one big mess! Now I ask the question, if you had a blank sheet of paper to design a computer system from the ground up, is this what you would come up with? I don't think so!

So what are the alternatives? Well, back when I was at Sun, the Sun marketing department came up with the slogan `The Network is the Computer'. None of us in engineering could ever figure out what it meant, but in the end we didn't really care, because the computers kept selling. Well, they were right, `The Network is the Computer.' The architecture of the future is almost certainly to put a full processor with memory and a network interface into every computer peripheral, as shown in figure 2 below:

[Figure 2: Internet Computer Architecture]
Each hard disk, printer, keyboard, CD-ROM drive, floppy disk drive, and modem will plug directly into a standard network hub. Thus, the mish-mash of buses in the so-called modern PC (e.g. ISA, PCI, SCSI, IDE, USB, RS-232, SVGA, etc.) would be replaced by a single `bus', namely a standard network hub.
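
To make this concrete, here is a minimal sketch, in C, of what one of these network peripherals might look like: a toy `disk' that listens on a TCP port and answers read requests for fixed-size blocks. The port number, wire format, and block size are all invented for illustration; a real peripheral protocol would obviously also need writes, error handling, and security.

    /*
     * Toy network "disk" peripheral: it listens on TCP port 9000 and answers
     * read requests for 512-byte blocks out of an in-memory array.  The port
     * number and wire format are invented purely for illustration.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    #define BLOCK_SIZE 512
    #define NUM_BLOCKS 1024

    static char disk[NUM_BLOCKS][BLOCK_SIZE];      /* the "platter" */

    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9000);
        if (bind(listener, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(listener, 5) < 0) {
            perror("bind/listen");
            return 1;
        }

        for (;;) {
            int conn = accept(listener, NULL, NULL);
            uint32_t request;

            if (conn < 0)
                continue;
            /* Each request is a single 32-bit block number in network order. */
            while (read(conn, &request, sizeof(request)) == sizeof(request)) {
                uint32_t block = ntohl(request);

                if (block < NUM_BLOCKS)
                    write(conn, disk[block], BLOCK_SIZE);
            }
            close(conn);
        }
    }

A computation module on the hub would simply open a connection and send block numbers; the same request/response pattern applies to keyboards, modems, and so on.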

How fast does the hub have to be? Well, I believe that a switched 100BaseT hub is likely to be fast enough for most people, but not everybody is convinced. The peak rate coming out of a SCSI bus is 400 Mb/s, but how many applications can keep a SCSI bus running at 400 Mb/s for an extended period of time? OK, you're not convinced. What about a 1 Gb/s switched hub? Is that fast enough? How long will it be until 1 Gb/s switched hubs are dirt cheap?

In this future computing environment, your general purpose computation module consists of one or more processors, a bunch of memory, some sort of clock, and a network interface. As people wish to add more computational power, they just go out and buy the latest and greatest computer module from their local supermarket and plug it into their hub.

What about displays? Well, displays will have a processor, memory, a network interface, and one or more specialized graphics processors. Rather than having to write programs that worry about painting and repainting individual pixels on the screen, the display module will work at a much higher level protocol. I talk more about display modules below.
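
As a hint of what `a much higher level protocol' might look like, imagine the host sending the display module whole drawing commands rather than pixels. The opcodes and message layout below are purely hypothetical:

    /*
     * Hypothetical high-level display protocol: the host sends whole drawing
     * commands to the display module instead of pushing individual pixels.
     * The opcodes and field layout are invented for illustration only.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    enum display_op {
        DISPLAY_CLEAR     = 1,    /* clear the whole screen to a color */
        DISPLAY_FILL_RECT = 2,    /* fill a rectangle with a color     */
        DISPLAY_DRAW_TEXT = 3     /* draw a string at a position       */
    };

    struct display_command {
        uint8_t  op;              /* one of enum display_op            */
        uint32_t x, y;            /* position in pixels                */
        uint32_t width, height;   /* size, for DISPLAY_FILL_RECT       */
        uint32_t color;           /* 0x00RRGGBB                        */
        char     text[64];        /* payload, for DISPLAY_DRAW_TEXT    */
    };

    /* Build a "draw text" command; the host would then ship it to the
     * display module over the network (e.g. write() on a TCP socket). */
    static struct display_command draw_text(uint32_t x, uint32_t y,
                                            uint32_t color, const char *s)
    {
        struct display_command cmd;

        memset(&cmd, 0, sizeof(cmd));
        cmd.op = DISPLAY_DRAW_TEXT;
        cmd.x = x;
        cmd.y = y;
        cmd.color = color;
        strncpy(cmd.text, s, sizeof(cmd.text) - 1);
        return cmd;
    }

    int main(void)
    {
        struct display_command cmd = draw_text(10, 20, 0x00ff0000, "hello");

        printf("op=%d at (%u,%u): \"%s\"\n", cmd.op,
               (unsigned)cmd.x, (unsigned)cmd.y, cmd.text);
        return 0;
    }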

Is there any evidence that this transition from lots of funky buses to network peripherals is taking place? Well, there is one clear example, the printer. These days you can purchase a printer from HP with an option that allows you to plug it directly into an Ethernet. There is a whole slew of vendors that sell add-on printer servers that plug into the parallel port of any printer and adapt it to the Ethernet. How much longer will it be before all printers come with an Ethernet connection as a standard feature? How much longer after that will it be before the printer vendors do not even bother to put a parallel printer port on each printer?
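
Those add-on printer servers hint at how simple the host side becomes: printing is reduced to opening a TCP connection to the printer and streaming the job. Here is a rough sketch of such a client; port 9100 is the raw-printing port used by many network print servers, but treat the rest of the details as illustrative.

    /*
     * Minimal "network printing" client: open a TCP connection to the
     * printer and stream a file to it.  Port 9100 is the raw-printing port
     * used by many network print servers; the rest is illustrative.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(int argc, char *argv[])
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s printer-ip file\n", argv[0]);
            return 1;
        }

        int sock = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in printer;

        memset(&printer, 0, sizeof(printer));
        printer.sin_family = AF_INET;
        printer.sin_port = htons(9100);                  /* raw-printing port */
        inet_pton(AF_INET, argv[1], &printer.sin_addr);  /* printer address   */
        if (connect(sock, (struct sockaddr *)&printer, sizeof(printer)) < 0) {
            perror("connect");
            return 1;
        }

        FILE *job = fopen(argv[2], "rb");
        if (job == NULL) {
            perror("fopen");
            return 1;
        }

        /* Stream the file straight to the printer. */
        char buffer[4096];
        size_t n;
        while ((n = fread(buffer, 1, sizeof(buffer), job)) > 0)
            write(sock, buffer, n);

        fclose(job);
        close(sock);
        return 0;
    }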

Are there any other peripherals that are going to follow suit? Well, that really depends upon how successful USB (Universal Serial Bus) is. USB support has been on most PC motherboards for well over six months now. Unfortunately, the number of available USB peripherals is essentially zero, because no mass-market operating system that supports the standard has been deployed yet.

What about security?

Where's the operating system in this computer of the future? Well, it sure doesn't look like Windows/NT! Basically, each device looks like a standard internet client/server architecture. If somebody wants to print

Programming Underpinnings

There has been slow, steady progress with programming and programming languages. It started off with patch panels, paper tape, and symbolic assembler, evolved forward with FORTRAN, COBOL, LISP, ALGOL, and C, took a huge step backwards with C++, and moved forward again with Java. I'm sure I missed one of your favorite programming languages. In addition to programming language improvements, there have been corresponding improvements in the programming environments, ranging from lights-and-switches debugging and DDT to source language debugging, performance tools, etc.

How do programming languages come into existence? Is there any method to language design, or is it basically an ad hoc process? Well, anybody who has had the misfortune of sitting in on a language standards committee meeting can tell you that it is a pretty ad hoc process. Small changes in a computer language specification can cause enormous swings in the overall usability of a language. Frequently, the features and misfeatures in a computer language are the result of an individual person's ego rather than any need to make life easier for the programmer.

Is there an alternative? What would happen if language design occurred along more "scientific" lines? What if programmers were tasked with writing real programs using a programming language, and the language designers came along afterwards looking for common problems and fixed the language accordingly? How long

The programming language problems are still minor in comparison to the more fundamental problem. We need to ...

A long time ago we were taught that the only efficient way to pass control between two pieces of code was via the procedure call. Anything else, like a process context switch, was just too inefficient. Well, just for the fun of it, I decided to see how many context switches a second can be supported. On a Sharp 9020 Pentium 120 laptop running Linux 2.0.33, a simple program that passes a single character back and forth between two processes through a Unix pipe is capable of doing 2 million context switches in 13.67 seconds, resulting in 146,305 context switches per second! While I wouldn't use a context switch to add two numbers together, it is certainly possible to use context switches for procedure-level granularity computation.
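
The measurement program itself isn't reproduced here, but a sketch along those lines would look something like the following: two processes bounce a single byte back and forth through a pair of pipes, forcing two context switches per round trip. The iteration count is arbitrary.

    /*
     * Rough re-creation of the context switch measurement described above:
     * parent and child bounce a single byte back and forth through a pair
     * of pipes, forcing two context switches per round trip.
     */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/time.h>
    #include <sys/wait.h>

    #define ROUND_TRIPS 1000000   /* 1M round trips = 2M context switches */

    int main(void)
    {
        int ping[2], pong[2];
        char byte = 'x';
        struct timeval start, stop;

        if (pipe(ping) < 0 || pipe(pong) < 0) {
            perror("pipe");
            return 1;
        }

        if (fork() == 0) {
            /* Child: echo every byte it receives back to the parent. */
            for (int i = 0; i < ROUND_TRIPS; i++) {
                read(ping[0], &byte, 1);
                write(pong[1], &byte, 1);
            }
            _exit(0);
        }

        gettimeofday(&start, NULL);
        for (int i = 0; i < ROUND_TRIPS; i++) {
            write(ping[1], &byte, 1);
            read(pong[0], &byte, 1);
        }
        gettimeofday(&stop, NULL);
        wait(NULL);

        double seconds = (stop.tv_sec - start.tv_sec) +
                         (stop.tv_usec - start.tv_usec) / 1e6;
        printf("%d context switches in %.2f seconds (%.0f per second)\n",
               2 * ROUND_TRIPS, seconds, 2 * ROUND_TRIPS / seconds);
        return 0;
    }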

So what's wrong with procedure calls anyhow? Well, as long as the program doesn't become too large and convoluted there probably isn't anything wrong. What happens when you have a program that is a million lines of bloatware?

[Figure: Dataflow Programming Diagram]
[Figure: Dataflow Programming Example Output]

Summary

{Summary goes here.}


Copyright (c) 1998 by Wayne Gramlich. All rights reserved.