I can tell you almost to the day when the computer revolution as I see it started, the revolution that today has changed the lives of everyone.
It happened at the very first meeting of a strange, geeky group of people called the Homebrew Computer Club in March 1975. This was a group of people fascinated with technology and the things it could do. Most of these people were young, a few were old, we all looked like engineers; no one was really good-looking. Ha. Well, we’re talking about engineers, remember. We were meeting in the garage of an out-of-work engineer named Gordon French.
After my first meeting, I started designing the computer that would later be known as the Apple I. It was that inspiring.
Almost from the beginning, Homebrew had a goal: to bring computer technology within the range of the average person, to make it so people could afford to have a computer and do things with it. That had been my goal, too, for years and years before that. So I felt right at home there.
And eventually Homebrew’s goal just expanded and expanded. It wasn’t long before we were talking about a world—a possible world—where computers could be owned by anybody, used by anybody, no matter who you were or how much money you made. We wanted them to be affordable—and we wanted them to change people’s lives.
Everyone in the Homebrew Computer Club envisioned computers as a benefit to humanity—a tool that would lead to social justice. We thought low-cost computers would empower people to do things they never could before. Only big companies could afford computers at the time. That meant they could afford to do things smaller companies and regular people couldn’t do. And we were out to change all that.
In this, we were revolutionaries. Big companies like IBM and Digital Equipment didn’t hear our social message. And they didn’t have a clue how powerful a force this small computer vision could be. They looked at what we were doing—small computers, hobby computers—and said they would just remain toys. And a relatively minor business. They didn’t imagine how they could evolve.
There was a lot of talk about our being part of a revolution. How people lived and communicated was going to be changed by us, changed forever, changed more than anyone could predict exactly.
Of course there was also a lot of talk about specific components that would make faster computers, and about technical solutions for computers and accessories themselves. People would talk about the humanistic future uses of computers. We thought computers were going to be used for all these weird things—strange geeky things like controlling the lights in your house—and that turned out not to be the case. But everyone felt this thing was coming. A total change. We couldn’t always define it, but we believed it.
As I said, almost all of the large computer companies were on record saying that what we were doing was insignificant. It turned out they were wrong and we were right—right all the way. But back then, even we had no idea how right we were and how huge it would become.
It’s funny and maybe a little bit ironic how my involvement in the whole Homebrew thing got started. Remember Allen Baum? He shows up again and again at a lot of important times in my life. He was my friend who sometimes worked at Sylvania with me in high school, whose dad designed the TV Jammer, who did the Homestead High prank with Steve Jobs and me, and also the one who helped get me that dream job at Hewlett-Packard.
I still had that HP job at the time. One day at work I got a call from Allen. It was a call that would change my life yet again, the call that introduced me to Homebrew.
Allen called and said something like, “Listen. There’s this flyer I found at HP, it’s for a meeting of people who are building TV and video terminals and things.”
Now, TV terminals I already knew a little about. By this point, in 1975, I’d done all kinds of side projects, and had already learned a lot about putting data from computers onto TVs. Not only had I done my version of Pong plus that project at Atari, Breakout, but I’d already built a terminal that could access the ARPANET, the government-owned network of computers that was the predecessor to the Internet. My terminal even let you display letters at up to sixty characters a second. I know that sounds slow now, but this was about six times faster than most teletype systems at the time and a whole lot cheaper. Teletype systems cost thousands of dollars, way more than someone on an engineer’s salary could afford, but I built a system using a Sears TV and a cheap $60 typewriter keyboard.
More About Homebrew
This Homebrew Club, which I belonged to from its very first meeting in March 1975, led to other computer companies besides Apple. It was incredibly revolutionary. Other members who started computer companies included Bob Marsh and Lee Felsenstein (Processor Technology), Adam Osborne (Osborne Computers), and, of course, me and Steve Jobs, who I later talked into going with me. I once wrote an article on the importance of Homebrew, and you can find it at: http://www.atariarchives.org/deli/homebrew_and_how_the_apple.php.
Just as I’d done with my Pong design and the Cartrivision VCR, I connected my video signal to the test pin of my home TV, the pin I’d found in the schematics.
Now, if Allen had told me that Homebrew was going to be about microprocessors, I probably wouldn’t have gone. I know I wouldn’t have gone. I was shy and felt that I knew little about the newest developments in computers. By this time, I was so totally out of computers. I was just immersed in my wonderful calculator job at HP. I wasn’t even following computers at all. I mean, I hardly even knew what the heck a microprocessor was.
But, like I said, I thought it was going to be a TV terminal meeting. I thought, Yeah, I could go to this thing and have something to say.
I was scared, but I showed up. And you know what? That decision changed everything. That night turned out to be one of the most important nights of my life.
About thirty people showed up for this first meeting there in that garage in Menlo Park. It was cold and kind of sprinkling outside, but they left the garage door open and set up chairs inside. So I’m just sitting there, listening to the big discussion going on.
They were talking about some microprocessor computer kit being up for sale. And they seemed all excited about it. Someone there was holding up the magazine Popular Electronics, which had a picture of a computer on the front of it. It was called the Altair, from a New Mexico company named MITS. You bought the pieces and put them together and then you could have your own computer.
So it turned out all these people were really Altair enthusiasts, not TV terminal people like I thought. And they were throwing around words and terms I’d never heard—talking about microprocessor chips like the Intel 8080, the Intel 8008, the 4004, I didn’t even know what these things were. Like I said, I’d been designing calculators for the last three years, so I didn’t have a clue.
I felt so out of it—like, No, no, I am not of this world. Under my breath, I am cussing Allen Baum. I don’t belong here. And when they went around and everyone introduced themselves, I said, “I’m Steve Wozniak, I work at Hewlett-Packard on calculators and I designed a video terminal.” I might have said some other things, but I was so nervous at public speaking that I couldn’t even remember what I said afterward. After that, we all signed a sheet of paper where we were supposed to put down our name and what interests and talents we were bringing to the group. (This piece of paper is public now; you might be able to find it online.) The thing I wrote on that paper was, “I have very little free time.”
Isn’t that funny? These days I’m so busy and people are constantly asking for my autograph and stuff, but back then I was also just as busy: always working on projects, engineering for work and then engineering at home. I don’t feel like I’ve changed much since then, and I guess this proves it, sort of.
Well, anyway, I was scared and not feeling like I belonged, but one very lucky thing happened. A guy started passing out these data sheets—technical specifications—for a microprocessor called the 8008 from a company in Canada. (It was a close copy, or clone, of Intel’s 8008 microprocessor at the time.) I took it home, figuring, Well, at least I’ll learn something.
That night, I checked out the microprocessor data sheet and I saw it had an instruction for adding a location in memory to the A register. I thought, Wait a minute. Then it had another instruction you could use for subtracting memory from the A register. Whoa. Well, maybe this doesn’t mean anything to you, but I knew exactly what these instructions meant, and it was the most exciting thing to discover ever. Because I could see right away that these were exactly like the instructions I used to design and redesign on paper for all of those minicomputers back in high school and college. I realized that all those minicomputers I’d designed on paper were pretty much just like this one.
Only now all the CPU parts were on one chip, instead of a bunch of chips, and it was a microprocessor. And it had pins that came out, and all you had to do was use those pins to connect things to it, like memory chips.
Then I realized what the Altair was—that computer everyone was so excited about at the meeting. It was exactly like the Cream Soda Computer I’d designed five years before! Almost exactly. The difference was that the Altair had a microprocessor—a CPU on one chip—and mine had a CPU that was on several chips. The other difference was that someone was selling this one—for $379, as I recall. Other than that, there was pretty much no difference. And I designed the Cream Soda five years before I ever laid eyes on an Altair.
It was as if my whole life had been leading up to this point. I’d done my minicomputer redesigns, I’d done data on-screen with Pong and Breakout, and I’d already done a TV terminal. From the Cream Soda Computer and others, I knew how to connect memory and make a working system. I realized that all I needed was this Canadian processor or another processor like it and some memory chips. Then I’d have the computer I’d always wanted!
Oh my god. I could build my own computer, a computer I could own and design to do any neat things I wanted to do with it for the rest of my life.
I didn’t need to spend $400 to get an Altair—which really was just a glorified bunch of chips with a metal frame around it and some lights. That was the same as my take-home salary, I mean, come on. And to make the Altair do anything interesting, I’d have to spend way, way more than that. Probably hundreds, even thousands of dollars. And besides, I’d already been there with the Cream Soda Computer. I was bored with it then. You never go back. You go forward. And now, the Cream Soda Computer could be my jumping-off point.
No way was I going to do that. I decided then and there I had the opportunity to build the complete computer I’d always wanted. I just needed any microprocessor, and I could build an extremely small computer I could write programs on. Programs like games, and the simulation programs I wrote at work. The possibilities went on and on. And I wouldn’t have to buy an Altair to do it. I would design it all by myself.
That night, the night of that first meeting, this whole vision of a kind of personal computer just popped into my head. All at once. Just like that.
And it was that very night that I started to sketch out on paper what would later come to be known as the Apple I. It was a quick project, in retrospect. Designing it on paper took a few hours, though it took a few months longer to get the parts and study their data sheets.
I did this project for a lot of reasons. For one thing, it was a project to show the people at Homebrew that it was possible to build a very affordable computer—a real computer you could program for the price of the Altair—with just a few chips. In that sense, it was a great way to show off my real talent, my talent of coming up with clever designs, designs that were efficient and affordable. By that I mean designs that would use the fewest components possible.
I also designed the Apple I because I wanted to give it away for free to other people. I gave out schematics for building my computer at the next meeting I attended.
This was my way of socializing and getting recognized. I had to build something to show other people. And I wanted the engineers at Homebrew to build computers for themselves, not just assemble glorified processors like the Altair. I wanted them to know they didn’t have to depend on an Altair, which had these hard-to-understand lights and switches. Every computer up to this time looked like an airplane cockpit, like the Cream Soda Computer, with switches and lights you had to manipulate and read.
Instead they could do something that worked with a TV and a real keyboard, sort of like a typewriter. The kind of computer I could imagine.
As I told you before, I had already built a terminal that let you type regular words and sentences to a computer far away, and that computer could send words back to the TV. I just decided to add the computer—my microprocessor with memory—into the same case as that terminal I’d already built.
Why not make the faraway computer this little microprocessor that’s right there in the box?
I realized that since you already had a keyboard, you didn’t need a front panel. You could type things in and see things on-screen, because you had the computer, the screen, and the keyboard all together.
So people now say this was a far-out idea—to combine my terminal with a microprocessor—and I guess it would be for other people. But for me, it was the next logical step.
That first Apple computer I designed—even though I hadn’t named it an Apple or anything else yet—well, that was just when everything fell into place. And I will tell you one thing. Before the Apple I, all computers had hard-to-read front panels and no screens or keyboards. After the Apple I, they all had keyboards and screens.
Let me tell you a little about that first computer—what is now called the Apple I—and how I designed it.
First, I started sketching out how I thought it would work on paper. This is the same way I used to design minicomputers on paper in high school and college, though of course they never got built. And the first thing was I had to decide what CPU I would use. I found out that the CPU of the Altair—the Intel 8080—cost nearly as much as my monthly rent. And a regular person couldn’t purchase it in small or single-unit quantities anyway. You had to be a real company and probably fill out all kinds of credit forms for that.
Luckily, though, I’d been talking to my cubicle mates at HP about the Homebrew Club and what I was planning, and Myron Tuttle had an idea. (You remember him: the guy whose plane almost crashed when I was in it.) He told me there was a deal you could get from Motorola if you were an HP employee. He told me that for about $40, I could buy a Motorola 6800 microprocessor and a couple of other chips. I thought, Oh man, that’s cheap. So very quickly I knew exactly what processor I would have.
Another thing that happened really early on was I realized—and it was an important realization—that our HP calculators were computers in a real sense. They were as real as the Altair or the Cream Soda Computer or anything else. I mean, a calculator had a processor and memory. But it had something else, too, a feature computers didn’t have at the time. When you turned a calculator on, it was ready to go: it had a program in it that started up and then it was ready for you to hit a number. So it booted up automatically and just sat there, waiting for you to tell it to do something. Say you hit a “5.” The processor in the calculator can see that a button is pushed, and it says, Is that a 1? No. A 2? No. A 3, 4…it’s a 5. And it displays a 5. The program in a calculator that did that was on three little ROM (read-only memory) chips—chips that hold their information even if you turn the power off.
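By the way, that kind of checking is about the simplest program there is. Here’s the idea written in 6502 style (the calculators actually used their own custom processors, and the address here is made up, so take this purely as an illustration):

    KEYCODE = $C000       ; made-up address where the pressed key's code shows up

    CHECK   LDA KEYCODE   ; get the code of the button that was pushed
            CMP #$01      ; is that a 1?
            BEQ FOUND
            CMP #$02      ; a 2?
            BEQ FOUND
            CMP #$05      ; a 5?
            BEQ FOUND
            JMP CHECK     ; not one we've checked for yet: look again
    FOUND   RTS           ; this is where you'd go display the digit

A few compares and branches, sitting in ROM, waiting. That’s all booting up ready to go really meant.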
So I knew I would have to get a ROM chip and build the same kind of program, a program that would let the computer turn on automatically. (An Altair or even my Cream Soda Computer couldn’t do anything until you’d spent about half an hour setting switches to get a program in.) With the Apple I, I wanted to make the job of getting a program into memory easier. That meant I needed to write one small program that would run as soon as you turned your computer on. The program would tell the computer how to read the keyboard. It would let you enter data into memory, see what data was in memory, and make the processor run a program at a specific point in memory.
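Just to give you a feel for it, here’s roughly the kind of session that little program made possible. (I’m recalling the syntax of the version that eventually shipped, so take the details as illustrative.)

    0:A9 44 8D 00 02     an address, a colon, and some bytes puts
                         those bytes into memory starting there
    0.F                  an address, a period, and a second address
                         shows you everything in that range
    0R                   an address followed by R runs whatever
                         program is sitting at that address

That was the whole interface: you type at it, and it does what you typed.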
Loading a program that took about half an hour on the Altair took less than a minute using the keyboard on the Apple I.
What Is ROM?
Read-only memory (ROM) is a term you’ll hear a lot in this book. A ROM chip can only be programmed once and keeps its information even if the power is turned off. A ROM chip typically holds programs that are important for a computer to remember. Like what to do when you turn it on, what to display, how to recognize connected devices like keyboards, printers, or monitors. In my Apple I design, I got the idea to include ROMs from the HP calculators (which used three ROM chips). Then I could write a “monitor” program so the computer could keep track of what keys were being pressed, and so on.
If you wanted to see what was in memory on an Altair, it might take you half an hour of looking at little lights. But on the Apple I, it took all of a second to look at it on your TV screen.
I ended up calling my little program a “monitor” program since that program’s main job was going to be to monitor, or watch, what you typed on the keyboard. This was a stepping-stone—the whole purpose of my computer, after all, was to be able to write programs. Specifically, I wanted it to run FORTRAN, a popular language at the time.
So the idea in my head involved a small program in read-only memory (ROM) instead of a computer front panel of lights and switches. You could input data with a real keyboard and look at your results on a real screen. I could get rid of that front panel entirely, the one that made a computer look like what you’d see in an airplane cockpit.
Every computer before the Apple I had that front panel of switches and lights. Every computer since has had a keyboard and a screen. That’s how huge my idea turned out to be.
My style with projects has always been to spend a lot of time getting ready to build it. Now that I saw my own computer could be a reality, I started collecting information on all the components and chips that might apply to a computer design.
I would drive to work in the morning—sometimes as early as 6:30 a.m.—and there, alone in the early morning, I would quickly read over engineering magazines and chip manuals. I’d study the specifications and timing diagrams of the chips I was interested in, like the $40 Motorola 6800 Myron had told me about. All the while, I’d be preparing the design in my head.
The Motorola 6800 had forty pins—connectors—and I had to know precisely how each one of those forty pins worked. Because I was only doing this part-time, this was a long, slow process. And several weeks passed without any actual construction happening. Finally I came in one night to draw the design on paper. I had sketched it crudely before. But that night I came in and drew it carefully on my drafting board at Hewlett-Packard.
It was a small step from there to a completely built computer. I just needed the parts.
I started noticing articles saying that a new, superior-sounding microprocessor was going to be introduced soon at a show, WESCON, in San Francisco. It especially caught my attention that this new microprocessor—the 6502 from MOS Technologies in Pennsylvania—would be pin-for-pin compatible with, and electrically the same as, the Motorola 6800 I had drafted my design around. That meant I could just pop it in without any redesigning at all.
The next thing I heard was that it was going to be sold over the counter at MOS Technologies’ booth at WESCON. The fact that this chip was so easy to get is how it ended up being the microprocessor for the Apple I.
And the best part: at $20 each, they cost half of what the Motorola chip would have cost me through the HP deal.
WESCON, on June 16–18, 1975, was being held in San Francisco’s famous Cow Palace. A bunch of us drove up there and I waited in line in front of MOS Technologies’ table, where a guy named Chuck Peddle was peddling the chips.
Right on the spot I bought a few for $20 each, plus a $5 manual.
Now I had all the parts I needed to start constructing the computer.
A couple of days later, at a regular meeting of the Homebrew Computer Club, a number of us excitedly showed off the 6502 microprocessors we’d bought. More people in our club now had microprocessors than ever before.
I had no idea what the others were going to do with their 6502s, but I knew what I was going to do with mine.
To actually construct the computer, I gathered my parts together. I did this construction work in my cubicle at HP. On a typical day, I’d go home after work and eat a TV dinner or make spaghetti and then drive the five minutes back to work where I would sign in again and work late into the night. I liked to work on this project at HP, I guess because it was an engineering kind of environment. And when it came time to test or solder, all the equipment was there.
First I looked at my design on draft paper and decided exactly where I would put which chips on a flat board so that the wires between chips would be short and neat-looking. In other words, I organized and grouped the parts as they would sit on the board.
The majority of my chips were from my video terminal—the terminal I’d already built to access the ARPANET. In addition, I had the microprocessor, a socket to put another board with random-access memory (RAM) chips on it, and two peripheral interface adapter chips for connecting the 6502 to my terminal.
I used sockets for all my chips because I was nuts about sockets. This traced back to my job at Electroglas, where the soldered chips that went bad weren’t easily replaced. I wanted to be able to easily remove bad chips and replace them.
I also had two more sockets that could hold a couple of PROM chips. These programmable read-only memory chips could hold data like a small program and not lose the data when the power was off.
Two of these PROM chips, the kind available to me in the lab, could together hold 256 bytes of data—enough for a very tiny program. (Today, many programs are a million times larger than that.) To give you an idea of what a small amount of memory that is, a word processor today needs that much for a single sentence.
I decided that these chips would hold my monitor program, the little program I came up with so that my computer could use a keyboard instead of a front panel.
What Was the ARPANET?
Short for the Advanced Research Projects Agency Network, and developed by the U.S. Department of Defense, the ARPANET was the first operational packet-switching network that could link computers all over the world. It later evolved into what everyone now knows as the global Internet.
The ARPANET and the Internet are based on a type of data communication called “packet switching.” A computer can break a piece of information down into packets, which can be sent over different wires independently and then reassembled at the other end. Previously, circuit switching was the dominant method—think of the old telephone systems of the early twentieth century. Every call was assigned a real circuit, and that same circuit was tied up during the length of the call.
The fact that the ARPANET used packet switching instead of circuit switching was a phenomenal advance that made the Internet possible.
Wiring this computer—actually soldering everything together—took one night. The next few nights after that, I had to write the 256-byte little monitor program with pen and paper. I was good at making programs small, but this was a challenge even for me.
This was the first program I ever wrote for the 6502 microprocessor. I wrote it out on paper, which wasn’t the normal way even then. The normal way to write a program at the time was to pay for computer usage: you rented time on a time-sharing terminal, and that terminal was connected to a big, expensive computer somewhere else. That computer would print out a version of your program in 1s and 0s that your microprocessor could understand.
This 1 and 0 program could be entered into RAM or a PROM and run as a program. The hitch was that I couldn’t afford to pay for computer time. Luckily, the 6502 manual I had described what 1s and 0s were generated for each instruction, each step of a program. MOS Technologies even provided a pocket-size card you could carry that included all the 1s and 0s for each of the many instructions you needed.
So I wrote my program on the left side of the page in machine language. As an example, I might write down “LDA #44,” which means to load data corresponding to 44 (in hexadecimal) into the microprocessor’s A register.
On the right side of the page, I would write that instruction in hexadecimal using my card. For example, that instruction would translate into A9 44. The instruction A9 44 stood for 2 bytes of data, which equated to 1s and 0s the computer could understand: 10101001 01000100.
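So a little stretch of one of those pages might have looked something like this. (This particular fragment is made up for illustration, but the hex is exactly what the card gave you for these instructions.)

    LDA #$44       ; A9 44      load hex 44 into the A register
    STA $0200      ; 8D 00 02   store A at address $0200 (the two
                   ;            address bytes go in low byte first)
    JMP $FF00      ; 4C 00 FF   jump to the program at address $FF00

Machine language on the left, hand-assembled hex on the right, one instruction at a time.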
Writing the program this way took about two or three pieces of paper, using every single line.
I was barely able to squeeze what I needed into that tiny 256-byte space, but I did it. I wrote two versions of it: one that let the press of a key interrupt whatever program was running, and the other that only let a program check whether the key was being struck. The second method is called “polling.”
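The polling version is as simple as it sounds. Here’s a minimal sketch of that kind of loop in 6502 code. (The keyboard register addresses are the ones from the PIA on the production Apple I; I’m including them just to make the sketch concrete.)

    KBD    = $D010     ; PIA register holding the typed character
    KBDCR  = $D011     ; PIA control/status register for the keyboard

    POLL   LDA KBDCR   ; read the keyboard status
           BPL POLL    ; bit 7 clear means no key yet, so check again
           LDA KBD     ; bit 7 set means a key is waiting: read it
           RTS         ; return with the character in the A register

The interrupt version didn’t need a loop like that: the processor would drop whatever it was doing the instant a key came in. Nicer in theory, and, as you’ll see, a lot harder to get working.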
During the day, I took my two monitor programs and some PROM chips over to another HP building where they had the equipment to permanently burn the 1s and 0s of both programs into the chips.
But I still couldn’t complete—or even test—these chips without memory. I mean computer memory, of course. Computers can’t run without memory, the place where they do all their calculations and record-keeping.
The most common type of computer memory at the time was called “static RAM” (SRAM). My Cream Soda Computer, the Altair, and every other computer back then used that kind of memory. I borrowed thirty-two SRAM chips—each one could hold 1,024 bits—from Myron Tuttle. Altogether that was 4K bytes (32 × 1,024 bits comes to 4,096 bytes), 16 times the 256 bytes the Altair came with.
I wired up a separate SRAM board with these chips inside their sockets and plugged it into the connector in my board.
With all the chips in place, I was ready to see if my computer worked.
The first step was to apply power. Using the power supplies near my cubicle, I hooked up the power and analyzed signals with an oscilloscope. For about an hour I identified problems that were obviously keeping the microprocessor from working. At one point I had two pins of the microprocessor accidentally shorting each other, rendering both signals useless. At another point one pin bent while I was placing it in its socket.
But I kept going. You see, whenever I solve a problem on an electronic device I’m building, it’s like the biggest high ever. And that’s what drives me to keep doing it, even though you get frustrated, angry, depressed, and tired doing the same things over and over. Because at some point comes the Eureka moment. You solve it.
And finally I got it, that Eureka moment. My microprocessor was running, and I was well on my way.
But there were still other things to fix. I was able to debug—that is, find errors and correct them—the terminal portion of the computer quickly because I’d already had a lot of experience with my terminal design. I could tell the terminal was working when it put a single cursor on the little 9-inch black-and-white TV I had at HP.
The next step was to debug the 256-byte monitor program on the PROMs. I spent a couple of hours trying to get the interrupt version of it working, but I kept failing. I couldn’t write a new program into the PROMs. To do that, I’d have to go to that other building again, just to burn the program into the chip. I studied the chip’s data sheets to see what I did wrong, but to this day I never found it. As any engineer out there reading this knows, interrupts are like that. They’re great when they work, but hard to get to work.
Finally I gave up and just popped in the other two PROMs, the ones with the “polling” version of the monitor program. I typed a few keys on the keyboard and I was shocked! The letters were displayed on the screen!
It is so hard to describe this feeling—when you get something working on the first try. It’s like getting a hole-in-one from forty feet away.
It was still only around 10 p.m.—I checked my watch. For the next couple of hours I practiced typing data into memory, displaying data on-screen to make sure it was really there, even typing in some very short programs in hexadecimal and running them, things like printing random characters on the screen. Simple programs.
I didn’t realize it at the time, but that day, Sunday, June 29, 1975, was pivotal. It was the first time in history anyone had typed a character on a keyboard and seen it show up on their own computer’s screen right in front of them.