
A History of the GUI

Have you ever wondered about the genealogy of the graphical user interface you …

Introduction

Today, almost everybody in the developed world interacts with personal computers in some form or another. We use them at home and at work, for entertainment, information, and as tools to leverage our knowledge and intelligence. It is pretty much assumed whenever anyone sits down to use a personal computer that it will operate with a graphical user interface. We expect to interact with it primarily using a mouse, launch programs by clicking on icons, and manipulate various windows on the screen using graphical controls. But this was not always the case. Why did computers come to adopt the GUI as their primary mode of interaction, and how did the GUI evolve to be the way it is today?

In what follows, I'll present a brief introduction to the history of the GUI. The topic, as you might expect, is broad and deep; this article will touch on the high points while giving an overview of how the GUI developed.

Prehistory

Like many developments in the history of computing, the ideas behind a GUI computer were conceived long before the technology existed to build such a machine. One of the first people to express these ideas was Vannevar Bush. In the early 1930s he first wrote of a device he called the "Memex," which he envisioned as looking like a desk with two touchscreen graphical displays, a keyboard, and a scanner attached to it. It would allow the user to access all human knowledge through associative links, much like modern hyperlinks. At that point the digital computer had not yet been invented, so there was no way for such a device to actually work, and Bush's ideas were not widely read or discussed at the time.

However, starting in about 1937, several groups around the world began constructing digital computers. World War II provided much of the motivation and funding to produce programmable calculating machines, for everything from calculating artillery firing tables to cracking the enemy's secret codes. The perfection and commercial production of vacuum tubes provided the fast switching elements these computers needed to be useful. In 1945, Bush revisited his older ideas in an essay entitled "As We May Think," published in the Atlantic Monthly, and it was this essay that inspired a young Douglas Engelbart to try to actually build such a machine.

The father of the GUI


Douglas Engelbart in 1968

Douglas Engelbart completed his degree in electrical engineering in 1948 and settled into a nice job at NACA (the forerunner of NASA). However, one day while driving to work he had an epiphany: he realized that his real calling as an engineer was not to work on small projects that might benefit only a few people. Instead, he wanted to work on something that would benefit all of humanity. He recalled Bush's essay and began thinking about ways to build a machine that would augment human intellect. Having worked as a radar operator during the war, he was able to envision a display system built around cathode ray tubes, on which the user could build models of information graphically and jump around dynamically to whatever interested them.

Finding someone to fund his wild ideas proved to be a long and difficult task. He received his PhD in 1955 and took a job at the Stanford Research Institute, where he earned many patents for miniaturizing computer components. By 1959 he had gained enough recognition to receive funding from the United States Air Force to work on his ideas. In 1962, Douglas published his ideas in a seminal essay entitled "Augmenting Human Intellect." In this paper, he argued that digital computers could provide the quickest way to "increase the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems." He envisioned the computer not as a replacement for human intellect, but as a tool for enhancing it. One of the first hypothetical examples he described for this technology was an architect designing a building with something similar to modern graphical CAD software.

This was a huge leap in thinking for 1962. The only computers that existed at the time were giant mainframes, and users typically interacted with them through what was called "batch processing": a user would submit a program on a series of punch cards, the computer would run the program at some scheduled time, and the results would be picked up hours or even days later. Even the idea of having users enter commands at a text-based terminal in real time (called "time-sharing" in the jargon of the day) was considered radical back then.

Douglas and his growing staff worked for years to develop the ideas and technology that finally culminated in a public demonstration in front of over a thousand computer professionals in 1968.

