I was recently looking through a stack of old photos and came across one from my days as a scientist at Bell Communications Research (Bellcore, the arm of Bell Labs owned by the regional Bell operating companies and one of the last Grand Labs). In 1992 we conducted research on the usability of tablet computers, and the photo shows me using a tablet with the PenPoint system from GO Corp:

1992 photo of Jakob Nielsen using a tablet running the PenPoint operating system
PenPoint tablet computer in 1992

The GO tablet was a stylus-operated device, complete with handwriting recognition. As you can tell from the photo, it was definitely a clunkier device than current tablets, weighing 4.5 pounds (5 times as much as a current iPad). My research with the GO tablet didn’t discover most of our current tablet-design guidelines, mainly because this early device had virtually no applications. Instead, we studied the implications of handwritten data entry and stylus-controlled graphical user interfaces (GUIs).

In one of our research studies with PenPoint tablets, Susan T. Dumais (now a Microsoft Distinguished Scientist) and I found that we could compensate for the inaccuracies of handwriting recognition by using a meaning-dependent form of search: latent semantic indexing (LSI). This allowed us to retrieve handwritten notes about as well as typed documents, despite the higher frequency of misspellings in the former.
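The core idea behind LSI is that a truncated SVD of the term-document matrix maps documents about the same topic close together in a latent space, even when they share few exact terms, which is exactly the situation when a recognizer mangles some words. The following is a minimal sketch with an invented toy vocabulary, not the study's actual data:

```python
# Toy illustration of latent semantic indexing (LSI); the vocabulary
# and documents are invented for demonstration only.
import numpy as np

# Term-document matrix: rows = terms, columns = documents.
# Docs 0 and 1 cover the same topic but share only one literal term
# ("screen"), as when handwriting recognition garbles some words.
terms = ["tablet", "stylus", "pen", "screen", "recipe", "flour"]
A = np.array([
    [1, 0, 0],   # "tablet" appears only in doc 0
    [1, 0, 0],   # "stylus" only in doc 0
    [0, 1, 0],   # "pen" only in doc 1
    [1, 1, 0],   # "screen" in docs 0 and 1
    [0, 0, 1],   # "recipe" only in doc 2
    [0, 0, 1],   # "flour" only in doc 2
], dtype=float)

# Rank-2 truncated SVD projects documents into a latent "topic" space.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
docs_k = (np.diag(s[:k]) @ Vt[:k]).T   # one row of coordinates per document

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# In raw term space, docs 0 and 1 have cosine similarity of only ~0.41;
# in the rank-2 latent space they collapse onto the same topic axis,
# while the unrelated doc 2 stays orthogonal.
print(cos(docs_k[0], docs_k[1]))  # close to 1.0
print(cos(docs_k[0], docs_k[2]))  # close to 0.0
```

The point of the toy example is that similarity comes from term co-occurrence patterns rather than exact string matches, which is why misspelled or misrecognized words hurt retrieval much less than in keyword search.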

(As an aside, it’s sad to note the abysmal lack of progress in personal-information retrieval and personal-information management since Bellcore’s research in the early 1990s. Virtually all efforts and all the ensuing progress have been directed toward searching huge information spaces such as the entire web. Our own personal information has been left behind, and the productivity of knowledge workers has suffered accordingly.)

Using a stylus instead of one’s finger to control the tablet allowed for a wider range of interactions, in particular more precise gestures. With a pen, the user can precisely touch a very small point on the screen, thus avoiding the read–tap asymmetry that’s one of the main usability concerns with modern touchscreen user interfaces. (Read–tap asymmetry: you can see small things, but you can only touch big things.)

Pen precision allowed for a rich set of editing gestures: for example, one could boldface a word by drawing a small “B” gesture on top of that word. Other gestures produced other actions: for example, drawing an “X” on top of a character would delete that character, whereas drawing a proofreading “pigtail” gesture would delete a full word. As the deletion example shows, the concept of generic commands wasn’t fully realized, partly because of the difficulty of defining the scope of a command when it was drawn on top of the content. This definitely increased the learning time needed to use the system.

An interesting interaction design innovation followed from drawing the operations (commands) on top of the operands (the objects on which the commands operated): PenPoint was one of the few computer systems to unify the command syntax by avoiding the distinction between operator and operand. Usually, interactions have one of two possible syntax formats:

  • Verb-noun syntax: first the user specifies what should be done (the verb) and subsequently follows this by specifying the item that should be acted upon (the noun). For example, classic command-line-based operating systems would use a command format like DEL foo to delete a file named “foo.”
  • Noun-verb syntax: first the user specifies what item should be acted upon (the noun) and subsequently the user tells the computer what it should do to that item (the verb). For example, most graphical user interfaces use the following sequence of actions to delete a file: first, select the icon representing the file; second, drag this icon into the trash (or touch a trash icon).

Both formats have their own problems, including the basic point that they require the user to specify two things in order to have one action performed. In contrast, writing a squiggle on top of the thing to be deleted means that there is no ordering requirement for the operation and operand: both are specified in that single gesture. (Remember, the scope of the operand depended on the gesture: an “X” deleted a single character, whereas a “pigtail” erased the whole word.)
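This unified syntax can be sketched as a single dispatch function in which the gesture shape determines both the verb and the scope of the operand in one step. The function and gesture names below are invented for illustration, not PenPoint’s actual API:

```python
# Hypothetical sketch of PenPoint-style unified gesture commands:
# one gesture at one position carries both the verb (delete) and the
# operand's scope (character vs. word). All names here are invented.

def word_bounds(text, pos):
    """Return (start, end) indices of the word containing index pos."""
    start = pos
    while start > 0 and not text[start - 1].isspace():
        start -= 1
    end = pos
    while end < len(text) and not text[end].isspace():
        end += 1
    return start, end

def apply_gesture(text, gesture, pos):
    """Apply a single gesture drawn at position pos; no separate
    verb-then-noun (or noun-then-verb) steps are needed."""
    if gesture == "x":            # "X" over a character deletes that character
        return text[:pos] + text[pos + 1:]
    if gesture == "pigtail":      # pigtail over a word deletes the whole word
        start, end = word_bounds(text, pos)
        return (text[:start] + text[end:]).strip()
    raise ValueError(f"unknown gesture: {gesture}")

print(apply_gesture("delete mee", "x", 9))        # removes one character
print(apply_gesture("delete mee", "pigtail", 8))  # removes the whole word
```

Note how the command and its target arrive together: the position selects the operand, and the gesture shape simultaneously selects both the action and how much of the content it applies to.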

In retrospect, while GO Corp. pioneered many innovative interaction techniques and allowed us to do interesting early research on tablet user experience, it went nowhere commercially. The machine cost $3,920 in 1992, which is equivalent to $6,660 in 2015 dollars, or 11 times the cost of an iPad Air 2. (The iPad has 64 GB of storage, whereas the most popular PenPoint model — the NCR 3125 — had 20 MB.) While price didn’t matter to the lavishly funded elite research lab where I worked, there was no way an average company could afford tablet computers in 1992.

Reflecting on my experience with GO in 1992 does show some areas where current tablet UX could be improved:

  • A pen or stylus (as included with a few Android devices) allows for much richer interactions than current touchscreens support, due to the fat-finger problem. (Update September 2016: Apple introduced the Apple Pencil as a stylus for its high-end tablets in September 2015, 8 months after this article was published. We’ve since bought a few and, as I predicted, we do find it useful for many applications.)
  • An emphasis on personal productivity instead of consumption of entertainment and social media would lead to more useful UX designs.

Tony Hoeber, Robert Carr, Jerry Kaplan, and the remaining team responsible for GO Corp’s pioneering work should not be forgotten. Neither should the many other early tablet computers that had the misfortune to be ahead of their time. The tablets we’re currently enjoying didn’t come out of nowhere.