IBM 1620

The IBM 1620 was one of the earliest commercial computers to allow for interactive use. Most earlier models were tended by machine operators who would schedule jobs on tape or punch cards and return the results to the users. The 1620 had a typewriter console that allowed the programmer to interact with their program while it ran.

IBM 1620 System

From the IBM 1620 Data Processing System Reference Manual

It was originally developed at IBM’s Poughkeepsie Development Laboratory as a cheap machine for scientific use, with the 1400 series filling the corresponding rôle on the business side. The first minicomputer, the PDP-8, was not launched until 1965, but machines like the 1620 and 1401 were part of the trend that led from colossal mainframes towards minicomputers and eventually to personal workstations.

Memory, Tape and Disk

Like most IBM machines of the era, the 1620 supported a wide variety of upgrades. The basic model had 20,000 digits of memory, with upgrades to 40,000 and 60,000. The architecture supported up to 100,000 digits, but an expansion containing this much storage was never built. Basic input and output was accomplished via a typewriter.

The IBM 1621 Paper Tape Reader and IBM 1624 Tape Punch were sold as companion units, allowing programs to be prepared on cheap machines (typewriters with tape output) and results from program runs to be examined away from the computer. The 1622 Card Read-Punch was also available and served a similar purpose.

Although not available when the machine was launched, the IBM 1311 Disk Storage Drive, released in 1962, provided significantly more storage space. It could store two million characters on a removable (although, at 4.5Kg, not readily portable) disk pack. Each pack contained six 14” platters and spun at 1500RPM. For comparison, modern disks typically have one or two platters, are 2.5-3.5" in diameter, spin at between 5,400 and 15,000RPM and store several hundred billion characters.

Can’t Add, Doesn’t Even Try

The original project which evolved into the 1620 architecture was known as Computer with ADvanced Economic Technology (CADET). The team’s aim was to build a comparatively cheap version of existing IBM machines.

In order to keep costs low, the designers tried hard to minimise the amount of hardware the system used. Complex operations like multiply and divide were replaced with subroutine calls in a manner similar to later Reduced Instruction Set Computing (RISC) chips.

The 1620 went even further and omitted hardware for addition and subtraction as well. These operations were instead conducted via a lookup table: the machine shipped with a table of the results of every single-digit addition stored in core memory. Adding two variable-length words was performed by reading one digit from each, looking up the result in the table, writing it out and then proceeding to the next pair of digits. Any result where the sum was greater than 9 produced a carry bit which was added to the next pair of digits. Since it didn’t even have addition hardware, the CADET acronym was often taken as meaning “Can’t Add, Doesn’t Even Try.” The Model II included hardware for addition and subtraction and so was able to free up the 100 digits of storage used for the lookup table on the original 1620.
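The table-lookup scheme can be sketched in modern Python. This is an illustration of the technique, not 1620 code: the table name and carry handling are simplified assumptions (the real machine used flagged digits in core rather than a Python dictionary).

```python
# Sketch of 1620-style table-lookup addition. The machine kept a
# 10x10 table of single-digit sums in core memory; the "hardware"
# operation was just the table read.
ADD_TABLE = {(a, b): a + b for a in range(10) for b in range(10)}

def table_add(x_digits, y_digits):
    """Add two equal-length digit lists, least significant digit first."""
    result = []
    carry = 0
    for a, b in zip(x_digits, y_digits):
        s = ADD_TABLE[(a, b)] + carry   # one table lookup per digit pair
        carry = 1 if s > 9 else 0       # carry into the next column
        result.append(s % 10)
    if carry:
        result.append(carry)
    return result

# 123 + 889 = 1012, digits stored least significant digit first
print(table_add([3, 2, 1], [9, 8, 8]))  # [2, 1, 0, 1]
```

Note that the digits are supplied least significant first, which anticipates the addressing scheme described below.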

Since the machine had a variable word length, this loop would continue until it encountered a character indicating the termination of the word. The 1620 aimed to store data in a form with which most humans would be familiar, and so it used a big-endian notation, one in which the most significant digit of each number was written first. This complicated addition for variable-length quantities. When performing additions, you start with the least-significant digits (the units), add them together, and see if you need to carry a one when you get to the next column. With a little-endian machine, where the least significant digit is written first, you can do this easily.

The solution the designers of the 1620 used was to address words by their least-significant digit. When performing calculations, the machine would then read backwards to get to the most significant digits.

Since the units digit had to be read first, it also contained the sign bit, the bit used to represent whether a number was positive or negative. If the flag bit is set on the rightmost digit of a number, it is treated as negative.

In the documentation, a bar over a digit is used to indicate that its flag bit is set, so 1̄ would be the digit 1 with the flag bit set. The machine would store -100 as 1̄00̄. The flag bit on the 1 indicates that there are no more digits before it and the flag bit on the last 0 indicates that the number is negative.
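The flagged-digit format can be mimicked in Python by representing each digit as a (value, flag) pair. This is a loose sketch of the encoding described above, not the actual core layout; the `encode` and `decode` helpers are hypothetical names introduced for illustration.

```python
# Illustration of the 1620's flagged-digit number format: each digit
# is stored as (value, flag). The flag on the leftmost digit marks
# the start of the field; the flag on the units (rightmost) digit
# marks the number as negative.
def encode(n):
    digits = [int(c) for c in str(abs(n))]
    flags = [False] * len(digits)
    flags[0] = True              # field mark on the most significant digit
    if n < 0:
        flags[-1] = True         # sign flag on the units digit
    return list(zip(digits, flags))

def decode(field):
    value = int("".join(str(d) for d, _ in field))
    return -value if field[-1][1] else value

print(encode(-100))  # [(1, True), (0, False), (0, True)]
print(decode(encode(-100)))  # -100
```

The -100 example matches the notation in the text: flags set on the leading 1 and the trailing 0.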

A Cheap Machine

The 1620 was cheap in comparison to mainframes available at the time. Something like the IBM 709 would cost half a million dollars for just the central processing unit. The core of the 1620 was listed at $64K, with another $39.5K for 20,000 digits of core storage and $30K for the card reader and punch, giving a total of around $133.5K for a complete system. Opting for paper tape instead of punch cards could save around $20K; however, paper tape was a lot harder to edit than punch cards. In today’s money, that works out at a little under a million pounds: certainly not cheap in absolute terms. Another option was to rent the machine from IBM, with the base unit costing $1,375 per month (around 2% of the purchase price). IBM were also known to offer quite significant discounts to educational customers at this time.

Programming the 1620

FORTRAN was the first high-level computer programming language. The first implementation was created in 1957, only two years before the 1620 was announced. FORTRAN II was created in 1958 and was state-of-the-art in terms of programming languages for the 1620. The FORTRAN II compiler taxed the 1620’s resources: it required 40,000 digits of memory, while the cheapest version of the 1620 only had 20,000, and even then it could not be loaded in one go. Instead, the compiler was split into two passes. The first would be loaded, followed by the program source, and the machine would then spit out an intermediate representation. The operator then had to load the second pass of the compiler, then the intermediate tape, and finally get the compiled program.

This was a very time consuming process. Any errors in the program required the entire compiler to be reloaded and the intermediate and final program tapes produced again. For complex programs this, combined with the compilation time, could take a very long time. For simpler programs, reloading the compiler accounted for most of the time.

To address this, the GOTRAN system was developed in 1961. This was a simple subset of FORTRAN containing only twelve statements, enough for a lot of student projects and even some larger tasks where the computer was used to perform relatively simple calculations on large sets of data. GOTRAN was known as a “load and go” FORTRAN system. It stayed in memory and interpreted program tapes as they were entered.

The alternative to programming in FORTRAN was to use the native machine language, typically via the Symbolic Programming System (SPS), a symbolic assembly language. SPS was a traditional assembly language where instructions were mapped to mnemonics which could be entered on a typewriter easily and then converted by the assembler to machine code which was executed directly.

SPS also used a two-pass system. The initial program could be either entered directly from the typewriter or from a paper tape. The first pass would then check that all of the mnemonics used were valid and that all referenced labels existed (and where they were defined). The second pass would take the output from this, followed by another tape containing any subroutines called by the program, and output a final tape containing the executable program.


Adding two 10-digit numbers together on the 1620 took 960µs, including loading the operands and storing the results. A significant fraction of this time was spent accessing the lookup table in core memory. Each memory access took around 20µs, so the table lookups accounted for around a fifth of the total time.
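The arithmetic behind that estimate can be checked directly. The figures below are the ones quoted above; the assumption of one table lookup per digit pair is an illustrative simplification.

```python
digits = 10
access_us = 20                       # approximate core access time, in µs
lookups = digits                     # assume one table lookup per digit pair
lookup_time = lookups * access_us    # 200 µs spent in the lookup table
total_time = 960                     # quoted time for a 10-digit add, in µs
print(lookup_time / total_time)      # ~0.21, around a fifth of the total
```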

At this speed, the machine could perform just over a thousand such additions per second; more with shorter numbers. A ten-digit decimal number is in roughly the range of a 32-digit binary number. Most modern microprocessors can do at least one such addition in a single clock cycle, taking under 1ns. The time taken for multiplication is even longer, around 17,700µs for two ten-digit numbers. This was done using a form of long multiplication, where each pair of digits would be multiplied and the results all added together to give the final result. This slow speed meant that jobs would often run for several hours.
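The long-multiplication approach can be sketched as follows. This is a schematic Python illustration of the digit-by-digit scheme just described, not the 1620’s actual procedure; the function name and carry-propagation pass are assumptions made for clarity.

```python
# Schematic digit-by-digit long multiplication: multiply every pair
# of digits, add each partial product into the accumulator at the
# right position, then propagate carries. Digits are least
# significant first throughout.
def long_multiply(x_digits, y_digits):
    acc = [0] * (len(x_digits) + len(y_digits))
    for i, a in enumerate(x_digits):
        for j, b in enumerate(y_digits):
            acc[i + j] += a * b          # one single-digit product
    carry = 0
    for k in range(len(acc)):            # propagate carries
        total = acc[k] + carry
        acc[k] = total % 10
        carry = total // 10
    while len(acc) > 1 and acc[-1] == 0:
        acc.pop()                        # trim leading zeros
    return acc

# 12 * 34 = 408, digits least significant first
print(long_multiply([2, 1], [4, 3]))  # [8, 0, 4]
```

For two ten-digit operands this involves a hundred single-digit products plus all the additions needed to combine them, which goes some way towards explaining the 17,700µs figure.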

The operators of the Swansea University 1620 were known to take advantage of this fact to attend the dances held in the building behind the computer centre while long jobs ran.

The End of an Era

By the late ’60s, IBM had become aware of the idea of legacy software. This was a relatively new idea created by two factors. The first was that program memory had been gradually increasing. When programs were only a few hundred instructions long, rewriting them for a new computer didn’t take long or cost very much. A machine like the 1620, with up to 60,000 digits of memory, could easily run programs several thousand instructions long, and some organisations built up large libraries of such programs. The other issue was the relative cost of the computer and the programmer. When computers cost millions of dollars, programmer time was barely worth factoring in to the total cost of ownership. The 1620 was designed as a ‘low cost’ computer (this being a relative term in the ’50s). It, and similar machines, began to swing the balance in the opposite direction.

In order to address this, IBM began to develop the System/360 architecture. This was unique for the time in that it was a single software interface (instruction set) implemented on a wide range of machines. As their marketing said, ‘On April 7, 1964 [the System/360 launch date] the entire concept of computers changed’. Organisations that found their entry-level System/360 was not powerful enough could upgrade to a larger model and run the same programs. This proved to be a very lucrative model for IBM. System/360 controlled over 70% of the market for much of the ’60s and is still sold today under the System z brand.

In 1964, IBM published a brochure entitled “Converting to the IBM System/360” extolling the virtues of this new platform as a migration path from the old 1620.

Compatibility features and associated emulator programs are designed to protect your investment in 1620, 1400- and 7000-series programs. Emulators make it possible for you to execute your current programs on System/360 with little or no reprogramming. Using these features, the appropriate model of System/360 will normally execute your 1620, 1400- or 7000-series programs as fast or faster than they run on your present system.

From Converting to the IBM System/360

The newer machines could emulate the older ones instruction-by-instruction for full backwards compatibility and came with tools that would examine FORTRAN programs for assumptions about the 1620 and convert them into code that could be compiled for the newer architecture.

While the 1620 bears little relationship to modern computers, two features live on in IBM’s latest generation PowerPC chips. The optimisation for leaf functions (subroutines that do not call other subroutines), described by Edsger Dijkstra as “a beautiful example of such a superfluous feature,” is somewhat similar to the mechanism of avoiding overhead in leaf functions on PowerPC chips. Dijkstra’s criticism, that it required the programmer to decide whether a function would call others too early on in the development process, became obsolete when compilers started generating code. The other feature, hardware for performing calculations on binary coded decimal values, was not present in the early PowerPC chips, but is found in the latest generation and, of course, in all of the System z (formerly System/360) machines in the last few decades.

David Chisnall

Further Reading: Colin Evans’ reminiscences on the 1620 at Swansea; David Wise’s 1620 restoration; IBM Archives on the 1620; Review of the 1620 by E.W. Dijkstra