Introduction

In 1961, I was awarded a research grant by the DSIR (Department of Scientific and Industrial Research) to carry out work on electrical discharges for a PhD degree, under the supervision of Professor P M Davidson. It will be useful to give a short account of the background to this work, to show the role that computing was going to play. Readers who are not interested in these details can jump to the point where the hardware is introduced.

Growth of ionisation in gases

Steady state equations

Gases are normally good insulators, a property that is of huge importance in the power distribution network. If the electric field in a gas is sufficiently large, however, then a rapid transition to a conducting state can occur (an electric spark) and currents of thousands of amps can flow, usually limited only by the ability of the power supply and the connecting leads to maintain the current.

In the simplest case, we can assume that the current in the gas is carried by free electrons and positive ions. One electron, travelling through an electric field, can gain enough energy to ionise neutral atoms, so producing an additional electron and a positive ion. This mechanism is called the primary ionisation process (although in the treatment of detectors like Geiger-Müller tubes, it is sometimes called secondary ionisation, to contrast it with the ionisation produced by the incident high-energy particle). The primary process is described mathematically by the equation

n(x) = n(0) e^{αx},    (1)

where n(x) is the number of electrons reaching a distance x, n(0) is the initial number of electrons at position 0, and α is the primary ionisation coefficient, sometimes called the Townsend primary coefficient in honour of J S Townsend, who developed this theory in about 1903 [1]. [Bracketed numbers refer to the publication list at the end of this document.] A group of electrons arising from one initiating electron is called an avalanche. (We note that, if electrons are taken to travel in the positive x direction, then the electric-field vector points in the negative x direction, because of the negative charge of the electron. It is convenient, however, to consider the electrons to move in the positive x direction, and this should not cause any problem.) The exponential growth of electron numbers described in equ. (1) cannot continue indefinitely, and in a well-controlled experiment the growth is terminated when the electrons reach the positive electrode (anode) at x = d. Similarly, the positive ions generated by the primary ionisation process travel in the negative x direction, and end up at the negative electrode (cathode) at x = 0.

If only the primary process were active, then the current would stop flowing as soon as all the charged particles were collected at the electrodes. Townsend introduced several secondary processes to explain why the current can, in practice, continue to flow. The simplest one describes how new electrons can be generated at the cathode by the impact of positive ions there, and the secondary coefficient γ is defined as the probability that a new electron is produced following the arrival of each positive ion. Putting these two processes together, Townsend derived the equation

I(d) = I0 e^{αd} / [1 - γ(e^{αd} - 1)],    (2)

where I(d) is the current arriving at the anode at a distance d from the cathode, while I0 is a small initiating current supplied at the cathode by some external means, e.g. by irradiating it with ultra-violet light. (If this initiating current were not supplied, then we would have to wait for the random appearance of an electron, from cosmic rays for example.) Equ. (2) has been the basis of many experiments. Keeping I0 constant, I(d) is measured as a function of d (i.e. for different anode-cathode separations), while the voltage is increased in proportion to d, so as to keep the electric field, and therefore α, constant. A typical graph of I(d) against d is shown in Fig. 1. The initial part of the graph is exponential, and therefore appears as a straight line when a logarithmic scale is used on the y-axis. For large d, however, the graph becomes increasingly steep, and tends to infinity when the denominator of equ. (2) becomes zero, i.e. at the value of d = db given by

γ(e^{αd} - 1) = 1.    (3)

This is known as the breakdown criterion, and describes a transition to a regime that is not accessible to steady state experimental measurement.

Fig. 1. Evaluation of equ. (2) for the parameters I0 = 1, α = 5, γ = 0.0065
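For readers who would like to reproduce Fig. 1, the following is a minimal sketch (in Fortran, though any language would do) that evaluates equ. (2) for the parameters of the figure and also prints the breakdown separation db obtained by rearranging equ. (3) as db = ln(1 + 1/γ)/α. The program layout and variable names are purely illustrative.

    ! Minimal sketch: evaluate equ. (2) with the Fig. 1 parameters
    ! (I0 = 1, alpha = 5, gamma = 0.0065) and print the breakdown
    ! separation d_b from equ. (3).  Units are arbitrary, as in Fig. 1.
    program townsend_growth
      implicit none
      real :: i0 = 1.0, alpha = 5.0, gamma = 0.0065
      real :: d, ead, db
      integer :: k
      ! equ. (3) rearranged: d_b = ln(1 + 1/gamma) / alpha
      db = log(1.0 + 1.0/gamma) / alpha
      print '(a,f6.3)', 'breakdown separation d_b = ', db
      do k = 1, 20
         d = db * real(k) / 21.0            ! sample separations below d_b
         ead = exp(alpha*d)
         print '(f6.3,es12.4)', d, i0*ead / (1.0 - gamma*(ead - 1.0))
      end do
    end program townsend_growth

With these parameters db comes out at just over 1.0, which is where the curve of Fig. 1 shoots off to infinity.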

Temporal growth

Up to this point in the theory, the mathematics does not appear sufficiently complicated to require the assistance of a computer. There is some scope for computation in applying non-linear curve-fitting to data described by equ. (2), so that the experiments yield values of α and γ, together with estimates of their uncertainty (errors). This is an interesting topic in its own right, but it is not the main computational project that we are interested in. We ask what happens when the electrode separation is greater than db, or when the voltage is increased until it is just greater than the value needed to satisfy equ. (3). In this case, a steady state cannot exist. As soon as the initiatory current I0 is switched on (or as soon as the voltage is applied between the electrodes), the current grows extremely rapidly, eventually exceeding the capacity of the power supply to keep the voltage constant. To study this process (temporal growth of ionisation), we need to introduce time into the equations, together with the average velocities of the electrons, we, and positive ions, wp. The equations basically describe the steady motion of particles in the positive and negative directions, together with the primary ionisation process, which increases the numbers of both types of particles. Representing the number densities of electrons and ions by ne(x, t) and np(x, t), the equations take the form

∂ne/∂t = -∂(we ne)/∂x + α we ne,    (4)

∂np/∂t = ∂(wp np)/∂x + α we ne.    (5)

In addition, we have a boundary condition at the anode:

np(d, t) = 0,    (6)

indicating that no positive ions are emitted from the anode, while at the cathode we have, for example

we ne(0, t) = γ wp np(0, t),    (7)

(or a similar equation representing secondary emission due to the arrival of photons at the cathode). There is also a need for some initiatory mechanism, which could be as little as a single electron, otherwise equ. (4) - (7) have the trivial solution ne ≡ np ≡ 0.

Equ. (4) and (5) show a partial symmetry between the electrons and the positive ions, but the ionisation term is the same in both equations, which breaks the symmetry and makes the solution more difficult. Various attempts [2,3,4] were made in the 1930s and 1940s to solve these equations. We can obtain a rough estimate as follows [5]. Equ. (3) can be regarded as a replacement condition: if one "primary" electron leaves the cathode, then μ, defined by the left-hand side of equ. (3), gives the average number of secondary electrons produced by the resulting avalanche. Clearly, if μ > 1, subsequent generations of electrons will increase in the ratio 1 : μ : μ², … The time between generations, τ, is of order d/we when the secondary emission is due to photons, and d/wp when it is due to positive ions. (It is not difficult to obtain more accurate expressions, but these are near enough for the present discussion.) At a time t, the number of generations will be t/τ, and the current will be proportional to μ^{t/τ} = e^{λt}, where λ = (ln μ)/τ. This is an "exponential" increase (with the correct meaning of the word!). Inserting typical values of the parameters into this formula, we can easily see how a significant increase in current can occur in times of order µs to ms.
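To make the last remark concrete, here is a small Fortran sketch of the estimate. The parameter values (d = 1 cm, μ = 1.1, we = 2×10^7 cm/s, wp = 10^5 cm/s) are merely assumed figures of the right order of magnitude, not taken from any particular experiment.

    ! Sketch of the generation-growth estimate: lambda = ln(mu)/tau, with
    ! tau = d/w.  All parameter values below are illustrative assumptions.
    program growth_estimate
      implicit none
      real :: d = 1.0          ! gap length (cm)
      real :: mu = 1.1         ! secondary electrons per avalanche
      real :: we = 2.0e7       ! electron velocity (cm/s)
      real :: wp = 1.0e5       ! positive-ion velocity (cm/s)
      ! time for the current to grow by a factor of 1e6 is ln(1e6)/lambda
      print '(a,es10.3,a)', 'photon secondaries: ', log(1.0e6)*(d/we)/log(mu), ' s'
      print '(a,es10.3,a)', 'ion secondaries:    ', log(1.0e6)*(d/wp)/log(mu), ' s'
    end program growth_estimate

The two cases give a million-fold growth of current in a few µs and a few ms respectively, which is the range quoted above.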

The definitive work in solving equ. (4) - (7) was done by P M Davidson in a series of papers beginning in 1947 [6-10]. He used the method of Laplace transforms, and obtained the solution in the form of a contour integral. This could be evaluated by summing the residues at all the singularities of a complex function, leading to an infinite series of exponential terms. Evaluating the solution numerically is a significant task and needs the help of a computer, but this has generally not been done. The reason is that the exponential terms contain one real, increasing term, and all the others are damped relative to this one. In the experiments being done at the time, it was not possible to observe small, rapidly changing currents. By the time the current had increased to a measurable value, the solution was described to a good approximation by the real exponential term alone, and the evaluation of this is a much simpler problem.

Space Charge and Field Distortion

Where numerical computation becomes important is for currents greater than a few milliamps. The numbers of electrons and ions present are then sufficient to produce an appreciable electric field of their own - a phenomenon known as field distortion. To describe this, we need to add another equation to the above set, namely Poisson’s equation

∇·E = k(ne - np),    (8)

where E is the electric field, and k is a constant involving ε0, the permittivity of free space. (It is not equal to ε0 because ne and np are numbers of particles per unit volume, rather than the charge density in coulombs per m³, and the standard sources of data on ionisation coefficients and velocities make it more convenient to use cm rather than m.) Coupled with this, we have empirical relations expressing α, we, wp and γ as functions of E.

The whole set of equations (4) - (8), plus the empirical relations, form a non-linear system, and methods like Laplace transformation no longer work. A computational approach was therefore required if we were to have any hope of predicting the growth of current over the six orders of magnitude from ~mA to ~kA.

Preliminary work using calculators

In 1961 it was not easy to gain access to a suitable computer, so some preliminary investigations were carried out, by myself and A J Davies, to see how equ. (4) - (8) would best be treated. We divided the space between the cathode and the anode into 20 intervals, and tried various finite-difference forms of the equations, to advance an arbitrary initial distribution of charges through a small time-step. We followed the development of the solution through one or two time-steps only, since the work had to be done on calculators. There was no intention of doing all the work this way, because thousands of time-steps would have been necessary, but the calculations very quickly gave us a feeling for truncation errors, instabilities, etc., and forced us to write out the steps systematically, which would prove invaluable in the eventual programming. Working in more than one dimension would have been inconceivable at this time, so the equations were expressed in terms of x and t only. This is not too significant for equations (4) - (7), but Poisson's equation (8) is another matter. Put simply, a one-dimensional version of the current-flow equations is permissible because the current flows mainly in one direction, while a one-dimensional version of Poisson's equation is not permissible because the field around a charge points outwards (or inwards) in all directions. Overcoming this difficulty was dealt with rather later, when some computational experience had been obtained.

We quickly discovered that simple forms of finite-difference method were often inaccurate and unstable, and that a higher order of accuracy could be obtained using the method of characteristics. This is effectively no more than following the tracks of the charged particles and working out how their density varies as they move. For example, the electrons at a position x at time t + Δt must either have been present at position x - weΔt at time t, or have been generated somewhere along the path between these two points. This method yielded the higher accuracy without leading to instability. When access to a proper computer eventually became available, we were well placed to write a program that had a reasonable chance of success.
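As an illustration, the following Fortran sketch advances a pulse of electrons across the 20-interval mesh using the characteristics step just described. It assumes a constant field, so that α and we are constants, and it chooses Δt so that the electrons travel exactly one mesh interval per step, which removes the need for interpolation; the real calculation enjoyed neither simplification.

    ! One-interval-per-step sketch of the method of characteristics:
    ! ne(x, t+dt) = ne(x - we*dt, t) * exp(alpha*we*dt).
    program characteristics_step
      implicit none
      integer, parameter :: m = 21        ! 21 mesh points (20 intervals)
      real :: ne(m), alpha, we, h, dt
      integer :: i, step
      alpha = 5.0;  we = 1.0              ! illustrative constant values
      h  = 1.0 / real(m - 1)
      dt = h / we                         ! electrons cross one interval per step
      ne = 0.0;  ne(1) = 1.0              ! a pulse of electrons at the cathode
      do step = 1, m - 1
         do i = m, 2, -1                  ! sweep backwards to avoid overwriting
            ne(i) = ne(i-1) * exp(alpha*we*dt)
         end do
         ne(1) = 0.0                      ! no secondary emission in this sketch
      end do
      print '(5es12.4)', ne
    end program characteristics_step

The pulse arrives at the anode multiplied by e^{αd} = e^5, exactly as equ. (1) requires, and without the smearing and oscillation that naive difference schemes produce.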

Available calculators

For this early work, we had machines like the Marchant calculator [11] (made in the USA). We never used the hand-operated version, but Physics had some of the electric version, which, unfortunately, had to be shared between all the research groups. The Marchant was a "full keyboard" machine, in which an 8×10 matrix of input keys also acted as a storage register, while the results were accumulated in a long carriage at the top of the machine, similar to that of a typewriter. Later we had a couple of Anita [12] calculators, which were ground-breaking in that they were fully electronic and made in the UK. Their layout was similar to the Marchant, without the moving carriage. Various other machines were used, like those by the Swedish firm Facit [13], which had a reduced keyboard (i.e. it used serial input), and an interesting printing calculator, which was rather slow and worked on a reciprocating principle. The electric Facit was forever getting jammed (and it was not because we were dividing by zero, or trying to find √-1). The Anita was very nice to use, and silent. The display consisted of neon tubes with the digits 0 - 9 made out of wire profiles. Each of these was made to glow by applying a voltage to the appropriate pin of the tube. These machines were not error-free, often because ageing of the tubes could make one of the numbers jump to zero. This error was not obvious, in contrast to a mechanical machine, in which it was obvious when the mechanism was jammed. The instruction book recommended, for example, that following a division (and after writing down the answer), the multiply sign should be pressed and the register checked to make sure that the original dividend reappeared. When the Vivian building was constructed (and the plan was that this would be exclusively for Physics!) a computing room was included at the back of the 7th floor. As only mechanical calculators were envisaged, the room was provided with sound insulation. By the time the room was in use, calculators had become silent, and it was not much longer before everyone had their own basic calculator and did not need to go to a special-purpose room to do their calculations.

Computing experience at Swansea

All this preparatory work was done in 1961-62. There was some talk about the College getting a mainframe computer, but no guarantee that this would be available during the three years of my PhD project. Help came from the direction of the Department of Chemical Engineering. For many years, they had been modelling industrial processes using an analogue computer. This type of machine uses high-stability electronic amplifiers to carry out the mathematical processes of summing, inverting and integrating. [If you would like to try out the concepts of analogue computation, a digital simulation can be downloaded.] Typically one amplifier would be needed for each stage of a process (in simulating a chemical process, the output of the amplifier was a voltage proportional to the amount of a substance present at that stage). Such a machine is not ideal for solving a problem like ours, because we would need to model the equations (4), (5) and (8) at every one of the 21 points into which the discharge region is divided, and 21 is about the smallest feasible number. At each of these points, the value of ne would be present at the output of one amplifier, another would be needed for np, another for E, and at least three more would be needed to compute α, we and wp as functions of E. That is at least six amplifiers per point, or well over a hundred in all. Taking the digital approach, on the other hand, the number of variables would be limited only by the memory of the computer, and a few kilobytes would be enough to do some useful calculations.

The Oxford Computing Service

The computing expert in Chemical Engineering was Dr R Wood, and he had acquired some experience of running engineering problems on a digital machine in the Computing Laboratory [14] at Oxford University. This was the Ferranti Mercury computer, programmed in Mercury Autocode, developed by Ferranti and the Computing Department at Manchester University. Its memory consisted of magnetic signals on the surface of a rotating drum, and the format of the drum was particularly suited to arrays with a length of ~21, which is what we needed. Dr Wood gave a series of seminars and introduced members of other departments to the Oxford computing service, which was overseen by Professor L Fox. By now, our calculations were well formulated and it didn't take long before we had a working program. Starting with a smooth distribution of charge, we were able to follow its growth over a large number of time-steps. The total current showed the expected exponential increase with time, and the distribution of charge remained smooth, not showing the uncontrolled oscillations that are the indication of instability.

When the current reached sufficiently large values, the concentration of charge produced a significant perturbation of the electric field, as a result of which the current started to increase faster than exponentially [15]. This was the transition to the electric spark, exactly what we had been looking for. Programming the Mercury computer involved writing out the instructions on coding sheets and sending them off by post to 9 South Parks Road, Oxford. The results would come back a few days later as a roll of print-out in a moulded cardboard box. We would then have to return the box with a message like "Please change line xxx to xxx and run again".

Some results at last

The first real calculations were intended to provide a theoretical explanation of measurements of formative time-lags, i.e. the times tf that elapse between applying a voltage and the subsequent "breakdown". It is not critical how breakdown is defined, as the final stages of current growth occur very quickly, so we could use the point where the current reaches 10 mA, or when the voltage on the power supply drops by say 10%, or several other criteria. Good agreement [16] was obtained, but only when a suitable value was assumed for the radius of the discharge. The reason for this is that equ. (8) involves numbers of charged particles per unit volume, while the total current flowing can be calculated by integrating numbers of charged particles per unit length. Therefore, for a given value of the current, the distortion of the field can be made as large as we like simply by reducing the assumed radius. This is not really satisfactory, and the situation was improved by going to a "1½-dimensional" method. Combining the ideas of a finite radius and the division of the discharge space into ~20 steps, we realised that the charge distribution could be considered as a series of discs. The electric field due to a disc of charge could be calculated at any distance along its axis (tending to the inverse-square behaviour at large distances), and summing all the contributions gave the resultant field. This technique made the occurrence of breakdown much less dependent on the assumed radius, so that we could be more confident about the significance of any theoretical-experimental agreement.
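A sketch of the disc summation may make the "1½-dimensional" idea clearer. The on-axis field of a uniformly charged disc is a standard result of electrostatics, E = (σ/2ε0)(1 - |z|/√(z² + r²)) at a distance z along the axis; for z large compared with the radius r this tends to σr²/(4ε0z²), the inverse-square behaviour mentioned above. In the Fortran fragment below, the charge values and the radius are placeholders, chosen only to show the structure of the calculation.

    ! Sketch of the "1.5-dimensional" field calculation: the net charge at
    ! each of the 21 mesh points is treated as a uniform disc of radius r,
    ! and the on-axis fields of all the discs are summed at each point.
    ! The surface-charge values sigma(i) and the radius r are placeholders.
    program disc_field
      implicit none
      integer, parameter :: m = 21
      real, parameter :: eps0 = 8.854e-12
      real :: x(m), sigma(m), e(m), r, z
      integer :: i, j
      r = 0.1                             ! assumed discharge radius
      do i = 1, m
         x(i) = real(i - 1) / real(m - 1)
      end do
      sigma = 0.0
      sigma(m/2 + 1) = 1.0e-9             ! a single charged disc mid-gap
      do i = 1, m                         ! space-charge field at each point
         e(i) = 0.0
         do j = 1, m
            z = x(i) - x(j)
            if (z /= 0.0) e(i) = e(i) + sign(1.0, z) * sigma(j)/(2.0*eps0) &
                                        * (1.0 - abs(z)/sqrt(z*z + r*r))
         end do
      end do
      print '(3es12.4)', e
    end program disc_field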

Interlude in PhD work

It was not far into the academic year 1962-63 before we found that the volume of work was too much for remote programming at Oxford (and of course other users had also increased the demand on the service). It was becoming more important to do the work on-site, but there was no prospect of an imminent arrival of the College's own computer. In 1963-64, Professor John Parry, the then Principal of the College, was taking a sabbatical year, and the head of Physics, Professor F Llewellyn Jones, was appointed acting principal for a year. It was suggested that I should apply for a one-year temporary appointment in Physics so as to make up the staff numbers for that year, and another incentive was that, when I returned to my PhD work full-time in October 1964, the long-expected computer would be available. In addition, my supervisor, Professor Davidson, was appointed head of the department of Physics for this year and would not have been able to devote so much time to research.

Swansea’s first digital computer

In preparation for the new computer, interested staff and research students attended a series of lectures on the design of the machine and the chosen programming language, which was Fortran. The computer was an IBM 1620, which was unusual by today's standards in using decimal rather than binary arithmetic, and in using discrete transistor circuits. It had a ferrite-core memory storing 20,000 decimal digits, and did addition and multiplication via look-up tables, not in a hardware arithmetic unit. Because of the limited memory, compilation and execution had to be done in two separate operations. The source code, on punched cards, was inserted into the machine with the compiling program in operation, and another, larger, deck of cards was produced. In a subsequent run, the compiled code was inserted and results hopefully came out. While a program was being developed, each compiled card deck would often be used just once before being put in the bin. Since the compiled program was not in binary (remember the machine was decimal-based) it was not too difficult to read it, and simple modifications like changing a sign in a formula or changing the value of a constant could sometimes be made without scrapping the whole deck. Later, a compile-and-run version of Fortran, called Gotran, was obtained, speeding up the operation and generating much less scrap paper.

The IBM 1620 was installed in a single-storey building situated where there is now a car park to the north of the Taliesin Annexe (then known as the New Arts Theatre). The computer building had some previous history, as it was the main refectory before the construction of Fulton House. In fact, everyone knew it as the "New Refectory" because the "Old Refectory" was in the Abbey building - the long room along the west side of the Abbey, which had been the Orangery in a previous era. When the new Mathematics building (now known as Glyndwr) was built, the computer centre was moved to the ground floor, where Health Science’s Video-Conferencing room is now housed.

The output on the 1620 was via an electric typewriter. This was an IBM office machine (the traditional design, not the IBM "golf-ball" type) but rather more rugged, with a long carriage and a feed for continuous stationery. Our work, being a sort of simulation, generated a lot more output than input (in contrast to data analysis, where a lot of data are input and relatively few results come out). The nature of the problem also made scheduling rather difficult. We wanted to continue the calculations until the current growth became faster than exponential, but we did not know when this would occur (obviously, because the whole object of the calculation was to find this formative time lag). Therefore, it was possible for our allocation of time for a "run" to run out just before the interesting stage was reached. The whole of that run would then be wasted. The typewriter output turned out to be rather limiting. We tried to reduce the amount of output by, for example, printing the charge density distribution at selected times, rather than at every time-step. It would have been very good if we had been able to output the results on to a magnetic disc or tape file, from which selected results could be printed. But reading a disc would have needed another machine, which we didn't have (and if we had owned another machine, we would have been doing the computations on it anyway!).

Back to remote programming

Clearly, this long-awaited central College computer was rather feeble by today's standards - imagine how much demand there would be now for a machine with about 8 kB of memory costing nearly £1,000,000 in today's money. It was able to provide the results for my PhD thesis, but the gas-discharge research was capable of extension in several different directions, which called for something more powerful. The experimental work too had changed. Instead of measuring time-lags, a new technique of streak-photography had been introduced. The electric discharge emits light, and an electronic image intensifier is able to record this well before the bright spark occurs. By including deflecting plates in the intensifier [17], it was possible to sweep the image across the viewing plane, resulting in a photograph with distance plotted vertically and time plotted horizontally. Electronic sweeping allowed movements occurring in less than a microsecond to be resolved. For the purpose of experimental-theoretical comparison, a graphical output would have been ideal - i.e. the program should provide a simulated streak photograph for direct comparison with the experimental photograph.

We had an opportunity to use the Ferranti Atlas I Computer [18] at the Rutherford Laboratory at Chilton near Oxford. This computer, programmed in Atlas Autocode (similar to Algol) and later in Fortran, had the ability to output results on microfilm (intended among other things for photo-type-setting). We calculated the light output at a position x and time t, and used random numbers to decide whether a point should be plotted at the position (x, t), the probability of doing so being proportional to the light output. The average density of points was therefore proportional to the light output, and the picture had a very realistic "grainy" appearance [19], just like the experimental one! (One should not be too impressed by this, however - someone with a shaky hand might draw a very realistic-looking map of the coastline of Norway, but it would be very unwise to rely on it for navigation.) Atlas also had a low-speed back-up memory, which enabled us to dump all the variables periodically. If a computer run terminated just before it started to get interesting, then we could re-start it from the last dump.
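The plotting trick is easily sketched. In the Fortran fragment below, a character is "plotted" at each (x, t) with probability proportional to the light output there; the light function itself is an arbitrary stand-in (a Gaussian pulse crossing the gap at constant speed), since the real program took its values from the discharge calculation, and rows of characters stand in for the microfilm plotter.

    ! Sketch of the random-dot plotting: a point appears at (x, t) with
    ! probability proportional to the light output.  The 'light' function
    ! here is an invented stand-in, not the discharge model.
    program streak_sketch
      implicit none
      integer, parameter :: nx = 24, nt = 60
      real :: light, u, x, t
      integer :: i, j
      character(len=nt) :: row
      do i = nx, 1, -1                    ! distance plotted vertically
         x = real(i - 1) / real(nx - 1)
         do j = 1, nt                     ! time plotted horizontally
            t = real(j - 1) / real(nt - 1)
            light = exp(-200.0*(x - t)**2)   ! a luminous front crossing the gap
            call random_number(u)
            row(j:j) = merge('*', ' ', u < light)
         end do
         print '(a)', row
      end do
    end program streak_sketch

The output has exactly the grainy look described above: dense dots where the light is strong, isolated specks in the wings.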

Subsequent work on gas discharges was carried out on Atlas until improved facilities [20] became available at Swansea. In 1968 and 1969 I used the CDC6600 at CERN, Geneva [21], while a Visiting Scientist there, and carried out the first computations [22] involving three space dimensions. These calculations clarified the mechanism by which an electric spark could form along the track of a high-energy particle, even when this track was not parallel with the applied electric field. If we wanted to repeat these calculations now, then any desk computer would cope with them with no difficulty at all. As would be expected, as the power of computers improved, so our expectations [23] were enlarged, and the technological interest now is in discharges in complicated geometry (such as sparking over the insulators of high-voltage transformers and power lines), or with no electrodes at all [24].

Personal Calculators

The situation had changed enormously between about 1958 and 1968. At the beginning of this decade, calculations in one's own room or research lab were done with mathematical tables and slide-rules; mechanical calculators were departmental facilities. At the end of this decade, electronic calculators had become cheap enough for members of staff to have one each, although paying about £200 for a basic four-function calculator now seems very expensive. The first calculators had l.e.d. displays and therefore used a fair amount of battery power; they could be used while plugged in to a mains adapter, or could run on their rechargeable batteries for a couple of hours. Later ones had l.c.d.'s, and drained their batteries much more slowly.

As departmental facilities, Physics had a succession of computers. The LSI 11 was obtained for the automatic recording of experimental results. It had a 4 kB memory, of which 1 kB was non-volatile ferrite-core memory, and the other 3 kB was dynamic memory. When we started to use it, we found that the "refresh" process applied to the dynamic memory interfered with the acquisition of real-time data so that, for these experiments at least, we had to do any programming in 1 kB. The machine was started by entering the first instruction on a row of switches. This told it to read in a punched paper tape. The instructions on the tape were loaded into the memory locations immediately following those controlled by the switches. Therefore, when the computer had obeyed the instruction on the switches (to read an instruction from the tape), its next instruction was the one that had just been read in. In this way a short program could be input, which was able to do something more ambitious, like reading a longer program from a tape. The process was likened to "lifting oneself up by one's own bootstraps", which is why we still often talk about "booting up" a computer. Paper tape was comparatively fragile and was frequently misread. Towards the end of the life of the machine, magnetic tape units became available. We found these a lot better than paper tape, but we soon became well acquainted with the problems of locating specific programs, data, or blank regions on a magnetic tape.

We then obtained a ComCen machine with two 7-inch floppy disc drives. The disc-operating system (DOS) made a great difference to the ease of use of the machine. It was used for simple numerical work, for example using spreadsheets, some automatic data-acquisition via an analogue/digital converter, and a bit of word-processing (printing out on a daisy-wheel typewriter). A lot of the construction was done by ourselves. The processor chip was a Zilog Z80, and we gained some skills in programming this in assembly code. The same chip was used in the Sinclair computers, which were the first to be successfully marketed for home use. The BBC computer (made by Acorn) used MOS Technology's 6502 chip (designed by former Motorola employees), which seemed to incorporate a very similar design philosophy to that of the Z80. The Dragon computer used the Motorola 6809, which was better than the 6502 in that it made more use of 16-bit operations. All these machines suffered from the disadvantage of using cassette-tape memory, although disc drives did become available subsequently. The Dragon computer was manufactured in Port Talbot and had a loose connection with the University. A few were bought by the Physics department, and a few members of staff bought their own to use at home. Although marketed as a 32 kB machine, it had a total of 48 kB of memory: the first 32 kB were RAM and the top 16 kB were ROM containing the operating system and the Basic interpreter. This design proved to be an unwise choice when the need arose for a larger memory to accommodate a disc operating system, because the available RAM was then fragmented by having the ROM area in the middle of it (and changing this would have made existing software incompatible). This, together with some problems in sourcing the disc drives, must have contributed to the delay in introducing the upgraded Dragon 64, and the machine never sold in the numbers required to keep the company going. With more timely development, the Dragon might have become as good as, but considerably less expensive than, the early Apple computers, which also used the Motorola family of chips.

The use of graphics has always been desirable. At the time of the IBM 1620, a program was available to scan a photograph and produce output on the typewriter, producing a half-tone picture by choosing characters with different amounts of ink - from a full stop for the lightest grey up to M, $ or @ for the darkest grey; this was a great attraction on Open Days. I even saw someone trying out an early version of Space Invaders using the typewriter output. The use of microfilm graphics on the Atlas computer has already been mentioned. The first graphics machine in Physics was a Tektronix display, connected to the LSI 11. This produced higher resolution than a TV screen by using vector-drawing hardware. The whole computer unit including the Tektronix was connected to the College mainframe and had the user name Pytektronix. As most of the users were real people (my user name, for example, was Pyevanscj, i.e. putting the initials at the end), the Tektronix machine occasionally had mail sent to it addressed to "Dr I X Tektron"; we never discovered where he had graduated. Coupled with this machine we had a graph plotter, which we could control from software using "pen down", "pen move" and "pen up" instructions. One project that used these graphics facilities was an early study of computerised radiotherapy planning, in collaboration with Singleton Hospital.
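The half-tone scheme is simple enough to reconstruct. In the sketch below, each grey level selects a character with roughly the right amount of ink; only the end-points of the ramp (the full stop, and M, $ and @) come from the description above, while the intermediate characters and the test image are guesses.

    ! Sketch of the typewriter half-tone: grey levels are printed as
    ! characters with increasing amounts of ink, '.' for the lightest
    ! grey up to '@' for the darkest.  Ramp and test image are invented.
    program half_tone
      implicit none
      character(len=*), parameter :: ramp = '.:-=+*M$@'
      integer, parameter :: n = 30
      real :: g
      integer :: i, j, k
      character(len=n) :: row
      do i = 1, n
         do j = 1, n
            ! test image: a bright disc on a dark background
            g = max(0.0, 1.0 - real((i-n/2)**2 + (j-n/2)**2)/real((n/3)**2))
            k = 1 + int((1.0 - g)*real(len(ramp) - 1))   ! brighter -> less ink
            row(j:j) = ramp(k:k)
         end do
         print '(a)', row
      end do
    end program half_tone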

PCs, like most of the other computers mentioned, appeared first as departmental machines, before becoming inexpensive enough to be as "personal" as the name implies. An important feature, of course, is the ease of networking these machines. We have now gone through four or five types of processor chip and even more versions of the operating system, from DOS and Windows v3 upwards. Processor speeds have increased by more than a factor of 50 but, at every upgrade, exactly the same complaint is heard: "Why does this machine take so long to boot up?" Graphical display is now the rule rather than the exception, and one hardly ever sees a DOS screen unless an error occurs.

Programming languages

For programming the mainframe computers, Fortran has been the language of choice. Various dialects of Fortran are also available on PCs, but the early machines were supplied with Basic, originally conceived as a "cut-down" version of Fortran. Basic, being interpreted rather than compiled, executed programs very slowly and, with a view to speeding things up, I used Pascal for a few tasks, as this was the only compiler-based language I could easily obtain at the time. It was easy to see why this was a favourite language for computer education, as it encouraged the programmer to write nested structures and avoid the use of "go to" instructions. This was Borland's Turbo Pascal, running under DOS. Later we were able to use Turbo Basic from the same company, and we found that this achieved excellent speeds in both compilation and execution. It also facilitated good practices in program design and layout. Since the introduction of Windows, Visual Basic has been adopted for many programming tasks. Although one can criticise the rather bulky operating system, VB avoids the need to re-invent commonly-used routines, e.g. for opening a file-saving menu, and gives programs a very familiar professional appearance.

References

  1. J. S. Townsend, Phil. Mag. 6, 1903, 358 & 598
  2. M. Steenbeck, Wiss. Veroff. Siemens-Kons. 9, 1930, 42
  3. W. Bartholomeyczyk, Z. Phys. 116, 1940, 235
  4. H. L. von Gugelberg, Helv. Phys. Acta 20, 1947, 250 & 307
  5. W. Legler, Z. Phys. 140, 1955, 221
  6. P. M. Davidson, Brit. J. Appl. Phys. 4, 1953, 173-5 (appendix to ‘Formative time lags in the electrical breakdown of gases’ by J. Dutton, S. C. Haydon and F. Llewellyn Jones, 170-5)
  7. P. M. Davidson, ‘Growth of current between parallel plates’, Phys. Rev. 99, 1955, 1072-4
  8. P. M. Davidson, ‘Growth of current between parallel plates’, Phys. Rev. 106, 1957, 1-2
  9. P. M. Davidson, ‘Theory of temporal growth of ionization between parallel plates in the inert gases’, Proc. Phys. Soc. 80, 1962, 143-50
  10. P. M. Davidson, ‘The statistics of ionization currents in gases between parallel plates’, Proc. Phys. Soc. 83, 1964, 259
  11. Marchant Calculator
  12. Anita Calculator
  13. Facit Calculator
  14. Oxford Computing Laboratory
  15. A. J. Davies, C. J. Evans and F. Llewellyn Jones, ‘Electrical breakdown of gases: the spatio-temporal growth of ionization in fields distorted by space-charge’, Proc. Roy. Soc. 281, 1964, 164-83
  16. A. J. Davies, C. J. Evans, C. Grey Morgan and W. T. Williams, ‘Space-charge controlled ionization growth in hydrogen’, Brit. J. Appl. Phys. 16, 1965, 1797-803
  17. Image-intensifier sweep camera
  18. Atlas I Computer
  19. A. J. Davies and C. J. Evans, ‘The mechanism of high pressure gas breakdown’, Proc. 10th Int. Conf. on Phenomena in Ionized Gases, Oxford, 1971, 172
  20. A. J. Davies and C. J. Evans, ‘The computation of the growth of a gaseous discharge in space-charge distorted fields’, Computer Phys. Commun. 3, 1972, 322-33
  21. CDC6600
  22. C. J. Evans, ‘The development of inclined sparks in a track-following spark chamber’, Nucl. Instr. Meth. 69-1, 1969, 61-9
  23. A. J. Davies, C. J. Evans and P. M. Woodison, ‘Simulation of the growth of axially symmetric discharges between plane parallel electrodes’, Computer Phys. Commun. 14, 1978, 287-97
  24. C. J. Evans and Yosr E. E-D Gamal, ‘Laser induced breakdown of helium’, J. Phys. D: Appl. Phys. 13, 1980, 1447-58