ZZZ - The podcast that helps you sleep

The ZZZ podcast is designed to help you fall asleep. Today we are learning how to program the Z80 microprocessor by Zilog.

Show Notes

Z80 microprocessor by Zilog
 
https://zzz.media/15

Facebook - Instagram - YouTube - Patreon

Background Music Provided by THE SLEEP CHANNEL from Spotify
https://sptfy.com/MNb7

Looking for a Headset you can wear while you sleep?
https://www.sleepphones.com/
Try out SleepPhones today!
★ Support this podcast on Patreon ★

What is ZZZ - The podcast that helps you sleep?

The ZZZ podcast is the podcast to help you sleep. We will read you something not very interesting that you can listen to as you fall asleep each night.

Welcome to today's Triple Z... The Triple Z podcast is a daily recording that you can use to help you fall asleep each night. Just turn down the volume, lie back and enjoy as you fall asleep.

Today you will be learning about how to program the Z80 8-bit microprocessor introduced by Zilog. It was the startup company's first product. The Z80 was conceived by Federico Faggin in late 1974 and developed by him and his 11 employees starting in early 1975. The first working samples were delivered in March 1976, and it was officially introduced on the market in July 1976. With the revenue from the Z80, the company built its own chip factories and grew to over a thousand employees over the following two years.

The Zilog Z80 is a software-compatible extension and enhancement of the Intel 8080 and, like it, was mainly aimed at embedded systems. Although used in that role, the Z80 also became one of the most widely used CPUs in desktop computers and home computers from the 1970s to the mid-1980s. It was also common in military applications, musical equipment such as synthesizers, like the Roland Jupiter-8, and coin-operated arcade games of the late 1970s and the early 1980s, including Pac-Man.

PRACTICAL MICROCOMPUTER PROGRAMMING:

THE Z80

W.J. WELLER

This third volume of the PRACTICAL MICROCOMPUTER PROGRAMMING series is concerned with detailed assembly language programming procedures for the Z80 microcomputer. In 18 chapters and four appendices it supplies everything necessary to write and debug Z80 application programs, including an assembler and debugging monitor. Paper tape object copies of this software are supplied free to the purchaser of this book with the return of the coupon in the back.

The 18 chapters of text cover all of the fundamental assembly level programming techniques, reinforced by more than 100 formal, tested examples which illustrate the techniques being discussed.

About the Author

Walt Weller is an application software consultant specializing in industrial, medical and educational uses of small computers. In addition to this book he is the author of Practical Microcomputer Programming: The M6800 and Assembly Level Programming for Small Computers, Lexington, 1975, and coauthor of Practical Microcomputer Programming: The Intel 8080 and An Editor Assembler System for 8080 8085 Based Computers. He currently resides in the Chicago area.

PREFACE

This third volume of the Practical Microcomputer Programming™ series is concerned with detailed assembly language programming procedures for the Z80® microcomputer manufactured by Zilog, Incorporated, of Cupertino, California. Its purpose is to provide the reader with the necessary information and software tools to make effective use of the Z80. The software, an assembler and debugging monitor, is given in full source form in appendices, and the purchaser of this book will be granted license to make copies for his personal or academic, but not commercial, use when the coupon at the back of the book is fully filled out and returned. To save the tedium of retranscribing the programs, paper tape object copies of both assembler and debugging monitor will be sent to the purchaser at no further cost upon receipt of the filled out coupon.

From one viewpoint or another every computer design contains flaws or features which might be viewed as awkward. The Z80 is no exception to this. Its weak points notwithstanding, a detached observer must conclude that, from a programming point of view, the Z80 is the most powerful eight-bit microcomputer yet to appear. Designed as a superset of its popular predecessor the 8080, it offers enhancements over the 8080 which remove almost all of the objections to the programming characteristics of the earlier machine. It has a flexible, powerful instruction set which allows it, in skilled hands, to perform tasks of any significant size and complexity in substantially less memory than competitive microcomputers. With a single minor exception, its binary instruction set is a superset of the 8080's, which allows software developed for the 8080 to run unchanged on the Z80, an important consideration when contemplating a machine upgrade.

Having produced this powerful device, capable of exploiting the useful programming features of the 8080 and circumventing the clumsy ones, Zilog took the surprising step of making the Z80 completely incompatible with the 8080 at the symbolic level. A bizarre and verbose language was chosen to describe the machine in manufacturer's literature. While this language has undoubted merits in terms of similarity to formal notation used in system software design, there are overpowering objections to it from the point of view of applications programming. What makes a good formal design notation does not make a workable programming language. The needs of the two environments are completely different. Further, this choice of language voids the body of expertise built up so slowly and painfully with the 8080, forcing Z80 users to undergo a completely unnecessary and wasteful relearning task. Finally, as practical objections, the language requires about a third again as many keystrokes per line as conventional languages, and processors capable of assembling this language are not generally available. The consequence of this last fact is that most small system users are employing the Z80 as an 8080, the increment in power going mostly unexploited.

It can be argued that the language specified by the hardware manufacturer forms in some sense a standard. In a field of arbitrary "standards" the manufacturer's arbitrary "standard" might be considered the least arbitrary. The originator of the hardware has the right to describe it in any terms he sees fit. Still, if one chooses to send a shipload of Swahili bibles to Sweden, one ought not to be astonished if the books do not rivet the attention of the Swedes.

In a practical way, the real standard is that which exists, that which is already in common use and with which the largest number of potential users is already familiar and comfortable, in other words, the language of the 8080 family machines. Arguments can be made, many valid, that this language is a hodgepodge of different notations having very little internal consistency. So does the English language, but it is a useful, flexible tool, and attempts to replace it with something closer to the heart's desire of a grammarian would be foolish, as Esperanto proponents have found out.

Obviously, the greatest service to the greatest number of users and potential users of the Z80 can be performed by allowing the machine to be programmed in a language compatible with that of the 8080. For this reason, the language chosen for use in this book is an extension of the 8080 language, and the assembler in appendix B processes this extension. Using this system, those familiar with the 8080 can continue to program in the language with which they have become comfortable, extending their expertise into the Z80 superset as they master each new class of instructions. The choice of language is therefore entirely pragmatic. A cross index of the mnemonics used here with those of Zilog is supplied in appendix D.

The assembler will run on any 8080 family or Z80 machine, being written entirely in the 8080 subset. A Z80 based machine is not required. The program requires somewhat less than 10K of RAM, the precise amount depending on the length of the user supplied input output routines. It will process source text into object code at about 1000 lines per minute on a system running at 2 MHz. As some readers may wish to make changes in the assembler, the methods used in it are the simplest possible, at the cost of some space and speed. While no program of this length and complexity can be certified to be free of error, the assembler and debugging monitor have been subjected to extensive use and testing and contain no known errors. If any are discovered by readers, either in the software or the text, the writer would greatly appreciate being informed so they may be corrected in future printings. Suggestions for improvements and additions are also welcome.

A number of individuals and organizations have contributed to this book, either through discussions and suggestions, testing and criticism of software or provision of materials. They are, alphabetically:

Mr. Ralph Hayford

Mr. Guy Hobart

Mr. Larry Leske

Mr. Harvey Nice

Mr. William Powers

Mr. Harold Scoblow

Mr. Albert Shatzel

Mr. Ted Singer

Mr. Mel Thomsen

Victor Comptometer Inc., Components Division

Mr. Karl Weller

Chicago

July 1978

Z80 is a registered trademark of Zilog, Inc.

THE NATURE OF THE PROGRAMMING TASK

"The idea of non-human devices of great power and great ability to carry through a policy, and of their dangers, is nothing new. All that is new is that now we possess effective devices of this kind. In the past, similar possibilities were postulated for the techniques of magic, which forms the theme for so many legends and folk tales. In all these stories the point is that the agencies of magic are literal-minded; and that if we ask for a boon from them, we must ask for what we really want and not for what we think we want. The new and real agencies of the learning machine are also literal-minded. If we program a machine for winning a war, we must think well what we mean by winning. We can fail in this only at our immediate, utter, and irretrievable peril. We cannot expect the machine to follow us in those prejudices and emotional compromises by which we enable ourselves to call destruction by the name of victory. If we ask for victory and do not know what we mean by it, we shall find the ghost knocking at our door."

To say that programming is a difficult activity is to utter a platitude. The great amount of literature and diverse opinion about methods and languages are testimony enough to the exact and exasperating character of an activity which is almost never performed correctly the first time. It is the purpose of this introductory chapter to attempt to explain to the reader why this is so, and by so explaining to help him avoid some of the pitfalls inherent in the task.

If the essence of the problem has to be condensed into a single sentence, the sentence would read something like this: programming computers is an alien psychological task. It is communication of a sort, but of a sort to which our associative mental processes are badly adapted. Learning to do it can be compared in some ways to learning a foreign language. It involves a vocabulary and a set of rules which must simply be memorized, along with the exceptions to the rules. When this has been accomplished a long and tedious period of practice follows, involving bungling and false starts. This period of practice leads by small increments to fluency in the use of the new language. If a second foreign language is to be learned the task is a bit easier, since some of the knowledge is "portable". Similarities in structure and vocabulary are noted and remembered, these similarities working as "bootstraps" in beginning with the new language.

The similarity ends here, however. Learning a foreign language is simply learning an alternate form of human communication. While the various forms of human communication look very different, they are really much more similar than different. The nuclear ideas of human communication are universal, concerned as they are with things which concern the human beings using the communication system known as language. Fundamental to human communication is the notion of implication, e.g., the remark by a Soviet politician that leaving the West to police itself in certain matters was "leaving the goat to guard the cabbage". Mr. Khrushchev did not have to say that he distrusted the West; the implied parallel with the goat and the cabbage did it for him.

Human communication with a machine is another matter entirely. There are no implications or nuances of expression. The device is, to use Wiener's expression, totally literal-minded, and dealing with a literal-minded agency is a task for which we are not by nature equipped, and our folklore is rich in stories which illustrate our awareness of it. There are many such stories but perhaps the most vivid of them is called "The Monkey's Paw" by W.W. Jacobs. In this story an elderly English couple comes into possession of a magic piece, brought back from India by a member of the British Army stationed there. The magic piece, a monkey's paw, allows them three wishes, the first of which is for a large sum of money. This wish is granted when their son is mangled to death in machinery and the insurance company pays off his life policy. The second wish is for the return of their son to life. This is granted when they are tortured by the sound of his mangled body hammering at their door in the middle of the night. The third wish is used to return the son to his grave.

The point of this and all such stories is that the magic agent is completely literal-minded, as is an automatic device like a computer. This was the characteristic of computers that bothered Wiener in the quote at the head of this chapter. Does victory mean that one more of us is left alive than of them? The logical outcome of this definition of victory is a single human left alive. Is this what we really mean by victory? I think not, and become very uneasy at the idea of offensive and defensive weapons systems under the control of programming about which I know nothing.

As Wiener says, the only difference is that we now possess effective devices of this kind. We must learn to deal with them, learning to instruct them in a totally literal way, and in this completely literal character of the instruction lies the psychological alienness of the task. If this is only well understood the reasons for the failures to solve the problem of programming difficulty become immediately obvious. While one computer language form may differ from another in its superficial aspects (compiler, assembler, etc.) all of these language forms without exception require that we conceptualize the problem in totally literal, discrete steps. If the problem to be solved can be conceived of in this way then the difficulty vanishes. After this has been done the language in which the problem is programmed is pretty much irrelevant. Different languages offer facilities for convenient expression of problem solutions of different classes. Some of these languages succeed and others fail. The most long lived and successful of the higher level languages, FORTRAN, succeeded not because it relieved the programmer of the job of correctly conceptualizing the problem to be solved, but because it addressed a class of problems (mathematical) which had been reduced to procedural form long before the invention of the first computer. Relatively speaking, they were the easy problems. FORTRAN did not perform well for the problems of business, and other language forms were invented for these applications, eventually converging into COBOL and RPG. COBOL and RPG succeeded because they addressed a set of problems which were already well defined and reduced to procedural form. All of this is by way of saying that a programming language is an effect, not a cause. It exists as a response to the development of the ability to define a class of problems in precise procedural terms. 
It is the confusion of cause and effect which has led to the appearance and disappearance of so many programming languages over the years. Most of them were cures for diseases which did not exist, or which existed in such minor form that the language which addressed them had negligible application. The central nub of the problem of programming and the nucleus of the difficulty of the activity is the need to define the problem to be solved in discrete, totally literal steps, and any system of programming which purports to relieve the programmer of this responsibility misrepresents itself.

This is most eloquently put by Norbert Wiener:

"No, the future offers very little hope for those who expect that our new mechanical slaves will offer us a world in which we may rest from thinking. Help us they may, but at the cost of supreme demands upon our honesty and our intelligence. The world of the future will be an ever more demanding struggle against the limitations of our intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves."

Norbert Wiener

God & Golem, Inc. 1964

The problem thus reduces to finding some way to match a human mind which operates in a fuzzy, associative way, with a robot which will tolerate no fuzziness and is incapable of association. A fundamental adaptation must be made by one or the other. Since the computer cannot adapt because we do not yet know how to build a machine capable of adaptation, it is the human mind which must stretch to bridge the gap. That this is possible is testified to by the many successful existing applications of computers, but it is not easy, and the road to any successful computer application is littered with potholes into which the unwary will fall. Computer programming, like any other significant skill, is built heavily on experience. Good programmers are made, like good brain surgeons and engineers, by experience: by making mistakes, finding the reason for the failure, fixing it and remembering the mistake so it isn't repeated.

It is tempting, when dealing with a particularly exasperating mistake, to claim machine failure. This may save the ego for the moment, but it is almost never true. In the writer's experience, which includes some of the early vintage vacuum tube machines, only once has an error turned out to be traceable to machine malfunction. While this may be partly due to good fortune, it remains true that errors caused by the programmer far outnumber those which might be caused by the machine.

Having so described the dimensions of the programming problem, there remains the question of getting the necessary "hands on" experience to acquire the skill. The reader is urged at this point to follow the directions in appendix A for initializing the assembler and debugging monitor. Using these two programs, the various example programs in the text can be tried out as they are studied. Only in this way can a real understanding of their functions be achieved. The operating instructions for the assembly program are in chapter 4, and those for the debugging monitor in chapter 17. Those already familiar with operations on binary quantities and the general organization of a computer can skip directly to chapter 4 and begin. If you are fuzzy on these topics by all means do not skip these chapters. Understanding their contents is vital to successful programming of the Z80 or any other computer.

"We have seen that the symbols of logic are subject to the special law x^2 = x. Now of the symbols of Number there are but two, viz. 0 and 1, which are subject to the same formal law. We know that 0^2 = 0 and that 1^2 = 1; and the equation x^2 = x, considered as algebraic, has no other roots than 0 and 1. Hence, instead of determining the measure of formal agreement of the symbols of Logic with those of Number generally, it is more immediately suggested to us to compare them with symbols of quantity admitting only of the values of 0 and 1. Let us conceive, then, of an Algebra in which the symbols x, y, z, etc. admit indifferently of the values of 0 and 1, and of these values alone. The laws, the axioms, and the processes of such Algebra will be identical in their whole extent with the laws, and axioms and the processes of an Algebra of Logic. Difference of interpretation will alone divide them. Upon this principle the method of the following work is established."

George Boole

An Investigation of the Laws of Thought 1854

The fact that computers are built of components which can exist in only two electronic states makes itself felt in almost all transactions between programmer and computer. It is the purpose of this chapter to give the beginning programmer an understanding of the workings of a two state system of arithmetic and other operations which take place within the computer. The two states may be given any desired set of names, on and off, yin and yang or Romeo and Juliet, but they are conventionally referred to as one and zero. Sometimes the zero state is called reset and the one state set.

A system for the expression of information which uses only two states may seem strange at first, but if the reader reflects on the matter he will realize that he has encountered such systems before. Traffic is controlled by a system of lights that have two states, green and red. The intermediate yellow is not a state at all, merely a warning that a change of state is about to take place. Morse code is a system of communication in which information is transmitted by means of combinations of two states, dots and dashes. An earthier form of two state communication is implied in the statement: "If the shade is up don't come in. My husband is home". Paul Revere's "one if by land and two if by sea" is another example. We deal with two state codes all the time almost without realizing that we are doing so. What is unfamiliar is the idea of expressing numerical information in a two state code.

This is the natural consequence of being born with ten fingers. There is a certain subjective "rightness" about counting by tens and computing in base ten. This feeling of "rightness" has no objective base, of course, as a look into history quickly shows. In ancient times twenty was commonly used as a number base in parts of the world in which the climate allowed people to go barefoot or wear open sandals. Some Europeans of the middle ages and later used twelve as a number base, the traces of this number base still showing in modern language in such terms as dozen and gross.

This feeling of familiarity is quite superficial, and this can be easily shown with young children who have not had enough time to accumulate the necessary prejudice. As we will see in a few pages, it is possible to count up to 31 using only five digits in base two. The five digits can be represented by the fingers of one hand. The letters A through Z can thus be set into one to one correspondence with base two numbers, A being one, etc. Even very young children, as young as seven, catch on to this system of communication very quickly. All that is required is to know how to spell the necessary words and count. The message is spelled out as a series of base two hand signals, an upraised finger being a one and a lowered finger being a zero. A seven year old can be conversing quite fluently in this system in only 15 minutes or so, though the representation of the letter "D" has caused some problems by its unfortunate coincidence with another type of hand signal.
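The hand-signal alphabet described above can be sketched in Python. This is a modern aside, not part of the original text, and the helper names are invented for illustration: each letter A through Z is numbered 1 through 26, and every such number fits in five binary digits, since five bits can count up to 31.

```python
# Five-finger alphabet: letters A..Z numbered 1..26, each number
# expressed as five binary digits (one per finger).
def letter_to_fingers(letter):
    """Return the 5-bit binary string for a letter, with A = 1."""
    n = ord(letter.upper()) - ord('A') + 1
    return format(n, '05b')

def spell(word):
    """Spell a word as a list of 5-bit hand signals."""
    return [letter_to_fingers(c) for c in word]

print(letter_to_fingers('A'))  # 00001
print(spell('CAB'))            # ['00011', '00001', '00010']
```

An upraised finger is a one, a lowered finger a zero, exactly as in the children's game described above.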

As with decimal and all other number systems the position of a digit in a binary number has place value, the values of successively higher digit positions being the successive powers of the number base. The decimal number 2937, for example, really means:

2 x 10^3 + 9 x 10^2 + 3 x 10^1 + 7 x 10^0

the value of 10^0 being one. Similarly the base two or binary number 1 0 1 0 1 has the meaning:

1 x 2^4 + 0 x 2^3 + 1 x 2^2 + 0 x 2^1 + 1 x 2^0

the value of 2^0 again being one. If the reader is unfamiliar with this zero power business it is fairly easy to show from the laws of exponents. The result of the division of one power of x by another is found by subtracting exponents:

x^a / x^b = x^(a - b)

If the two exponents are identical, the division becomes:

x^a / x^a = x^(a - a) = x^0

But since the divisor and dividend are equal the result of the division must be one, since dividing anything by itself results in a quotient of one. Any number to the zero power is therefore one.

The decimal value of the base two number shown above is easy to compute when written in this expanded form. The terms containing zeros are ignored, just as in decimal, and the values of the remaining terms are added, namely the 2^4, 2^2 and 2^0 terms. Since the only possible multiplier for these terms is one, not 1 through 9 as in decimal, this amounts to simply adding:

16 + 4 + 1 = 21

As a convenient reference, the first few powers of two are shown in tabular form below.

2^0 = 1      2^4 = 16
2^1 = 2      2^5 = 32
2^2 = 4      2^6 = 64
2^3 = 8      2^7 = 128

Binary Operations

The conversion of a longer binary number is shown in example 2-1.

Example 2-1

Using the above table convert the binary number 1 1 0 0 1 0 0 1 to decimal. The number is first written in the expanded form shown above:

1 x 2^7 + 1 x 2^6 + 0 x 2^5 + 0 x 2^4 + 1 x 2^3 + 0 x 2^2 + 0 x 2^1 + 1 x 2^0

Dropping the terms which contain zero multipliers and substituting the values of the powers of two from the above table, we have:

128 + 64 + 8 + 1 = 201

This process of conversion will be tedious at first but with a little practice it will become familiar.
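For readers who would like to check their work on a modern machine, the expanded-form conversion can be sketched in Python. This is an aside, not part of the original book, and the function name is invented for illustration:

```python
def binary_to_decimal(bits):
    """Convert a string of binary digits to decimal by summing the
    powers of two at the positions which hold a one."""
    total = 0
    # Position 0 is the rightmost bit, matching its power of two.
    for position, bit in enumerate(reversed(bits)):
        if bit == '1':
            total += 2 ** position
    return total

# The number from example 2-1: the 2^7, 2^6, 2^3 and 2^0 terms survive.
print(binary_to_decimal('11001001'))  # 201
```

Python's built-in `int('11001001', 2)` performs the same conversion directly.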

Before leaving example 2-1 take note of the fact that the binary number 1 1 0 0 1 0 0 1 required eight digit positions, while its decimal value, 201, required only three. Binary numbers are long, tedious things and working with them in this form is a quick way to introduce errors. For this reason programmers use shorthand methods for writing binary. These are called the octal (base 8) and hexadecimal (base 16) systems. Computation is almost never done in these systems but it is necessary to know how they are represented.

Conversion between binary and octal is simply a matter of grouping the binary digits (bits) and then reading off the decimal equivalents of each group. The binary number in example 2-1, for example, is converted to octal by splitting it into groups of three bits, beginning at the right, like this:

1 1, 0 0 1, 0 0 1

The decimal equivalent of each group is then read off. The leftmost group is a 3, and the other two groups are both 1's. The result is then written:

1 1 0 0 1 0 0 1 (base 2) = 3 1 1 (base 8)

the parenthesized notes indicating the number bases. A longer conversion of this type is shown in example 2-2.

Example 2-2

Convert the binary number 0 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 to octal. The number is divided into groups of three bits, beginning at the right, the decimal equivalent of each group written underneath, like this:

0, 0 0 1, 1 0 0, 1 1 0, 0 1 1, 0 0 1
0    1      4      6      3      1

the result being:

0 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 (base 2) = 0 1 4 6 3 1 (base 8)

Note here that since the number of bits in the binary number was sixteen, not a multiple of three, the leftmost group has only one bit.

The conversion from octal to binary is equally simple. Each of the octal digits is simply expanded to three bits, as shown in example 2-3.

Example 2-3

Convert the octal number 72613 to binary. This is done by spreading out the octal digits and writing the binary equivalent of each underneath.

7, 2, 6, 1, 3

1 1 1, 0 1 0, 1 1 0, 0 0 1, 0 1 1
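The grouping procedure of examples 2-2 and 2-3 can be sketched in Python. Again this is a modern aside, not from Weller's text, with invented function names:

```python
def binary_to_octal(bits):
    """Group bits in threes from the right; read each group as an octal digit."""
    bits = bits.zfill((len(bits) + 2) // 3 * 3)  # left-pad to a multiple of 3
    return ''.join(str(int(bits[i:i+3], 2)) for i in range(0, len(bits), 3))

def octal_to_binary(digits):
    """Expand each octal digit to exactly three bits."""
    return ''.join(format(int(d), '03b') for d in digits)

print(binary_to_octal('11001001'))   # 311
print(octal_to_binary('72613'))      # 111010110001011
```

Note that the left padding plays the same role as the "dangling" short leftmost group in hand conversion.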

The note at the end of example 2-2 about the dangling lefthand bit brings up a point about the octal number system which must be considered before using it as a shorthand for binary in programming.

Computers in general group bits into units whose size is an integral power of two, like 4, 8, 16, or 32 bits. Since none of these groupings is an even multiple of three there will always be a leftmost bit group containing less than three bits if octal is used. It is this fault of the octal system which leads to consideration of an alternate shorthand for binary, the hexadecimal system.

Conversion of binary to hexadecimal is accomplished by grouping the bits into fours, then reading off the value of each, as with octal. The only problem is that four bits can represent a number whose value is greater than nine. Since we have no single symbols to represent these numbers in the decimal system, we simply borrow the first six letters of the alphabet as symbols. The symbols used for hexadecimal conversion are shown in table form below.

1 0 1 0 = A (ten)
1 0 1 1 = B (eleven)
1 1 0 0 = C (twelve)
1 1 0 1 = D (thirteen)
1 1 1 0 = E (fourteen)
1 1 1 1 = F (fifteen)

Conversion between binary and hexadecimal can be done by referencing the above table. The binary number of example 2-1, 1 1 0 0 1 0 0 1, is converted by dividing it into two groups of four bits each, as:

1 1 0 0, 1 0 0 1

The rightmost group is simply a nine, an eight bit plus a one bit. The left group is a twelve, however, an eight bit and a four bit. Looking in the table above, we see that the symbol for twelve is C. The result is then:

1 1 0 0 1 0 0 1 (base 2) = C 9 (base 16)

While the alphabetic symbols may give you trouble for a while, the system becomes familiar with a little practice, and it offers important advantages over octal when dealing with computers like the Z80, as we will see. A more elaborate conversion is shown in example 2-4.

Example 2-4

Convert the binary number 0 1 1 1 1 1 1 0 1 0 1 1 0 1 0 0 to hexadecimal. The number is written in four four-bit groups, like this:

0 1 1 1, 1 1 1 0, 1 0 1 1, 0 1 0 0

The leftmost group can be read immediately as a 7. The second group from the left has value 14, for which the symbol in the table is E. The third group has value 11, symbol B. The rightmost group can be read directly as a 4. The result is then:

0 1 1 1 1 1 1 0 1 0 1 1 0 1 0 0 (base 2) = 7 E B 4 (base 16)
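The four-bit grouping works the same way in a modern sketch. This Python fragment is an aside, not part of the original text, and the names are invented:

```python
HEX_SYMBOLS = '0123456789ABCDEF'

def binary_to_hex(bits):
    """Group bits in fours from the right; read each group from the symbol table."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)  # left-pad to a multiple of 4
    return ''.join(HEX_SYMBOLS[int(bits[i:i+4], 2)]
                   for i in range(0, len(bits), 4))

print(binary_to_hex('11001001'))          # C9
print(binary_to_hex('0111111010110100'))  # 7EB4
```

Because computer word sizes are powers of two, the leftmost group is always full, which is the advantage over octal noted above.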

The simplest possible operation performed by a computer, at root the only operation of which it is really capable, is the comparison of one bit to another, setting another bit or bits to reflect the result of this comparison. All of the more complex operations are built up out of these simple bit comparisons. One type of bit comparison operation is known as the Inclusive OR. In this operation two bits are compared and the result set to a one if either or both of the bits being compared is a one. Like the operations of arithmetic, the Inclusive OR operation has a sign, V. To indicate the Inclusive OR of two quantities they are written with the "V" between them. All of the possible cases are:

0 V 0 = 0
0 V 1 = 1
1 V 0 = 1
1 V 1 = 1

The result of an Inclusive OR can be zero if and only if both of the bits being OR’ed are zero. In the Z80 and other computers bits are OR’ed in groups rather than singly. In the Z80 eight bits can be handled in parallel at one time. This operation is shown in example 2-5.

Example 2-5

Form the Inclusive OR of the eight bit binary quantities 1 0 1 1 0 1 0 1 and 1 1 0 0 0 1 0 0.

The numbers are written one below the other:

1 0 1 1 0 1 0 1
1 1 0 0 0 1 0 0

The work may go from left to right or right to left but we will go from right to left to preserve consistency with the direction used in addition and subtraction operations to come later. To avoid excess verbiage in describing the bit positions within the eight bit group we will number the bits 0 through 7, 0 meaning the rightmost and 7 the leftmost bit. The operation is begun by OR'ing the 0 positions. The top bit is a one and the bottom a zero. The result is therefore one, written below, like this:

1 0 1 1 0 1 0 1
1 1 0 0 0 1 0 0
              1

Next the bits in the 1 position (second from right) are OR'ed. They are both zero so the result is a zero, again written below, as:

1 0 1 1 0 1 0 1
1 1 0 0 0 1 0 0
            0 1

This right to left sweep is continued until all eight positions have been OR'ed and the results written below. The final result looks like this:

1 0 1 1 0 1 0 1
1 1 0 0 0 1 0 0
1 1 1 1 0 1 0 1

The numbering of the bits from right to left in example 2-5 was not randomly chosen. Notice that the number of the bit position corresponds to the power of two represented by that position. The operation performed in example 2-5 is also known as the logical sum.

When the single word OR is used from this point on in this book it is the Inclusive OR function which is meant. This is to distinguish it from another type of OR operation which we will consider in a little while.
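The parallel eight-bit OR of example 2-5 can be checked in a modern language. This Python sketch is an aside, not part of the original text:

```python
a = 0b10110101
b = 0b11000100

# Inclusive OR of all eight bit positions in parallel, as the Z80 does it.
result = a | b
print(format(result, '08b'))  # 11110101
```

A bit is set in the result wherever either operand, or both, had a one.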

Another type of bit comparison operation is known as the AND, also sometimes called the logical product. In the AND operation a pair of bits is compared and the result set to one if and only if both of the bits being compared are ones. If either or both bits are zero the result is a zero. It is fairly easy to see why this is called the logical product. The result is the same as if the two bits were multiplied. The AND operator is an inverted "V". All possible cases for AND are shown below.

0 Λ 0 = 0

0 Λ 1 = 0

1 Λ 0 = 0

1 Λ 1 = 1

As with the OR function, the AND is performed on the Z80 eight bits at a time. The operation is shown in example 2-6. Form the AND of the two binary quantities of example 2-5:

1 0 1 1 0 1 0 1
1 1 0 0 0 1 0 0
1 0 0 0 0 1 0 0

Note that ones appear in the result if and only if both of the bits above were ones. If either is a zero the result is a zero.

The AND operation is usually used in an operation called masking. The purpose of masking is to isolate some group of bits within a larger group by setting the irrelevant information to zeros. This is done by forming a mask to be AND'ed with the information, this mask containing ones only in the bit positions to be preserved and zeros elsewhere. To isolate the lower four bits of the 1 0 1 1 0 1 0 1 above, a mask is constructed with ones in the low four bits and zeros in the remaining bits. The operation looks like this:

1 0 1 1 0 1 0 1
0 0 0 0 1 1 1 1
0 0 0 0 0 1 0 1

Do not leave this example without understanding it. This operation will appear many times in this book.
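Masking translates directly into code as well. The sketch below, again not from the book, isolates the lower four bits exactly as in the example above:

```python
# Masking: isolate the lower four bits of 10110101 by ANDing
# with a mask that has ones only in the positions to preserve.
value = 0b10110101
mask  = 0b00001111

low_four = value & mask

print(format(low_four, "08b"))  # → 00000101
```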

Yet another type of comparison operation is the Exclusive OR, also called the logical difference. In this operation the result is a one if and only if the bits being compared are different. The symbol for Exclusive OR is a crossed V. All possible cases of Exclusive OR are shown below.

0 ⊕ 0 = 0

0 ⊕ 1 = 1

1 ⊕ 0 = 1

1 ⊕ 1 = 0

Again, the result is a one only if the bits being Exclusive OR’ed are different. Notice here that the Exclusive OR function is the same operation performed on algebraic signs of numbers when multiplying or dividing. If a one represented minus and a zero plus, the result of Exclusive OR’ing will correctly predict the sign of a product or quotient, the algebraic rule being that the result will be minus only if the signs of the operands are different.

The Exclusive OR operation is an efficient way to test two sets of bits for equality. If the two quantities are identical the result of an Exclusive OR will be all zeros.
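This equality test is one line in practice. The following Python sketch, an illustration rather than part of the original text, shows it:

```python
# Exclusive OR as an equality test: the result is all zeros
# if and only if the two bit groups are identical.
def same_bits(a, b):
    return (a ^ b) == 0

print(same_bits(0b10110101, 0b10110101))  # → True
print(same_bits(0b10110101, 0b10110100))  # → False
```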

A logical operation which is performed on a single bit rather than a pair of bits is known as complementation. Complementation means the simple inversion of the value of the bit. If the bit is one its complement is zero. If the bit is zero its complement is one. Complementation is usually indicated by a bar over the quantity to be complemented.

As with the other logical operations, complementation is performed on eight bits at a time in the Z80. This operation is shown in example 2-8.

Form the complement of the eight bit number 1 0 1 0 1 0 1 0. The complement is formed by simply inverting the value of each of the bits, ones becoming zeros and zeros becoming ones. The complement of 1 0 1 0 1 0 1 0 is therefore 0 1 0 1 0 1 0 1. Note that if this result is itself complemented the original number comes back.

The complementation process just described is also known as one's complementation to distinguish it from another type of complementation which we take up a bit later in this chapter.
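Example 2-8 can also be checked in code. One point worth a comment in this sketch (not from the book): Python integers are unbounded, so the inversion must be masked back down to eight bits:

```python
# One's complement on eight bits: invert every bit, then mask
# to eight bits (Python's ~ would otherwise produce a negative
# unbounded integer).
def ones_complement(x):
    return ~x & 0xFF

a = 0b10101010
c = ones_complement(a)
print(format(c, "08b"))        # → 01010101
print(ones_complement(c) == a) # complementing twice restores the original → True
```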

Another operation performed on a group of bits is known as shifting. To picture what goes on in a shift imagine a section of conveyor which is divided into eight discrete spaces separated by barriers. A space may contain a box (a one) or no box (a zero). The conveyor is capable of being moved only by integral spaces, i.e., it cannot be moved half a space or one and a half spaces, only one space, two spaces, etc. Now move this imaginary conveyor one space to the left. If the leftmost space contained a box (a one), the box falls off the end, and a new empty space enters from the right. If the conveyor spaces were numbered 0 through 7 as in example 2-5, the content of space zero would move to space one, that of space one would move to space two, etc.

If the conveyor had been shifted to the right just the opposite effect would have taken place. The content of space zero would fall off the end and an empty space (a zero) would have been introduced on the left, with the new content of space seven moving to space six, etc.

Perform a right shift of the eight-bit number 0 0 1 1 0 0 0 1. This shift can be done easily by adding a zero to the left-hand end of the group and omitting the rightmost bit: 0 0 1 1 0 0 0 1 becomes 0 0 0 1 1 0 0 0, the bit shifted out having been a one. Now look at the values of the original number and the shifted number. The original number has the value 49; the shifted number, 24.

The effect, in this case, is to divide the original number by two, ignoring any remainder. This will not be strictly true when we come to deal with negative numbers. The general statement of the effect of a right shift is this. Shifting right one bit has the effect of dividing by two and rounding in the direction of the next more negative number. This effect will also be used many times in the remainder of this book.
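The divide-by-two effect is easy to confirm. In the Python sketch below (an illustration, not from the book), the right shift agrees with floor division:

```python
# A right shift of one bit divides by two, discarding the
# remainder (for non-negative values this is floor division).
n = 0b00110001      # 49
shifted = n >> 1    # 00011000

print(shifted)      # → 24
print(49 // 2)      # → 24, floor division agrees
```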

The operations so far discussed can now be combined to perform some simple arithmetic. The sum of two binary digits, called the addend and the augend, can have only four outcomes. These are:

0 + 0 = 0 with no carry

0 + 1 = 1 with no carry

1 + 0 = 1 with no carry

1 + 1 = 0 with a carry into the next higher order bit

Ignoring carries into higher order bits for a moment, these sums are exactly the same result which would be gotten by performing the Exclusive OR of the bits being added. It is for this reason that the Exclusive OR operation is also sometimes known as half add. In the single case in which a carry can be generated, both of the bits being added must be ones. This corresponds exactly to the result obtained with an AND operation. Thus the sum column can be generated by an Exclusive OR of the bits, while the carry into the next higher order column can be generated by an AND. In the higher order columns the carry out of lower orders must be accounted for. This changes things somewhat, since now there is the sum of three bits to be considered.

0 + 0 with a carry in of 0 = 0 with no carry out

0 + 1 with a carry in of 0 = 1 with no carry out

1 + 1 with a carry in of 0 = 0 with a carry out

0 + 0 with a carry in of 1 = 1 with no carry out

0 + 1 with a carry in of 1 = 0 with a carry out

1 + 1 with a carry in of 1 = 1 with a carry out

This is the general case of addition of addend and augend bits including carry in from a lower order. In the case of the rightmost bit, of course, the carry in is always zero for a simple add.

The above rules make it possible to perform eight bit addition using Exclusive ORs to form the sum bits, ANDs to form the carries and shifts to move the carries into the next higher order columns. The carries are then again Exclusive ORed with the half sum produced by the previous Exclusive OR. This is repeated until the carries are all zero.

Take as an example the addition of 9, 0 0 0 0 1 0 0 1, and 7, 0 0 0 0 0 1 1 1. The Exclusive OR of the two is 0 0 0 0 1 1 1 0 and the AND is 0 0 0 0 0 0 0 1. Since the result of the AND is not zero, there is a carry. The carry or carries are shifted into the next column by performing a left shift, giving 0 0 0 0 0 0 1 0.

The result of the Exclusive OR becomes the new addend and the shifted result of the AND becomes the new augend and the process is repeated. The Exclusive OR of 0 0 0 0 1 1 1 0 and 0 0 0 0 0 0 1 0 is 0 0 0 0 1 1 0 0, and the AND is 0 0 0 0 0 0 1 0.

Since the result of the AND is not zero, it is shifted left again, to 0 0 0 0 0 1 0 0, and the process is repeated. The Exclusive OR is now 0 0 0 0 1 0 0 0 and the AND 0 0 0 0 0 1 0 0.

The result of the AND is again nonzero, so we shift it left, to 0 0 0 0 1 0 0 0, and try again. The Exclusive OR gives 0 0 0 0 0 0 0 0 and the AND 0 0 0 0 1 0 0 0.

Still no zero in the AND result, so we shift it left, to 0 0 0 1 0 0 0 0, and try again. The Exclusive OR of 0 0 0 0 0 0 0 0 and 0 0 0 1 0 0 0 0 is 0 0 0 1 0 0 0 0, and the AND is 0 0 0 0 0 0 0 0.

The AND result is now a zero, so the final result of the process is the outcome of the last Exclusive OR. The value of this number, 0 0 0 1 0 0 0 0, is 16 decimal, the correct sum of the original numbers, 9 and 7.
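The whole procedure can be collected into a short loop. The Python sketch below is not from the book, but it follows the same steps: Exclusive OR for the half sum, AND for the carries, and a left shift to move the carries one column higher, repeated until no carries remain:

```python
# Addition built only from Exclusive OR, AND and left shift.
# Masking with 0xFF keeps the work within eight bits, as on
# the Z80, and guarantees the loop terminates.
def add(addend, augend):
    while augend != 0:
        half_sum = (addend ^ augend) & 0xFF   # sum bits, carries ignored
        carries  = (addend & augend) & 0xFF   # a carry wherever both bits are one
        addend, augend = half_sum, (carries << 1) & 0xFF
    return addend

print(add(9, 7))  # → 16
```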

Subtraction can be done by a process implemented in a similar way. It is performed on a pair of bits called the minuend, the number from which something is being subtracted, and the subtrahend, the number which is being subtracted from the minuend. The four cases of simple subtraction are:

0 - 0 = 0 with no borrow

1 - 0 = 1 with no borrow

0 - 1 = 1 with a borrow from the next higher order

1 - 1 = 0 with no borrow

The result bit, called the difference, is again a one if the minuend and subtrahend bits are different, just as in addition. This means that the Exclusive OR can again be used to form this partial result. The borrow bit becomes a one only if the minuend bit is zero and the subtrahend bit a one.

As subtraction is usually done on paper, the borrow is effected by decreasing the next higher order minuend digit by one, i.e., decrementing it. The same effect can be gotten by incrementing the next higher subtrahend digit before proceeding to that column, and this is indeed what is done. The reader might find it an instructive exercise to go through a binary subtraction on paper using this method.
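Subtraction can be sketched with the same ingredients as the addition loop above. This Python illustration is not from the book; it uses Exclusive OR for the difference bits and forms a borrow wherever the minuend bit is zero and the subtrahend bit is one:

```python
# Subtraction from Exclusive OR, AND and left shift: the
# borrow plays the role the carry played in addition, and is
# moved into the next higher column by a left shift.
def subtract(minuend, subtrahend):
    while subtrahend != 0:
        difference = (minuend ^ subtrahend) & 0xFF
        borrows    = (~minuend & subtrahend) & 0xFF  # borrow where minuend 0, subtrahend 1
        minuend, subtrahend = difference, (borrows << 1) & 0xFF
    return minuend

print(subtract(9, 7))  # → 2
```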

Negative numbers are represented on the Z80 computer by means of their two's complements. The two's complement of a number is formed by first taking the one's complement as shown earlier in this chapter, and then adding one. For example, the two's complement of 0 0 0 0 1 0 1 1, eleven, is formed by taking the one's complement, 1 1 1 1 0 1 0 0, and adding one to give 1 1 1 1 0 1 0 1, which represents minus eleven.
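The two-step rule, one's complement then add one, is short enough to verify in code. A minimal Python sketch, not part of the original text:

```python
# Two's complement: one's complement, then add one, keeping
# everything within eight bits.
def twos_complement(x):
    return (~x + 1) & 0xFF

eleven = 0b00001011
neg = twos_complement(eleven)
print(format(neg, "08b"))     # → 11110101
print((eleven + neg) & 0xFF)  # a number plus its negation gives zero → 0
```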

The reader should note something about the number used and its two's complement. The positive or negative character of a number in the two's complement system can be determined by looking at the leftmost bit of the number. If this bit is a zero the number is positive; if a one, negative. But the difference runs deeper than this. In the positive number, 0 0 0 0 1 0 1 1, the highest significant digit was in the 2^3 position, that is, the eight's bit. All the bit positions above this were filled with nonsignificant zeros. In the negative number, however, a significant bit is a zero, not a one. The highest significant bit in the negative number is the zero in the eight's position. The positions above this are filled with nonsignificant ones. In both cases the leading nonsignificant bits are filled with copies of the sign bit, zero for positive and one for negative. While we will speak in this book of the leftmost bit being a sign indicator, it must be understood that the sign fills all leading nonsignificant bits, whether positive or negative.

In dealing with any computer, but particularly with machines like the Z80 which handle data eight bits at a time, the generation of a result too large to be held in the allotted space must be considered. This condition is known as overflow. It is distinct from and does not have the same meaning as a carry out of the high bit. Overflow occurs when two signed numbers are added or subtracted to give a result which cannot be held in the allotted space. It is most important to understand this, so do not skip over the remainder of this chapter. You do so at your peril.

The largest number which can be held in N bits is 2^N - 1. For eight bits this means 255, or 2^8 - 1. If the eight bit group has an algebraic sign one bit must be allotted to it, leaving only seven to contain the magnitude of the number, so the largest positive signed value is 127, or 2^7 - 1.

Suppose, for example, that 64 is added to 64: 0 1 0 0 0 0 0 0 plus 0 1 0 0 0 0 0 0 gives 1 0 0 0 0 0 0 0. Which, if considered only as a magnitude, is correct, 128. But since the leftmost bit contained an algebraic sign, this is not correct as a signed result. We have added two positive numbers and gotten a negative sum. This is the overflow condition. Overflow can only occur when numbers of like sign are added or when numbers of unlike sign are subtracted.

Similarly, adding 1 0 0 0 0 0 0 0, minus 128, to 1 1 1 1 1 1 1 1, minus one, leaves the eight bit result 0 1 1 1 1 1 1 1, positive 127. Here two negative numbers have been added to produce a positive result, and overflow. The Z80 computer, unlike its predecessor the 8080, has hardware means for detecting the overflow condition. The exact means for using this hardware feature will be explained in a later chapter. The means by which it is detected are of some interest, however.

Overflow is detected by monitoring the carry into the highest bit, i.e., the sign bit, and the carry out of the highest bit. If these two carries are different, then an overflow has occurred. The Z80 has an overflow flag or overflow bit which is set to one when this happens.

Recall that the result of an Exclusive OR is a one if and only if the bits are different, and the meaning of this will be clear. The two conditions:

One, An add or subtract has produced a result too large to be held in the signed space allotted; and, Two, The carry into the sign and the carry out of the sign which resulted from the add or subtract were different, are equivalent.
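The carry-comparison rule can be sketched in a few lines. This Python illustration is not from the book; it computes the carry into the sign bit by adding the low seven bits of the operands, and the carry out of the sign bit from the full eight-bit add:

```python
# Overflow detection for an eight-bit add: compare the carry
# into the sign bit (from the low seven bits) with the carry
# out of it (from the full add). They disagree exactly when
# the signed result is invalid.
def add_overflows(a, b):
    carry_into_sign   = (((a & 0x7F) + (b & 0x7F)) >> 7) & 1
    carry_out_of_sign = ((a + b) >> 8) & 1
    return carry_into_sign != carry_out_of_sign

print(add_overflows(0x50, 0x50))  # 80 + 80 = 160, too big for a signed byte → True
print(add_overflows(0x50, 0x90))  # 80 + (-112) = -32, fits → False
```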

The subject of the maximum negative number touched on above requires a little expansion before we move on. This number, 1 0 0 0 0 0 0 0 or minus 128, cannot be negated (two's complemented) since it has no positive counterpart; plus 128 will not fit in eight bits with a sign. Try it anyway: the one's complement of 1 0 0 0 0 0 0 0 is 0 1 1 1 1 1 1 1, and adding one gives 1 0 0 0 0 0 0 0 again.

The result is the maximum negative number, but there is something else. During the addition of the one to the one's complement a one was carried into the sign but a zero was carried out. This is the overflow condition we have just discussed. The Z80 hardware will detect this and set the overflow bit.

"Once the characteristic numbers for most concepts have been set up, the human race will have a new kind of instrument which will increase the power of the mind much more than optical lenses strengthen the eyes and which will be as far superior to microscopes or telescopes as reason is superior to sight."

The purpose of this chapter is to present a conceptual picture of a computer which will allow a programmer to work with it in an efficient way. That the organizational ideas presented here do not correspond to the hardware organization precisely will be obvious. The concepts to be understood in this chapter must be viewed in the same way as that in which a navigator, for purely computational purposes, views the stars as being projected onto a sphere of infinite radius. The model does not correspond to the physical truth but provides great operational convenience. The computer model presented here should be viewed in the same way — a useful artifice.

It is productive for a programmer to view the computer as being composed of three distinct logical components; a memory, an arithmetic/logic unit, and a controller or central processing unit (CPU). Memory consists of a device into which information may be placed and from which it can be repeatedly retrieved without destruction, in much the same way that words may be written on paper and read back repeatedly without vanishing from the paper. Writing information into computer memory erases or clears the previous contents. The act of writing information into memory is called storing. The meaning of this word is quite specific and unique.

Memory is organized into groups of digits of equal length, each of these groups being known as a word. The number of digits or bits which constitutes a word is known as the word length. For the Z80 the word length is eight bits. By convention, a group of eight bits is also called a byte, but we will adhere to the more general nomenclature in this book. Each word of memory has associated with it a unique number which identifies it and it alone. This number is known as the address of the word. Addresses usually begin with zero and run to the highest numbered memory word available, without gaps, but this need not be so. The address of a word functions like the number of a post office box. The post office box number has nothing to do with the contents of the box, and the address of a memory word has nothing to do with the contents of that word. It is quite important to understand the difference between the address of a memory word and its contents.

The word size is determined by the hardware designer. The programmer can exercise no control over it. This does not mean that the Z80 is limited to processing information of length eight bits or less. The operations of the computer allow words to be "chained" end to end, so that arithmetic may be performed on data of any required length.

The arithmetic/logical unit resembles a pocket calculator in its function, except, of course, that its calculation is done in binary. It contains one or more registers, temporary storage devices capable of holding one or more computer words. In the Z80 the main operational register is the A register. It is in the A register, or accumulator, that most of the arithmetic and logical work of the computer is done, though some arithmetic can be performed in other Z80 registers. Accompanying the arithmetic/logical registers is a group of bits known as the flag word. The individual bits of the flag word are set by the Z80 to reflect the outcome of operations performed in the arithmetic/logical registers, e.g., was the result a zero, or did an addition or subtraction cause an arithmetic overflow. These flag bits can be individually tested, but more of this a little later.

The computer components discussed so far have no unique properties. The function of memory is the same as that of paper and pencil, while that of the arithmetic/logical unit (ALU) is the same as that of a pocket calculator. What is required to make the paper and calculator work is an intervening "intelligence" which can execute the steps necessary to get a useful result. This set of steps is known as the program. The implementation of the individual steps is the function of the controller or CPU. It is the CPU which exercises the supervisory function, driving the other components in such a way as to perform the required task.

The program steps, called instructions, reside in memory along with the numbers upon which these instructions are to operate. The CPU fetches the instructions from memory one at a time and supervises their execution. To perform this task it contains two principal registers, the program counter or P register and the instruction or I register. The program counter contains the memory address of the instruction which follows the one currently being executed. How this comes to be is not important just now, but the fact should be noted and memorized. The program counter always contains the address of the instruction following the one currently being executed. In this function the program counter can be said to "point to" the next instruction. This notion of a pointer is fundamental to the use of the Z80 or any other computer.