Building a computer from scratch part III: Memory


“The power of the memory is great, O Lord. It is awe-inspiring in its profound and incalculable complexity… The wide plains of my memory and its innumerable caverns and hollows are full beyond compute of countless things of all kinds.”

Confessions of St. Augustine, Book 10

Part I here.
Part II here.

So far, I’ve learned how to make combinational chips – chips designed to combine two or more inputs in different ways. In this section of the Nand2Tetris course, I focused on sequential chips – chips that maintain state within the computer. State fluctuates: a chip can be in one state at one moment in time, and in another state the next. The ability to recall this state, and to change it at will, is what we call the “memory” of a computer.

Human memory isn’t so much a collection of facts as a collection of impressions, which we recall with decreasing accuracy as a function of time. Knowing this, if you could build a machine that could calculate anything and store its results, wouldn’t you want it to store things with perfect recall? Doing so would improve on our shortcomings and let us do useful things with what is stored. This is what a computer is able to do.

The simplest way to create and store binary values is through electric charge. We can turn the current on a chip on to signify “on,” and turn it off to signify “off.” Flipping between these two states, we can create digital gates, called flip flops, that can store their previous state. The naming comes from how these chips behave internally – their Nand gates flip between true and false values, storing the incoming value and outputting the value received before it. Similar to how the ALU combines true/false statements to build complex operations, a computer’s memory is based on storing previous outputs and combining them into bits, registers, and eventually memory devices such as RAM, ROM, etc.

A flip flop is able to either “remember” its previous state and output that, or output the current incoming signal. It does this through a very simple digital gate design that adds a multiplexer to control which signal gets stored in the flip flop:

A Neural Network design for emulating a 1-bit register
Source: Rick Dzekman

This is a 1-bit register, because it stores one value: 0 or 1, a bit. This is the fundamental building block of memory devices. We can chain multiple 1-bit registers together to form registers that hold words of arbitrary bus sizes, and stack those registers behind an address to create a RAM, or Random Access Memory unit. It is so called because an input can access any register in constant time: it takes the same amount of time to reach any register on the computer, whether there are 10, 1,000, or a billion of them. To sum up, we can visualize a unit of RAM memory through the following diagram supplied by the Nand2Tetris book:

The RAM unit has two main inputs: a 16-bit “word” you want to store, and the address where you want to store it. There is also a “load” input that controls whether the incoming word will be stored or not. Finally, “out” emits the value currently stored at the given address; if load is true, the incoming word is written and shows up on “out” from the next cycle.
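To make that behavior concrete, here is a minimal Python sketch of a 1-bit register and a RAM unit (my own simplification: a plain Python int stands in for the 16-bit word, an explicit tick() call stands in for the clock, and the class names echo the course’s chips without being its HDL):

class Bit:
    # 1-bit register: always outputs its stored state;
    # if load is 1, the incoming value is stored and appears next cycle
    def __init__(self):
        self.state = 0
    def tick(self, value, load):
        out = self.state
        if load:
            self.state = value
        return out

class RAM:
    # n addressable words; any address is reached in constant time
    def __init__(self, n=8):
        self.words = [0] * n
    def tick(self, word, load, address):
        out = self.words[address]       # value currently stored there
        if load:
            self.words[address] = word  # visible from the next cycle
        return out

ram = RAM(8)
ram.tick(word=42, load=1, address=3)    # write 42 into register 3
print(ram.tick(0, 0, 3))                # next cycle: reads back 42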

Finally, there is a Counter chip that keeps track of where the computer is in its sequence of computations. Because chips are located in different parts of a computer, and some calculations take more time than others, signals arrive at the ALU at different rates; if one signal has arrived but another is still pending, the ALU will output gibberish. To prevent this from happening, a clock signal keeps all the chips in sync, and the passage of time in a computer is therefore measured in “cycles.” In each cycle, every chip performs its action, then nothing more. Only when the clock moves on to a new cycle do the chips commit their new states and allow the next computations to be made. The cycle then repeats. The Counter rides on top of this clock: each cycle it advances by one, jumps to a loaded value, or resets, which is how the computer steps through its program.
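As I understand it, the course’s Counter (PC) chip has a small, well-defined contract: each cycle it can reset to zero, jump to a loaded value, or simply advance by one. A Python sketch of that behavior, with tick() again standing in for the clock:

class PC:
    # program counter: reset takes priority over load, load over inc
    def __init__(self):
        self.count = 0
    def tick(self, value, load, inc, reset):
        out = self.count
        if reset:
            self.count = 0
        elif load:
            self.count = value
        elif inc:
            self.count = (self.count + 1) % 2**16  # wrap around at 16 bits
        return out

pc = PC()
pc.tick(0, 0, 1, 0)  # advance: count goes from 0 to 1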

This sums up the chapter on memory devices. Onwards now to machine language, the place where hardware and software meet.


Thank you for reading through this article. I haven’t stated the purpose of these posts yet, so I’ll clarify now. The primary objective is to write out my learnings and thoughts, as clearly as I can, about the subjects covered in the chapter. By writing down and recalling what I learned, I hope to gain a deeper, more solid understanding of the material. The secondary objective is simply to share these learnings with you all, and perhaps inspire you to explore on your own the questions you might have about computers, in a way that is useful for you.

Building a computer from scratch part II

Part 1 is here.

After reviewing basic boolean operations and seeing how to implement them in their corresponding digital gates, we can now perform binary arithmetic. In chapter 2 of Nand2Tetris, we take the digital gates we created (And, Or, Mux, Xor, etc.) and build an ALU, or Arithmetic Logic Unit. This is the heart of a computer.

But before going into the implementation of an ALU, it’s helpful to demonstrate how we can do basic arithmetic using binary numbers.

Our numbering system is based on the decimal system, handed down to us by the Greeks and Romans, and reaching as far back as ancient Egypt. One way to represent a number is to think of each decimal place as a digit times a power of ten, in increasing order. So, 5 can be thought of as 5 times 10 to the power of 0 (since it’s in the first position). 13 is 1 times 10 to the power of 1, plus 3 times 10 to the power of 0. 125 can thus be represented as follows:

1 * 10^2 + 2 * 10^1 + 5 * 10^0 = 100 + 20 + 5 = 125

Egyptian glyphs for the number one million, three hundred thousand, thirty thousand, thirty-three hundred, and thirty-three (1,333,333).
Are those numbers or Roman numerals?

In a generalized form:

(x_{n}x_{n-1}…x_{0}) = \sum_{i=0}^{n} x_{i}b^{i}

Where b is the base we want to use (10, 2, or even 16!), the x_i are the digits of the number we want to represent, and i is the position index.
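The formula translates directly into a few lines of Python (a quick sketch; the function name to_decimal is my own):

def to_decimal(digits, base):
    # digits are listed most significant first, e.g. [1, 2, 5] in base 10
    return sum(d * base**i for i, d in enumerate(reversed(digits)))

print(to_decimal([1, 2, 5], 10))    # 125
print(to_decimal([1, 1, 0, 1], 2))  # 13 in binary
print(to_decimal([7, 13], 16))      # 125 again, written as 7D in base 16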

Adding two numbers in binary is easy:

1 + 0 = 1
0 + 1 = 1
0 + 0 = 0
1 + 1 = 10

When two ones are added, we get zero and carry a one to the next place, just like we do in our own decimal numbering system. But what about subtraction? What happens if we compute 5 – 3? To calculate this, we have to include negative numbers in our computer. To do that, we use the 2’s complement system:

  1. We use the leftmost bit as a sign bit (0 denoting positive, 1 denoting negative)
  2. We represent negative numbers by taking the 2’s complement of the number.

The number 4 in a 4-bit computer is 0100, and -4 is 1100, the same pattern that represents the unsigned number 12. We know this represents a negative number because the sign bit is 1, meaning negative. To get the 2’s complement, we flip the bits so that 0s become 1s and 1s become 0s, and then add 1 to the result.
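In Python, the same recipe looks like this (a sketch; the helper name is mine):

def twos_complement(x, bits=4):
    mask = (1 << bits) - 1          # e.g. 1111 for a 4-bit machine
    return ((x ^ mask) + 1) & mask  # flip every bit, then add 1

print(format(twos_complement(0b0100), '04b'))  # -4 comes out as 1100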

Gottfried Leibniz’s explanation of binary numbers is unfortunately not much better than mine.

A special feature of the 2’s complement system is that subtraction can be performed using addition. For example, let’s say you want to calculate 5 – 3. We can represent the operation as 5 – 3 = 5 + (-3). To solve it, we can just add 5 to the 2’s complement of 3 and get our result:

5 in binary: 0101

3 in binary: 0011
1’s complement of 3: 1100
2’s complement of 3: 1101

To compute 5 – 3, we add 5 and the 2’s complement of 3, then drop the overflow bit:
0101 + 1101 = 1 0010, and dropping the fifth (overflow) bit leaves 0010, which is 2.
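Here is that procedure as a small Python check (a sketch; the function name sub and the masking idiom are mine, but the method is exactly the add-the-complement, drop-the-overflow trick above):

def sub(a, b, bits=4):
    mask = (1 << bits) - 1
    neg_b = ((b ^ mask) + 1) & mask  # 2's complement of b
    return (a + neg_b) & mask        # the & mask drops the overflow bit

print(format(sub(0b0101, 0b0011), '04b'))  # 5 - 3 -> 0010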

The implications of this result are significant: it means that a single chip can encapsulate all the basic arithmetic operators in one hardware unit. This unit we call the ALU (Arithmetic Logic Unit).

The course was smart, in my opinion, in having the student create incremental chips that make up the ALU, rather than diving head first into it. Like many things in engineering, solving a really hard problem usually starts by breaking it into smaller, more manageable problems. Thus the following chips were incrementally implemented (sketched in code after the list):

  • Half Adder: a chip that adds two bits
  • Full Adder: a chip that adds two bits plus a carry
  • Adder: a chip that adds two binary numbers of up to n bits (in our case, 16 bits)
  • Incrementer: a chip that adds 1 to a binary number, taking care of the carry
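Here is how those chips compose, sketched in Python with bitwise operators standing in for gates:

def half_adder(a, b):
    return a ^ b, a & b   # sum is Xor, carry is And

def full_adder(a, b, carry):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry)
    return s2, c1 | c2    # carry out if either stage carried

def add16(x, y):
    # ripple-carry adder: chain 16 full adders, least significant bit first
    out, carry = 0, 0
    for i in range(16):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        out |= s << i
    return out            # the final carry is dropped

print(add16(5, 3))  # 8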

Compared with the rest of the chips that I previously implemented, the ALU is a monster of a chip:

Credit: The Elements of Computing Systems

Let’s break it down. On the input side:
zx: zero the x input
nx: negate the x input
zy: zero the y input
ny: negate the y input
f: function that computes the output to be x + y (if f is 1) or x & y (if f is 0)
no: negate the output

On the output side:
out: output of the computation
zr: 1 if out is 0, 0 otherwise
ng: 1 if out is less than 0, 0 otherwise
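Putting the two sides together, here is the ALU’s whole contract as a Python sketch (bit-twiddling on plain ints stands in for the 16-bit buses; the control-bit semantics follow the course’s spec as I understand it):

MASK = 0xFFFF  # one 16-bit word

def alu(x, y, zx, nx, zy, ny, f, no):
    if zx: x = 0              # zero the x input
    if nx: x = ~x & MASK      # negate the x input
    if zy: y = 0              # zero the y input
    if ny: y = ~y & MASK      # negate the y input
    out = (x + y) & MASK if f else x & y
    if no: out = ~out & MASK  # negate the output
    zr = int(out == 0)        # out is zero?
    ng = int(out >> 15 & 1)   # sign bit set means negative
    return out, zr, ng

print(alu(5, 3, 0, 1, 0, 0, 1, 1))  # control bits for x - y: (2, 0, 0)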

Conceptually, we can think of the ALU as a chip that takes two input numbers and applies a series of boolean functions to them, depending on whether those “control bits” are 1 or 0. The making of the ALU took quite some time to figure out, but it ended up being, as the previous lessons proved, an exercise in breaking down a complex problem into smaller, more manageable ones. One thing to note is that this ALU was designed specifically for the Nand2Tetris course; the professors cautioned that it is a very simple version of an ALU, yet it is completely functional.

A small part of the logic gate design for my ALU

This was a great lesson where I was able to continue my learning on how to make digital gates. It’s humbling to think, as a software engineer, that all the operations and fancy code libraries we create reduce to simple addition operations between two binary numbers, done billions of times over. I think it’s a testament to the rock-solid mathematical principles that underlie these systems. The early programmers did not have a bunch of code libraries to help them: they had to rely on themselves to create these extremely complex systems at very small scale, and I think the only way they were able to do it was by relying on mathematical certainty and reliable hardware engineering to make their creations work.

After this lesson, I wonder what methods or processes I can use to write code that accomplishes its task well, in a way that takes advantage of the hardware beneath it.


Building a computer from scratch

“People who are really serious about software should make their own hardware.”

Alan Kay

I grew up learning about software the way I imagine many developers do nowadays: doing tutorials online and building cool apps in my spare time. I learned about algorithms and data structures, and got to learn a LOT more in my job by seeing “how the sausage is made.”

While browsing through online forums, I came across a video of Alan Kay, a creator of the GUI and of object-oriented programming – inventions so ubiquitous that their absence would make the current world unrecognizable. In this video he mentioned something that blew me away. To paraphrase:

Most ideas don’t scale well; they merely provide incremental change. What’s needed is a change in perspective, an opening up of the world, to look at something in totally new ways that can provide order-of-magnitude improvements.

In my day-to-day job, I make incremental improvements to my programs by using shorthand notation, eliminating redundant dependencies, and using design patterns to solve common problems. But if I wanted to really make a change, if I wanted to become a really good programmer, I had to change my perspective at a fundamental level. More fundamental than software. More fundamental than the operating system. I needed to understand how a computer works.

Isn’t it amazing? As a software engineer, I have no idea how a computer works. I can say that I know how it works, with something like “instructions are sent to the CPU, which calculates stuff, and those results are in binary, which gets translated to assembly, which gets translated into code by a compiler.” But I don’t really know how that works, or even what I am saying when I state these things.

That is why I decided to start the course Nand2Tetris. It’s a freely available course, with an accompanying textbook, that teaches you step by step how to make a computer from scratch.

The first lesson of this course focused on Boolean algebra, a branch of mathematics that deals with true/false statements. This is what computers boil down to: statements are either true or false, 1 or 0. Combined together, they can form complex operations which, almost miraculously, give rise to the computer itself.

There are many different kinds of such statements, or Boolean functions. These include:

  • And
  • Or
  • Not
  • Nand
  • Xor
  • Mux
  • DMux

Each of these can be implemented as a physical chip, called a digital gate. These gates implement a boolean function, providing different outputs depending on their inputs. These input/output combinations are described by a truth table, as shown below:

Credit: electronics-tutorial.net

The first week of Nand2Tetris involved creating the digital gates described above from a single, primitive gate, the Nand gate. Based on the truth table representation of each chip, and its API, I had to build the chip using only Nand gates and the other gates I had previously made. Thus, brick by brick, a house starts to be built. Some of the gates were straightforward to implement. For example, a Nand gate is simply an And gate connected to a Not gate:

Credit: electronics-tutorial.net
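Here is what that brick-by-brick construction looks like in Python, with nand as the only primitive (a sketch of the gates’ behavior, not their actual wiring):

def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)              # Nand with both inputs tied together

def and_(a, b):
    return not_(nand(a, b))        # And is just Nand followed by Not

def or_(a, b):
    return nand(not_(a), not_(b))  # De Morgan: a Or b = Not(Not a And Not b)

print([and_(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]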

Another gate, the Xor gate, was not so trivial. To come up with it, we can look at the truth table we want the gate to satisfy:

Credit: electronics-tutorial.net

Then, we can create a boolean function that reproduces the inputs and outputs, covering every row where the output is true. One such function is the following:

Xor(a, b) = (a And Not(b)) Or (Not(a) And b)

Thus, by writing a statement that fulfills the truth table’s conditions and reducing it to its canonical representation, we can create a digital gate diagram that corresponds to it.
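That canonical form drops straight into code (a quick check using Python’s bitwise operators on single bits):

def xor(a, b):
    # (a And Not b) Or (Not a And b), straight from the truth table
    return (a & (1 - b)) | ((1 - a) & b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))  # outputs 1 only when the inputs differ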

These digital gates don’t have to take single-bit inputs; they can take a group of bits, called a “bus.” The course specifies building a “16-bit” computer, meaning it computes on binary numbers up to 16 bits in length at a time.
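In code, a bus is nothing more exotic than an array of bits; here is a sketch of two of the 16-bit gate variants:

def not16(bus):
    return [1 - bit for bit in bus]               # Not applied to each line

def and16(a_bus, b_bus):
    return [a & b for a, b in zip(a_bus, b_bus)]  # And, line by line

print(and16([1, 0] * 8, [1, 1] * 8))  # [1, 0, 1, 0, ...]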

This was an excellent exercise in flexing my logic muscles and brushing up on my math skills. The next week will be a big step forward: building the ALU and memory that will let my computer both compute values and store them.

Hamilton’s Quill

In Ron Chernow’s Alexander Hamilton, Eliza Hamilton recounts her husband’s work ethic as follows: he woke up early, before sunrise, took a strong drink (understood here to be coffee), and worked on his letters and manuscripts until 3 in the afternoon, whereupon he would eat, spend time with his family, read, and go to sleep again. This wouldn’t happen every day of course, but the essential fact is that Hamilton had a near-obsessive predisposition for work. It’s important to note there was no social media or internet to distract him; only his paper and quill were in front of him. Work became the means to entertain his mind. Elsewhere in the biography we also see his superhuman work ethic in action. He once delivered a 30,000-word report on the commercial and financial state of the United States to Congress in a mere month or two (a Herculean feat that the Democratic-Republicans didn’t think he could accomplish). I believe that a singularly focused mind dedicated wholeheartedly to a task can achieve great things in this world. I also believe such focus would attain special recognition today, now that the world has become connected and exceedingly distracted by a myriad of communicative opportunities.

In work, we can find meaning. If we can toil with our bodies and minds for the purpose of achieving a higher end, we become fuller in our Christian identity. The ultimate toil, of course, is that of achieving heaven: no greater reward compares to it, and there is no greater work required than the sacrifice of oneself for this purpose.

Let us work then, in dedication, focus, and earnest confidence that what we do can serve a higher purpose.

Introduction

Before St Francis Xavier departed on an evangelizing journey that would take him to India, China, and the far reaches of Japan, he received a farewell that became the dominant focus of the mission he was about to undertake.

Ite, inflammate omnia – “Go, set the world on fire”

St Ignatius of Loyola, the founder of the Jesuit order, proclaimed this to all his missionaries, and it’s the same tagline I’d like to use to start this blog. The aim of this space is to become a place where ideas and mental processes can help us arrive at a fuller understanding, a more truth-filled outlook, on the whole world. Yes, I do profess that there is such a thing as truth, and I also profess that there is an absolute truth. Through it all, it accompanies us as we dwell in this world of tears and joys, of sorrows and praise. Join me, so that we too might set the world on fire.