NOT is the simplest operator we have. It's a unary operator, meaning it operates on a single value. All it does is flip that value: it evaluates to true if its input is false, and to false if its input is true.
One way that we can understand boolean operations is by creating something called a truth table, which just displays all possible inputs and outputs, representing true as 1 and false as 0.
Let’s look at the table for NOT:
x | !x
1 | 0
0 | 1
As this table shows, if x is true, !x returns false, and vice versa. Things get a little more complicated when we start talking about AND and OR, which are binary operators (they take two values and produce a single result).
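Before moving on, we can check the NOT table directly in running code. Here's a quick sketch in JavaScript, where NOT is written with the ! operator:

```javascript
// NOT flips a single boolean value.
const x = true;

console.log(!x);     // false
console.log(!false); // true
```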
AND is fairly straightforward. It simply checks if both values are true and returns true if that's the case, but will return false in any other case. You can think of it as asking “Are X AND Y true?”
x AND y
x y | x&&y
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1
Here’s how you read the table: on the left side, we have values for x and y, which are either 0 (false) or 1 (true). On the right side, we have the resulting value of x&&y (x AND y), given those values of x and y.
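We can reproduce every row of that table with JavaScript's && operator:

```javascript
// Each line corresponds to one row of the AND truth table.
console.log(false && false); // false
console.log(false && true);  // false
console.log(true && false);  // false
console.log(true && true);   // true
```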
OR, like AND, is a binary operator. It takes two values and returns true if one, the other, or both are true.
Here’s the truth table:
x OR y
x y | x||y
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 1
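And again, every row of the OR table checks out in code, using JavaScript's || operator:

```javascript
// Each line corresponds to one row of the OR truth table.
console.log(false || false); // false
console.log(false || true);  // true
console.log(true || false);  // true
console.log(true || true);   // true
```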
So, logical operators take in one or more boolean values and output a single boolean value. This means that we can actually combine them. We can design operations such as !(x && y) (known as NAND), (!x && y) || (x && !y) (known as XOR), or (x || y) && z (which doesn't have a formal name).
We can even write out truth tables for these derived operations. Let’s have a look at XOR. The way that we figure out what XOR returns is by resolving our operators one by one until we get a final result. Let's look at the case where we have false and false as our x and y.
(!x && y)||(x && !y)
Let’s break this down. The whole expression can be read as one big OR statement, where our first term is (!false && false) and our second is (false && !false). We know how OR behaves, so once we figure out what those two terms evaluate to, we can just look at our truth table.
So, let’s resolve our (!false && false). In order to do this, we'll similarly try to figure out the values that go into it.
Here’s where we start to get some answers. !false is true, and false is just false, so we can simplify this part of our expression to (true && false). If we look at our truth table for AND, true && false results in false.
Now we can go all the way up to the initial OR statement and have another look. Substituting our false for (!false && false), we now have false || (false && !false).
We can do the same thing to resolve the (false && !false). false is just false and !false is true, so we've got (false && true), which will resolve to false.
Finally, we’ll have false || false, which, if we look at our table for OR, resolves to false.
We can repeat that process with every possible value of x and y, and we'll have a table that looks something like this:
x XOR y
x y | XOR
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 0
There are a multitude of reasons to care about booleans. If you’ve read my colleague Nisha’s post, Variables, Control Flow & Looping, you’ve seen expressions like myGrade < 80, or myGrade === 100, which actually evaluate to boolean values, and are quite useful for control flow.
However, the justifications for studying booleans and boolean algebra run much deeper than that. When we dig deep enough into our programs (quantum computing excluded), we find that all of our data and all of our functionality can be reduced to a series of interconnected logic gates, passing 1s and 0s back and forth at dizzying speeds. In fact, the very hardware on which our software runs can be represented as such. With enough time and dedication, it is possible to implement an entire computer (from hardware to operating system to compiler to programming language to program) by combining a sufficient number of NAND gates (the publicly available Nand2Tetris course actually sets out to do so).
Ultimately, our goal is to move past implementing software solutions without understanding how they work underneath. What we aim to do is delve deep into the systems that we build upon, stripping away enough of the abstraction to develop robust mental models that will allow us to write truly durable code.