Question

I am a software developer, but I come from a non-CS background, so maybe this is the wrong question to ask, but I do not get why logic gates/boolean logic behave the way they do.

Why for example:

1 AND 1 = 1  // true AND true
1 OR 0 = 1   // true OR false
0 AND 1 = 0  // false AND true

And so on.
Is it simply a matter of definition, i.e. is it like that by definition?
Or is there a logical/intuitive explanation for these results?

I have searched Google and looked at the Wikipedia page on logic gates for an explanation of the 'why', but I can only find the 'how'.
I would appreciate any answer or resources.


The solution

As stated by user120366, there are 16 possible 2-input logic gates; I've tabulated them for you here:

A|B||0|1|2|3|4|5|6|7|8|9|a|b|c|d|e|f
-+-++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
0|0||0|0|0|0|0|0|0|0|1|1|1|1|1|1|1|1
0|1||0|0|0|0|1|1|1|1|0|0|0|0|1|1|1|1
1|0||0|0|1|1|0|0|1|1|0|0|1|1|0|0|1|1
1|1||0|1|0|1|0|1|0|1|0|1|0|1|0|1|0|1

A and B are the inputs; columns 0 through f are the 16 possible output combinations.

These gates have been named:

0 = FALSE 
1 = AND
2 = A NIMPLY B (A AND NOT B)
3 = A
4 = B NIMPLY A (B AND NOT A)
5 = B
6 = XOR
7 = OR
8 = NOR
9 = XNOR
a = NOT B
b = B IMPLY A (A OR NOT B)
c = NOT A
d = A IMPLY B (B OR NOT A)
e = NAND
f = TRUE

6 of these (0,3,5,a,c,f) discard one or both inputs. The IMPLY and NIMPLY gates are rare, though they are certainly used in formal logic. AND, OR and XOR are easiest for humans to reason with, but for physical hardware, NOR and NAND are also used heavily, because they can be simpler to implement and make smaller circuits. The same probably holds for XNOR.

So, as stated earlier, it's not so much that we decided that the gates should behave this way, but that 16 possible gates can be defined, and we came up with descriptive names for them.
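The table above can be generated mechanically: gate number n's output column is just the binary digits of n. A quick sketch in Python (the function name is my own) that reproduces the table's numbering convention:

```python
# Enumerate all 16 possible 2-input logic gates.
# Gate number n (0..f) is defined so that its output column, read
# top-to-bottom over rows (A, B) = (0,0), (0,1), (1,0), (1,1),
# spells out n in binary, most significant bit first.

def gate(n, a, b):
    """Output of gate number n for inputs a, b (each 0 or 1)."""
    row = a * 2 + b              # which row of the truth table we are in
    return (n >> (3 - row)) & 1  # row 0 is the top bit, row 3 the bottom

# Spot-check a few familiar gates from the table:
# 1 = AND, 7 = OR, 6 = XOR, e = NAND
assert all(gate(0x1, a, b) == (a & b) for a in (0, 1) for b in (0, 1))
assert all(gate(0x7, a, b) == (a | b) for a in (0, 1) for b in (0, 1))
assert all(gate(0x6, a, b) == (a ^ b) for a in (0, 1) for b in (0, 1))
assert all(gate(0xE, a, b) == 1 - (a & b) for a in (0, 1) for b in (0, 1))
```

This makes the point of the answer concrete: nothing is special about AND or OR as functions; they are just two of the 16 columns, and the ones we found worth naming.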

Other tips

It's easiest to think of $1$ representing a true statement and $0$ representing a false statement. The logic gates then act as truth functions.

Say you put two statements, $p,q$, together to form a new statement, $r$.

In the case of and (logical conjunction), both $p$ and $q$ must be true for $r$ to be true. In the case of or (logical disjunction), $r$ is true if at least one of $p$ or $q$ is true.

I think the questioner has it backwards. If we have a logical function such that

    A | B | result  
   ---+---+-------  
    0 | 0 | 0  
    0 | 1 | 0  
    1 | 0 | 0  
    1 | 1 | 1  

then we decide to call that function 'and', because it is obvious that the result is 1 only when A and B are both 1.

Similarly for or, exclusive-or, etc.

There are 16 possible logical functions of 2 operands; it's easy to list them all, since they are just the 16 possible 'result' columns above. Some have obvious names. The field probably dates back to George Boole. As far as circuits are concerned, any of the 16 could in principle be built, but some are more useful than others.

The why of it actually comes from the development of logic, which is a philosophical study of what is true and what is not true. Logic was originally a study of human language with the assumption that if you can reason about how human language works you can maybe reason about how reason works.

Since the language I'm answering you in is English, let's use English as an example, even though the original study of logic was probably of the Greek language. First, let's differentiate the study of logic (of what is true and what is not true) from the other study of language: grammar. Grammar involves interpreting the meaning of words put together, involving elements of syntax, morphology, semantics etc.; that is not what we are studying. We assume that grammar is a given and that you and I understand it. What we are studying is whether a statement is true.

So let's take a look at AND in English.

Example 1:

My nickname is slebetman AND I am a stackexchange user

Ok, so is my nickname slebetman? Yes, so the first part of the sentence before AND is true.

Am I a stackexchange user? Yes because I'm writing this answer, so the second part of the sentence after AND is true.

In the English language, is the above sentence truthful? Yes. So we can see the first behavior of AND:

true sentence = something true AND something true

Or we can simplify:

true = true AND true

So let's look at another sentence:

My nickname is slebetman AND I don't understand English

In this case the first part is unchanged so it is still true but the second part is obviously false because this entire answer is in English. So we have true AND false. Is the entire sentence true? Would you say I'm not lying if I said the sentence above? Of course not, the entire sentence is false. So here we see the second behavior of AND:

false = true AND false

Since we can rearrange the sentence and still keep it false ("I don't understand English AND my nickname is slebetman"), we can say that AND is commutative. This gives us:

true  = true  AND true
false = true  AND false
false = false AND true

Finally, it shouldn't be hard to convince you that if I said two false things and joined them by AND, the entire sentence would still be a lie. This completes the definition of what AND does:

true  = true  AND true
false = true  AND false
false = false AND true
false = false AND false

The same reasoning can be applied to OR and NOT.

In a sense, this definition is just giving an operator a name. What the operator really does is make sure all conditions are true. We could have named AND something different, maybe ALL (short for all must be true) and we could have named OR something different like ANY (short for if any of the following is true) and the logic would still work:

XANY(a,b) = ANY( ALL(a, NOT(b)), ALL(NOT(a), b))

But we have decided to call the operators AND and OR so instead we write:

a XOR b = (a AND (NOT b)) OR ((NOT a) AND b)
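The identity above can be checked exhaustively over all four input combinations; a quick sketch using Python's boolean operators:

```python
# Verify that a XOR b == (a AND (NOT b)) OR ((NOT a) AND b)
# by trying every possible pair of inputs.
for a in (False, True):
    for b in (False, True):
        xor_direct = a != b                         # built-in XOR on booleans
        xor_built = (a and not b) or (not a and b)  # built from AND/OR/NOT
        assert xor_direct == xor_built
```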

It's a bit arbitrary what we call the operators, but if you need to know where they come from: they come from our language (originally probably Greek, not English), and over time, with the application of formality, they were adopted into mathematics and engineering.

The boolean operators (and, or) are functions that map two inputs to an output, just like any other binary operator (e.g. +). Their exact behavior (the why question) is an axiom of boolean logic, just as the behavior of addition is an axiom of mathematics, which is to say that we agree that these operators do what they do. It is therefore the bedrock of the system, and it is meaningless to try to explain why they work in this way, just as it is meaningless to explain why a rose is a rose: it is because we agree it is.

How the operators are implemented in a computer (the how question) is a matter of engineering, and there are many other ways of implementing these operators other than using logic gates built from transistors (there is a video floating around of an adder built from dominoes, for example).

To answer this, I think it is best to go back to those early 'Truth Tables' you probably saw in algebra. The first ones you see are 'and' & 'or'. We have two statements #1 & #2 (usually called p & q) which can either be true or false. Then, when we test them we have a result (usually called r).

For example,

#1 p = I like red

#2 q = I like dogs

NOW let's test the affirmative and negative statements using 'and'.

If #1 is True AND #2 is True then the result(r) is True.

  • I like red AND I like dogs. This is True. [1 and 1 then r = T]

If #1 is False AND #2 is True then r = F

  • I don't like red AND I like dogs. This is false. [0 and 1 then r = F]

If #1 is True AND #2 is False then r = F

  • I like red AND I don't like dogs. This is false. [1 and 0 then r = F]

If #1 is False AND #2 is False then r = F

  • I don't like red AND I don't like dogs. This is false. [0 and 0 then r = F]

NOW look at 'OR'

If #1 is True OR #2 is True then the result(r) is True.

  • I like red OR I like dogs this is True. [1 or 1 then r = T]

If #1 is False OR #2 is True then the result(r) is True. [0 or 1 then r = T]

If #1 is True OR #2 is False then the result(r) is True. [1 or 0 then r = T]

If #1 is False OR #2 is False then the result(r) is False. [0 or 0 then r = F]


NOW Your questions

1 and 1 = 1, I like red and dogs. True
1 or 0 = 1, I like red or I don't like dogs. True
0 and 1 = 0, I don't like red and I like dogs, False ... Voila

You can look at it as two stages.

The first stage is that we want some particular outcome, such as wanting a 1 if either of the (2) inputs is a 1. We call this an “OR” gate. Another outcome that people want is to get a 1 only if both inputs are a 1. We call this an AND gate.

The final stage is that we want a particular gate for every possible combination (of two inputs and one output). Now, the only problem is having meaningful names for them all.

This gives us confusing things such as IOR (Inclusive OR) — two 1’s gives a 1 — and XOR (exclusive OR) — two 1’s gives a 0. It gives us amusing things such as NAND — the opposite of AND. …But in the end we have every possible gate, and a more or less intuitive name for each and every one.

I interpret question 'why' as asking for the purpose.

So imagine OR, AND and NOT as the simplest building blocks from which you can build almost anything else. (Treat this as a simplification: the NAND gate is more universal and simpler from an electronic-engineering point of view, but it is less intuitive for learning purposes.)

As an example, let's think of building a device for adding numbers (an adder).

01010b + 00110b = 10000b
10d + 6d = 16d

Notice that we can build it from simpler blocks. Each block computes a single binary digit at one position plus a 'carry out'.

In order to compute the digit at each position, we need:

(A XOR B) XOR C_in

where C_in is the carry remaining from computing the previous digit. To build an XOR gate from simpler gates, we can write:

A XOR B = (A AND NOT B) OR (NOT A AND B)

Another possibility:

A XOR B = (A OR B) AND NOT (A AND B)

I will stop explaining the adder here. For more information, please visit: Full adder circuit or Adder and subtractor. My objective here was only to describe the idea of logic gates as building blocks.
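The chain of one-bit blocks described above can be sketched briefly in Python. The carry-out formula is the standard full-adder one, which the text doesn't spell out: C_out = (A AND B) OR (C_in AND (A XOR B)).

```python
def full_adder(a, b, c_in):
    """One-bit block: returns (sum_bit, carry_out) from single-bit inputs."""
    s = (a ^ b) ^ c_in                  # (A XOR B) XOR C_in, as in the text
    c_out = (a & b) | (c_in & (a ^ b))  # standard carry-out formula
    return s, c_out

def add(x, y, bits=5):
    """Ripple-carry adder: chain full_adder blocks from the lowest bit up."""
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

# The example from the text: 01010b + 00110b = 10000b (10 + 6 = 16)
assert add(0b01010, 0b00110) == 0b10000
```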

There is a class of electronic circuits called 'combinational circuits'. Such a circuit simply takes some N bits of input and computes M bits of output. We can prove that every possible combinational circuit can be made of only NOT, AND and OR gates. Let's see an example:

ABC | XY
000 | 00
001 | 11
010 | 00
011 | 00
100 | 10
101 | 00
110 | 01
111 | 10

To compute the X output, we can simply take each row where X is 1, construct the condition that must be fulfilled using NOTs and ANDs, and connect the conditions with ORs.

X = (NOT A AND NOT B AND C) OR (A AND NOT B AND NOT C) OR (A AND B AND C)

Similarly for Y:

Y = (NOT A AND NOT B AND C) OR (A AND B AND NOT C)

For more information: Disjunctive Normal Form
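The construction just described (one AND term per row where the output is 1, joined by ORs) can be written down directly and checked against the table; a quick sketch in Python:

```python
# Sum-of-products (DNF) construction for the example truth table above.
NOT = lambda x: 1 - x

def X(a, b, c):
    # One AND term per row where X = 1: rows 001, 100 and 111
    return (NOT(a) & NOT(b) & c) | (a & NOT(b) & NOT(c)) | (a & b & c)

def Y(a, b, c):
    # Rows where Y = 1: 001 and 110
    return (NOT(a) & NOT(b) & c) | (a & b & NOT(c))

# Check against every row of the table
table = {  # (A, B, C): (X, Y)
    (0,0,0): (0,0), (0,0,1): (1,1), (0,1,0): (0,0), (0,1,1): (0,0),
    (1,0,0): (1,0), (1,0,1): (0,0), (1,1,0): (0,1), (1,1,1): (1,0),
}
for (a, b, c), (x, y) in table.items():
    assert (X(a, b, c), Y(a, b, c)) == (x, y)
```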

Please note that this construction is only for proof purposes, to show that it is possible; it is obviously not optimal. Finding a circuit with the minimum number of logic gates is a computationally hard (NP-hard) optimization problem. Please also see: Logic Optimization

But combinational circuits are not the only things we can build using AND, OR and NOT gates. If we add some kind of feedback loop, we gain the ability to generate clock signals (multivibrators) or to remember things (latches).

AND
I like red AND ripe apples.
If the apple is red and the apple is ripe, then the result is true
1 and 1 makes 1 (true)
The apple is green, so I don't like it
0 and 1 makes 0 (false)
The apple is red but not ripe
1 and 0 makes 0 (false)

OR
I'll eat an apple if it's ripe OR if it's red
The apple is red, but not ripe so I'll eat it
1 or 0 makes 1 (true)
The apple is green but it's ripe - I eat it
0 or 1 makes 1 (true)
The apple is not ripe and it's purple - no thanks
0 or 0 makes 0 (false)

Is it a conditional definition for example?

Yeah, that's a good way to look at it.

One way of writing the definition of the AND operator is:

The AND operator is that operator that takes two bits as input and gives one bit as output, such that

  • 0 AND 0 = 0;
  • 0 AND 1 = 0;
  • 1 AND 0 = 0;
  • 1 AND 1 = 1.

Likewise, one way of writing the definition of the OR operator is:

The OR operator is that operator that takes two bits as input and gives one bit as output, such that

  • 0 OR 0 = 0;
  • 0 OR 1 = 1;
  • 1 OR 0 = 1;
  • 1 OR 1 = 1.

So, the reason that 1 AND 1 = 1 is that the definition says so. Likewise, the reason that 1 OR 0 = 1 is that the definition says so. And likewise for the other 6 equations; they're true because the definition says so.
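Because the definition is nothing more than the table, the operator can literally be implemented as a lookup table; a minimal sketch in Python:

```python
# AND and OR defined purely by their truth tables, nothing else.
AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
OR  = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}

assert AND[(1, 1)] == 1  # 1 AND 1 = 1, because the definition says so
assert OR[(1, 0)] == 1   # 1 OR 0 = 1, likewise
assert AND[(0, 1)] == 0  # 0 AND 1 = 0
```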

$\def\N#1{{\left[\text{#1}\right]}}$tl;dr There're only 3 possible gates that satisfy these properties:

  1. depend on both of 2 binary inputs to produce 1 binary output;

  2. don't discriminate between the inputs;

  3. aren't just another gate $\texttt{NOT}\text{'d} .$

These 3 gates are usually described as $\texttt{AND} ,$ $\texttt{OR} ,$ and $\texttt{XOR} ,$ along with their $\texttt{NOT}\text{'d}$ variants, $\texttt{NAND} ,$ $\texttt{NOR} ,$ and $\texttt{NXOR} .$


A gate is a function from inputs to outputs, and for each possible input it may independently take any possible output. This means $$ \N{functions} ~=~ {\N{possible outputs}} ^ {\N{possible inputs}} \,. $$

Here, there're two possible outputs, $\left\{0,\,1\right\} ,$ and there're four possible inputs, $\left\{\left\{0,\,0\right\},\, \left\{0,\,1\right\}, \,\left\{1,\,0\right\}, \, \left\{1,\,1\right\} \right\} ,$ so $$ \N{functions} ~=~ {2} ^ {4} ~=~ 16 \,, $$ meaning that there're 16 possible gates, as shown in @AI0867's answer.

If a gate ignored both of its inputs, there'd be $2^1=2$ possible gates. These are the $\texttt{true}$ and $\texttt{false}$ gates, which always return $1$ or $0 ,$ respectively, without regard for their inputs.

If a gate ignored one of its inputs, there'd be $2^2 = 4$ possible gates: the identity gate (return the input), the NOT gate (invert the input), plus the {true, false} gates again. So that's $2$ new gates per input that depend on only one of the inputs.

Since there're 2 inputs, there're:

  • 2 gates that don't care about either input;

  • 2 gates that don't care about the first input; and

  • 2 gates that don't care about the second input;

for a total of 6 gates that don't depend on both inputs, reducing the 16 possibilities down to 10 gates which can be meaningfully described as having 2 inputs.

Then, we can note that of these 10 gates, half of them just return the inverted input $(\text{e.g.},$ $\texttt{AND}$ vs. $\texttt{NAND}).$ So we can say that it's just 5 gates, plus their inversions. @AI0867's answer covered this, too.

The point I'd add is that this analysis assumes that inputs are ordered; for example, $\left\{0,1\right\}$ is distinct from $\left\{1,0\right\} .$ However, in practice, we don't tend to discuss gates which discriminate between the inputs.

So, instead of having $4$ distinguishable inputs, we have $3 .$ Then, that's $2^3 = 8$ possible gates. We again remove the $2$ that don't depend on either input; we don't remove the other $2+2=4$ that depended on just one input, because once we no longer distinguish between the two inputs, those cases already don't exist. This leaves $2^3-2=6$ possible gates that:

  • depend on both inputs; and

  • don't discriminate between the inputs.

Then, again, we can note the inversion of the outputs, so instead of saying $6$ different gates, we can say that there're $3$ gates with their inversions.

This leaves $\texttt{AND} ,$ $\texttt{OR} ,$ and $\texttt{XOR}$ as the $3$ possible gates, plus their inversions, $\texttt{NAND} ,$ $\texttt{NOR} ,$ and $\texttt{NXOR} .$

In short, when we talk about gates as though only 3 binary-gates exist (plus the unary $\texttt{NOT}$ gate), that's because there're only 3 that:

  1. depend on both inputs;

  2. don't discriminate between the inputs;

  3. aren't just another gate with the output inverted.
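The counting above can be confirmed by brute force; a quick sketch in Python (function names are mine) that enumerates all 16 gates and keeps those that depend on both inputs and don't discriminate between them:

```python
from itertools import product

INPUTS = list(product((0, 1), repeat=2))  # (0,0), (0,1), (1,0), (1,1)

# Each gate is a tuple of 4 outputs, one per input row: 2^4 = 16 gates.
gates = list(product((0, 1), repeat=4))
assert len(gates) == 16

def out(g, a, b):
    return g[INPUTS.index((a, b))]

def depends_on_both(g):
    # Flipping either input (with the other fixed) can change the output.
    dep_a = any(out(g, 0, b) != out(g, 1, b) for b in (0, 1))
    dep_b = any(out(g, a, 0) != out(g, a, 1) for a in (0, 1))
    return dep_a and dep_b

def symmetric(g):
    # Doesn't discriminate between the inputs: swapping them changes nothing.
    return out(g, 0, 1) == out(g, 1, 0)

survivors = [g for g in gates if depends_on_both(g) and symmetric(g)]
assert len(survivors) == 6  # AND, OR, XOR and their inversions

# Group each gate with its inverted twin: 6 gates = 3 pairs.
def inverted(g):
    return tuple(1 - x for x in g)

pairs = {frozenset((g, inverted(g))) for g in survivors}
assert len(pairs) == 3
```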

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange