What is the actual difference between x86 floating-point and integer instruction types?
08-03-2021
Question
There are two fundamental types of microprocessor instructions: integer and floating-point.
Accordingly, they are executed on an Integer Processing Unit and on a Floating-Point Processing Unit. That makes sense, right?
But what tells the processor to send an instruction to IPU or FPU? How does it know which instruction is which kind?
Maybe an instruction has a bit / flag / LUT or something to differentiate one from the other?
Solution
Each CPU instruction has an opcode. The CPU looks at the opcode to determine which execution unit the instruction should be dispatched to. On x86, the opcodes for x87 floating-point instructions typically start with the bit pattern 1101 1..., i.e. the first hex digit is D and the most significant bit of the next digit is set. For example, FADD (floating-point add) starts with D8 or DC, depending on what arguments follow. By contrast, the opcode for the integer instruction ADD typically starts with x000 0... (where x can be 0 or 1), i.e. the first hex digit is 0 or 8 and the second digit has its most significant bit clear. Depending on the arguments it can be 01, 02, 03, 04, 05, 80, 81 or 83.
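The opcode patterns above can be sketched as a tiny classifier. This is a simplified illustration, not a real decoder: it assumes a single opcode byte, ignores instruction prefixes, and ignores the ModRM reg field that the 80/81/83 immediate group uses to distinguish ADD from other ALU operations.

```python
# Simplified sketch: classify an x86 opcode byte as x87 floating-point
# or integer ADD, following the bit patterns described above.

def classify_opcode(byte: int) -> str:
    # x87 escape opcodes occupy D8..DF, i.e. binary 1101 1xxx:
    # the top five bits are 11011.
    if byte & 0b1111_1000 == 0b1101_1000:
        return "x87 floating-point (FPU)"
    # Integer ADD first bytes: 01..05 register/memory forms, plus the
    # 80/81/83 immediate group (whose ModRM reg field must also be 0,
    # which this sketch does not check).
    if byte in {0x01, 0x02, 0x03, 0x04, 0x05, 0x80, 0x81, 0x83}:
        return "integer ADD (ALU)"
    return "other"

print(classify_opcode(0xD8))  # first byte of one FADD form
print(classify_opcode(0x01))  # ADD r/m, r form
```

A real decoder examines further bytes (ModRM, SIB, immediates) before the instruction can be routed, but the first-byte pattern is what makes the FPU/ALU distinction possible this early.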