In terms of CPUs, there are three possible sources of mistakes which seem to be in the scope of your question:
- Floating point rounding errors. This seems to be what you are getting at with your division example. This type of error is deterministic, not random at all: the same operation on the same inputs always produces the same result. However, if the programming language you are using leaves floating-point behaviour underspecified, you may get different results on different computers.
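A quick sketch of what "deterministic" means here: the rounding "error" in a computation like `0.1 + 0.2` is real, but it repeats bit-for-bit every time, on every machine that follows IEEE 754 double precision.

```python
# Floating-point rounding error is reproducible, not random.
a = 1.0 / 3.0
b = 1.0 / 3.0
print(a == b)            # True: the same division gives the same rounded result

# 0.1, 0.2 and 0.3 are not exactly representable in binary, so the sum
# carries a rounding error. But it is always the *same* error:
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004, every single run
```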
- Design mistakes in a CPU, such as the infamous Intel Pentium FDIV bug. It's hard to put a probability on this, but fortunately modern CPUs are extensively tested, and even formal methods are used to mathematically prove their correctness to some extent.
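For a sense of scale, here is the widely cited trigger case for the FDIV bug (the operand pair 4195835 / 3145727 reported by Thomas Nicely and others). A correct FPU gives roughly 1.333820449; flawed Pentiums reportedly returned about 1.333739, an error in the fourth decimal place. Any modern CPU computes this correctly:

```python
# The classic FDIV-bug trigger operands (widely cited in reports of the bug).
x, y = 4195835.0, 3145727.0

q = x / y
print(q)  # ~1.3338204491: correct on any modern FPU
# A flawed Pentium reportedly returned ~1.333739 for this division,
# wrong already in the fourth decimal place.
```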
- Hardware errors caused by radiation, such as cosmic rays. Unless you put your computer inside the reactor of a nuclear power station or something, the probability of errors caused by radiation should generally be negligible. Interestingly, this is actually relevant to certain programming techniques, such as content hashing in revision control systems. You can make the argument: "Well, it's more likely that we get an error due to a cosmic ray than a hash collision, so the possibility of a hash collision isn't worth worrying about."
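To put a rough number on the hash-collision side of that argument, a sketch using the standard birthday-bound approximation (the 160-bit size corresponds to SHA-1 as used by Git; the billion-object repository size is just an illustrative assumption):

```python
# Birthday-bound approximation: probability of at least one collision
# among n uniformly random b-bit hashes is roughly n^2 / 2^(b+1).
def collision_probability(n, bits):
    return n * n / 2.0 ** (bits + 1)

# Even a (hypothetical) repository with a billion objects, hashed with
# 160-bit SHA-1, has an astronomically small accidental-collision risk:
p = collision_probability(10**9, 160)
print(p)  # ~3.4e-31
```

Whatever figure you assume for radiation-induced bit flips, it dwarfs a number like 10^-31, which is the intuition behind the argument above.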
Other components of a computer, such as storage devices and display devices, are much, much more likely than the CPU to exhibit hardware errors leading to data corruption.