The C# type Decimal is not like the decimal types used in COBOL, which store numbers one decimal digit per nibble and do arithmetic much the way you would do decimal math by hand. Instead, it is a floating-point type that assumes quantities will not get extremely large, so it spends fewer bits on the exponent (a base-10 scale factor) and devotes the rest of its 128 bits, versus 64 for a double, to the mantissa, giving greatly increased precision.
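As a small sketch of that layout, decimal.GetBits exposes the 96-bit integer mantissa, the sign bit, and the base-10 scale factor (the class and variable names here are just illustrative):

```csharp
using System;

class DecimalLayout
{
    static void Main()
    {
        // decimal.GetBits returns four ints: a 96-bit integer mantissa
        // (low, mid, high words) plus a word holding the sign bit and a
        // base-10 scale factor in the range 0..28.
        int[] parts = decimal.GetBits(1.5m);
        int scale = (parts[3] >> 16) & 0xFF;  // power of ten to divide the mantissa by
        bool negative = parts[3] < 0;         // sign lives in the top bit

        Console.WriteLine($"mantissa={(uint)parts[0]}, scale={scale}, negative={negative}");
        // Prints: mantissa=15, scale=1, negative=False  (1.5 is stored as 15 / 10^1)
    }
}
```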
But being a floating-point representation, it still cannot store every fractional value exactly. In binary, 0.1 is a repeating fraction, so a double cannot hold it exactly; Decimal, which scales by powers of ten, does store 0.1 exactly, but values such as 1/3 still have no exact representation, so the general problem remains.
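A quick illustration of the difference (a minimal sketch; the commented output reflects typical .NET behavior):

```csharp
using System;

class ExactnessDemo
{
    static void Main()
    {
        // 0.1 has no exact binary representation, so repeated addition
        // of doubles drifts away from the value you expect.
        double dSum = 0.0;
        for (int i = 0; i < 10; i++) dSum += 0.1;
        Console.WriteLine(dSum == 1.0);   // False: dSum is slightly below 1

        // decimal scales by powers of ten, so 0.1m is stored exactly.
        decimal mSum = 0m;
        for (int i = 0; i < 10; i++) mSum += 0.1m;
        Console.WriteLine(mSum == 1.0m);  // True

        // Values that are not finite decimal fractions still get rounded.
        decimal third = 1m / 3m;
        Console.WriteLine(third * 3m);        // 0.9999999999999999999999999999
        Console.WriteLine(third * 3m == 1m);  // False
    }
}
```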
Therefore comparisons still need to follow the usual floating-point practice of comparing, adding, subtracting, etc., only to within some tolerance. With roughly 28-29 significant decimal digits available, you might, for example, pick 16 as your working precision and ignore anything beyond it.
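A tolerance-based comparison might look like the following sketch (NearlyEqual and the 10^-16 tolerance are just illustrative choices, not a standard API):

```csharp
using System;

class ToleranceCompare
{
    // Illustrative tolerance: treat values that agree to 16 decimal
    // places as equal and ignore the digits beyond that.
    static readonly decimal Tolerance = 0.0000000000000001m; // 10^-16

    static bool NearlyEqual(decimal a, decimal b) =>
        Math.Abs(a - b) < Tolerance;

    static void Main()
    {
        decimal almostOne = (1m / 3m) * 3m;            // 0.9999999999999999999999999999
        Console.WriteLine(almostOne == 1m);            // False: exact comparison fails
        Console.WriteLine(NearlyEqual(almostOne, 1m)); // True: equal within tolerance
    }
}
```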
For a good reference, read What Every Computer Scientist Should Know About Floating-Point Arithmetic.