For primitive types, both operations take exactly the same amount of time, because the result of either comparison is effectively determined by the same underlying operation, regardless of which one you ask for.
In short, whenever you make a basic comparison (`<`, `<=`, `>`, `>=`, `==`, or `!=`), one side of the operator is subtracted from the other. The result of the subtraction is then used to set a number of flags, the most important of which are `Z` (zero), `N` (negative), and `O` (overflow). Based on the names, you should be able to figure out what each flag represents; for example, if the result of the subtraction is zero, then the `Z` flag is set. Thus, whether you ask for `<=` or `!=`, all the processor is doing is checking the flags, which have already been set appropriately by the initial subtraction.
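To make that concrete, here is a small C++ sketch (the names `Flags` and `compare` are mine, purely for illustration, not any real API) that mimics the hardware model: perform one subtraction, record the flags, and then answer every relational question from those flags alone. It widens to 64 bits so the subtraction itself can never overflow, which lets the sketch ignore the `O` flag:

```cpp
#include <cstdint>
#include <iostream>

// Simplified model of a hardware compare: one subtraction
// sets the flags, and every comparison is then a flag lookup.
struct Flags {
    bool z; // zero:     result of the subtraction was 0
    bool n; // negative: result of the subtraction was < 0
};

Flags compare(std::int32_t a, std::int32_t b) {
    // 64-bit difference of 32-bit inputs cannot overflow,
    // so the O flag can be ignored in this simplified model.
    std::int64_t diff = static_cast<std::int64_t>(a) - b;
    return Flags{diff == 0, diff < 0};
}

int main() {
    Flags f = compare(3, 7);              // one subtraction...
    std::cout << (f.n)          << '\n';  // ...answers a <  b
    std::cout << (f.z || f.n)   << '\n';  // a <= b
    std::cout << (f.z)          << '\n';  // a == b
    std::cout << (!f.z)         << '\n';  // a != b
    std::cout << (!f.z && !f.n) << '\n';  // a >  b
    std::cout << (!f.n)         << '\n';  // a >= b
}
```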
Theoretically, `<=` should take slightly longer, since two flag conditions must be checked instead of one (`==` only cares about `Z`, while a signed `<=` also has to consult `N` and `O`). But this happens at such a low level that any difference is negligible, even on a microsecond scale.
If you're really interested, read up on processor status registers.
For non-primitive types, i.e. classes, it depends on the implementation of the relational operators.
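For example, in C++ a class can overload its relational operators arbitrarily, so a comparison costs whatever the author made it cost. The class below is a hypothetical illustration: `<` always does a full lexicographic string comparison, while `==` can bail out early on a length mismatch, so the two operators need not take the same time at all.

```cpp
#include <string>

// Hypothetical class: nothing forces its relational
// operators to have the same cost.
struct Person {
    std::string name;

    bool operator<(const Person& other) const {
        return name < other.name;  // lexicographic, O(length)
    }
    bool operator==(const Person& other) const {
        // Can short-circuit on length before comparing contents.
        return name.size() == other.name.size() && name == other.name;
    }
};
```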