SMT/hyperthreading lets multiple hardware threads (usually two) share the same physical core. While one thread is stalled -- waiting on memory, for example -- the core can issue instructions from the other, so execution resources that would otherwise sit idle keep doing useful work.
Stalls are common, and cache misses are their main cause. Even if a thread is not traversing memory shared with another thread, there's no guarantee that the memory it touches is already in the cache (so the access stalls), or that it doesn't map to the same cache line (or set) that another thread is filling, evicting that thread's data.
Thus, two threads will almost always benefit from SMT/hyperthreading, unless the data they traverse is already present in the cache. That's actually an unusual situation: to achieve it, an algorithm typically needs to prefetch its data, keep its working set within what the cache can hold, and avoid evicting the lines other threads on the core are using -- which requires knowing what those threads are doing. That's rarely possible, because the OS abstracts thread placement away.
Most algorithms are not tuned to that extent, particularly since it's usually only console-exclusive games, or other hardware-exclusive applications, that can guarantee a minimum cache size and, more importantly, have intimate knowledge of the other threads running concurrently on the same core. This is also one of the major reasons larger caches benefit modern CPU performance.