Question

All the time you hear about high frequency trading (HFT) and how damn fast the algorithms are. But I'm wondering - what is fast these days?

Update

I'm not thinking about the latency caused by the physical distance between an exchange and the server running a trading application, but the latency introduced by the program itself.

To be more specific: what is the time from an event arriving on the wire at an application to that application putting an order/price on the wire, i.e. the tick-to-trade time?

Are we talking sub-millisecond? Or sub-microsecond?

How do people achieve these latencies? Coding in assembly? FPGAs? Good-old C++ code?

Update

An interesting article with a lot of detail on today's HFT technology was recently published by the ACM, and it's an excellent read:

Barbarians at the Gateways - High-frequency Trading and Exchange Technology


Solution 2

You've received very good answers. There's one problem, though - most algotrading is secret. You simply don't know how fast it is. This goes both ways - some may not tell you how fast they work, because they don't want to. Others may, let's say "exaggerate", for many reasons (attracting investors or clients, for one).

Rumors about picoseconds, for example, are rather outrageous. 10 nanoseconds and 0.1 nanoseconds are exactly the same thing, because the time it takes for the order to reach the trading server is so much more than that.

And, most importantly, although not what you've asked, if you go about trying to trade algorithmically, don't try to be faster, try to be smarter. I've seen very good algorithms that can handle whole seconds of latency and make a lot of money.

OTHER TIPS

I'm the CTO of a small company that makes and sells FPGA-based HFT systems. Building our systems on top of the Solarflare Application Onload Engine (AOE), we have been consistently delivering latency from an "interesting" market event on the wire (10 Gb/s UDP market data feed from ICE or CME) to the first byte of the resultant order message hitting the wire in the 750 to 800 nanosecond range (yes, sub-microsecond). We anticipate that our next-version systems will be in the 704 to 710 nanosecond range. Some people have claimed slightly less, but that's in a lab environment and not actually sitting at a COLO in Chicago and clearing the orders.

The comments about physics and "speed of light" are valid but not relevant. Everybody that is serious about HFT has their servers at a COLO in the room next to the exchange's server.

To get into this sub-microsecond domain you cannot do very much on the host CPU except feed strategy-implementation commands to the FPGA; even with technologies like kernel bypass you have 1.5 microseconds of unavoidable overhead. So in this domain, everything is playing with FPGAs.

One of the other answers is very honest in saying that in this highly secretive market very few people talk about the tools they use or their performance. Every one of our clients requires that we not even tell anybody that they use our tools nor disclose anything about how they use them. This not only makes marketing hard, but it really prevents the good flow of technical knowledge between peers.

Because of this need for exotic systems in the "wicked fast" part of the market, you'll find that the quants (the folks who come up with the algorithms that we make go fast) are dividing their algos into event-to-response-time layers. At the very top of the technology heap are the sub-microsecond systems (like ours). The next layer is the custom C++ systems that make heavy use of kernel bypass; they're in the 3-5 microsecond range. The next layer is the folks who cannot afford to be on a 10 Gb/s wire only one router hop from the "exchange"; they may still be at COLOs, but because of a nasty game we call "port roulette" they're in the dozens-to-hundreds-of-microseconds domain. Once you get into milliseconds, it's almost not HFT any more.

Cheers

A good article describing the state of HFT (in 2011) and giving some examples of hardware solutions that make nanoseconds achievable: Wall Street's Need for Trading Speed: The Nanosecond Age

With the race for the lowest “latency” continuing, some market participants are even talking about picoseconds–trillionths of a second.

EDIT: As Nicholas kindly mentioned:

The link mentions a company, Fixnetix, which can "prepare a trade" in 740ns (i.e. the time from an input event occurs to an order being sent).

It's "sub-40 microseconds" if you want to keep up with Nasdaq. This figure is published here: http://www.nasdaqomx.com/technology/

For what it's worth, TIBCO's FTL messaging product is sub-500 ns within a machine (shared memory) and a few microseconds using RDMA (Remote Direct Memory Access) inside a data center. After that, physics becomes the main part of the equation.

So that is the speed at which data can get from the feed to the app that makes decisions.

At least one system has claimed ~30 ns inter-thread messaging, which is probably a tweaked-up benchmark, so anyone talking about lower numbers is using some kind of magic CPU.

Once you are in the app, it is just a question of how fast the program can make decisions.

These days, single-digit-microsecond tick-to-trade is the bar for competitive HFT firms. You should be able to do high single digits using only software, and <5 microseconds with additional hardware.

Every single answer here is at least four years old, so I thought I would share some perspective and experience from someone in the HFT/algorithmic-trading field in 2018.

(This is not to say that any of these answers are poor, as they most definitely are not; however, I believe it is necessary to provide more up-to-date insight on the topic.)

To directly answer the first question: We are talking approximately 300 billionths of a second (300 nanoseconds). Recall this is latency introduced by the program itself.

There is always going to be some variance firm by firm regarding the latency of systems, however the numbers I am going to provide are the common values for internal HFT engine latency.

  1. On average, one third of the total tick-to-trade time (i.e. the 300 nanoseconds above) is latency introduced by the program itself, as you stated in your question.
  2. The remaining two thirds is latency that exists due to co-location and other variables relating to the exchange, the matching engines, fibre optics, etc.

The question is about how fast high-frequency trading systems are and what the infrastructure looks like in terms of the hardware involved. The technology has advanced since 2014; however, contrary to a great deal of what the literature in the field discusses, FPGAs are not necessarily the go-to choice for the big players in the HFT space. Large companies such as Intel and Nvidia cater to these firms with specialized hardware to ensure they get everything they need from the trading system. With Intel, the system is obviously going to be built more around CPUs and the kinds of computations best performed by CPUs; with Nvidia, the system will be more GPU-oriented.

For systems on field-programmable gate arrays (FPGAs), hardware description languages such as Verilog and VHDL are commonly used. However, not everything is written at that level, even for FPGA systems; most of the software side is highly optimized C++ with embedded inline assembly, and this is where the speed often comes from. Note that this is the case for firms using all sorts of hardware (FPGAs, specialized Intel systems, etc.).

It is unfortunate, however, that the top answer here states something completely false:

10 nanoseconds and 0.1 nanoseconds are exactly the same thing, because the time it takes for the order to reach the trading server is so much more than that.

This is completely false, as the co-location aspect of high-frequency trading has become completely standardized. Everyone is just as close to the matching engine as you are; thus the internal latency of the system is of great importance.

According to the Wikipedia page on High-frequency trading the delay is microseconds:

High-frequency trading has taken place at least since 1999, after the U.S. Securities and Exchange Commission (SEC) authorized electronic exchanges in 1998. At the turn of the 21st century, HFT trades had an execution time of several seconds, whereas by 2010 this had decreased to milli- and even microseconds.

It will never be under a few microseconds because of the electromagnetic-wave/light speed limit, and only a lucky few, who must be under a kilometer away, can even dream of getting close to that.

Also, there is no coding your way around this; to achieve that speed you must go physical. (As for the article with the 300 ns switch: that is only the added latency of that switch, equal to 90 m of travel through optical fibre, and a bit less in copper.)

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow