IMO, the obvious solution to this madness is to quantize stock trades; i.e. to only allow stock prices to change once every 10 seconds.
As long as transactions weren't announced until the end of the period, there wouldn't be any benefit from microsecond advantages. And then stock trades would begin to go back to being based on the value of the stock, not what a model says it will do in the next 2 seconds.
That would result in less efficient markets (which means higher spreads and more expensive transactions) and the market makers, who at present trade at high frequency, would capture more wealth than they do now.
Really? Do you have any evidence for that or are you just spouting off the standard line that faster trading necessarily means better trading? I don't see how any economic conditions could change enough in three or four milliseconds to justify the amount of high frequency trading that goes on today.
If there is evidence to be found either way, you may want to look into studies of the Taiwan Stock Exchange.
In the past the TWSE had a similar mechanism to the one you suggest: 30-second call auctions all day long (as compared to other exchanges who typically only had auctions at opening and closing time, and did continuous trading the rest of the day). Basically they'd give one best bid/ask update on a stock, gather new orders for a while (it was slightly random, sometimes as little as 22 seconds IIRC), match them at the volume-maximising price, and send out the new best bid/ask update. (Within this structure there was still time priority --- i.e. you only had to wake up once every 30 seconds, but when you did you had to send your new order ASAP. Also there was no atomic price amendment message ... you had to send an order delete and then an order create message, which caused all sorts of hilarity when they decided to take more than 30 seconds to send you a cancel acknowledgement.)
I read a study from 1998 basically supporting your position ... the authors concluded that "The call market method is more effective in reducing the volatility of high-volume stocks than low-volume stocks. This contradicts conventional wisdom which suggests that the call market method is superior for thinly traded stocks, while the continuous auction method is preferred for heavily traded stocks. The call market method does not impair liquidity and price discovery in the call market appears more efficient than in the continuous auction market." http://www2.hawaii.edu/~rheesg/Belgrade/Taiwan/TSEfinal.pdf
At this point I don't yet know of any detailed studies of the effects of the TWSE's later move away from that mechanism. Also, it's been years since I traded TW stocks, and I never traded TW warrants, so unfortunately off the top of my head I can't offer you an anecdotal account of the change's effects either.
First, 4ms is 3.4 orders of magnitude off 10 seconds.
Second, do you have any argument that isn't based on personal ignorance?
The stock market isn't even open at night; no one thinks it is necessary to trade 20 times a second, every second, they just want to. The stock market doesn't directly represent economic conditions, it represents investors' assessments of economic conditions, so it can change as fast as they can change their minds. If someone wants to trade only every ten seconds they can go ahead. If someone wants to open a stock exchange where everyone can only trade every ten seconds they can go ahead.
How does a machine taking advantage of small arbitrage opportunities that last fractions of a second represent "investors' assessments of economic conditions"?
It might help if you consider a worked example. Say you had a market for EURGBP which only updated once a second and every second the price moved an average of 10 pips.
Now if you're a market maker it's very risky for you to offer a spread below 10 pips, because if the market moves by 10 pips you're suddenly off-market and someone can arbitrage you by buying from you and selling on the market (taking a risk-free profit and pushing the loss onto you).
Now consider if the market was updating every 100ms and the price of EURGBP only moved by an average of 1 pip every 100ms; it means you as a market maker can offer a 5 pip spread, comfortable in the knowledge you can pull your price before an arbitrage situation occurs.
This lower spread you're offering is now available to your customers (who are likely to be other financial firms, big companies, and retail FX brokers). Because the retail FX brokers are getting a better spread they can offer a tighter spread to their retail customers.
This isn't just theoretical, FX spreads for retail customers (like folks going abroad or importing goods) are a lot tighter now than they were even a few years ago primarily because of faster trading.
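A rough numerical sketch of the spread argument above (Python, with made-up volatility and spread numbers rather than real EURGBP data): the market maker's expected loss to arbitrageurs depends on how far the price can move before a stale quote can be pulled.

    import random

    def expected_arb_loss(spread_pips, move_pips_per_update, n_updates=100_000, seed=0):
        """Toy model: a market maker quotes a fixed spread around the mid. Each
        update the mid moves by a random amount; if it moves past the half-spread
        before the quote can be pulled, an arbitrageur lifts the stale quote and
        the market maker eats the difference. Returns average loss per update."""
        rng = random.Random(seed)
        total_loss = 0.0
        for _ in range(n_updates):
            move = rng.gauss(0.0, move_pips_per_update)   # mid move during one update
            stale = abs(move) - spread_pips / 2           # how far the quote is off-market
            if stale > 0:
                total_loss += stale
        return total_loss / n_updates

    # Updating once a second with ~10 pip moves: even a 10 pip spread gets picked off.
    print(expected_arb_loss(spread_pips=10, move_pips_per_update=10))
    # Updating every 100ms with ~1 pip moves: a 5 pip spread is almost never picked off.
    print(expected_arb_loss(spread_pips=5, move_pips_per_update=1))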
It's unhealthy that market makers with obligations toward the market are being displaced by these HFT parasites that appear to serve the same function—except when they don't, as during the flash crash.
And the market wouldn't be materially less efficient. The reason to have a market is to allocate resources towards effective producers by pricing them properly. Well, "no human could react quickly enough to buy the stock in New York and sell it in London before the prices reversed" means we know the market would inevitably correct that price even without the presence of that HFT. We are rewarding overinvestment in reaching price equilibria faster, when it already takes less time than it will for a producer to make any use of the money.
Flash crashes aren't an inherent part of HFT, they're generally caused by buggy software. Computer caused crashes have happened long before HFT became popular, and so for that matter have human caused crashes ("fat-finger crashes").
IMO, the main problem is that exchanges can't afford talented developers; they simply don't have the money to compete with the banks and hedge funds. Most exchanges have stop signals (i.e. a message they can broadcast that can stop computer trading, etc.) but often they fail to use them. Exchanges should be implementing algos to better detect abnormal behaviour (whether by human or computer) and prevent flash crashes before they happen.
Prices for block trades aren't necessarily the same as "normal" prices. I've noticed that quite a few banks when doing VWAP calculations will do separate calculations including and excluding block trades rather than treating them as identical.
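For illustration, a minimal VWAP sketch (made-up prices and sizes) showing why including or excluding block trades gives two different numbers:

    def vwap(trades):
        """Volume-weighted average price: sum(price * size) / sum(size)."""
        notional = sum(price * size for price, size, _ in trades)
        volume = sum(size for _, size, _ in trades)
        return notional / volume

    # (price, size, is_block) -- illustrative numbers only
    trades = [
        (100.0, 500, False),
        (100.2, 300, False),
        (99.5, 20_000, True),   # one large block printed away from the screen price
    ]

    print(round(vwap(trades), 3))                            # including the block
    print(round(vwap([t for t in trades if not t[2]]), 3))   # excluding blocks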
Depends on how they have agreed to price the VWAP. Presumably someone trying to hit VWAP is trading on the normal markets and wouldn't interact with the block trades (which are happening elsewhere and only reported to the exchanges).
Say a negative announcement is made regarding a stock. When does it impact the price, now or ten seconds in the future?
If an exchange limits trading to every 10 seconds, then those trades will just happen privately between firms. So now the privately trading firms will know the "real" price of the stock, but the people on the exchange will be left in the dark with the "old" price.
Once you hit the next allowed trading period, everyone is going to be selling exactly simultaneously, which will cause a nosedive in the price. Institutional investors like pension funds are legally obliged to sell shares if they go past certain trigger points, so if the nosedive causes them to get triggered you suddenly have a lot more selling and a market in panic.
Different people have different requirements for speed, and this naturally smooths out trading; forcing everyone to trade at the same time is just asking for a disaster to happen.
Plus there's all sorts of other complications. For example, order matching. There's one buy order and 100 sell orders. How do you decide which sell order to fill? You can't do it by time of ordering anymore, but you need a transparent way to ensure fairness. So you randomize it. But then you're rewarding whoever manages to put the most orders in, so you need a way to fix that, and then fix the problems in that fix, and so on and so on until you end up with a horribly convoluted, opaque system.
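To make the "randomize it" problem concrete, here's a toy sketch (Python, hypothetical order sizes) showing how random allocation rewards whoever splits their interest into the most orders:

    import random

    def allocate_randomly(buy_qty, sell_orders, rng):
        """Fill a buy order by walking the resting sell orders in random order (toy model)."""
        fills, remaining = {}, buy_qty
        for order_id, qty in rng.sample(list(sell_orders.items()), len(sell_orders)):
            if remaining <= 0:
                break
            take = min(qty, remaining)
            fills[order_id] = take
            remaining -= take
        return fills

    rng = random.Random(42)
    # Trader A rests one 100-lot order; trader B splits 100 lots into 100 one-lot orders.
    sell_orders = {"A": 100, **{f"B{i}": 1 for i in range(100)}}

    a_total = b_total = 0
    for _ in range(10_000):
        fills = allocate_randomly(10, sell_orders, rng)
        a_total += fills.get("A", 0)
        b_total += sum(q for oid, q in fills.items() if oid.startswith("B"))

    print(a_total, b_total)   # B's swarm of tiny orders captures the vast majority of fills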
I think you're conflating computational finance with geographical arbitrage, and its pursuit by some traders. Computational finance is a general discipline that's more concerned with modeling and prediction of future asset prices, whereas the problem discussed in the article/paper has virtually nothing to do with prediction or modeling. It's much more about the pennies made when taking related products' prices to equilibrium in global markets, and a novel strategy for doing that.
I guess my proposed solution is aimed more at combating computational finance, but if there is time for prices everywhere to propagate between trades, the opportunities for geographical arbitrage diminish greatly as well.
I think it's a bug in the system that this arbitrage can even exist, though. It's not like, say, rice; the location of your shares doesn't matter.
Unlikely. Very high speed trading of this kind is likely to lead to large uncontrolled fluctuations in the financial system, which is not efficient and results in boom and bust type behavior.
In reality the model is a bit more complicated than that. The "market price" authored by an exchange is actually more of a mark-to-model calculation as a function of recent trades. Remember that a market exchange doesn't function like a store with customers buying product from a vendor at a fixed price. In oversimplified terms: In reality it's people selling at some price they want to sell at, and others buying at some price which they want to buy at. Orders are fulfilled by the best available match at the time.
The market participant who's able to collect, process and act on related securities' price discrepancies the fastest will always make money this way. Your example would simply raise the stakes: instead of continually slicing time into ever smaller intervals during which trading decisions can be made, there would be 8,640 global auction events a day, resulting in the exact same amount of arbitrage monies flowing into fewer hands.
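A minimal sketch of the "best available match" behaviour described two comments up, assuming plain price-time priority (the function and numbers are illustrative, not any particular exchange's rules):

    import heapq

    def match_buy(buy_price, buy_qty, asks):
        """Fill an incoming buy against resting asks: best (lowest) price first,
        then earliest arrival. `asks` is a heap of (price, arrival_seq, qty)."""
        fills = []
        while buy_qty > 0 and asks and asks[0][0] <= buy_price:
            price, seq, qty = heapq.heappop(asks)
            take = min(qty, buy_qty)
            fills.append((price, take))
            buy_qty -= take
            if qty > take:                                  # re-rest the unfilled remainder
                heapq.heappush(asks, (price, seq, qty - take))
        return fills

    asks = [(100.2, 1, 300), (100.1, 2, 200), (100.1, 3, 500)]
    heapq.heapify(asks)
    print(match_buy(100.2, 800, asks))   # 700 fills at 100.1, the last 100 at 100.2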
Quantization is one possible way. Another would be transaction taxes. Whatever the mechanism chosen, the intention should be to maintain the financial system within a steady regime which optimises overall efficiency.
The midpoint locations mentioned in this article are directly applicable to Seasteading. http://seasteading.org/
Many of these optimal trading locations are in the ocean. The spar buoy based structures designed by the Seasteading Institute are directly applicable, as they are about the only seagoing designs that are safe for permanent habitation. (Immune even to rogue waves.)
Eh, you don't really need habitation -- just a datacenter. It may be more cost-effective to design something specifically for this. At the minimum you're going to have to work out cooling and power anyway (hydroelectric and "just stick the heat sink in the ocean"?).
"Eh, you don't really need habitation -- just a datacenter. It may be more cost-effective to design something specifically for this."
At the minimum, there should be an expectation of the platform surviving the environment. Decoupling from wave energy is the point of a spar buoy. Even inanimate servers are perturbed by being smashed by walls of water. Did you actually read or search on anything mentioned, or did you just go with the "stead" in the name?
I actually looked at the engineering documents -- they weren't that detailed. It's not exactly what I'd expect from a build-ready project. That said, I'm not a civil engineer.
I'll also note, if you understood what those folks are about, that many of them would be happy to make a home out there if someone would pay them a normal-ish salary to maintain such a datacenter. I suspect this could end up economically advantageous both for the seasteaders and whatever company wanted to establish the datacenter.
(As opposed to the 2 weeks on, 2 weeks off, hazard-pay situation people are in for offshore drilling operations.)
It's funny to think that some of the physicists who've left academia to join these companies might end up being paid a fortune to work on the exact same problems that they used to be paid a pittance to work on!
There are a lot of quant funds that were started by professors. They hire a lot of PhDs in Math and Physics. And yes, a PhD in Math can make a lot of money if you end up working at a trading firm.
It's also becoming a problem in semiconductor design: at 1GHz, light will only move 30cm between cycles. That's not a lot if your memory isn't located close to your ALU.
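The arithmetic behind that figure (a vacuum number; on-chip signals propagate quite a bit slower, so the practical budget is smaller):

    c = 299_792_458        # speed of light in vacuum, m/s
    clock_hz = 1e9         # 1 GHz clock

    print(c / clock_hz)    # ~0.30 m of light travel per clock cycle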
Actually, out-of-order instruction pipelines are limited by this problem because the more instructions you allow in-flight, the more stages in the pipeline. The more stages in the pipeline, the more physical distance information must travel on the chip. So lengthening the instruction pipeline runs into fundamental problems.
Also, the number of transistors in the path to switching the right bits out of memory (even cache) is getting comparable to the length of the pipeline sections themselves.
It's sad to see fast fiber optics (and the expertise that goes into building and running it) wasted on gaming the system in this way when a change in the rules would remove the loophole giving people incentive to do it.
It's not really a loophole. It's a fundamental property of the universe.
Consider currencies, for example. Say someone in Tokyo was selling USDJPY at 84 and this was the best price anywhere in the world; then naturally anyone buying would want to buy from them. But the guy in London who wants to buy won't see the price of 84 until 90ms later, so he ends up buying from someone in New York at a worse price.
So the guy in London got a worse price and the guy in Tokyo didn't get a sale. Despite the fact that in an ideal world they would have been matched together, the fundamental laws of the universe conspire against them.
Essentially what ends up happening is that for any one currency you end up with a bunch of local markets (NY, Tokyo, Singapore, London), all of which have slightly different prices from each other, which isn't a great situation to be in. Reducing latency won't make this problem go away, but it helps flatten out the markets and ensures that people get the best price globally (as opposed to just locally) wherever possible.
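A toy timeline of the USDJPY example above (prices and latency are made up, only the mechanism matters): until the Tokyo update reaches London, a London buyer can only see, and trade against, the worse local price.

    latency_ms = 90            # assumed Tokyo -> London propagation delay
    tokyo_offer = 84.00        # best offer posted in Tokyo at t = 0
    new_york_offer = 84.03     # best offer already visible in London

    def best_visible_offer_in_london(t_ms):
        """Best offer a London buyer can actually see t_ms after the Tokyo quote appears."""
        return tokyo_offer if t_ms >= latency_ms else new_york_offer

    for t in (0, 50, 90, 120):
        print(t, best_visible_offer_in_london(t))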
You're assuming the conclusion, that continuous buying and selling is the only possible way to run a market. But another way to run it would be to have an auction every five minutes. That way there are more buyers and sellers to match up, so it's more likely that everyone gets a fair price. And don't talk to me about liquidity - nobody really needs to trade that quickly.
In a way we have what you're suggesting. Look at what happens overnight when exchanges are closed. You see a big jump from the closing price to the opening price the next day. Even though you can't trade over that period the price still changes - you just restrict trading and price discovery to people who can do OTC or pre-market trading.
It's not as if people haven't tried other models. EBS, who run one of the major currency exchanges, restrict price updates to once every 100ms. They're losing customers to other exchanges who allow people to trade faster.
People with deep backgrounds in algorithmic game theory have been studying exchanges and auction design for a long time now. If someone could figure out a better design for exchanges, they'd be building it.
I guess the article is optimistic in that sense. If you can engineer fiber that's 74% of c, or better yet, bore one of your mirror holes directly through the center of the Earth, between NYC and Shanghai, or something even crazier, you can make a lot of money.
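Some rough numbers for that (haversine great-circle vs. straight chord; the coordinates are approximate and the 0.66c fiber speed is the usual ballpark, so treat this as back-of-the-envelope only):

    from math import radians, sin, cos, asin, sqrt

    R = 6371.0                              # mean Earth radius, km
    C = 299_792.458 / 1000                  # speed of light in vacuum, km per millisecond

    def great_circle_km(lat1, lon1, lat2, lon2):
        """Surface distance via the haversine formula."""
        p1, p2 = radians(lat1), radians(lat2)
        dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
        return 2 * R * asin(sqrt(a))

    def chord_km(surface_km):
        """Straight-line distance through the Earth for a given surface distance."""
        return 2 * R * sin(surface_km / (2 * R))

    # Approximate coordinates: New York ~(40.7, -74.0), Shanghai ~(31.2, 121.5)
    surface = great_circle_km(40.7, -74.0, 31.2, 121.5)
    chord = chord_km(surface)

    print(f"surface path: {surface:.0f} km -> {surface / (0.66 * C):.1f} ms one-way in fiber at 0.66c")
    print(f"chord path:   {chord:.0f} km -> {chord / C:.1f} ms one-way at c through the rock")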
If you have the capability to bore a hole through the center of the earth, you would be so much more advanced than the current state of the art, that our puny earthling money wouldn't buy you anything you couldn't get in easier ways.
I was just wondering if hair thin, hollow, reflective tubes could be made cheap enough for oceanic data cables. It would obviously be more expensive than regular fiber.
Light slows down in waveguides too. A heuristic explanation is that as it travels down the guide, it's also bouncing off the sides, and so effectively going at an angle.
The maximum signal speed loss due to inclination would be sqrt(1 + (1/refraction_index^2)) - 1, or around 20% for the fiber described above. Although, in practice, the fiber would not be even close to its maximum angle.
But potentially, a lower refraction index fiber with a reflective coating would allow signals to travel faster.
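Plugging a typical index into the formula above (a quick sanity check of the "around 20%" figure, assuming n ≈ 1.5 for silica):

    from math import sqrt

    def max_path_penalty(n):
        """Fractional extra path length at the steepest guided angle,
        per the formula above: sqrt(1 + 1/n**2) - 1."""
        return sqrt(1 + 1 / n**2) - 1

    penalty = max_path_penalty(1.5)
    print(penalty)   # ~0.20, matching the "around 20%" above

    # Even with the full 20% bounce penalty, a signal moving at c in a hollow guide
    # (~c/1.2 ≈ 0.83c effective) would still beat light in solid fiber (~c/1.5 ≈ 0.67c).
    print(1 / (1 + penalty), 1 / 1.5)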
Most exchanges that trade "arbitragable" instruments are in the same timezone and geographic location (NY-Chicago is one exception). There is no point having the optimal colo between TSE and NYSE because they have no overlapping hours nor do they have anything that trades on both and is fungible.
This research is patently academic. This is what happens when two pointy heads in ivory tower with zero empirical trading experience dream up something. Then all geeks go ga-ga talking about fiber optics and sea steading and other bs.
"There is no point having the optimal colo between TSE and NYSE because they have no overlapping hours nor do they have anything that trades on both and is fungible."
There is tremendous overlap amongst various stock exchanges, OTCs, dark pools (12% of US trading), and of course there's Forex, which is 24/7.
Maybe we could use quantum entanglement devices to allow for faster trades? Since the speed of light in the medium is 0.66c you could probably get as close to c as possible.
There's no known way to transfer information faster than the speed of light (and if there were, there are some relativistic tricks you can play to actually send information backwards in time).
I interpret his statement as meaning "we could use quantum entanglement to transmit information at c (or c minus epsilon) instead of .66c".
But (as far as I understand) quantum entanglement doesn't let you transmit information at any particular speed, so that still doesn't work. (Roughly speaking, QE can be thought of as "the universe transmits information at >c", but in order to extract this information, we have to send messages conventionally. So it has a theoretical upper bound of c, like everything else, but has the same practical limits.)
In fact, just writing the light cones for an event with two different frames of reference will yield the same result. One of those fun little first-year special relativity results.
Quantitative trading companies are spending more and more money to trade faster, because the barrier to entry continues to plummet. For example, they used to install software programs on computers; now they actually hard-wire programs directly onto motherboards to shorten execution time. As the article mentioned, trading firms are investing heavily in expensive fiber optic lines instead of the traditional internet. For these reasons and many more, the fraction of a penny that each trade earns them continues to grow smaller, as the arbitrage they trade on shrinks because every firm is investing in the same expensive strategies.