
Johnson School Research Paper Series #20-2013

High-Frequency Trading

Tarun Chordia (Emory University), Amit Goyal (University of Lausanne), Bruce N. Lehmann (University of California at San Diego), Gideon Saar (Cornell University)

June 2013

This paper can be downloaded without charge from the Social Science Research Network Electronic Paper Collection.

Tarun Chordia, Amit Goyal, Bruce N. Lehmann, and Gideon Saar

High-frequency traders in financial markets have been making media headlines. As a relatively new phenomenon, much of the discussion is not backed by solid academic research. In this special issue of the Journal of Financial Markets on High-Frequency Trading, we present several research papers that aim to inform the discussion on this important issue.

What are high-frequency traders (henceforth, HFTs)? The Securities and Exchange Commission's Concept Release on Equity Market Structure (2010) defines them as professional traders acting in a proprietary capacity that engage in strategies that generate a large number of trades on a daily basis. The SEC document lists several characteristics commonly attributed to HFTs, including: (1) the use of extraordinarily high-speed and sophisticated computer programs for generating, routing, and executing orders; (2) the use of co-location services and individual data feeds offered by exchanges and others to minimize network and other types of latencies; (3) very short time-frames for establishing and liquidating positions; (4) the submission of numerous orders that are cancelled shortly after submission; and (5) ending the trading day in as close to a flat position as possible (that is, not carrying significant, unhedged positions overnight).

The Commodity Futures Trading Commission (CFTC) Technology Advisory Committee released a draft definition of high-frequency trading in October 2012 that deliberately leaves out holding period and portfolio turnover frequency as attributes of HFTs. Instead, it classifies high-frequency trading as a form of automated trading that employs algorithms for decision making, order initiation, generation, routing, or execution, for each individual transaction without human direction, and that satisfies several criteria such as the use of low-latency technology, high-speed connections to markets for order entry, and high message rates (orders and cancellations).
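As a rough illustration of how such attributes might be operationalized, the sketch below flags account-days that match the stylized HFT profile. All names and thresholds are hypothetical choices for illustration, not part of the SEC or CFTC definitions:

```python
# Illustrative screen for the stylized HFT profile: high message rates,
# mostly cancelled orders, short holding periods, and a near-flat overnight
# position. Every cutoff below is a hypothetical value, not a regulatory one.
from dataclasses import dataclass

@dataclass
class AccountDay:
    orders: int                  # limit orders submitted during the day
    cancels: int                 # orders cancelled during the day
    trades: int                  # executions
    median_holding_sec: float    # median time a position is held
    end_of_day_position: int     # net shares carried overnight

def looks_like_hft(day: AccountDay) -> bool:
    """Return True if the account-day matches all four stylized attributes."""
    if day.trades == 0:
        return False
    high_message_rate = day.orders / day.trades > 10      # hypothetical cutoff
    mostly_cancelled = day.cancels / day.orders > 0.9
    short_horizon = day.median_holding_sec < 60
    flat_overnight = abs(day.end_of_day_position) < 100
    return high_message_rate and mostly_cancelled and short_horizon and flat_overnight

print(looks_like_hft(AccountDay(50_000, 49_000, 1_000, 3.0, 0)))   # True
```

Real classification is far harder, of course; the empirical papers in this issue rely on account-level identification by the exchanges rather than on behavioral screens.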


Tarun Chordia is from the Goizueta Business School, Emory University (Tarun_Chordia@bus.emory.edu). Amit Goyal is from the Swiss Finance Institute, University of Lausanne (Amit.Goyal@unil.ch). Bruce N. Lehmann is from the Graduate School of International Relations and Pacific Studies, University of California at San Diego (blehmann@ucsd.edu). Gideon Saar is from the Johnson Graduate School of Management, Cornell University, 455 Sage Hall, Ithaca, NY 14853, USA (gs25@cornell.edu, Tel.: 607 255 7484).

Some market observers emphasize that high-frequency trading is simply faster trading. According to this view, many trading strategies employed by HFTs are not new, and therefore nothing has changed in the economics of the market. Furthermore, the speed of trading has been gradually increasing for decades, and hence high-frequency trading may not represent a fundamental shift in the way markets operate. Other market observers view high-frequency trading as a game-changer. Unprecedented fragmentation and a tremendous disparity in speed between regular investors and those who expend vast resources on technology have combined to shape a new (and important) set of players with its own unique strengths and weaknesses.

Low-latency trading

Our first paper, Hasbrouck and Saar (in this issue), emphasizes the low latency that high-frequency trading algorithms require in order to profit from the trading environment itself (rather than from investing in financial securities). Hasbrouck and Saar define low-latency activity as strategies that respond to market events in the millisecond environment, and proceed to investigate it. Their paper provides a glimpse into the millisecond environment in which algorithms of all kinds interact. They find that some algorithms are so fast that the time it takes them to detect a market event (like the arrival of an order to the limit order book), analyze it, and respond by sending an order appears to be 2-3 milliseconds (where a millisecond is one thousandth of a second). At such speeds, human traders cannot accurately discern the current state of the book, and the market dynamics may be driven by the interplay between algorithms. The paper provides a couple of examples of interactions in the millisecond environment where algorithms appear to play with each other, submitting and cancelling orders dynamically, in an attempt to gain an edge.
This new environment is unfamiliar to most investors, and to a large extent is literally unobservable, both because market interaction occurs at speeds too fast for human comprehension and because analyzing historical evidence of such interactions requires access to large datasets and computing power that only a few possess.

Hasbrouck and Saar pose the question of whether low-latency activity in the millisecond environment, mostly driven by proprietary high-frequency trading firms, helps or hurts the market. They go about answering the question by developing a proxy for the activity of HFTs that is based on their definition of low-latency activity. Using widely-available NASDAQ data, Hasbrouck and Saar measure the intensity of dynamic low-latency order submission strategies inferred from the data and relate their measure to different aspects of market quality. Hasbrouck and Saar's key finding is that higher low-latency activity lowers spreads, increases depth, and reduces short-term volatility. They conduct the analysis both on a normal month and on a month with heightened economic uncertainty in which stock prices went down more than 10 percent. In both cases, more low-latency activity was found to be beneficial.

Hasbrouck and Saar's findings are very relevant to the debate on the merits of high-frequency trading, but it is also important to note what aspects of the debate the paper does not attempt to address. The first issue that remains unanswered by the analysis in Hasbrouck and Saar is the nature of the mechanism by which the interaction of high-frequency trading algorithms improves market quality. In other words, while the paper opens our eyes to the interaction of algorithms in the millisecond environment, the manner in which these interactions aggregate to affect the market structure on a time scale that humans can comprehend is not yet well understood. Algorithms utilized by HFTs are heterogeneous in purpose and design, and some strategies may affect the market differently than others. Hasbrouck and Saar demonstrate that the sum of all these effects is positive with respect to certain measures of market quality, but understanding the specific strategies and how they influence the market could give market operators and regulators the ability to limit harmful strategies and encourage more beneficial ones, further improving market quality.

Second, Hasbrouck and Saar look at average market behavior over a month.
Nothing in their results rules out that there could be specific instances in which HFTs are detrimental to market quality. Investors and market regulators may care both about how the market operates most of the time and about how it behaves in extreme circumstances. Humans have a remarkable ability to deal with stressful times and to adapt to unforeseen market conditions. One of the strengths of human traders is their ability to slow down trading and look for additional information. The current generation of algorithms seems to have difficulty doing so: they either operate and improve market quality (as in Hasbrouck and Saar), or stop operating and thus cause prices to disappear, as demonstrated during the flash crash of May 2010 (CFTC/SEC (2010)). Kirilenko, Kyle, Samadi, and Tuzun (2011) argue that while HFTs did not trigger the flash crash, their response to the selling pressure may have exacerbated market volatility.

Since HFTs effectively replaced human market makers, much liquidity provision in today's market depends on their activity. If, at times of stress, HFTs stop providing liquidity and become demanders of liquidity, the provision of liquidity significantly deteriorates, which can cause substantial volatility. While large price drops occurred in manual markets as well (e.g., in May 1962 or October 1987), they were very rare and were not associated with the clearly unreasonable trade prices (like 1 cent or $100,000) that were observed during the flash crash (Madhavan (2012)). There is growing unease on the part of some market observers that such violent price moves are occurring more often in financial instruments in which HFTs are active, and that the era of isolated episodes every several decades is behind us. Perhaps the current algorithms are not yet sophisticated enough or their error tracking and correcting capabilities are inadequate, and the ability to slow down and look for additional information will evolve as HFTs mature and the regulation of HFT advances. The question of whether HFTs help or hurt market stability remains open. Market stability also relates to the issue of liquidity risk in financial markets. In other words, if liquidity risk is priced, then the HFTs' impact on the variability of liquidity, not just the level of liquidity as in Hasbrouck and Saar, should be considered.

Third, we would stress that Hasbrouck and Saar do not claim that a market dominated by HFTs is inherently better than a market dominated by human traders.
While Hasbrouck and Saar find that more intraday low-latency activity brings about larger depth, smaller spreads, and lower volatility, these findings are limited to the current market environment in which HFTs dominate the scene. In other words, the paper shows that given a market dominated by HFTs, more of their activity is preferable to less. The question of whether financial markets before the advent of HFTs were better or worse than today's HFT-dominated markets remains unanswered.
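To give a flavor of the kind of linked-message measure Hasbrouck and Saar construct, the sketch below treats a cancellation followed quickly by a resubmission as part of one ongoing "strategic run" and counts only long runs as low-latency activity. The linking logic, data layout, and thresholds are illustrative simplifications, not the paper's exact algorithm:

```python
# Simplified sketch of a strategic-runs measure: a 'cancel' followed within a
# short window by a 'submit' extends the current run; only runs of at least
# min_len messages count. The 100 ms window and 10-message minimum are
# illustrative parameter choices.

def count_strategic_runs(messages, link_window_ms=100, min_len=10):
    """messages: time-sorted list of (timestamp_ms, kind), kind in
    {'submit', 'cancel'}, for a single stock. Returns the number of runs
    with at least min_len linked messages."""
    runs = []
    run_len = 0
    last_cancel_ms = None
    for ts, kind in messages:
        if kind == 'submit':
            if last_cancel_ms is not None and ts - last_cancel_ms <= link_window_ms:
                run_len += 1              # resubmission linked to the prior cancel
            else:
                if run_len >= min_len:
                    runs.append(run_len)  # close out the previous run
                run_len = 1               # start a new run
            last_cancel_ms = None
        else:  # 'cancel'
            run_len += 1
            last_cancel_ms = ts
    if run_len >= min_len:
        runs.append(run_len)
    return len(runs)

# Ten rapid cancel-and-replace pairs form one long run:
msgs = []
t = 0
for _ in range(10):
    msgs.append((t, 'submit')); msgs.append((t + 5, 'cancel')); t += 50
print(count_strategic_runs(msgs))   # 1
```

The point of such a proxy is that it can be computed from widely available message data without knowing traders' identities, which is what makes the Hasbrouck and Saar approach replicable.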

Very fast money: high-frequency trading on the NASDAQ

Why did algorithms replace human traders? One oft-cited driver for the ascent of algorithms is lower operating costs. These cost savings could be passed on to investors via lower trading costs (e.g., smaller spreads) or end up as profits for the HFTs. Hence, assessing the profitability of high-frequency trading is of great interest. Our second paper, Carrion (in this issue), demonstrates that the question of the profitability of HFTs, and how it relates to their strategies, is rather sensitive to the methods used in the analysis.

Carrion studies a special dataset made available to academic researchers that identifies the trading of 26 HFTs on NASDAQ. The dataset does not include all HFTs, but is likely to include many of the larger ones. The NASDAQ data do not identify each HFT but treat all of them as one entity and flag trades in which there was any participation by the HFTs. Carrion finds that this aggregate HFT makes money on average when supplying liquidity and loses money on average when demanding liquidity. Interestingly, this appears to be exactly the opposite of the finding reported in Brogaard, Hendershott, and Riordan (2013) using the same dataset. Attempting to reconcile the conflicting findings, Carrion shows how these results are sensitive to the manner in which profits are defined and computed. For example, neither paper observes each HFT firm individually; rather, both construct an artificial aggregate HFT measure. However, Carrion nets out trades that one HFT firm makes with another HFT firm, because such trades provide no information on whether HFTs in the aggregate buy or sell, whereas Brogaard et al. keep them in the sample. Another difference between the two papers stems from the inability to observe the activity of these 26 HFTs off the NASDAQ trading platform. Since the U.S.
market is highly fragmented, some or all of these HFTs are likely to trade the same securities on other trading venues as well. Both studies therefore need to decide how to treat imbalances in the net position of the aggregate HFT that are likely an artifact of only observing NASDAQ data (because these HFTs rebalance on other trading venues). Brogaard et al. assume that the imbalance is executed each day at the closing quote midpoint, while Carrion assumes that the HFTs close their positions at the relevant volume-weighted average price (i.e., they close a buy imbalance using the VWAP of the HFTs' sells). Unlike the midquote, this assumption imputes prices on the unobserved trades by taking into account the skill that HFTs exhibit on NASDAQ.

The methodological choices made by both papers seem reasonable, and therefore the fact that such seemingly innocuous choices are sufficient to completely reverse the conclusions regarding the profitability of liquidity-supplying versus liquidity-demanding trades deserves attention. Clearly, the inability to observe all trading in the market suggests that caution is warranted in drawing conclusions from these data. Perhaps more fundamentally, this observation raises the question of whether separating high-frequency trading into liquidity supplying and liquidity demanding is meaningful from an economic perspective. Each algorithm has the freedom to demand and supply liquidity depending on the parameters of the algorithm and real-time input from the market. The algorithm pursues an integrated strategy that might pay to take liquidity from the market at some times and get paid for providing it at others. Analyzing the profitability of these two components separately therefore does not necessarily reflect the economic decision that is programmed into the algorithm, and hence may be less helpful in terms of understanding the economics of high-frequency trading.

Carrion also decomposes the performance of HFTs into an intraday market timing component, which is the profitability they would have realized by transacting at the 5-minute market VWAP rather than the actual trade price, and a short-term timing component, which is the profitability of their actual trading relative to the 5-minute market VWAP. He finds that, in the aggregate, intraday market timing performance is greater than short-term timing performance. Why would HFTs invest heavily in technology to reduce their latency to milliseconds (or less) when most of their performance is due to intraday market timing in terms of minutes?
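The decomposition just described is an exact accounting identity: if a trade at price p is benchmarked against the 5-minute market VWAP v and a later reference price r, total profit splits into a market timing piece, r minus v, and a short-term piece, v minus p. A minimal sketch, with an illustrative data layout:

```python
# Stylized sketch of Carrion-style performance decomposition. The function
# name and inputs are illustrative; the real computation runs over a full
# trade record with rolling 5-minute market VWAPs.

def decompose_trade(side, qty, price, vwap_5min, ref_price):
    """side: +1 for a buy, -1 for a sell. Returns (market_timing, short_term);
    the two pieces sum exactly to side * qty * (ref_price - price)."""
    market_timing = side * qty * (ref_price - vwap_5min)  # value of trading at the VWAP
    short_term = side * qty * (vwap_5min - price)         # execution relative to the VWAP
    return market_timing, short_term

# Buy 100 shares at 10.01 when the 5-minute VWAP is 10.02; price later at 10.10:
mt, st = decompose_trade(+1, 100, 10.01, 10.02, 10.10)
print(round(mt, 2), round(st, 2))   # 8.0 1.0 -- most profit is market timing here
```

In this stylized example the trader executed slightly better than the VWAP, but the bulk of the profit comes from having bought before a price rise, which mirrors Carrion's aggregate finding.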
To understand this result, it may be useful to think about the difference between analyzing a specific HFT firm as opposed to the hypothetical aggregate HFT that Carrion studies using the NASDAQ dataset. The arms race to achieve the lowest latency is meant to divide the pie among the HFTs. In other words, the focus is on relative speed, and the fastest HFT can get a larger fraction of total profits. The source of profitability of the aggregate HFT, on the other hand, has to do with the size of the pie, or whatever it is that gives HFTs an advantage relative to non-HFTs. Carrion's result suggests that one source for the performance of the aggregate HFT could be that HFTs have better models than other traders in the market for predicting intraday price evolution.

Carrion goes beyond documenting trader behavior and profitability to investigate the relationship between aggregate HFT trading and one dimension of market quality, informational efficiency, that is not investigated by Hasbrouck and Saar. He shows that days with a lot of trading by HFTs are also associated with better informational efficiency of prices, in the sense that both lagged order flow and information about the market index have less predictive power for returns. His tests only show association, and it could be that both high-frequency trading and informational efficiency increase during those periods for other reasons. Brogaard, Hendershott, and Riordan (2013), using the same dataset but a different methodology, also find that HFTs facilitate price discovery. The informational efficiency of prices is important to society because efficient prices can help better allocate scarce resources. It is an open question, however, whether the information that HFTs help incorporate into prices a few milliseconds faster enhances welfare once the costs HFTs impose (in terms of intermediation, technology expenditures, and the difficulty of regulating markets) are taken into account.

High-frequency trading and the new market makers

The impact of HFTs on informational efficiency depends on the particular strategies they pursue. To understand strategies, however, one may need to focus on the trading of specific HFTs. If trying to capture all HFTs in a single measure is on one end of the spectrum, focusing on just one HFT is on the other end.
Our third paper, Menkveld (in this issue), is a case study that looks at just one HFT that follows a market making strategy. It showcases the symbiotic relationship between market fragmentation and HFTs in our financial markets. Chi-X, a new trading venue in Europe, did not capture much order flow until this single market maker started trading on it. HFTs are important for the success of new trading venues because, by posting quotes, they attract order flow to the market.

In addition, HFTs' arbitrage activity across trading venues keeps prices aligned and prevents investors from observing that the same asset is being priced differently on different trading venues. While the asset may in fact exhibit pricing discrepancies, only HFTs can make money from the fleeting discrepancies, and their activity eliminates the discrepancies faster than humans can observe them. Given today's environment in which equity markets (especially in the U.S.) are highly fragmented, HFTs play an important role in keeping markets virtually consolidated. However, since they make money doing so, investors must be paying for the consolidation of the market. This arbitrage profit would not have existed had it not been for fragmented markets. Hence, Menkveld's case study, which documents the profits of this HFT, serves to demonstrate that investors pay a tax for supporting market fragmentation. The same fragmentation, however, is also the outcome of enabling competition among trading venues. Such competition has proven to reduce fees charged by trading venues as well as to promote innovation. While Menkveld's paper does not specifically investigate the costs and benefits of fragmentation, it highlights that the interaction between fragmentation and high-frequency trading is an important dimension for future study.

Menkveld also shows that the inventory control of this single market maker (selling more aggressively when accumulating inventory and buying more aggressively when inventory becomes too low) affects market prices. The HFT market maker is subject to capital constraints, which lead to unwillingness to carry large positive or negative inventory positions. We often think about stock prices as being determined by many investors who interact in the market, but when market makers play such a dominant role in trading, their constraints can affect prices.
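The cross-venue alignment mechanism described above amounts to scanning for crossed quotes: whenever one venue's bid exceeds another venue's ask, buying on the cheap venue and selling on the expensive one locks in the difference. A minimal sketch, with made-up venue names and quotes:

```python
# Illustrative crossed-quote scan across fragmented venues. In practice the
# discrepancies last milliseconds, which is why only low-latency traders can
# capture them.

def arbitrage_opportunity(quotes):
    """quotes: {venue: (bid, ask)}. Returns (buy_venue, sell_venue,
    profit_per_share) for the widest crossed pair, or None if aligned."""
    best = None
    for buy_venue, (_, ask) in quotes.items():
        for sell_venue, (bid, _) in quotes.items():
            if sell_venue != buy_venue and bid > ask:
                profit = bid - ask
                if best is None or profit > best[2]:
                    best = (buy_venue, sell_venue, profit)
    return best

quotes = {"VenueA": (10.00, 10.02), "VenueB": (10.04, 10.06)}
print(arbitrage_opportunity(quotes))   # buy on VenueA at 10.02, sell on VenueB at 10.04
```

The two cents per share captured in this toy example is exactly the "consolidation tax" discussed above: investors on the two venues trade at prices that briefly diverged, and the arbitrageur pockets the gap.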
Menkveld highlights one source of capital constraints that is often neglected in academic studies: the dual nature of fragmentation and competition in clearing. When a new clearing house entered the market for Dutch stocks and competition intensified, the most visible outcome was a reduction in clearing fees (of both the incumbent and the new clearing house) by 50% to 60% within a year. Less visible is the implication of a fragmented rather than a consolidated clearing process for required capital. Menkveld's analysis suggests that if the HFT were able to net its positions, the average capital margin required would be lower by a factor of 100 compared to the fragmented situation with the two clearing houses. Therefore, a fragmented market for clearing worsens the capital constraints of the HFT, which could contribute to the finding that the HFT's inventory control affects prices.

The bulk of Menkveld's paper examines the profitability of this specific HFT that pursues a market making strategy. He shows that the market maker HFT makes money by earning the spread (i.e., supplying liquidity), and loses money on positioning (i.e., buying when prices are low and selling when prices are high) for horizons longer than five seconds. In other words, this specific HFT is not great at forecasting how prices will evolve during the day beyond the very short horizon. This result is similar in spirit to findings in the literature on human market makers (e.g., Hasbrouck and Sofianos (1993)), and suggests that, perhaps in terms of the economics of market making, the use of algorithms does not alter the basic tradeoffs. Notice that Carrion finds significant intraday market timing profits for the aggregate HFT in the NASDAQ dataset. The difference between these two sets of results could mean that some HFTs in the U.S. are different from the single market maker studied by Menkveld and are, in fact, able to forecast intraday prices.

The profit analysis also demonstrates how significant trading fees (or rebates) and clearing fees are to the profitability of the HFT. In fact, trading and clearing fees can make one trading venue (Chi-X) more than twice as profitable for the HFT as another trading venue (NYSE-Euronext). In the current environment in which trading algorithms dominate the market, fees have a first-order effect on where trading takes place, which explains the proliferation of new trading venues. If a trading venue can attract HFTs by creating a more attractive fee structure, these large traders will bring liquidity with them, yielding significant influence over the structure of the overall market.
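The netting point about clearing can be illustrated with a stylized margin calculation: with two clearing houses, margin is posted against the gross position at each house, while with netting it is posted against the net position only. The margin rule and all numbers below are hypothetical (Menkveld reports a factor of about 100 for the actual HFT):

```python
# Stylized sketch of why fragmented clearing inflates margin requirements.
# The proportional margin rule, the 10% rate, and the positions are all
# hypothetical choices for illustration.

def required_margin(positions, margin_rate=0.10, price=10.0):
    """positions: list of signed share positions, one per clearing house.
    Margin is assumed proportional to the absolute position at each house."""
    return sum(abs(p) for p in positions) * price * margin_rate

# A market maker long 50,000 shares cleared at one house and short 49,500
# cleared at another, versus the same book netted at a single house:
fragmented = required_margin([50_000, -49_500])   # margin on both gross legs
netted = required_margin([50_000 - 49_500])       # margin on the net position
print(round(fragmented / netted))                 # 199
```

A market maker's long and short legs on different venues largely offset, so whether clearing houses can net them makes an enormous difference to the capital the strategy ties up.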
However, as much as the new market makers (as Menkveld calls the HFT he studies) appear economically similar to the old market makers, there are important differences. For example, Menkveld analyzes the spread component of the HFT's profits using the midquote from the incumbent market as the reference price. One reason is that despite a Chi-X market share of 13.6% of all trades and the presence of an HFT market maker, one side of the Chi-X book is missing at times, and hence the midquote in this market can be less meaningful. In other words, the HFT does not maintain a continuous presence with actionable quotes on Chi-X the way we were used to seeing market makers with stronger affirmative obligations operate. The lack of continuous actionable quotes could also contribute to a lack of stability in a fragmented market environment where HFTs are the market makers.

The diversity of high-frequency traders

Of course, market making is not the only strategy HFTs pursue. In fact, it is reasonable to assume that there is heterogeneity in the strategies implemented by HFTs, which complicates the task of figuring out how they impact the market. Our fourth paper, Hagströmer and Nordén (in this issue), uses data that identify the trading of specific HFTs to create two categories of strategies and obtain insights into how they affect the market. Specifically, they use a dataset from NASDAQ-OMX Stockholm that enables them to separate HFTs into those that follow mainly market making strategies, similar in spirit to the market maker that Menkveld analyzes, and others that follow opportunistic strategies. Hagströmer and Nordén's definition requires that the HFT be a member firm on NASDAQ-OMX Stockholm and that it predominantly engage in proprietary trading. They find 29 firms that satisfy this definition. Other HFTs that trade on NASDAQ-OMX Stockholm use the services of exchange members that function as brokers to connect to the exchange, using sponsored access (where the HFT firm uses its own infrastructure but trades under the broker's ID) or direct market access (where the HFT firm uses the infrastructure of the broker). Hence, Hagströmer and Nordén observe only a subset of active HFTs. Even within this subset, however, they find that various HFTs trade very differently.
In particular, they define market maker presence as the fraction of 10-second periods in which an HFT has quotes either at the best bid or at the best offer, and show dramatic divergence between market making HFTs and other HFTs. The market making HFTs are present about 60% of the time, while the other HFTs have an average presence of only about 5%. Various other metrics, among which are the amount of liquidity demanded and supplied as well as the extent of inventory control, suggest diverse trading styles. Even market makers, however, trade about 30% of the time by taking liquidity, which highlights the fact that algorithms use both limit and marketable orders to carry out their strategies even if they are predominantly designed to make markets.

One of the concerns often mentioned with respect to HFTs is that they overload the exchanges with submissions and cancellations of limit orders, increasing the need for expensive technology upgrades and making regulation and supervision of financial markets difficult. In other words, HFTs may be imposing negative externalities on other market participants (Gai, Yao, and Ye (2012)). Hagströmer and Nordén find that for market making HFTs, the median ratio of limit order submissions to trades is less than 15. Is that high or low? In the U.S., NASDAQ implemented in July 2012 an order fee for traders who excessively load the systems with orders without much trading. The fee applies above 100 orders per trade (particularly for orders that are not submitted close enough to the National Best Bid or Offer). It appears that on NASDAQ-OMX Stockholm, HFTs whose business is market making can operate successfully at much lower ratios of orders to trades. Still, it is important to stress that submitting multiple orders that are subsequently cancelled may be fundamental to modern markets and need not be viewed solely from a negative perspective. Baruch and Glosten (2013) provide a theoretical model in which traders in equilibrium submit and cancel multiple orders at seemingly random prices (i.e., play mixed strategies) as a way to manage the risk that other traders would undercut their orders. While it could be that some of the message activity we observe is a byproduct of a healthy equilibrium in our markets, how much randomness (or submission and cancellation of orders) is required to sustain an equilibrium is an open question.

The main result in Hagströmer and Nordén's paper concerns the impact of market making HFTs on volatility.
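Before turning to that result, the two activity metrics used in this discussion, quote presence and the order-to-trade ratio, can be sketched as follows (the data layout is illustrative; the papers compute these from full order-book records):

```python
# Illustrative versions of two HFT activity metrics. at_best_flags would come
# from sampling the limit order book once per 10-second period.

def market_maker_presence(at_best_flags):
    """Fraction of 10-second periods in which the trader has a quote at the
    best bid or best offer; at_best_flags is one boolean per period."""
    return sum(at_best_flags) / len(at_best_flags)

def order_to_trade_ratio(num_orders, num_trades):
    """Limit order submissions per executed trade."""
    return num_orders / num_trades

# A market-making HFT present in 6 of 10 sampled periods, submitting 14
# orders per trade (well below NASDAQ's 100-orders-per-trade fee threshold):
print(market_maker_presence([True, True, False, True, True, False,
                             False, True, True, False]))   # 0.6
print(order_to_trade_ratio(14_000, 1_000))                 # 14.0
```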
They use an interesting test design (combining periods before and after the activity of market making HFTs began on NASDAQ-OMX Stockholm with changes in the tick size of stocks that cross price-level boundaries) to show that an increase in the activity of market making HFTs causes a decrease in short-term volatility. This raises the question of whether market making HFTs are somehow superior to other HFTs from a societal perspective, both because they may help lower volatility and because they help investors trade whenever they want to trade. In fact, market makers have provided valuable services to financial markets for many years. Proposals to curb HFTs at times face criticism for the potential to harm good HFTs (i.e., the market makers that provide liquidity) together with putative bad HFTs (whose nature often remains unspecified).

But can there be too much of a good thing? In other words, it is unclear how much market making is optimal for society. The Securities Exchange Act of 1934 in the U.S. emphasized the virtues of order execution without the participation of intermediaries when possible, because of the general belief that transaction costs to society are minimized when investors trade directly with other investors. From this perspective, market makers always impose costs on investors, and the relevant question is whether these costs constitute reasonable compensation for the services they provide. Many investors would like to use limit orders to provide liquidity, and agency algorithms potentially make it easier for investors to utilize dynamic limit order strategies. If HFTs use their speed advantage to crowd out liquidity provision when the tick size is small and stepping in front of standing limit orders is inexpensive, it is unclear whether investors benefit even when HFTs make liquidity provision more competitive and spreads are narrower. If investors are forced to demand liquidity because HFTs lower the likelihood that investors' limit orders will execute, the aggregate costs to the investor population could increase. If the value investors attach to trading faster (e.g., obtaining trade executions in milliseconds) is lower than the potential increase in their aggregate costs, overall welfare may decline. It is also possible that the manner in which HFTs affect different populations of investors diverges.
Improved spreads, as reported in Hasbrouck and Saar (in this issue), could be beneficial to uninformed retail investors. Buy-side institutional investors, on the other hand, may find that trading large positions is more difficult, and that the speed disadvantage relative to HFTs decreases their ability to provide liquidity, increasing their costs. Institutional investors may also feel the need to spend money on more sophisticated trading technology just to keep up with HFTs, further eroding their returns. Whether HFTs have a differential impact on different investor populations is as yet an unresolved question.

The welfare implications of HFTs are important to both academics and regulators. The dramatic increase in the activity of HFTs that use sophisticated algorithms to automatically trade in and out of positions in milliseconds, coupled with the large investments in computer and communication technologies made in order to reduce latency in markets, raises the question of whether high-frequency trading adds value. Would a social planner be willing to spend the resources required by HFTs? The papers in this special issue both document the profitability of HFTs and point to some improvements in market quality brought about by their activity that may benefit investors. Several recent theoretical papers examine the welfare implications of HFTs (e.g., Biais, Foucault, and Moinas (2013), Hoffmann (2013), and Jovanovic and Menkveld (2012)), but it is fair to say that this is hardly a settled question. Perhaps the insights from these papers and from further research could be used to guide future empirical work.

Is high-frequency trading just faster trading, or is there a discontinuity at these speeds that makes high-frequency trading different in the way it affects markets? The papers in this special issue help shed light on the topic of high-frequency trading, but many questions remain unanswered. We hope that this special issue encourages others to pursue further research in this area.


References

Baruch, S., Glosten, L.R., 2013. Flickering quotes. Working paper. Columbia University and the University of Utah.

Biais, B., Foucault, T., Moinas, S., 2013. Equilibrium high-frequency trading. Working paper. University of Toulouse.

Brogaard, J., Hendershott, T.J., Riordan, R., 2013. High-frequency trading and price discovery. Working paper. University of Washington.

Carrion, A., 2013. Very fast money: high-frequency trading on NASDAQ. Journal of Financial Markets (in this issue).

Gai, J., Yao, C., Ye, M., 2012. The externalities of high-frequency trading. Working paper. University of Illinois.

Hagströmer, B., Nordén, L.L., 2013. The diversity of high-frequency traders. Journal of Financial Markets (in this issue).

Hasbrouck, J., Saar, G., 2013. Low-latency trading. Journal of Financial Markets (in this issue).

Hasbrouck, J., Sofianos, G., 1993. The trades of market makers: an empirical analysis of NYSE specialists. Journal of Finance 48, 1565-1593.

Hoffmann, P., 2013. A dynamic limit order market with fast and slow traders. Working paper. European Central Bank.

Jovanovic, B., Menkveld, A.J., 2012. Middlemen in limit-order markets. Working paper. New York University.

Kirilenko, A.A., Kyle, A.S., Samadi, M., Tuzun, T., 2011. The flash crash: the impact of high frequency trading on an electronic market. Working paper. CFTC and University of Maryland.

Madhavan, A., 2012. Exchange-traded funds, market structure, and the flash crash. Financial Analysts Journal 68, 20-35.

Menkveld, A.J., 2013. High-frequency trading and the new market makers. Journal of Financial Markets (in this issue).

U.S. Commodity Futures Trading Commission and the U.S. Securities and Exchange Commission, 2010. Preliminary findings regarding the market events of May 6, 2010.

U.S. Securities and Exchange Commission, 2010. Concept release on equity market structure. Release No. 34-61358.
