


Tuesday, February 11, 2014

SEC COMMISSIONER STEIN'S REMARKS AT TRADER FORUM 2014 EQUITY TRADING SUMMIT

FROM: SECURITIES AND EXCHANGE COMMISSION
Remarks before Trader Forum 2014 Equity Trading Summit
Commissioner Kara M. Stein
Grand Hyatt, New York City

Feb. 6, 2014

I am joining you today to speak about something we all care deeply about: our capital markets.  Just a few miles south of here, over two hundred years ago, a collection of traders and financiers came together to lay the foundation for what became the crown jewel of American capitalism.  The Buttonwood Agreement, which led to the formation of the New York Stock Exchange, helped forge a new era of economic growth for a young country, and gave birth to New York City as the world’s financial center.

Since then, the markets have grown quite a bit, and have come to cover nearly every corner of the planet.  Today, global market capitalization is about 64 trillion dollars.[1]  Yet, the capital markets today serve the same purpose they did then: matching businesses in need of capital with investors in need of returns.  That might be the only thing that hasn’t changed since 1792.

As technology has transformed the way people socialize, it has also transformed the way people do business—including trade.  The sounds of equities trading are no longer the frantic cries and gestures of traders on the floor of the New York Stock Exchange, but rather the whir of servers stacked in windowless data rooms of nondescript buildings miles outside of the city.  Orders are placed and executed in millionths of a second, taking away direct human interaction and, some argue, human control.

But this evolution was not just driven by advances in technology.  It was driven by competition.  Many of you buy and sell stocks for some of the largest asset managers in the world.  You have to participate in the market on a daily basis.  You are acutely aware of the simple fact that it has always been an advantage to know when a large trader may need to buy or sell a large position before that order is filled.  You have to guard against brokers and other market participants learning of your intentions before your trades are done.  Your execution quality, and your jobs, depend upon it.

The nature of the markets requires that those in the middle, like the old specialists, hold a special position of trust and confidence.  This role requires them to know who wants to trade and how much.  Unfortunately, too often, they abused their positions.

Over the years, pleas for fairer competition and safer trading spaces for institutional and other investors ultimately led the Commission to adopt Regulation NMS.  The results have been dramatic.  Just a few years ago, the NYSE and Nasdaq dominated the US marketplace.

Today, counting the options markets, there are 16 registered securities exchanges, dozens of so-called “dark pools,” and hundreds of broker-dealer internalizers.[2]

While the birth, and growth, of crossing networks and internalizers had started years earlier, the Commission’s implementation of Regulation NMS seems to have provided the single largest impetus for change.  In 2005, the year Regulation NMS was adopted, nearly 80 percent of all trading volume in NYSE-listed stocks was done on the exchange.[3]  Four years later, that number had fallen to 25 percent.[4]  At the same time, trades executed in dark venues may now comprise over a third of a day’s trading volume.[5]

Clearly, market participants like you wanted competition, and you responded to the brave new world by sending your orders to a multitude of rapidly proliferating trading venues.  Each of these pools of liquidity, whether lit or dark, has come to play a role in the new national market system.  These execution venues compete in a variety of ways.  Of course, exchanges compete for listings.  But execution venues also compete on:

quantity and speed of information they provide about their order book;
fees;
the amount of information they make available;
the ways that traders can submit orders; and
any number of other variables.
At the same time, traders and trading strategies have evolved.  For over a decade, computers have scanned public information and placed orders based on pre-programmed criteria.  While front-running used to occur over periods of minutes, hours, or even days, a well-positioned computer may now be able to process information and place orders in just milliseconds.
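To make the mechanics concrete, a deliberately minimal sketch of such a pre-programmed rule might look like the following; the Quote type, the on_quote handler, and the send_order callback are hypothetical stand-ins, not any firm's actual system:

```python
# A deliberately simplified sketch of a pre-programmed trading rule; the
# Quote type, handler, and send_order callback are hypothetical.
from dataclasses import dataclass

@dataclass
class Quote:
    symbol: str
    bid: float
    ask: float

def on_quote(quote: Quote, signal_threshold: float, send_order) -> None:
    # Fire a canned order whenever the quoted spread collapses below a
    # pre-set threshold -- a crude stand-in for "pre-programmed criteria."
    spread = quote.ask - quote.bid
    if spread <= signal_threshold:
        send_order(symbol=quote.symbol, side="BUY", price=quote.ask, qty=100)
```

A real system would layer market-data handling, risk checks, and venue connectivity on top; the point is only that the decision itself is a few machine instructions, which is why it can happen in milliseconds.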

What isn’t entirely clear is the impact of all of these different variables on the equities market as a whole.  While our capital markets have dramatically changed, we need to make sure that we do not lose sight of our most critical objectives: robust, fair, and efficient capital markets.

With these objectives in mind, I want to focus your attention on a few questions that I think we should all be thinking about.

How do we make our equities markets more robust?  Today, we have more stocks available for trading at more venues at tighter spreads than ever before.  That said, volumes have remained largely off their pre-crisis highs, and have also fluctuated dramatically.

Our markets also face significant challenges.   They experience disruptions, including what some have called “mini flash crashes.”  Individual stocks at times gyrate wildly within fractions of a second, only to reset moments later.  One might mistakenly think that these shocks would occur in just thinly traded stocks.  The truth is far from it.  Last October, Walmart’s stock fell 5 percent in one second, with trades being executed in at least a dozen venues, before rebounding.  That followed Google’s mini-crash in April.[6]  These are some of the most heavily traded stocks in the world.  While these sharp movements may wreak havoc on the few unlucky investors with outstanding stop-loss orders, so far, they seem to be generally dismissed as inconvenient computer glitches or unwise traders.

We should not be so easily assuaged.  Rather, we should look at these mini crashes as pieces of a puzzle; symptoms of something larger.  What happens if, instead of a single issuer, the equity that is subject to a crash is a broad index?  On May 6, 2010, we all found out.  The Flash Crash was a seminal event for many of us.  It was a wake-up call to investors, regulators, and policy makers.  In just a few minutes, the markets demonstrated to the world how interconnected, complex, fragile, and fast they may be.  On a day already filled with fears of a European debt crisis, one relatively small, simple event triggered a cascade of steep price declines in interrelated products, traded at multiple venues, overseen by multiple regulators.[7]

One trader’s algorithm combined with selling pressures by high frequency traders and others pushed E-mini futures prices sharply down, which ultimately brought down the SPY, which in turn ultimately brought down individual stocks and ETFs, even as the E-mini futures and SPY were beginning to recover.[8]  When all was said and done, over the course of 20 minutes, 2 billion shares were traded for over 56 billion dollars.[9]  During that same time, 20,000 trades were executed at prices that were more than 60 percent away from their prices at the start of the twenty minutes.[10]  And then, almost as quickly as it started, it was over.  The futures markets reset and then the equities and options markets eventually followed suit.[11]

In the aftermath, we’ve learned quite a bit.  We learned that even the most heavily traded futures and equities products in the world were susceptible to computer-driven crashes.  We learned that the connections between the futures and equities markets were direct enough so that safety features in one market should be coordinated with those in the others.

There can be no doubt now that the markets, and their regulators, need to coordinate.  We learned that the Commission did not have easy access to the data it needed to quickly and effectively analyze and understand the event.  And we learned just how much investors’ confidence may be shaken by dramatic price swings, even if they are quickly corrected.

Clearly, we need to make sure that our markets can withstand computerized trading glitches, whether they arise from a Kansas City-based institutional investor seeking to sell E-mini futures, a wholesale market maker that had a problematic software installation, or a Wall Street bank with a malfunctioning options program.  One trader’s computer system should not be able to bring our capital markets to their knees.  By the same token, if one execution venue’s data system sends out bad data, another venue shouldn’t crash.

There has always been an emphasis on system reliability.  Some have focused on the fact that our trading venues may operate smoothly over 99 percent of the time.  That is obviously important.  But resiliency should also be important.  How do these systems respond when impacted by something that has never happened before?  Our market participants – traders, venues, clearing firms, and others – need to anticipate, and plan reactions to, the unexpected.

Firms with direct access to the markets and execution venues should be required to have detailed procedures for testing their systems to ensure that they don’t cause market failures.  Systems should be reliable, so that anticipated failures are rare.  Testing should be thorough.  Data should be verified.  But systems must also be resilient, so that they can adapt and respond to challenges.  Seamless backup systems should be established.  Firewalls and trading limits should be clearly defined and coordinated across markets.
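As a minimal sketch of the kind of pre-trade gate this implies, consider the following; the Order type, the specific limits, and the kill switch here are hypothetical illustrations, not any rule's requirements:

```python
# A minimal sketch of a pre-trade risk gate; the Order type and the
# specific limits are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    qty: int
    price: float

MAX_ORDER_QTY = 10_000        # hypothetical per-order share limit
MAX_NOTIONAL = 1_000_000.00   # hypothetical per-order dollar limit
kill_switch_engaged = False   # flip to True to halt all order flow

def pre_trade_check(order: Order) -> bool:
    """Return True only if the order passes every configured limit."""
    if kill_switch_engaged:
        return False
    if order.qty > MAX_ORDER_QTY:
        return False
    if order.qty * order.price > MAX_NOTIONAL:
        return False
    return True

assert pre_trade_check(Order("XYZ", 500, 20.00))         # passes
assert not pre_trade_check(Order("XYZ", 50_000, 20.00))  # size limit trips
```

The design point is that the check sits in front of the order gateway, so a runaway algorithm hits the limit rather than the market.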

A comprehensive review of critical market infrastructure, with a focus on points of failure, like the securities information processor, is essential.  The Commission must work with traders, brokers, exchanges, off-exchange execution venues, our fellow regulators, and others to better identify and address areas of risk.  The greatest capital markets in the world should be more than capable of protecting against and minimizing the damage inflicted by a bad trading algorithm, an unexpected stream of data, a hardware failure, or a determined hacker.

How do we make our markets fairer?  The answer often depends on whom you ask.  Retail customers receive confirmations that their orders have been filled within seconds.  What most of them don’t know is that their orders likely never went to a stock exchange.  Rather, the orders were probably sold by their broker to a sophisticated trader who paid for the privilege of taking the other side.

These retail customers are ostensibly better off because they got a fraction of a penny in price improvement from the National Best Bid and Offer (“NBBO”) price.  But, is a fraction of a penny per share enough of a price improvement to be meaningful?  Does it matter if the price improvement is measured against an NBBO, which might be stale by the time the trade is executed?  Would retail investors actually be better off if their trades were routed to the public execution venues?  Would that improve the quality of their executions or the value of the NBBO for the entire marketplace?  Some individual transaction costs may be cheaper, but what about others?  What about implicit transaction costs?
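The arithmetic behind the sub-penny improvement question is easy to work through; every price and quantity below is hypothetical:

```python
# Back-of-the-envelope arithmetic for sub-penny price improvement;
# all numbers here are hypothetical.
nbbo_bid, nbbo_ask = 10.00, 10.02   # hypothetical quoted NBBO
execution_price = 10.0199           # internalizer fills a retail buy here
shares = 200

improvement_per_share = nbbo_ask - execution_price  # one hundredth of a cent
total_improvement = improvement_per_share * shares
print(f"${improvement_per_share:.4f}/share, ${total_improvement:.2f} total")
# -> $0.0001/share, $0.02 total on the order
```

Two cents of improvement on a 200-share order is real but tiny, and it is measured against a benchmark that may already be stale, which is precisely the question.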

For institutional traders, the questions get even more complex.  Institutional traders seeking to keep their trading costs low now have to scan dozens of execution venues in search of liquidity, and are increasingly at the mercy of broker-provided smart order routers to slice, dice, and feed out their orders into the marketplace.  Do these routers send orders to the venues that are most likely to get them filled?  Or do they send the orders to the venues that have the lowest cost for the broker, even if it might not get the order filled, or get the best price?[12]  When will an institutional broker commit capital to take the other side of an order?  Will an institutional investor’s order be seen by third parties, who may trade ahead of it, or otherwise take advantage of that information?  How should a trader measure execution quality?
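The routing conflict raised above can be illustrated with a toy example; the venue names, fill rates, and broker fees are hypothetical:

```python
# A toy smart-order-router scoring rule illustrating the conflict between
# client-first and fee-first routing; all figures are hypothetical.
venues = [
    {"name": "VenueA", "fill_rate": 0.90, "broker_fee": 0.0030},
    {"name": "VenueB", "fill_rate": 0.60, "broker_fee": -0.0020},  # pays a rebate
]

def route_for_client() -> str:
    # A client-first router maximizes the chance the order gets filled.
    return max(venues, key=lambda v: v["fill_rate"])["name"]

def route_for_broker() -> str:
    # A fee-first router minimizes the broker's cost instead, even though
    # the order is less likely to be filled there.
    return min(venues, key=lambda v: v["broker_fee"])["name"]

print(route_for_client())  # VenueA
print(route_for_broker())  # VenueB
```

The two objective functions point at different venues, which is the heart of the conflict the research cited below has examined.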

Unfortunately, these questions are difficult to answer, in large part, due to a lack of comprehensive, available data.  The Consolidated Audit Trail (or “CAT”) is intended to help fill that void.  In the meantime, the Commission last fall unveiled the Market Information Data Analytics System (fondly known as “MIDAS”), which is intended to help answer some of these questions.  The MIDAS system captures vast amounts of market data from the consolidated tapes and proprietary data feeds, and has already been used to help study how bringing transparency to odd lot trades may affect their use.[13]  MIDAS collects over one billion records a day, and can help the Commission and the public better understand trends and market events.[14]

But we need the deeper information that only the CAT will provide.  And we also need help in getting it up and running as soon as possible.  All market participants should be involved in helping to develop the CAT—it is not, nor should it be, the exclusive province of the Commission and the SROs.  And we must also move quickly.  Until regulators, buy-side traders, brokers, consultants, and the academic community can pore over the data, we simply don’t know what we’re missing.

Another important question we should continuously ask ourselves is how we can make our markets more efficient.  As trading has become more automated, overall execution costs and nominal spreads have narrowed.  However, a growing body of research on datasets, both here and abroad, suggests that some of these potential efficiency gains may be overstated, plateauing, or even reversing.  For example, one study recently found that, when controlling for information asymmetry, increases in market share for dark pools’ non-block trading correspond with increased market-wide transaction costs.[15]  On the flip side, other studies suggest that dark pool activity may be associated with narrower spreads, greater market depth, and lower volatility.[16]

From a trader’s perspective, is it efficient to have to check dozens of pools of liquidity in order to execute a trade?  What are the costs and benefits of monitoring and accessing these multiple pools?  Does an institutional trader risk tipping off other market participants by just seeking to access these venues?  Finally, does the complexity unnecessarily increase traders’ reliance on brokerage firms or consultants?

Again, good data and careful analysis are critical to answering these questions, which brings me back to the CAT.  We need to get it up and running as soon as possible.

As you may have guessed, I believe we should develop policy from the facts.  We should be gathering as much data as we can, and if we think an alternative approach should be considered, we should test and evaluate it.

For example, we should explore how the maker-taker pricing model impacts liquidity and execution quality.  Does the current rebate system incentivize or penalize investors?  I have heard from many investors, and even exchanges, who are worried about the incentives embedded in the current system.  If there are proposals to explore alternative approaches, we should consider them.  We should also try to understand the various order types.  Why would one exchange need 80-plus order types?  What is the purpose of each?  How do these order types interact with others, and how do they impact market liquidity and functioning?  We should be willing to re-examine the roles of these order types in the market.
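To ground the maker-taker question, the basic arithmetic looks like this; the per-share rates below are illustrative, not any venue's actual fee schedule:

```python
# Simple maker-taker arithmetic; the per-share rates are illustrative,
# not any venue's actual fee schedule.
take_fee = 0.0030     # hypothetical fee charged to the liquidity taker
make_rebate = 0.0025  # hypothetical rebate paid to the liquidity provider
shares = 10_000

taker_pays = take_fee * shares             # $30.00 on top of the trade price
maker_receives = make_rebate * shares      # $25.00 back to the resting order
venue_keeps = taker_pays - maker_receives  # $5.00 net to the exchange
print(f"{taker_pays:.2f} {maker_receives:.2f} {venue_keeps:.2f}")
# -> 30.00 25.00 5.00
```

Because the rebate flows to whoever posts the resting order, typically a broker or market maker rather than the end investor, the incentive question is who ultimately captures that $25.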

We also need to gather data to better understand the impacts of different types of trading strategies on the markets.  Do high frequency traders add meaningful liquidity to the markets, or not?  Do high frequency trading strategies impact volatility?  If so, how?  We need to look at market maker privileges and burdens.

In each of these areas, we should be driven by the relentless pursuit of more robust, fair, and efficient markets.  And if we can make modest reforms that improve the markets now, we should consider them.

One example might be a tick size pilot.  Some have argued that, for micro- and small-cap stocks, spreads that are often only a penny wide may be reducing trading profits for brokers so significantly that they are unwilling to provide research coverage and market making services in those stocks.  Supporters argue that widening displayed spreads may restore trading profits for firms, which would incentivize them to enhance research coverage and market making in stocks of micro- and small-cap issuers.  Others argue that there is likely to be no appreciable connection between the tick size and the amount of research coverage or market making in these issuers.  A carefully constructed tick size pilot program might help inform this debate, but a poorly constructed one could easily harm investors and the markets.  The Commission would benefit from hearing your thoughts on whether and how such a pilot might be helpful in answering these questions.
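The economics supporters have in mind can be sketched with rough numbers; the daily volume and tick sizes below are hypothetical, and real market-maker economics involve far more than full-spread capture:

```python
# Rough arithmetic behind the tick-size argument; the daily volume is a
# hypothetical figure for a thinly traded small-cap name.
shares_per_day = 50_000

for tick in (0.01, 0.05):
    gross_capture = tick * shares_per_day  # best-case full-spread capture
    print(f"${tick:.2f} tick -> ${gross_capture:,.2f}/day gross to the maker")
# -> $0.01 tick -> $500.00/day gross to the maker
# -> $0.05 tick -> $2,500.00/day gross to the maker
```

Whether a wider tick actually translates into more research coverage, rather than simply higher costs for investors crossing the spread, is exactly what a pilot would need to test.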

I will be working with my fellow Commissioners in considering the merits of proceeding with such a program, and ensuring that if we do proceed, we maximize its utility while minimizing its costs and risks to investors.

I also want to take a moment to assess the role of the self-regulatory organizations.  In a world where trading occurred predominantly on one or two venues, it made sense for those venues to have primary regulatory oversight over trading.  But, in a world where trading occurs in hundreds of places, which are for-profit enterprises, the exchange-based SRO model warrants significant reconsideration.

Does it make sense for firms registered with the SEC as exchanges to bear the bulk of the costs to oversee a market that is much larger than their respective portions?  Who should be determining and enforcing listing standards?  How should we rationalize the discrepancies in regulatory treatment between a dark pool and an exchange, given that they are expected to perform the same generalized function: serving as a place to match buyers and sellers?

And we must better understand and clarify the role of FINRA, which has taken on more and more regulatory functions.  In recent years, through private contracts, FINRA has come to run many critical market surveillance functions, from monitoring for insider trading to looking for cross-market manipulations.  While this may be one way to deal with increasing market complexity, it arguably has also created new challenges, including how to effectively oversee a very important, but private, regulator.  We need to be thinking about the interactions between FINRA and its customers, other market participants, the Commission, and regulators and participants in related markets.

We at the Commission clearly have a lot of work to do.  Technology and competitive pressures have already moved our markets well past our relatively new regulatory regime.  A short time ago, an executive from a foreign exchange told me that he turns over his entire technology operation every two years.  In a world where a few millionths of a second can mean the difference between a good execution and a bad one, we need to make sure our rule structure and our surveillance apparatus can keep up.  In order for the US to remain the home of the premier capital markets in the world, we must relentlessly strive to keep them the most robust, fair, and efficient in the world.

That will require constant evaluation by both market participants and regulators, working together, in the midst of constant change.  I think we have a Commission that is eager to take on this task.

I know I am, and I hope you will help me.

***


[1] World Federation of Exchanges, 2013 WFE Market Highlights, 5 (2014).

[2] Concept Release on Equity Market Structure, Exchange Act Rel. No. 34-61358, 75 Fed. Reg. 3594 (Jan. 21, 2010).

[3] Id.

[4] Id.

[5] Id.

[6] Steven Russolillo, Google Suffers ‘Mini Flash Crash,’ Then Recovers, Wall St. J. (Apr. 22, 2013).

[7] Findings Regarding the Market Events of May 6, 2010, Report of the Staffs of the CFTC and the SEC to the Joint Advisory Committee on Emerging Regulatory Issues (Sept. 30, 2010).

[8] Id.

[9] Id.

[10] Id.

[11] Id.

[12] See, e.g., Robert Battalio, Shane Corwin, and Robert Jennings, Can Brokers Have It All? On the Relation between Make-Take Fees and Limit Order Execution Quality (2013) (finding that “routing limit orders in a manner that maximizes make rebates reduces fill rates and produces less profitable limit order execution.”).

[13] Staff of the Sec. and Exch. Comm’n, Data Highlight 2014-01: Odd-Lot Rates in a Post-Transparency World (2014).

[14] Sec. and Exch. Comm’n, Market Information Data Analytics System, What is MIDAS?, available at http://www.sec.gov/marketstructure/midas.html.

[15] Frank Hatheway, Amy Kwan, and Hui Zheng, An Empirical Analysis of Market Segmentation on U.S. Equities Markets (2013).

[16] See, e.g., Sabrina Buti, Barbara Rindi, and Ingrid M. Werner, Diving Into Dark Pools (2011).