Despite Silicon Valley’s increasing importance to the delivery of financial services, people who work on financial regulatory policy rarely spend much time in conversation with those who study technology policy. My new internet serial, the cheekily named Fintech Dystopia: A summer beach read about Silicon Valley ruining things, was written to help bridge this gap, and doing so is becoming increasingly urgent. For those worried about the stability of our financial system, there are aspects of Silicon Valley’s modus operandi that should make them particularly concerned about the rise of fintech. And for those worried about Silicon Valley’s power over our society, that power is only becoming more entrenched as Silicon Valley makes incursions into the financial services industry – with all the money, data, and government safety nets that come with that kind of business.
Unfamiliar forms of regulatory arbitrage
Silicon Valley is adept at weaponizing technological complexity to deter regulatory scrutiny and intervention. Wall Street also weaponizes complexity to similar ends, but policymakers, regulators, and scholars working in the field of financial regulation have learned enough about how finance works to see through industry talking points; they may not, however, have learned enough about Silicon Valley’s new technologies to do the same.
As is often the case with Wall Street innovation, much of Silicon Valley’s fintech innovation is simply regulatory arbitrage. In other words, fintech’s new technologies are often most useful as a sexy new cover story for artful lawyering around regulatory requirements; many a fintech business derives its competitive edge from finding a way around the law rather than from offering an otherwise superior product or service. However, if those who study finance have little understanding of the workings of technologies like blockchains and so-called artificial intelligence, they will struggle to challenge claims from Silicon Valley fintechs and their venture capital funders that new technologies deserve bespoke legal treatment.
Take, for example, claims that blockchains decentralize economic control, and that this decentralization scrambles the application of existing financial regulations. It is true that, much like a corporation with lots of shares, a blockchain with lots of nodes affords a theoretical opportunity for decentralized economic control. But, just like the corporate form, the blockchain does nothing to push back against economic forces that drive the centralization of power. If someone controls a lot of a blockchain’s nodes, or if some nodes become de facto more important than others, then some people will exert outsized control over the operations of the blockchain. The same is true for projects that are built on blockchains using computer programs called smart contracts: governance participation is typically much lower than in publicly traded corporations, and founders and their VC funders tend to retain control. If people don’t appreciate that blockchains and smart contracts offer only theoretical opportunities for economic decentralization, they might be talked into supporting special legal treatment that encourages blockchain-based finance – notwithstanding that blockchains are clunky databases, inferior in almost every respect to preexisting technological alternatives.
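To make the point concrete, here is a minimal sketch – hypothetical numbers, plain Python – of the so-called Nakamoto coefficient: the smallest number of entities whose combined share of a network’s validating power exceeds half. A network can have thousands of nodes and still be controlled, in practice, by a handful of operators.

```python
# Minimal sketch: how few entities does it take to control a majority of a
# blockchain's validating power? (The stake distribution below is invented.)

def nakamoto_coefficient(shares, threshold=0.5):
    """Smallest number of entities whose combined share exceeds `threshold`."""
    total = sum(shares)
    running = 0.0
    for count, share in enumerate(sorted(shares, reverse=True), start=1):
        running += share
        if running / total > threshold:
            return count
    return len(shares)

# Hypothetical network: thousands of nodes, but stake is concentrated.
stakes = [30.0, 15.0, 10.0, 8.0] + [0.01] * 3700  # % of validating power

print(nakamoto_coefficient(stakes))  # -> 3: three entities control a majority
```

The same arithmetic applied to a corporation’s share register would raise the same red flag; the blockchain form, by itself, guarantees nothing.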
Or take AI tools that might be used to make credit decisions or to manage risks in investment portfolios. Those who don’t understand the technology often fail to appreciate that AI tools are hamstrung by the limitations of their training data, or that there are many points at which human intervention skews the operation of AI tools. These tools are ultimately applied statistical engines, and historical data is neither neutral nor a reliable guide to the future. When it comes to portfolio management, AI tools will be plagued by the same data problems that have plagued other kinds of economic models. We only have one market history, a single timeline to demonstrate how credit and market and liquidity risks have interacted in moments of panic. That is not a big data situation, and so AI tools cannot come close to reliably predicting how tail risks will impact investment portfolios in the future. As for credit decisions, if members of some racial groups have historically had fewer opportunities to generate income and wealth, then that will obviously have affected how they have accessed and used credit in the past. If members of those groups disproportionately struggled to repay, and were extended less forbearance than others when they did default, all of that history will be reflected in the data now being used to train AI tools on how to assess future applications for credit.
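Here is a minimal sketch of the credit-data point, using synthetic data (all numbers are invented, and the model and features are stand-ins, not anyone’s actual underwriting system). The two groups have identical underlying ability to repay; the only difference is the forbearance history, yet the trained model penalizes group membership anyway.

```python
# Minimal sketch (synthetic data): a model trained on historical repayment
# records reproduces the forbearance disparities baked into those records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)              # 0 = historically favored, 1 = not
ability = rng.normal(0, 1, n)              # true ability to repay (unobserved)

# Identical underlying behavior, but group 1 historically received less
# forbearance, so marginal borrowers were recorded as defaults more often.
forbearance = np.where(group == 0, 0.5, 0.0)
default = (ability + forbearance + rng.normal(0, 1, n)) < 0

# Train on observables only: a noisy credit feature plus a proxy correlated
# with group (think zip code; `group` stands in for the proxy here).
credit_feature = ability + rng.normal(0, 1, n)
X = np.column_stack([credit_feature, group])
model = LogisticRegression().fit(X, default)

# Two applicants with identical credit features, different groups:
probe = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(probe)[:, 1])  # higher predicted default for group 1
```

Nothing in the pipeline is malicious; the discrimination arrives with the training data.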
When it comes to AI’s limitations and lack of neutrality, biased and insufficient data are not the only issues. Humans choose which data the AI tool will learn from; they label that data to help with the learning process. Humans program the AI tool that does the learning, and can tune and tweak it to pay more or less attention to some kinds of data. And yet many people see the output of these tools as automatic, neutral, and superior to anything a human might produce – and might conclude that less regulation and oversight are needed as a result. Regulatory capital requirements and anti-discrimination laws like the Equal Credit Opportunity Act could be arbitraged in this way.
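Continuing the synthetic example above (and reusing its X and default variables), here is a sketch of just one of those human choices: the class weighting that encodes how costly a missed default is deemed to be. The data never changes, but the “automatic” decisions do.

```python
# A sketch of one human choice among many: the class weighting a human
# selects. Reuses X and default from the sketch above; the data is identical
# in both runs, yet the approval decisions differ.
from sklearn.linear_model import LogisticRegression

y = default.astype(int)                          # 1 = recorded default
for weight in (None, {0: 1.0, 1: 5.0}):          # None vs. "defaults cost 5x"
    m = LogisticRegression(class_weight=weight).fit(X, y)
    approval_rate = (m.predict(X) == 0).mean()   # approve unless default predicted
    print(f"class_weight={weight}: approval rate {approval_rate:.0%}")
```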
Disregard for regulation and domain expertise
Traditional financial institutions certainly engage in regulatory arbitrage and lobby heavily for regulatory relief, but Wall Street has generally accepted that some regulation is part of the landscape – in part because past scandals, abuses, and panics have created some understanding of what can go wrong. Margaret O’Mara, author of the Silicon Valley history The Code, has observed that Silicon Valley businesses tend to disdain learning about the history of the industries they plan to disrupt. Through this ahistorical lens, regulation is seen as nothing more than an impediment to innovation and exponential growth. Financial regulatory experts may not appreciate how little some fintech businesses know about what might be considered fundamental financial vulnerabilities. In particular, we are watching the crypto industry learn in real time about problems like runs and rampant fraud that we already knew would plague any unregulated financial system.
Until very recently, problems in the crypto markets were largely walled off from most of the rest of the financial system, and so most of us have been spared their fallout (although crypto market volatility did have repercussions for several banks that failed in the regional banking crisis of 2023, and the bailout of Silicon Valley Bank’s uninsured depositors served to prop up the USDC stablecoin). But with Congress now poised to permanently carve crypto out of existing regulatory regimes while regulatory authorities are permitting increased integration of crypto and traditional finance, more paths for contagion are developing. We are probably doomed to learn the same lessons about finance all over again.
To avoid that fate, thoughtful regulatory policy informed by both financial and technological expertise is necessary. We often hear the mantra, “same activity, same risk, same rules.” As a rule of thumb, that mantra is a good place to start. For example, if we understand leverage, we understand that it is highly problematic to create an unlimited supply of crypto assets out of thin air to borrow against. And if we understand that things like bank deposits and money market mutual funds are vulnerable to runs, then we understand that stablecoins have similar vulnerabilities. In these circumstances, existing securities and banking regulatory frameworks should be applied; it is unwise to carve out new exemptions simply because a new technology is involved.
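A back-of-the-envelope sketch of the run logic, with stylized numbers (the 30% liquid reserve and 85-cent fire-sale recovery below are illustrative, not drawn from any actual issuer):

```python
# Stylized numbers: a coin backed 30% by cash-like assets and 70% by assets
# that fetch 85 cents on the dollar in a forced sale. If the issuer redeems
# at par on a first-come, first-served basis...
def paid_at_par(liquid=0.30, fire_sale=0.85):
    """Fraction of coins redeemable at par before reserves are exhausted."""
    realizable = liquid + (1 - liquid) * fire_sale  # value per coin issued
    return min(realizable, 1.0)

print(f"{paid_at_par():.1%} of coins get $1.00; the rest get nothing")
# -> 89.5%. Every holder's best move is to redeem before everyone else:
# the classic run dynamic.
```

That first-mover advantage is exactly what deposit insurance and securities regulation were built to defuse.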
But the mantra has its limitations: those familiar with financial regulatory policy may not appreciate that even the same activity can pose different operational risks if a different kind of technological plumbing is being used. For example, far too little attention is paid to the operational fragilities of the public blockchains supporting the crypto markets. There is no one who can be held accountable for ensuring that blockchain software is maintained, protected from cyberthreats, or relaunched after an outage. Application programming interfaces (better known as APIs) also pose new operational risks. These APIs allow for interoperability between different IT systems and form the backbone of open banking efforts, but APIs in other fields have often proved to be weak links that are particularly vulnerable to exploitation by cybercriminals. Although recruiting such talent can be a challenge, financial regulatory agencies should strive to employ public-minded technologists who understand these operational risks (and can help agencies see through technology-shrouded regulatory arbitrage strategies like the ones discussed above).
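For a flavor of the API risk, consider “broken object-level authorization,” a recurring real-world API flaw in which any authenticated caller can fetch any record simply by guessing its identifier. The sketch below is a hypothetical, deliberately simplified endpoint, not any actual open banking implementation:

```python
# Hypothetical, simplified account-lookup endpoint illustrating broken
# object-level authorization (a common API flaw) and its fix.
ACCOUNTS = {"acct-1": {"owner": "alice", "balance": 950_00},
            "acct-2": {"owner": "bob",   "balance": 120_00}}

def get_account_vulnerable(caller, account_id):
    return ACCOUNTS[account_id]            # no check that the caller owns it!

def get_account_fixed(caller, account_id):
    account = ACCOUNTS[account_id]
    if account["owner"] != caller:         # enforce object-level authorization
        raise PermissionError("caller does not own this account")
    return account

print(get_account_vulnerable("alice", "acct-2"))  # leaks bob's data
print(get_account_fixed("alice", "acct-1"))       # ok
# get_account_fixed("alice", "acct-2") would raise PermissionError
```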
New levels of “bigness”
Another thing that those used to looking at Wall Street may not fully appreciate is how much Silicon Valley tends towards monopoly. There is often room for more than one big player in many traditional financial markets, but that is rarely the case in Silicon Valley. The platform-based economy tends towards “winner takes all” markets, where reams of user data and network-effect advantages are used to build unassailable market positions. This tendency is particularly important in the context of pending legislation that would allow the largest tech platforms to issue their own deposit equivalents known as “stablecoins.” Five years ago, Facebook tried to launch its Libra stablecoin, and the world collectively – and appropriately – balked at the prospect of a tech platform controlling a significant portion of the money supply and becoming a systemically important financial institution. But that kind of future is precisely what the new stablecoin legislation endorses, and even the largest banks may struggle to compete for deposits in this kind of environment.
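A toy model hints at why. In the sketch below (parameters invented), each new user joins a platform with probability proportional to its existing user base raised to a power alpha; alpha greater than 1 is a crude stand-in for compounding network-effect and data advantages.

```python
# Toy preferential-attachment model of platform competition. Each new user
# joins a platform with probability ~ (its user base) ** alpha.
import random

random.seed(1)

def simulate(alpha, steps=50_000):
    users = [1, 1]                         # two platforms, one user each
    for _ in range(steps):
        w0 = users[0] ** alpha
        p0 = w0 / (w0 + users[1] ** alpha)
        users[0 if random.random() < p0 else 1] += 1
    return users

for alpha in (1.0, 1.2):                   # alpha > 1: advantages compound
    u = simulate(alpha)
    print(f"alpha={alpha}: shares {u[0]/sum(u):.0%} / {u[1]/sum(u):.0%}")
# With alpha=1.0 the split can land anywhere; with alpha=1.2 one platform
# typically ends up with nearly everything.
```

With linear attachment the market can settle into coexistence; once the advantage compounds, one platform takes essentially all of it.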
Silicon Valley’s incentives and worldviews
Finally, those more used to dealing with financial regulatory policy may not fully appreciate the incentives and worldviews of the Silicon Valley elite who are driving fintech’s expansion. Fintechs are typically funded by venture capital firms like Andreessen Horowitz, and venture capital is, in many ways, a very broken business model for everyone but the VCs themselves. Venture capitalists tend to fund their own in-group of privileged men who attended elite universities, and to fund the same overhyped business models that their colleagues are funding. This limits the likelihood of producing truly innovative solutions, but venture capitalists are primarily concerned with “exiting” a few of their investments at a profit within a few years – and they spin hype and lobby heavily to make that happen. Venture capitalists have little incentive to perform diligence on their ventures because they expect most of them to fail; they encourage their startups to engage in regulatory arbitrage, knowing that they themselves will be entirely insulated from the consequences of any regulatory crackdown.
In addition, many of the top venture capitalists and other members of the Silicon Valley elite are animated by anti-democratic and TESCREAList worldviews, and handing over the reins of our financial system to the Silicon Valley elite necessarily involves ceding ground to their belief systems. Peter Thiel, for example, has long dreamed of a financial system beyond the reach of governmental authorities and central banks. That was part of his initial ambition for PayPal back in the late 1990s; now he supports efforts to build “Network States” with financial systems built on blockchain technologies that would operate as tech-CEO-led dictatorships outside the boundaries of democratic governance. Blockchain is extremely limited as a technology; its adoption has arguably been spurred more by an ideological desire to opt out of democratic governance than by any efficiency rationale.
Andreessen Horowitz’s Marc Andreessen is also a proponent of the Network State movement and, like Thiel, subscribes to many TESCREAList ideologies. Rooted in extreme utilitarianism, these beliefs include faith in the ability of technology to radically enhance or even transcend the intelligence and other “desirable” qualities of human beings, and to enable space colonization. These kinds of ambitions and beliefs may seem far-fetched, but it is well documented that they are widely held among the Silicon Valley elite. For example, Silicon Valley’s faith in the ability of statistically-driven AI tools to one day achieve general intelligence and solve all our problems is just that – an article of ideological faith.
Silicon Valley and financial crises
As Silicon Valley becomes an increasingly important provider of financial services, the confluence of these incentives, worldviews, and new forms of technologically-driven regulatory arbitrage is likely to threaten financial stability. In my book Driverless Finance, I included a prologue that explored how the volatility of Ponzi-like crypto assets, coupled with increased automation from the use of tail-risk-ignoring smart contracts and AI tools, could precipitate our next financial crisis. There is no reason to expect that Silicon Valley will seek to stop such a crisis from happening.
Some of Silicon Valley’s incentives are reminiscent of those that led Wall Street to undermine the stability of the financial system in the run-up to 2008. First, venture capitalists will probably have exited their fintech ventures before a crisis occurs. This “I’ll be gone, you’ll be gone” attitude from those with no skin in the game is one that will be very familiar to those who have studied the 2008 crisis. Stablecoin-issuing tech giants may also be insulated from the consequences of a future financial crisis by virtue of achieving “too big to fail” status (although, when faced with the prospect of a teetering tech behemoth, the US may come closer to the “too big to save” dilemma that countries like Iceland and Switzerland have faced with their banks in the past).
There are, however, some less familiar incentives and motivations that may exacerbate the risk of a financial crisis. Venture capitalists have benefited significantly from the easy money available during periods of accommodative monetary policy: financial crises typically beget accommodative monetary policy, and so venture capitalists may relish the prospect of another one. Even more disturbingly, the chaos following a financial crisis may be seen as an opportunity for adherents of the anti-democratic Network States movement to exploit. Finally, some of the Silicon Valley elite genuinely subscribe to TESCREAList ideologies focused on ensuring that our artificially intelligent descendants can prosper in outer galaxies. A global financial crisis would seem like a non-event to someone fixated on a sci-fi-style long term measured in thousands of years. That perspective should be deeply concerning to those of us familiar with the human costs of financial crises.
It is so refreshing to see such simple, common-sense advice on a topic that is often subject to ridiculous hype and hand-waving. Thank you.
> Silicon Valley’s faith in the ability of statistically-driven AI tools to one day achieve
> general intelligence [...] is just that – an article of ideological faith.
I have to point out that believing that "statistically-driven" AI tools will *not* achieve general intelligence is also an article of ideological faith. I think many people over-rotate on the "statistically-driven" part, ignoring the fact that these AI tools are modeled, however loosely, on the neuronal structure of our brains. There is no non-supernatural reason to assume that they will not eventually achieve general intelligence, and the rapid pace of recent progress is a strong indicator that it will happen sooner rather than later.
Further, there's no obvious reason to assume that they will not achieve superhuman intelligence shortly after reaching human-level general intelligence. All that's required is to train them on the same AI-creation knowledge possessed by their human creators and direct them to self-improve (and since self-improvement is a powerful instrumental goal, we probably don't even need to give them that direction – we just need to be insufficiently careful about denying them the ability).
To believe that AI will not achieve human-level and then superhuman general intelligence basically requires a supernatural belief that there is something more to our brains than mere physical matter.
The words I elided in the original quote are important, though:
> and solve all our problems
To the extent this is believed, it is very much an article of ideological faith, an illogical one. There is every reason to expect that artificial superintelligence will have its own goals, which might be antithetical to humanity. ASI probably won't deliberately set out to destroy or enslave us, but may well do so incidentally or inadvertently, much as we've done to so many species we found useful, or irritating, or just in the wrong place.