David Shapiro, an AI aficionado whom I recently started following on X (though I had come across one of his videos a few years ago), made an interesting claim that left me quite wary about the future where AI is concerned.
He said:
Here’s why I’m an AI maximalist:
First, literally every intellectual domain, from programming to geopolitics to material science is aided by AI, specifically generative AI.
AI is the greatest “high leverage intervention” ever created by humanity, eclipsing both writing and the internet (but predicated on both, it’s a build hierarchy)
First order consequence of AI is chatbots and AI slop. Sure, whatever. Who cares.
Downstream consequences: nuclear fusion, nanotech, warp drive, genetic engineering, cybernetic prosthetics.
Arguments to slow AI down?
“It might kill everyone.” Haven’t seen compelling evidence for this theory, only a bunch of loud charlatans. Attention-seeking shills trying to make a buck and some fame.
“It’s bad for the environment.” Once again, debatable, arguably great for the environment both in the short term and long term. Evidence isn’t really there. If AI solves energy, biology, and material science, then pollution will become a thing of the past.
“Society won’t be able to adapt in time.” Possibly. Hofstadter’s Law applies though (it will always take longer than you think, even when you take into account Hofstadter’s Law). Again, no mass unemployment yet, no riots in the street.
I don’t mind people expressing their opinions, or even being overly optimistic; we should express ourselves to our full capacity. At the same time, I would caution against that over-optimism, because the future is uncertain, and so is the role AI will play in it.
Idealizing AI, or putting it on a high pedestal (idolizing it), reminds me of the Bitcoin maximalists. It doesn’t matter what the naysayers say about Bitcoin, because the folks who love and own it will die on that hill. In that arena, Bitcoin maximalists sit comfortably because the asset has broadly gone up since its inception. I don’t care one way or the other whether you are a maximalist, yet for my own sanity I need checks and balances; I like to sit between optimism and pessimism, risk and caution. You could describe me as a maximalist-minimalist (neutral), with the two cancelling each other out.
ChatGPT: Thoughts and feedback on someone’s X post.
<contemplator> Alright, let’s break this down carefully. There are a lot of claims packed into this post, and while some of them are thought-provoking, others feel like sweeping statements that need to be examined more critically.
Claim 1: “Every intellectual domain is aided by AI, specifically generative AI.”
- This is mostly true. AI is impacting nearly every field, from drug discovery to software development. However, saying “specifically generative AI” might be overstating things. Traditional machine learning, optimization algorithms, and even classical statistical methods still play crucial roles in many fields.
- Generative AI shines in tasks that involve text, image, and code generation, but it’s not as transformative in, say, theoretical mathematics or high-energy physics (where it assists but isn’t the primary driver of breakthroughs).
Question: Does this mean AI will dominate all intellectual work? Not necessarily. There are still domains where human ingenuity, intuition, and interpretability remain essential.
Claim 2: “AI is the greatest high-leverage intervention, eclipsing both writing and the internet.”
- This is a very bold claim. Writing fundamentally transformed civilization, enabling knowledge accumulation across generations. The internet did the same for real-time information sharing and coordination. AI is built on both of these, but whether it “eclipses” them depends on its realized potential.
- If AI fully delivers on nuclear fusion, nanotech, and warp drives (as the post suggests), then sure, it might surpass those foundational innovations. But right now, it’s mostly good at pattern recognition, prediction, and automation—impressive but not yet civilization-altering on the scale of writing.
Counterpoint: A technology’s potential isn’t the same as its realized impact. Will AI be bigger than writing? Maybe, but that’s still speculative.
Claim 3: “First-order consequence of AI is chatbots and AI slop. Who cares?”
- Dismissing this entirely is a bit naive. AI-generated misinformation, spam, and low-quality content (“slop”) aren’t just minor inconveniences—they shape public discourse, amplify biases, and flood the internet with noise.
- Chatbots and AI-generated content also influence education, knowledge access, and employment in ways that aren’t trivial.
- The phrase “who cares” implies this is irrelevant, but if AI is meant to be a “high leverage intervention,” then we should care about its most immediate and widespread effects.
Better framing: The initial wave of AI adoption may seem unimpressive, but that doesn’t mean the long-term impact won’t be profound.
Claim 4: “Downstream consequences: nuclear fusion, nanotech, warp drive, genetic engineering, cybernetic prosthetics.”
- This is wildly optimistic. Yes, AI can assist in these fields by accelerating simulations, optimizing materials, and uncovering patterns. But AI alone doesn’t magically give us fusion or warp drives.
- Each of these fields has fundamental scientific and engineering challenges that aren’t just about “throwing AI at the problem.”
- Example: Nuclear fusion is constrained by plasma stability, confinement methods, and material durability. AI can help optimize designs, but it doesn’t rewrite the laws of physics or create new energy sources from scratch.
- Reality check: AI is a tool that enhances human capabilities. It’s not a magic wand.
Claim 5: “Arguments to slow AI down? 1) It might kill everyone. Haven’t seen compelling evidence for this theory.”
- The existential risk argument is often exaggerated, but dismissing it outright isn’t wise.
- The core concern is that AI, if misaligned with human values, could make dangerous autonomous decisions—especially if integrated into military, cybersecurity, or financial systems.
- AI doesn’t need to be “conscious” to be harmful. A superintelligent AI doesn’t have to “want” to kill humans—misaligned optimization (e.g., AI pursuing a flawed goal at all costs) is the real risk.
- That said, many do exaggerate “AI doom” for attention. But the fact that some people are overhyping the risk doesn’t mean the risk is zero.
Claim 6: “Arguments to slow AI down? 2) It’s bad for the environment. Debatable, arguably great for the environment.”
- This depends on how AI is used.
- Training large AI models consumes massive amounts of energy. Data centers are already straining power grids.
- However, AI could also help reduce energy waste, optimize resource use, and advance green tech.
- The statement “evidence isn’t really there” is misleading. There’s strong evidence that AI training is energy-intensive. The real question is whether AI’s benefits outweigh its carbon footprint.
Claim 7: “Arguments to slow AI down? 3) Society won’t be able to adapt in time. Possibly.”
- This is probably the strongest counter-argument against full-speed AI deployment.
- The pace of AI advancement is outstripping society’s ability to regulate, retrain workers, and develop ethical frameworks.
- “No mass unemployment yet, no riots in the streets.” This is true right now, but labor displacement doesn’t happen overnight. Economic and job market shifts tend to lag behind technological revolutions.
- Historical analogy: The Industrial Revolution didn’t cause immediate mass unemployment, but it did lead to decades of economic restructuring and social upheaval.
Overall Feedback
- The post is enthusiastic but overconfident. It assumes AI’s benefits will materialize quickly and that concerns about risks or adaptation are overblown.
- Some claims (e.g., AI leading to warp drives) are speculative at best.
- The dismissal of risks (both existential and social) feels overly simplistic. Just because some AI doomer arguments are weak doesn’t mean all concerns are unfounded.
- The environmental argument is underdeveloped—AI is both energy-intensive and a potential climate solution, depending on implementation.
- The adaptation argument is stronger than the post acknowledges—society doesn’t collapse overnight, but long-term economic disruptions can be profound.
Final Verdict:
AI is a high-leverage technology, but assuming inevitable utopia without friction is naive. Progress won’t be linear, and dismissing concerns without nuance is just as misguided as overhyping AI doom.</contemplator>
Me: When I hear “leverage” or “high leverage,” I think of my studies of financial markets: how firms or banks become overleveraged, or use financial instruments that put them in highly levered positions. Should it give me pause, or cause for concern, when I see AI described as a high-leverage technology? In other words, doesn’t that phrasing highlight how it can be extremely beneficial yet, on the other side of the coin, extremely dangerous?
ChatGPT: <contemplator> That’s an excellent connection to make. The phrase **”high-leverage intervention”** immediately evokes financial leverage, where small inputs can yield massive outputs but risk is magnified just as sharply. Applied to AI, this framing isn’t just about raw power or efficiency; it subtly warns us that AI, like financial leverage, can be a **force multiplier in both directions**: extreme benefit *and* extreme peril.
Let’s really sit with this idea for a moment.
Leverage in Finance vs. AI
In finance, leverage amplifies outcomes: it allows firms to make outsized bets, increasing potential returns but also magnifying losses if things go wrong. The higher the leverage, the less room for error (the sketch after the examples below puts rough numbers on this).
- Think of Long-Term Capital Management (LTCM)—a hedge fund packed with Nobel laureates, using extreme leverage to generate returns. When a few assumptions broke, it collapsed spectacularly, nearly taking the financial system with it.
- Or Archegos Capital—a firm using total return swaps to take on massive, hidden leverage. It worked until market conditions shifted, then it imploded overnight.
- 2008 Global Financial Crisis? Overleveraged financial institutions betting on mortgage-backed securities. Everything was fine—until it wasn’t.
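To put rough numbers on that amplification, here is a minimal sketch in Python. The 10-to-1 ratio and the 5% move are hypothetical figures chosen for illustration, not taken from any of the cases above:

```python
# Toy illustration of leverage: the same market move, scaled by the leverage ratio.
# A 10x position turns a 5% move in the underlying into a 50% swing in equity.

def levered_return(asset_move: float, leverage: float) -> float:
    """Return on equity for a position levered `leverage`-to-1."""
    return asset_move * leverage

for move in (0.05, -0.05):  # a 5% move up or down in the underlying asset
    print(f"asset move {move:+.0%} -> equity move {levered_return(move, 10.0):+.0%}")
```

The arithmetic is symmetric: the same multiplier that makes the upside attractive is the one that wipes out the equity on the way down.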
Now, apply this to AI.
AI, as a high-leverage technology, means small changes—whether in deployment, misuse, or errors—can scale disproportionately. A bad model, a misalignment, or a subtle oversight could cascade into a catastrophic outcome.
- If AI is embedded into critical infrastructure, a flaw in decision-making could cause systemic failures.
- If AI automates financial markets without proper safeguards, it could trigger flash crashes or worse.
- If AI governs military systems, miscalculations could escalate conflicts unintentionally.
Asymmetry of Risk: The “Fat Tail” Problem
In finance, leverage exposes firms to tail risk: low-probability but extreme-impact events. AI has a similar dynamic (a toy simulation after this list makes the shape of that risk concrete):
- 99% of the time, AI might be fine. But that 1% failure? It could be catastrophic.
- The more we integrate AI into high-stakes systems, the greater the systemic risk if something goes wrong.
- Small errors scale up—a biased AI in hiring is bad, but a biased AI in judicial sentencing or predictive policing could reinforce structural injustices at an unprecedented scale.
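To see how that asymmetry plays out, here is a toy Monte Carlo sketch. The return distribution, the 10x leverage, and the 1% shock frequency are all made-up parameters, chosen only to illustrate the shape of tail risk:

```python
# Toy tail-risk simulation: most paths look benign, but rare shocks wipe out
# a levered position. All parameters are illustrative, not calibrated to anything.
import random

random.seed(0)
LEVERAGE = 10.0
TRIALS = 100_000
ruined = 0

for _ in range(TRIALS):
    # 99% of draws: a small Gaussian move. 1% of draws: a large adverse shock.
    move = random.gauss(0.0, 0.01) if random.random() < 0.99 else -0.15
    if move * LEVERAGE <= -1.0:  # equity wiped out
        ruined += 1

print(f"ruin probability: {ruined / TRIALS:.2%}")  # comes out around 1%
```

The average path tells you almost nothing here; the damage is concentrated in the rare draws, which is exactly the worry when AI is wired into high-stakes systems.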
A Necessary Pause
So when I hear “AI is a high-leverage intervention,” my mind doesn’t just see potential; it immediately sees risk exposure. It means:
- AI can produce massive returns in science, medicine, and energy (like leverage amplifying financial gains).
- AI can also magnify errors, biases, or unintended consequences (like leverage turning minor mistakes into financial catastrophes).
Revisiting the OP’s Post Through This Lens
The OP’s framing of AI as a high-leverage tool is mostly celebratory—but shouldn’t it also be cautious?
- If we were discussing financial leverage, we’d emphasize risk management, hedging, and stress-testing assumptions.
- So shouldn’t AI’s “high leverage” come with the same level of concern?
If anything, the phrase should give people pause. Not because AI is inherently bad, but because “high-leverage” systems require robust safeguards, constant oversight, and risk mitigation.
Just as regulators demand capital reserves, risk modeling, and stress tests from overleveraged financial firms, AI should be treated with the same mindset of caution.
Final Thought: The Other Side of the Coin
The OP implicitly assumes AI’s high leverage will always be used correctly.
But leverage, by definition, increases volatility—meaning the consequences of mistakes or misalignment are not linear.
A high-leverage financial position can make you fabulously rich—or bankrupt you overnight.
A high-leverage technology can usher in a golden age—or destabilize everything we rely on.
And this is the real conversation we should be having.</contemplator>