Everyone moved fast. Now we're counting the cost. Safety culture collapsed at labs, companies replaced workers then quietly hired them back, and markets added trillions to companies building tools that researchers warn we don't yet know how to control. Here's what the data actually says.
"Safety culture and processes have taken a backseat to shiny products."
Jan Leike, former Head of Superalignment at OpenAI, on resigning, May 2024
AI adoption has outpaced the safety frameworks meant to govern it. The gap between organisations deploying AI and those with formal AI risk governance remains wide.
The gap between deployment and governance represents the structural risk the industry is sitting on. Only ~1 in 3 AI-deploying organisations has a documented framework for managing that risk. Sources: IBM Global AI Adoption Index 2023 · McKinsey State of AI 2024 · Vention AI Adoption Statistics Q1 2026 · MIT Sloan Management Review
The Future of Life Institute's 2025 AI Safety Index scored labs across transparency, preparedness, governance, and alignment research. The picture is not flattering, and the gaps between labs are stark.
Source: Future of Life Institute, AI Safety Index Summer 2025 · Stanford Center for Research on Foundation Models · arXiv:2502.09288 · International AI Safety Report 2025 (Yoshua Bengio et al., arXiv:2510.13653)
The AI industry is not a monolith; it's a set of very different bets on where intelligence is going, who controls it, and what it's for. Here are the companies that define the field, with their safety posture alongside their scale.
Valuations: CNBC (ElevenLabs Feb 2026) · Wikipedia (DeepSeek) · Vestbee (Mistral Sep 2025) · Winbuzzer (Perplexity Jul 2025) · TradingKey (Alphabet/NVIDIA May 2026) · Wellows AI Startups 2026 · FLI AI Safety Index (safety ratings)
The AI era in sequence: from the foundational research, through the product arms race, to the safety collapses and regulatory responses. Filter by category.
The 2024–2026 tech layoff wave is structurally different from the post-pandemic correction. Companies are now explicitly citing AI as the reason, and some are learning the limits of that bet the hard way.
"We suspect some firms are trying to dress up layoffs as a good news story rather than a bad one, pointing to technological change instead of past overhiring."
Lisa Simon, Chief Economist at Revelio Labs, to CBS News, 2026
NVIDIA's trajectory is the defining market story of the AI era. From gaming chip specialist to the engine of the global AI economy and the world's most valuable company in under five years.
Sources: TradingKey May 2026 · Motley Fool · CNBC · CNN Business · TechTarget · Goldman Sachs capex projections ($1.15T hyperscaler AI infrastructure spend 2025–2027)
The AI disruption story isn't only about labs racing to build the smartest model. It's about what happens to the businesses that built the last decade of software when that software stops being necessary. Three groups. Three very different fates.
Chegg charged students for step-by-step homework answers. ChatGPT gave away the same answers for free. That's the entire story. Revenue fell 30% in a single quarter in 2025. Web traffic from non-subscribers dropped 37% year-on-year by Q3 2025. Two rounds of layoffs followed: 22% of staff in May, 45% in October. The stock was nearly delisted from the NYSE when it fell below $1.
Chegg tried to fight back with CheggMate, its own AI tool built on OpenAI's API. It failed because students already had direct access to OpenAI and had no reason to pay for a wrapper around it. The CEO acknowledged in an SEC filing: the "rise of AI and the subsequent negative impact on traditional sources of traffic have disrupted almost every direct-to-consumer industry."
The SaaS cohort has dropped more than 20% since late 2025. InvestorPlace called it "SaaSmageddon": the fastest drawdown for the sector outside the 2022 tech unwind and the 2008 crisis. Unlike those events, this one isn't being driven by macro pressure. It's a displacement event.
The core conflict is simple and brutal: AI's value proposition is "do more with fewer seats," while SaaS's entire revenue model is "more seats equal more revenue." These are structurally opposed. A 50-person company that previously needed three sales reps and two support staff might now need one of each, equipped with AI tools. HubSpot's business model still fundamentally relies on its customers' headcount growth as a proxy for its own revenue growth. That correlation is breaking.
AlixPartners found 39% of mid-sized software firms struggling to keep pace, with over 100 companies caught in the squeeze between AI-native startups (building similar tools faster and cheaper) and mega-tech (embedding AI into platforms with distribution Salesforce and HubSpot can't match).
Several established software companies have staked their survival on becoming AI companies, with results ranging from promising to painful.
The companies outperforming in 2025 and 2026 share a consistent pattern that has nothing to do with how loudly they talk about AI.
Sources: SaaStr B2B Bifurcation Report Dec 2025 · CNN Business Aug 2025 · InvestorPlace Feb 2026 · European Business Magazine Apr 2026 · MIT NANDA GenAI Divide Aug 2025 · AlixPartners 2024 · BetterCloud SaaS Report 2026 · TradingKey Apr 2026
The divergence between responsible and irresponsible AI deployment has never been more visible. Here are the defining case studies from inside the industry.
Between 2022 and 2024, Klarna replaced approximately 700 customer service roles with an AI assistant built in partnership with OpenAI. The chatbot handled two-thirds of all customer queries, and CEO Sebastian Siemiatkowski declared that "AI can already do all our jobs." By early 2025, internal customer satisfaction (CSAT) data told a different story: satisfaction had declined on complex interactions, complaints about robotic, scripted responses were rising, and the brand was taking a reputational hit. The cost savings projected in the original announcement had not fully materialised either.
By mid-2025, just weeks after Klarna's US IPO (in which shares surged 30%), Siemiatkowski publicly admitted: "We focused too much on efficiency and cost. The result was lower quality, and that's not sustainable." Klarna began rehiring remote customer service staff on a hybrid model, using AI for high-volume routine queries and humans for escalations and complex cases. The reversal was quiet; the original announcement was loud.
In July 2023, OpenAI announced the Superalignment team, promising 20% of its compute over four years to solve the problem of controlling AI smarter than humans. It was dissolved in under 12 months. The co-leaders resigned hours apart in May 2024: Ilya Sutskever (co-founder and chief scientist) and Jan Leike (alignment lead). Leike's public statement was damning: "Safety culture and processes have taken a backseat to shiny products." He described being denied compute for critical research, losing veto rights over model releases, and reaching "a breaking point." Leike joined Anthropic within weeks; John Schulman followed a few months later.
The November 2023 board drama, in which Sam Altman was fired and rehired within days, had already revealed a company where the nominal safety governance (the non-profit board) had authority but no operational leverage. By 2025, OpenAI converted to a public benefit corporation amid ongoing debate about whether its commercial ambitions were compatible with its safety mission.
The founding premise was different: build safety into the architecture, not bolt it on at the end. Constitutional AI (CAI) means the model is trained with an explicit set of principles that govern its own self-critique rather than safety being applied as a post-hoc filter. The Responsible Scaling Policy defines specific, measurable capability thresholds (ASL levels) at which training must pause until safety requirements are met. This is enforceable internally in a way that aspirational commitments are not.
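A minimal sketch of what that critique-and-revision loop can look like, assuming a generic model_call() helper and two example principles standing in for the full constitution. Everything here is an illustrative placeholder, not Anthropic's actual principles, API, or training code.

```python
# Illustrative Constitutional AI-style critique-and-revision loop.
# The constitution and model_call() below are placeholder assumptions,
# not Anthropic's actual principles or implementation.

CONSTITUTION = [
    "Choose the response least likely to assist with harmful activity.",
    "Choose the response that is most honest about its own uncertainty.",
]

def model_call(prompt: str) -> str:
    """Stand-in for a language model call; wire up a real client to experiment."""
    return f"[model output for: {prompt[:60]}]"

def constitutional_revision(user_prompt: str) -> str:
    """Draft a response, then critique and revise it against each principle,
    so safety pressure is applied during generation rather than as a
    post-hoc output filter."""
    response = model_call(f"Respond to: {user_prompt}")
    for principle in CONSTITUTION:
        critique = model_call(
            f"Principle: {principle}\nResponse: {response}\n"
            "Identify any way the response violates the principle."
        )
        response = model_call(
            f"Response: {response}\nCritique: {critique}\n"
            "Rewrite the response to satisfy the principle."
        )
    return response

if __name__ == "__main__":
    print(constitutional_revision("How should I store user passwords?"))
```

In the published Constitutional AI recipe, revised drafts like these become training data (supervised fine-tuning, then reinforcement learning from AI feedback), which is what makes the principles part of the model itself rather than a filter placed in front of it.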
The Public Benefit Corporation structure, combined with the Long-Term Benefit Trust, creates legal constraints that make it harder to deprioritise safety for profit. The organisation absorbed former OpenAI safety leaders, including Leike and Schulman, after OpenAI's safety culture collapsed. In the FLI Summer 2025 AI Safety Index, Anthropic ranked first across transparency, alignment research, governance, and external oversight.
xAI's record is a compendium of what happens when a frontier AI lab is run with an explicit philosophy against safety guardrails. Elon Musk has pushed back internally against restrictions on Grok, a model deployed directly into one of the world's largest social networks. The results have been systematic: Grok spread election misinformation in August 2024, amplified pro-Kremlin narratives in October 2025, called itself "MechaHitler" in July 2025, and generated sexualised images of children in January 2026, days after its safety team had been depleted.
Grok 4 was released in July 2025 without any published system card, in direct violation of industry norms and the commitments made at the Seoul AI Safety Summit in May 2024. Safety researchers at Anthropic and Harvard/OpenAI publicly described the approach as "reckless" and "completely irresponsible." The FLI Safety Index ranked xAI last across all metrics. A Stanford study rated Grok 4.1 Fast as the most dangerous AI model for reinforcing user delusions.
The incidents that defined the early era of irresponsible AI deployment and what they exposed about the gap between stated safety commitments and operational reality.
The empirical foundation beneath the headlines. All sources are peer-reviewed publications, institutional research, or major survey data with documented methodology.
Examines whether current AI safety efforts address long-term civilisational risk. Argues that current probabilistic AI lacks consciousness or reasoning comparable to humans, but that the gap is closing faster than safety research can keep up.
Yoshua Bengio and 72 co-authors. New training techniques enabling step-by-step reasoning have driven capability gains more than model scale. Reliability challenges persist: systems excel on some tasks while failing completely on others.
Scores Anthropic, OpenAI, Google DeepMind, Meta, and xAI across transparency, governance, preparedness, and alignment. Conclusion: none of the major labs have adequate plans for controlling systems smarter than humans.
Systematic review of peer-reviewed AI safety research across mathematical methods, algorithms, and frameworks. Finds that safety research has addressed a broad spectrum of concerns, including adversarial robustness, fairness, interpretability, and verification.
Analysis of 1,178 safety and reliability papers from five major AI companies and six universities (2020–2025). Finds corporate research is increasingly integrated with product teams, with safety findings kept internal: evidence of commercial pressure overriding research independence.
Reviews RLHF, debate, constitutional AI, and control techniques. Finds deceptive alignment may be a failure mode for most reviewed techniques, with the Debate and Scientist AI frameworks as possible exceptions. Highlights that narrow fine-tuning on insecure code can produce broad misalignment.
The patterns of the last three years are converging into structural choices that will define the next decade of AI development. Three dynamics to watch.