Aaryush Gupta

I help organizations redesign how they work around AI.

Operating models, decision architectures, and AI systems — across healthcare, finance, and tech.

Madison, WI
01

About

AI is becoming the infrastructure underneath how decisions get made, how expertise gets applied, and how teams create value. The organizations that recognize this early won't just be faster — they'll be structurally different from their competitors.

I help companies design for that shift. With a career across healthcare, finance, and tech — building products, leading teams, and redesigning how work gets done — I focus on the operating models that determine how humans and AI systems share decisions, expertise, and execution.

If you're thinking about what AI means for how your organization actually operates, that's the conversation I'm here for.

02

Experience

Gupta Advisory — AI Strategy & Systems

I work with organizations to redesign how their teams operate with AI. Every engagement starts with deep understanding — what the business actually looks like, where the friction is, and what's possible. From there, we run targeted experiments to find value fast. If we see ROI on day one, we commit to something larger.

Ready Rebound — AI & Engineering Lead

Lead AI and engineering for a workers' comp platform helping employees return to work. Managing the full engineering team, defining technical architecture, and building native AI workflows that make the team operate at a fundamentally higher level.

Cylerity — AI Lead

Led AI for a fintech-healthcare startup that underwrites claims and advances cash to healthcare providers. Designed the decision architecture, compliance framework, and team that took us from MVP to an eight-figure lending facility.

Skale.win — Founding Member, Product & Strategy

Co-founded an end-to-end career platform that helps people understand their strengths, upskill, and get matched to opportunities. Defined the product vision and business model — white-label and enterprise hiring, built on a fundamentally different view of what people are capable of.

03

Selected Work

Cylerity Claims Underwriting

Designed the operating model that turns healthcare claims into same-day funding decisions — from architecture through an eight-figure lending facility.

Same-day decisions · Eight-figure facility · End-to-end operating model

Skale.win Career Platform

End-to-end career platform — understand your strengths, upskill, and get matched to opportunities based on what you're actually capable of.

End-to-end careers · Enterprise & white-label · Reinventing meritocracy

UMN Discovery Launchpad

Replaced ad-hoc startup mentorship with a system that detects gaps, matches expertise, and moves founders forward faster.

Automated gap detection · Real-time mentor matching · Structured from chaos

Univise — Academic Advising AI

Built a university advising system that spread across campus through word of mouth — no marketing, no push, just a product that solved a daily problem.

Viral campus adoption · Best pitch at Transcend · Zero marketing spend

SmartBenAI Claims Adjudication

Automated claims data extraction for healthcare TPAs while keeping humans in the loop where compliance demands it.

Hours of manual work cut · Compliance-first · Human-in-the-loop

04

Areas of Focus

01

AI Operating Model Design

Defining how your organization makes decisions, applies expertise, and creates value with AI — built into how you operate, not layered on top.

02

Human-AI Workflow Architecture

Designing who does what — where AI handles the volume, where humans apply judgment, and how the handoffs actually work.

03

Healthcare & Compliance Systems

Platforms where privacy, accuracy, and regulatory requirements are built into the foundation — not bolted on after the fact.

04

Product Strategy & Delivery

From concept to scale — clear vision, aligned teams, and a bias toward shipping what matters.

05

Writing

This Is Bigger Than Electricity

A letter to the people I love

February 2026

My attempt, after years of failing to find the right words, to finally explain what AI means for everyone — not the technology, but what's already changing in the world around you, right now, while most people are going about their lives.

This Is Bigger Than Electricity

Last weekend, I got frustrated with my habits app.

Not dramatically frustrated — the quiet kind, where you keep using something even though it doesn't quite fit, because finding something better feels like more trouble than it's worth. I wanted something simple. Something that tracked the small daily things I was trying to build into my life, without the noise and the gamification and the streaks. Nothing on the market did exactly what I wanted.

So I built one.

Here's what I should tell you about myself: I am not an app developer. I've spent my career in technology, but building mobile applications is its own craft — one I had never learned. A year ago, if you'd told me I was going to build a fully functional app from scratch over a weekend, I would have told you that you had the wrong person.

But I didn't write a single line of code. I described what I wanted, in plain English, to an AI. It built the app. Then it told me exactly how to test it, how to install it on my phone, and how to watch it run for the first time. At no point did it occur to me that I needed to do anything other than describe what I wanted and respond to what came back.

When I held my phone and opened something I had made — something that worked, something that was mine — I felt two things at once. The first was genuine exhilaration. The second was a quiet, creeping worry that had nothing to do with me.

I thought about every person who had spent years learning to build what I had just built in a weekend. And I thought: they don't know yet.

That feeling — wonder and worry arriving together — is what this piece is about.

Not the technology itself. Not the code, or the models, or the technical details that tend to make people's eyes glaze over. This is about what's already happening in the world around you, right now, quietly, while most people are going about their lives.

This is my attempt, after years of failing to find the right words, to finally explain it to everyone I love.

— — —

It's Already Here

A few weeks ago, I was sitting across from a loan officer at my bank. He was sharp, personable, good at his job in every way that mattered. We talked through my finances, my goals, what I was looking for. He asked the right questions. He made me feel taken care of.

And for roughly half of our time together, he typed.

Every answer I gave him went into a system, manually, one field at a time. I watched his eyes move from my face to the screen and back again, over and over, the conversation punctuated by the soft sound of keys. He wasn't doing anything wrong. He was doing exactly what his job required.

But I kept thinking: this shouldn't exist anymore.

Not him — he was wonderful. But that task. That specific, repetitive act of translating a human conversation into a database, one keystroke at a time. Technology that could handle that entirely on its own has existed for a while now. The reason it wasn't happening in that room had nothing to do with capability. It had to do with the fact that somewhere above him, someone with the authority to make that change hadn't made it yet.

He was doing his job perfectly. And his job was already half obsolete. He just hadn't been told.

That's the part that stayed with me on the drive home. Not the technology. The silence around it.

Most of the people whose work is being quietly reshaped by AI right now have no idea it's happening. Not because they're not paying attention. Because nobody is telling them. The conversation about what AI is and what it's already doing is almost entirely contained within a small circle of people in technology — and the rest of the world is going about its day, building careers and raising families, without any sense of what's already changing underneath them.

— — —

This Isn't Just About Tech Jobs

Consider the court reporter.

You've probably seen one without really noticing. The person sitting to the side of the courtroom, fingers moving impossibly fast across a small specialized keyboard, capturing every word in a room where words carry legal weight. It takes years to learn. The certification is hard-earned. For a long time, it was genuinely irreplaceable — because the accuracy required in a legal proceeding demanded a human being who understood context, who could tell one speaker from another, who could flag something unclear rather than guess.

AI can now do this. Not approximately. Not almost. With accuracy that meets or exceeds human performance, in real time, at a fraction of the cost.

Nobody held a press conference about it.

Or think about the paralegal. Someone who spent years in school learning the architecture of the legal system. How to research case law. How to draft documents. How to prepare a case for someone whose name will be on the brief but whose work they are quietly, invisibly carrying. AI now does that work — document review, legal research, contract summarization — in minutes.

These aren't people who chose the wrong career. They chose carefully. They invested in themselves. They built something real.

And the world changed the rules without telling them.

That's what makes this moment different from disruptions that came before. When ATMs arrived, bank tellers saw them. When self-checkout appeared, cashiers knew what was coming. The change was visible. You could point to it, argue about it, prepare for it.

This time the change is quieter. It's happening inside systems, inside workflows, in the back-end of industries you interact with every day without seeing how they work. By the time it becomes visible, many of the decisions will already have been made.

— — —

The Privilege of Seeing It

I have thought in this language my whole life.

Not code, exactly. Something underneath code. The instinct that a machine should already know what you need, that the information is all there, that the friction between human and computer is a problem waiting to be solved. I felt that before I had words for it. Long before I had a career around it.

For a long time I didn't realize how unusual that was. It felt like seeing — just the way I processed the world. It took years of watching other people's eyes glaze over mid-conversation for me to understand that what felt obvious to me was genuinely foreign to most people. Not because they were less capable. Because they had never needed to think this way.

That gap is something I've struggled with.

I've spent years trying to explain to people I love what is happening. My family. Friends outside of tech. People whose instincts and talents run in completely different directions than mine. And I've failed them, more often than I'd like to admit. I'd start talking about AI and within a few sentences I could feel the conversation slipping — their attention drifting not out of disinterest, but because I was speaking a language they had no reason to have learned.

This piece is my attempt to do better.

Because here's what I've come to understand: being close to this technology doesn't make me more important. It makes me more responsible. I'm not standing at the front of a wave that's coming for everyone else. I'm standing somewhere with better visibility, and the only thing that matters is whether I use it to help people see.

The people building and deploying AI right now are a remarkably small group. They are not malicious — most of them are genuinely trying to do something good. But they are making decisions, right now, about systems that will touch every single person reading this. And the smaller the circle of people engaged in that conversation, the more those decisions reflect a narrow set of experiences and priorities.

The room is too small. And most people don't even know there's a room.

— — —

We've Seen This Before

Let's go back further than most people think to.

Picture a child. Seven years old, maybe eight. Small enough to fit into spaces adults couldn't reach, which is precisely why they were there. Working fourteen-hour days in deafening noise, in air thick with dust that slowly destroyed their lungs. No one thought of themselves as a villain. There were quotas to meet, costs to cut, competitors to beat. The child was a resource. Cheaper than an adult. Easier to replace.

And when the machinery took a finger, or a hand, or worse — the machinery kept running.

This wasn't an anomaly. This was the system working exactly as designed. Because when transformative power moves faster than the rules meant to contain it, the people with the least power become the ones who pay the price. Not because those at the top are uniquely evil. Because unchecked systems optimize for efficiency, and human beings — their dignity, their safety, their futures — are inefficiencies.

We told ourselves we learned from that.

Then came oil. Entire towns built around a single company, where workers were paid in currency that could only be spent at company stores, housed in company homes, their lives owned in every practical sense by the same entity that signed their paychecks. When they organized, when they pushed back, the response was swift and brutal. Not because the people in charge were monsters. Because the system had no mechanism to make them care.

We told ourselves we learned from that too.

Then came social media. This one is recent enough that most people reading this lived through it. Platforms engineered — deliberately, measurably, with full internal knowledge — to be as addictive as possible. Algorithms designed not to inform or connect, but to inflame, because outrage kept people scrolling longer. We watched teenage girls develop eating disorders at scale. We watched misinformation spread faster than truth. We watched the mental health of an entire generation quietly, systematically eroded — and the people running those platforms knew. The internal research said so. They optimized anyway.

We told ourselves we would learn from that.

We are at the beginning of something categorically larger than any of these moments. And the window to shape it is right now.

The AI systems being built and deployed today are not neutral tools. They are making decisions about your life already. Whether your resume gets seen by a human being or filtered out in seconds by an algorithm trained on data that encodes decades of existing bias. Whether the financial product being offered to you is genuinely in your interest or engineered to extract maximum value from your specific vulnerabilities. Whether the information you see, the prices you're quoted, the opportunities surfaced to you — are shaped by systems optimizing for someone else's outcomes, not yours.

And here is the part that should make you feel something: most of this is invisible. There is no moment where you find out. There is no letter, no notification, no explanation. The decision was made, upstream, by a system you never interacted with, built by people you'll never meet, governed by rules that don't yet exist.

A decade ago, technology already existed that could track movement, behavior, and association at a scale most people would find difficult to believe. What exists today is something most of us can't fully picture. And what will exist in five years — if the people building it remain the only ones deciding how it's used — is something we should all be thinking very hard about.

The child in the mill had no idea that the system consuming their childhood was one that society would one day look back on with horror. They were just living in it. Trying to survive it.

We are not powerless the way they were. We have something they didn't.

We have the chance to see it coming.

— — —

This Time, We Don't Have Decades

Every major technological shift in human history has been disruptive. That's not new.

When the steam engine arrived, it rewired entire societies. Farmers became factory workers. Cities swelled. Families that had lived the same way for generations found themselves inside a world that no longer needed what they knew. It was painful and disorienting and it took decades.

But it took decades.

That time — as brutal as the transition was — allowed something important to happen. Labor movements formed. Laws were written. Institutions adapted. Not perfectly, not fairly, not without tremendous suffering. But there was enough time for society to find its footing. Enough time for the people most affected to organize, to push back, to have some say in the shape of what came next.

We don't have that this time.

What took decades with electricity, with the automobile, with the internet — is taking months now. The gap between a technology existing and that technology being everywhere has collapsed almost entirely. The AI that feels new and experimental to most people today will be embedded in hiring decisions, medical systems, legal proceedings, and financial products before most people have had a chance to form an opinion about it.

And when transformative power moves faster than the people it affects can respond, history tells us exactly what happens.

It concentrates.

This isn't speculation. It's the pattern. Every time a technology moved faster than governance, the people who got there first wrote the rules. They didn't have to be villains to do it. They just had to move fast while everyone else was still figuring out what was happening.

The difference this time is the stakes are total. We're not talking about one industry, or one country, or one corner of the economy. We're talking about cognition itself. The ability to think, to analyze, to create, to decide — these are the things that make human contribution valuable across every field, every profession, every corner of the world. And the tools that augment or replace those abilities are being built and controlled by a very small number of people and organizations.

That's not inevitable. But it becomes inevitable the longer the conversation stays inside the room it's currently in.

— — —

The Great Experiment

I came to this country when I was ten years old.

I didn't know what I was walking into. I didn't know the language well enough — I remember a kid saying "hell" and not understanding him, because where I came from, we said "etch", not H. I didn't know the culture, the rhythms, the unwritten rules of a place that was now supposed to be home.

What I did know, slowly, over years, was this: America had given me something. Not a guarantee. Not a free pass. But a door. An open door that said — whoever you are, wherever you came from, whatever you had or didn't have — you can walk through this and build something. You can contribute. You can matter.

That door is what this country has always been, at its best. Not perfect. Not always open equally to everyone — that's a wound we're still tending. But directionally, aspirationally, fundamentally — the promise of America is that the carpenter and the nurse and the teacher and the immigrant kid who can't say H get to have a say in what this place becomes. That no single group of people, no matter how powerful or how brilliant, gets to decide the future for everyone else.

That was the radical idea. That was the experiment.

Two hundred and fifty years ago, a group of people — flawed, brilliant, frightened, determined — sat in rooms arguing through the night about what kind of world they were building. They were building it for people they would never meet. For generations that didn't exist yet. For a future they could only imagine. And the thing that made what they built remarkable wasn't that they got everything right. It was that they tried to build a system where ordinary people could keep correcting it. Keep shaping it. Keep showing up.

We are in one of those rooms right now.

Not in Philadelphia. Not in the 1780s. But in the same kind of moment — a hinge in history where the decisions being made will echo for two hundred, three hundred, five hundred years. Where the shape of everything that comes after will be determined by what we do, or fail to do, in the years immediately ahead.

The people building AI right now are not villains. Most of them are brilliant and well-intentioned and genuinely trying to do something good. But they are few. And they are moving fast. And the systems they are building will touch every single person alive today and every person who comes after them.

History will look back on this decade the way we look back on the 1780s. As the moment when everything was being decided. When the arguments were happening. When the choices were still open.

The question it will ask of us is simple.

Were you in the room?

Not as an engineer. Not as a politician. Not as someone who understood every technical detail of how these systems work. Just as a citizen. Just as a person who understood that something enormous was happening and decided that their voice — imperfect, uncertain, still learning — belonged in the conversation.

The immigrant who couldn't say H gets to be in that room. The nurse. The teacher. The grandmother. The court reporter. The loan officer typing everything you said into a system that shouldn't exist anymore.

All of them. Every single one.

That is what democracy was built to protect. The right of ordinary people to have a say in the world their children will inherit.

Think back to that child in the mill. Seven years old. No one showed up for them. Not because people didn't care — but because the people who cared didn't know, and the people who knew didn't have the power, and the people who had the power had already made their calculations. By the time society looked up and saw what had happened, a generation had already paid the price.

We are not powerless the way that child was.

We have something they never had — the ability to see it coming. The warning is available. The conversation is possible. The door is still open.

The children who will inherit this world didn't get a vote on any of it. They weren't consulted when these systems were built. They will simply wake up one day in the world we chose to build, or chose to let be built for us.

They are counting on us to show up — not because we have all the answers, not because we understand every line of code or every boardroom decision being made right now, but because we are here. We are alive in this moment. And this moment is asking something of us.

Be in the conversation. Bring someone else into it. Ask the questions out loud. Talk to your kids, your parents, your neighbors, your coworkers. You don't need to understand the technology. You just need to decide that your voice belongs in the room where the future is being decided.

Because somewhere out there, there is a child who will inherit whatever we build or fail to build in these years. They can't speak yet. They can't vote yet. They can't show up yet.

But we can.

Don't let them grow up in a world we were too quiet to shape.

The Quiet Singularity

Recursive Self-Improvement, Collapsing Costs, and the Feedback Loop No One Is Governing

February 2026

An observational analysis of three concurrent developments in artificial intelligence — and their implications for institutions designed around assumptions of scarcity.

The Quiet Singularity

Recursive Self-Improvement, Collapsing Costs, and the Feedback Loop No One Is Governing

Aaryush Gupta · February 2026

An observational analysis of three concurrent developments in artificial intelligence — recursive model self-improvement, the commoditization of frontier-class inference, and the compounding automation of cognitive labor — and their implications for institutions designed around assumptions of scarcity.

I. The Shift

We are entering a different phase of artificial intelligence. Not better assistants. Not faster code generation. For the first time, we are observing systems that meaningfully participate in building the next generation of themselves — not in theory, not in research papers, but in production environments at the companies creating them.[1][2]

The evidence for this is no longer speculative. It is publicly stated, on the record, by the leadership of the two most prominent frontier AI laboratories in the world. And it is occurring simultaneously with a collapse in the cost of intelligence that most observers have not yet internalized.

This document examines three developments that, taken together, suggest we have entered a compounding feedback loop — one that is accelerating faster than the institutional, policy, and cultural frameworks designed to govern it.

II. Recursive Self-Improvement

A. OpenAI: GPT-5.3-Codex

On February 5, 2026, OpenAI released GPT-5.3-Codex and made an extraordinary public claim: it was the first model in the company's history that was, in their words, "instrumental in creating itself."[1] The Codex engineering team used earlier versions of the model to debug its own training runs, manage its own deployment pipelines, and diagnose its own test results and evaluations.[2]

OpenAI CEO Sam Altman described this publicly as "the beginning of the intelligence explosion."[3] The model's technical report confirmed it served as the primary engineer for its own final optimization phase and deployment pipeline.[4]

This is not fully autonomous recursive self-improvement in the theoretical sense. But it is the first commercial proof point that AI-assisted AI development is no longer theoretical. If a coding model can meaningfully accelerate its own development, the pace of future improvements compounds.[2]

B. Anthropic: Claude Writing Claude

On February 3, 2026, Anthropic Chief Product Officer Mike Krieger stated at the Cisco AI Summit: "Right now for most products at Anthropic it's effectively 100% just Claude writing, and then what we've done is created all the right scaffolds around it to let us trust it."[5] He described engineers routinely shipping pull requests of 2,000 to 3,000 lines generated entirely by Claude.[6]

Boris Cherny, head of Anthropic's Claude Code division, confirmed he has not written a single line of code by hand in over two months. In a public post, he stated: "I shipped 22 PRs yesterday and 27 the day before, each one 100% written by Claude."[7] An Anthropic spokesperson confirmed the company-wide figure is between 70% and 90%.[7]

Perhaps most remarkably, approximately 90% of Claude Code's own codebase is now written by Claude Code itself.[7][8] Anthropic's Cowork product — launched January 12, 2026 — was built in ten days by four engineers, with most of the code written by Claude Code.[9][10]

These are not startups exaggerating capability. These are the two leading frontier AI laboratories describing what is already happening inside their own walls.

III. The Collapse of Cost

Recursive improvement alone would be significant. But it is occurring alongside a dramatic and accelerating collapse in the cost of frontier-class inference.

MiniMax M2: Near-Frontier Performance at Utility Pricing

MiniMax, a Shanghai-based AI company, released M2 — a 230-billion-parameter mixture-of-experts model that activates only 10 billion parameters per token. It ranks in the top five globally on the Artificial Analysis Intelligence Index, scoring 61 — ahead of DeepSeek-V3.2 (57) and trailing Claude Sonnet 4.5 (63) and GPT-5 (69). On agentic benchmarks including Terminal-Bench and SWE-bench Verified, it is competitive with frontier proprietary models.[11][12]

Its API pricing: $0.30 per million input tokens and $1.20 per million output tokens.[13][14] This represents approximately 8% of the cost of Claude Sonnet 4.5 ($3.00 / $15.00 per million) with nearly double the inference speed.[15] The model is open-source under an MIT license, with weights freely available.

MiniMax M2.5, released days ago, is the first open-weights model to match Claude Sonnet on independent coding benchmarks.[16]

Model                Input / 1M tokens    Output / 1M tokens    Intelligence Index
Claude Sonnet 4.5    $3.00                $15.00                63
GPT-5 (thinking)     ~$3.00               ~$15.00               69
MiniMax M2           $0.30                $1.20                 61

Cost comparison across frontier-class models. Sources: Artificial Analysis, MiniMax official pricing, Perficient analysis.
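The "approximately 8%" figure above can be sanity-checked with a blended-cost calculation. This is a rough sketch, not a billing model: the prices come from the table, but the 50/50 input/output token mix is an assumption (real workloads vary, and output-heavy workloads pull the ratio toward 8%).

```python
# Sanity check of the cost-ratio claim. Prices are $ per 1M tokens,
# taken from the comparison table; the 50/50 token mix is an assumption.

prices = {
    "claude-sonnet-4.5": {"input": 3.00, "output": 15.00},
    "minimax-m2": {"input": 0.30, "output": 1.20},
}

def blended_cost(model, input_share=0.5):
    """Blended $ per 1M tokens for an assumed input/output token mix."""
    p = prices[model]
    return input_share * p["input"] + (1 - input_share) * p["output"]

sonnet = blended_cost("claude-sonnet-4.5")  # $9.00 at a 50/50 mix
m2 = blended_cost("minimax-m2")             # $0.75 at a 50/50 mix
ratio = m2 / sonnet
print(f"M2 blended cost is {ratio:.0%} of Claude Sonnet 4.5")  # ~8%
```

On output tokens alone the ratio is $1.20 / $15.00 = 8%; on input tokens it is 10%. Any reasonable blend lands in the 8-10% range the essay describes.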

IV. The Implication

Intelligence is no longer scarce. It is approaching the price profile of a utility — available to anyone, at any scale, for almost nothing.

I am experiencing this directly. I have not written a single line of code by hand in 2026. My speed and quality of execution are higher than at any point in my career. I am building more, shipping faster, and solving harder problems — not because I became a better engineer, but because the nature of what "engineering" means changed underneath me while I was doing it.

That last sentence should sit with you for a moment.

When execution becomes abundant, leverage shifts — toward judgment, taste, intent, and meaning. The things we spent decades treating as soft skills quietly became the only skills that differentiate human contribution from machine output.

V. The Feedback Loop

What is striking is not a single breakthrough. It is the structure of the dynamic itself.

Better tools build better tools. Those tools lower the cost of building. Lower costs expand access. Expanded access accelerates the next cycle. Each iteration makes the subsequent one faster, cheaper, and more capable — and the humans in the loop shift from writing to directing, from executing to deciding.

This is not a metaphor. It is the operational reality at the organizations building the most advanced AI systems on the planet. OpenAI explicitly described GPT-5.3-Codex as "the first self-developing AI coding model."[2] Anthropic's CPO confirmed "Claude is being written by Claude."[6] SemiAnalysis reports that 4% of all GitHub public commits are now authored by Claude Code, projecting 20%+ by end of 2026.[8]

And it is compounding.

VI. The Institutional Gap

We are not prepared for what this implies.

We do not have language for a world where cognitive labor compounds on itself. We do not have policy for an economy where the marginal cost of expertise trends toward zero. We do not have institutions designed for a rate of capability growth that outpaces the legislative, educational, and cultural systems meant to govern it.

We are still running on frameworks built for scarcity — of knowledge, of skill, of access — and those frameworks are dissolving faster than we are building replacements.

This does not feel like an explosion. It feels like a slope that just became steeper than anything underneath us was designed for.

And the unsettling part is not the speed. It is the quiet. It is how few people have noticed that the ground shifted.

We may look back on this period not as the moment AI arrived, but as the moment the assumptions underneath work, expertise, progress, and purpose began to shift — slowly at first, and then all at once.

References

  1. NBC News. "OpenAI says new Codex coding model helped build itself." February 5, 2026.
  2. The New Stack. "OpenAI's GPT-5.3-Codex helped build itself." February 5, 2026.
  3. Let's Data Science. "GPT-5.3 Codex Explained: OpenAI's Self-Developing Agent." February 2026.
  4. Creati.ai. "OpenAI Launches GPT-5.3-Codex: Revolutionary AI Model That Helped Build Itself." February 6, 2026.
  5. IT Pro. "Anthropic Labs chief Mike Krieger claims Claude is essentially writing itself." February 2026.
  6. TechStory. "'Claude Writing Claude': The Code of Anthropic is Now Nearly 100% AI-Generated." February 9, 2026.
  7. Fortune. "Top engineers at Anthropic, OpenAI say AI now writes 100% of their code." January 29, 2026.
  8. SemiAnalysis. "Claude Code is the Inflection Point." February 2026.
  9. Axios. "Anthropic's Claude Cowork wrote itself." January 13, 2026.
  10. The Pragmatic Engineer. "How Claude Code is built." 2025.
  11. DeepLearning.ai / The Batch. "MiniMax-M2's Lightweight Footprint and Low Costs Belie Its Top Performance." November 6, 2025.
  12. Artificial Analysis. "MiniMax-M2 — Intelligence, Performance, and Pricing."
  13. MiniMax. Official API pricing documentation.
  14. MiniMax. M2 model release announcement.
  15. Perficient. Cost analysis of frontier AI models. 2026.
  16. OpenHands. "MiniMax M2.5: Open Weights Models Catch Up to Claude Sonnet." February 12, 2026.

This isn't speculation.
These are the companies' own words.

I don't have answers. But I am increasingly worried that we are not even asking the right questions yet.

Aaryush Gupta