AI 2026: Where We Are, Where We Are Heading

Here is what we’ve got in store.

Three Big Things from 2025

  • Agents
  • MCP
  • Reasoning Models

Current State of Play

  • Alphabet
  • Anthropic
  • Grok
  • OpenAI

What’s Next

  • Interoperability
  • Personal Assistants
  • Devices

Past: Three Big Things from 2025

Last year brought some good and bad news in the AI space: model performance improvements, scientific breakthroughs, and a deluge of model slop. Here are three things that made the scene last year.

Agents

Last year, the choir was calling for agentic AI. The industry delivered.

All major cloud providers now have AI infrastructure offerings: building blocks ready to assemble into your own squadron of agents. Other companies provide purpose-fit capabilities, from CrowdStrike's agentic AI security platform to Salesforce's agent platform Agentforce.

Agent IDEs, such as Google’s Antigravity, and command-line interfaces, such as Claude Code, are purpose-built for vibe coding. New paradigms and tooling for how humans and agents collaborate during the software development lifecycle are solidifying. Agents run for hours based on context and prompts, cranking out code, testing their work, and iterating until the job is done, or the whole setup implodes. 

MCP

MCP was introduced in late 2024 and has been widely adopted across the AI ecosystem. An open standard like HTTP, the Model Context Protocol lets models connect to external systems to consume data and control tools in service of the task set by the prompt.

MCP's growth path mirrors that of past computing interface standards. Security and management practices are converging, many built on concepts from earlier computing generations or borrowed directly, such as OAuth 2 for authorization.
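To make the "open standard like HTTP" point concrete, here is a sketch of what an MCP tool invocation looks like on the wire. MCP messages ride on JSON-RPC 2.0; the tool name and arguments below are hypothetical, not from any real server.

```python
import json

# Sketch of an MCP tool call. A client asks a server to run a named
# tool with structured arguments via a JSON-RPC 2.0 request.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_tickets",  # hypothetical tool exposed by a server
        "arguments": {"query": "login bug", "limit": 5},
    },
}

wire = json.dumps(request)  # what actually travels over the transport
print(wire)
```

Because the envelope is plain JSON-RPC, any client that speaks the protocol can call any server's tools, which is exactly what makes the standard sticky.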

As we will see in the What’s Next section, MCP is a big part of what will drive the next generation of AI use through interoperability.

Reasoning Models

The seed of reasoning models was planted in 2022, when researchers at Google Brain discovered chain-of-thought prompting. In September of 2024, OpenAI’s o1-preview was the first model to introduce chain of thought, or reasoning, to the market.
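The core idea behind chain-of-thought prompting is easy to show. Instead of asking only for an answer, the prompt includes a worked example whose reasoning steps the model is expected to imitate. The task and wording below are illustrative, not taken from any paper.

```python
# Direct prompting: ask for the answer with no worked example.
direct_prompt = (
    "Q: A pen costs $2 and a pad costs $3. What do 4 pens and 2 pads cost?\n"
    "A:"
)

# Chain-of-thought prompting: show one example that reasons step by
# step, then pose the real question. The model tends to imitate the
# reasoning pattern before giving its answer.
cot_prompt = (
    "Q: A book costs $5 and a bookmark costs $1. What do 2 books and 3 bookmarks cost?\n"
    "A: 2 books cost 2 * $5 = $10. 3 bookmarks cost 3 * $1 = $3. "
    "Together that is $10 + $3 = $13. The answer is $13.\n\n"
    "Q: A pen costs $2 and a pad costs $3. What do 4 pens and 2 pads cost?\n"
    "A:"
)
```

Reasoning models internalize this pattern: rather than relying on the prompt to elicit intermediate steps, they are trained to produce them on their own.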

Now, choosing between Fast and Reasoning models is a standard option in LLM use. Like everything in AI, despite the advances and promise of reasoning models, problems remain.

Present: Current State of Play

Alphabet 

Alphabet has been crushing it in the stock market and dominating the news. It is the clear leader on LM Arena, as well as in the overall race to dominate AI.

Data is one of the three primary inputs to AI. From its search empire, YouTube, and more, Google has more quality data than any other platform.

Gemini is everywhere behind the scenes, from Google web searches to Android devices. Through permissions you grant in Connected Apps, it can use data from Gmail, Calendar, Google Photos, and more. That data delivers a personalized experience you won't get anywhere else.

Its core business is wildly profitable. A bubble burst in AI would hurt, but Alphabet has no worries about cash flow. It scored a big win with its deal to be the model behind Siri. Finally, as we'll discuss later, it has built-in advantages in platforms, specialized markets, and partnerships.

Anthropic 

Anthropic is the darling of the community most heavily invested in AI: software developers. It earned credibility through a solid offering, introducing MCP and Claude Code and powering tools like Cursor.

Anthropic's flagship offering, Claude, is near the top of the overall leaderboard, with strong showings in text and code.

They are on a two-year path to profitability and may be the first of the new wave of gen-AI companies to IPO this year. Reaching the public market first would be a win in itself.

Grok

Grok shows that massive investment can close the gap quickly. It isn't leading any category on LM Arena, but it does come in second overall.

From a data perspective, Elon has cast a wide net. Beyond traditional sources, he’s asked people to submit their health data and is likely to have scored big with the data exfiltrated from the government via DOGE.

The value of X as a data source is questionable. There are just so many bots, and the algorithm is so heavily tilted, that I wonder what value it will provide once distilled through xAI. Garbage in, garbage out.

Seeding an LLM to the point that it refers to itself as MechaHitler doesn't engender trust. Even worse is allowing Grok to generate CSAM after warnings from internal teams. This behavior would be grounds for dismissal even if the platform weren't owned and operated by someone with well-known ties to Epstein.

On the funding side, Elon has plenty of money thanks to the overvaluation of Tesla. He also has wealthy friends and other leverage. SpaceX is set for a massive IPO this year, and Elon is set to merge xAI and SpaceX.

This would unlock a lot of cash for his efforts, which include deploying a constellation of 1 million orbital data centers (satellites). I am pro-space-based data centers, given the advantages of solar energy and cooling, but I'm not a fan of this type of deployment. Low Earth orbit is cluttered enough. Given the quality and safety concerns that come with a manic push for massive deployments (see Tesla), I think this is a bad idea.

See also Elon's track record of over-promising and under-delivering.

OpenAI 

OpenAI isn't even in the top five of the overall leaderboard. Its strength is text-to-image and image editing, which I'm doubtful it can leverage into cash.

They seem confused about who they are competing against: going into Code Red because they are losing market share to Gemini, naming Apple as the competition while Apple signs deals to run Gemini, and racing to IPO in an attempt to beat Anthropic.

From a cash perspective, they are currently four years out from profitability. At the same time, they are being sued for $134 billion by Elon Musk, and a planned $100 billion investment from Nvidia recently fell through.

They are making noise about supporting the science breakthroughs we've all been told AI will deliver. Unfortunately, the effort is just a space for scientists to collaborate around its LLM. That is not where real scientific breakthroughs will come from.

Scientific breakthroughs will come from the systems Alphabet has built like AlphaFold, now working in conjunction with Gemini. OpenAI has a long-term investment to make if it wants to play in this space.

I'm not a hater of Sam Altman and company, but I do think they need to narrow their focus to survive. OpenAI needs a specialty it can leverage, as Anthropic has with code. Absent that, it's going to lose to players like Google that have a broader offering, along with strong platforms and partners to deliver it.

What’s Next?

Interoperability

What's better than an agent? Two, or more. Now they need to work together without human intermediaries.

Protocols and tooling are enabling broader interaction among AI systems, opening new use cases by letting specialized agents combine their capabilities.

Protocol              | Developer            | Primary Focus         | Use Case
----------------------|----------------------|-----------------------|-------------------------------------------------------------------------
MCP (Model Context)   | Anthropic / Open     | Model-to-Tool         | Connecting an agent to your local files, Slack, or GitHub.
A2A (Agent-to-Agent)  | Google / Partners    | Deep Collaboration    | Cross-vendor task delegation (e.g., Gemini asking Claude to audit code).
ACP (Agent Comm.)     | IBM / Linux Found.   | Structured Messaging  | Enterprise-grade multi-agent orchestration via RESTful standards.
ANP (Agent Network)   | Cisco / Open Source  | Decentralized P2P     | Fully decentralized discovery and trust without a central registry.
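The cross-vendor delegation idea is simple to sketch. Below is a toy, in-process version: an orchestrating agent routes a task to whichever peer advertises the needed specialty, with no human in the loop. All names are made up; real protocols add discovery, authentication, and task lifecycle on top of this pattern.

```python
class Agent:
    def __init__(self, name, skill):
        self.name = name
        self.skill = skill  # the one thing this agent specializes in

    def handle(self, task):
        return f"{self.name} completed '{task}' using {self.skill}"


class Orchestrator(Agent):
    def __init__(self, name, skill, peers):
        super().__init__(name, skill)
        self.peers = peers  # agents reachable for delegation

    def delegate(self, task, required_skill):
        # Find a peer advertising the needed specialty and hand off.
        for peer in self.peers:
            if peer.skill == required_skill:
                return peer.handle(task)
        return self.handle(task)  # fall back to doing it ourselves


# Hypothetical agents standing in for different vendors' systems.
auditor = Agent("claude-agent", "code-audit")
lead = Orchestrator("gemini-agent", "planning", peers=[auditor])
print(lead.delegate("audit the payment module", "code-audit"))
```

The protocols in the table differ mainly in how they handle what this sketch ignores: finding peers, trusting them, and tracking a task across systems.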

Collaboration between humans and AI is the other layer of interoperability. Humans& is talking about addressing this, but has no product yet.

Software development is at the forefront of advanced collaboration between humans and AI. Existing platforms and procedures for human collaboration are getting new layers to bring AI agents into the mix.

Platforms

Platforms are the software or hardware systems that deliver AI services to users. These include search engines such as Google and communication platforms like Facebook and WhatsApp. The hardware side includes our phones, smart speakers, and the like.

While the use cases and user bases of these platforms seem static, they are not. Google dominated the web until social media emerged and Facebook captured half the advertising market. Now AI is arriving to capture user attention, just as bot traffic rises on social media and interpersonal human use plummets. The tides are shifting.

Personal Assistants 

Jarvis is the gold standard. Beyond Tony Stark's money, we all want a personal assistant like that. This isn't about little helpers in each app; it's about the layer above that will eventually orchestrate the capabilities of the individual apps.

This is an area that Alphabet will dominate. If I were hiring a human assistant, they would get more value from workspace apps than from my social networks. Communication between people should be based on open protocols such as email, SMS, and telephone, not on platforms like WhatsApp or Facebook Messenger.

Devices

Phones, computers, and speakers are ubiquitous; these devices alone can provide an ambient computing experience that follows us anywhere. Still, new capabilities are screaming for a few more hardware platforms to deliver ambient experiences.

Zuckerberg is spot on about the utility of smart glasses. I loved my first-generation Meta Ray-Bans. Meta was the first to field a great product, in both capability and fashion. The new Display model is great: the camera is much better, and the display is neat, but I'm not sure how often I'd actually use it.

Google is on the verge of disrupting this market, with glasses slated for release this year. Google is casting a wider partnership net than Meta: similar, if less luxe, fashion partners, plus partnerships with Samsung and Apple, each of whom will release its own smart glasses using Google's XR/Gemini stack.

As someone who used smart glasses for years, I found the most compelling use cases were listening to music and making calls. Second was taking photos and videos, and a distant third was AI interaction. 

Most of the time, when I wanted to interact with AI, I wanted the type of personal assistant interaction you can only get with integrated access to workspace apps like mail, calendar, notes, tasks, etc. I never had a use case where data from my social network was handy.

Pins that can do audio/video? Yawn. Not much a person couldn’t do with their phone if they wanted. I think there are use cases for smaller personal drones, but it’s a bit niche.

Robots are the big platform coming, maybe not this year, but soon. As alluded to with the cover image of last year's AI look-forward, AI capability has leapfrogged the hardware platforms; now I expect hardware to catch up and give the software new room to grow.

In this area, Alphabet once again stands out as the only vendor offering an on-platform version of Gemini and an SDK for robotics development.

Past, present, and now the future.

Alphabet will continue to dominate. First, by being at the top in the consumer space based on having a quality offering, existing platforms and partners, and winning new business from glasses to robotics. 

Grok may find a niche or two that play to its strengths, but it still won’t make money without sweetheart deals and siphoning from SpaceX. Its wildcard would be infrastructure advantage with a space-based "data constellation".

Anthropic is going to be fine. They have a path to profitability through expertise in a key use case: software development. This is the play newcomers need to make. Be an expert in something, not a generalist.

OpenAI is in the most peril: if the AI bubble bursts, it is the most exposed. I hope it finds its niche, because competition is healthy for any market.

We will see where we are a year from now. Given the resources being poured into the field, progress is inevitable. What do you think?