All articles

What We’re Reading (Week Ending 08 March 2026)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the readership of The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 08 March 2026:

1. Iran: The Day After – Tomas Pueyo

Persia’s Shah used to be aligned with the West: He modernized the country, invited foreign investments, built a lot of infrastructure, improved literacy and healthcare…

The radical Islamists didn’t like this modernization, so they allied with the local Left to gain power, and succeeded in 1979. This means the entire legitimacy of the regime is based on opposing the US, its allies, and its values.

This would not have been a problem if Iran had limited itself to hating the US and Israel. Instead, they’ve threatened to attack and eliminate them for the last 47 years, and they haven’t limited themselves to empty threats. They’ve developed ballistic missile and nuclear weapon programs to be able to obliterate Israel, and maybe attack the US too.

For the last few decades, the US and Israel have tried to manage the situation, but the closer Iran is to getting nuclear weapons, the less they can tolerate it. Until recently, they were forced to because Iran was quite strong, with proxies in Palestine, Lebanon, Syria, Iraq, and Yemen. But after October 7th 2023, Israel has systematically eliminated most of them, so it and the US saw an opening last year to weaken Iran and its nuclear program, and took it. But that was just a delay. The truth is they will only be safe when this regime falls.

The problem is that achieving regime change is going to be very difficult…

…The recent strikes have killed the existing Supreme Leader, but there’s a long chain of command to replace him and any other leader killed through strikes. Then, there’s Khamenei’s Bayt, a group of 4,000 close employees who manage Khamenei’s affairs and power, and work as a shadow government mirroring the official one…

…Through this body, Khamenei controlled the BMEE and AQR, huge conglomerates of over 200 companies with interests in real estate, construction, industry, mining, energy, power, food, agriculture, tourism, transportation, IT, media…

Khamenei’s Bayt was also able to infiltrate the military and the IRGC (Islamic Revolutionary Guard Corps), a kind of Praetorian Guard with over 125,000 members sourced from the Basij militia, a bigger group of ~400,000 poor, Shia radical volunteers (and 25 million members!!) who police the country on behalf of the government…

…45% of the Iranian government’s income comes from oil. If the US and Israel prevent Iran from selling its oil, its income will dry up, and it won’t be able to pay salaries. My guess is the Iranian regime will prioritize IRGC, Basij, and military salaries, but even then, losing 50% of your income can’t be easy. Unfortunately, this takes some time to bite, as the government will use other resources to pay its forces for as long as possible, and people can sometimes withstand some time without a salary…

…The vast majority of Iranians are tired of their government.

They are now celebrating the bombings on the streets.

The first consequence is oil. Iran has closed the Strait of Hormuz, many oil pumping stations and refineries have been hit in the area, and oil has stopped flowing. This will put pressure across the world too as oil prices increase…

…Saudi Arabia can ramp up supply, and employ an east-west pipeline that should be able to bypass the strait. It won’t be enough to counter the entire drop in supply, but it might end up benefiting Saudi Arabia through higher oil prices.

Meanwhile, the biggest consumer of Iranian oil is China, but it has historically high oil and gas reserves, so it might be able to withstand the war if it’s short enough…

…Four years ago, China had collected anti-US friends in Russia, Iran and its proxies in Syria, Lebanon, Hamas and the Houthis, Venezuela, Cuba, and a host of satellites considering whether to join them or not. Israel took care of Iran’s acolytes. The US neutralized Venezuela, Cuba is isolated and cut off from oil, Russia is bogged down in Ukraine, and Iran is at risk of falling. Virtually every friend that China has cultivated over the last few years is crumbling.

Not only that, but China’s standing as a provider of technology and military power is completely exposed. If China won’t come to the rescue of its allies, and its weapons can’t stop the US, who will want to side with them?

Then there’s the oil. Venezuela and Iran together accounted for 17% of China’s oil imports.

This is a bad day for China…

…Iran has 90M people, nearly twice South Korea’s population. 42% of them are under 25, and they have a 98% literacy rate. The country birthed one of the oldest civilizations on Earth, the first empire, and has seen a succession of successful ones through the ages. Its diaspora in the world—especially in the US—is educated, rich, and powerful. It could fund and provide the leadership for a renaissance in the country.

But only if the current regime falls.

2. A Munger PA Investment – Joe Raymond

The Alfred C. Munger Foundation (named for Charlie’s father) sold 10,000 shares of Black Hills Corporation (BKH) for $23 each in June 2009, resulting in a short-term gain of 29%…

…A reasonable assumption based on this filing is that Charlie purchased this specific lot of 10,000 shares for $18 apiece in early 2009 and sold in June 2009 around $23.

He could have been buying the stock before that and holding shares after.

The only thing we know with reasonable certainty is that Charlie thought Black Hills was a good buy in 2009 at $18 per share…

…Black Hills is a utility company based in South Dakota.

It was formed in 1941 through a combination of several existing utility companies serving the Black Hills region. The earliest predecessor traces its roots back to 1883…

…Black Hills could be described as a decent and predictable business in the years leading up to Charlie’s purchase. ROE was in the low double digits and book value per share growth (adding back dividends) averaged 11% from 2002 to 2008.

Simple, clean, predictable, decent quality…

…Black Hills earned $105 million in 2008 ($2.75 per share). It paid $1.40 of dividends that year and finished the year with $27.19 of per share book value…

…I think the thesis here was pretty simple.

A durable, safe business that earns double digits on equity shouldn’t trade for 66% of book value.

The crashing economy wasn’t going to kill the utility business. People still needed to turn their lights on and fire up the stove…

…BKH’s average price three years later in 2012 was $33.66 per share, good for a return of 98% (25% CAGR) before dividends…
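The valuation figures quoted in the excerpt are easy to sanity-check with a few lines of arithmetic. A minimal sketch, using the article's inferred $18 purchase price, and approximating ROE with year-end (rather than average) book value:

```python
# Figures from the article: 2008 EPS, year-end 2008 book value per share,
# and the inferred early-2009 purchase price.
eps_2008 = 2.75
book_value_per_share = 27.19
purchase_price = 18.00

price_to_book = purchase_price / book_value_per_share
roe_approx = eps_2008 / book_value_per_share  # year-end book value, so only approximate

print(f"P/B at purchase: {price_to_book:.0%}")  # ~66% of book value
print(f"Approximate ROE: {roe_approx:.1%}")     # ~10%, the low end of "low double digits"
```

Both numbers line up with the article: roughly 66% of book value for a business earning about a 10% return on equity.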

…The Black Hills case isn’t terribly exciting, but I do find it interesting and useful.

If I had to nail it down to one simple idea it would be this:

Buying an adequately capitalized business that should earn at least a high-single-digit return on its common equity, at a substantial discount to book value, often works very well over short- and medium-term time frames.

3. Anthropic’s AI tool Claude central to U.S. campaign in Iran, amid a bitter feud – Tara Copp, Elizabeth Dwoskin, and Ian Duncan

As planning for a potential strike in Iran was underway, Maven, powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance, said two of the people. The pairing of Maven and Claude has created a tool that is speeding the pace of the campaign, reducing Iran’s ability to counterstrike and turning weeks-long battle planning into real-time operations, said one of the people. The AI tools also evaluate a strike after it is initiated, the person said.

Claude has also been used in countering terror plots and in the raid that captured Venezuelan president Nicolás Maduro. But this is the first time it has been used in major war operations, according to two of the people…

…“It is notable that we’re already at the point where AI has gone from hypothetical to supporting real-world operations being conducted today,” said Paul Scharre, executive vice president at the Center for a New American Security, and who has written about AI in warfare. “The key paradigm shift is that AI enables the U.S. military to develop targeting packages at machine speed rather than human speed.”

The downsides, he said, are “AI gets it wrong. … We need humans to check the output of generative AI when the stakes are life and death.”

The Pentagon began to integrate Anthropic’s Claude chatbot into Maven in late 2024, according to public announcements. The system has been used to generate proposed targets, to track logistics and provide summaries of intelligence coming in from the field. The Trump administration has vastly expanded the use of Maven into many other parts of the military, with over 20,000 military personnel using it as of last May…

…Ben Van Roo, the CEO and cofounder of Legion Intelligence, a defense software startup, said that in his work over the last two and a half years integrating generative AI into software systems at the Department of War, “the baseline use case is chat and advanced search functions — essentially summarizing information.”

It’s not highly integrated into weapons or mission critical systems, he said. He said that he wasn’t aware of its use in Iran, but wondered how it built on existing software that is already able to prioritize targets.

4. The Coase Conjecture in AI Inference Markets – Soren Larson

In 1972 Coase posed a simple question: If a monopolist owns all the land––assumed to be homogenous in kind and quality––in the world, at what price does he sell it?…

…Coase’s argument is interesting and simple. Normally a monopolist would set quantity sold where marginal revenue equals marginal cost. For convenience, let’s say marginal cost is zero.

Once the monopolist land owner has sold a bit of land, he sees the remaining land is still available, but not monetized. Maybe he should sell a bit more––it’d generate pure profit! To do that, however, he’d have to lower the price to what the remaining buyers are willing to pay.

Doing this annoys the original buyers.

The land is now worth less than what they paid. Eventually, however, the market catches on. Candidate buyers know the monopolist can’t resist selling more land (marginal cost of selling is zero!) and so they wait.

While the monopolist technically has no competitors, he ends up with one he didn’t expect––his future self. In situations like this, the market can guess a monopolist’s future behavior, so it holds out waiting for the “future self” monopolist to depress his own prices…
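The unraveling logic can be made concrete with a textbook two-period version of the model (a stylized sketch under my own assumptions, not a model from the article): buyer values are uniform on [0, 1], marginal cost is zero, and both sides discount period 2 by delta. With commitment, the monopolist would charge the monopoly price 0.5 forever and earn 0.25.

```python
# Two-period durable-goods monopoly: buyer values v ~ Uniform[0, 1],
# zero marginal cost, both sides discount period 2 by delta.
# A stylized textbook sketch of the Coasian unraveling, not from the article.

delta = 0.9  # assumed discount factor

def profit(k):
    """Monopolist profit when the period-1 marginal buyer has value k.

    Period 2: remaining values are Uniform[0, k], so the residual
    monopoly price is p2 = k/2, selling to a mass k/2 of buyers.
    Period 1: the marginal buyer v = k is indifferent between buying
    now (k - p1) and waiting (delta * (k - p2)), so p1 = k * (1 - delta/2).
    """
    p1 = k * (1 - delta / 2)
    p2 = k / 2
    return (1 - k) * p1 + delta * (k / 2) * p2

# Grid search for the monopolist's best period-1 cutoff.
ks = [i / 10000 for i in range(1, 10000)]
k_star = max(ks, key=profit)
p1 = k_star * (1 - delta / 2)

print(f"opening price without commitment: {p1:.3f} (vs 0.500 with commitment)")
print(f"profit without commitment: {profit(k_star):.3f} (vs 0.250 with commitment)")
```

With delta = 0.9, the opening price falls to roughly 0.465 and profit to roughly 0.233: the monopolist's future self forces the price below the monopoly level even in two periods. With many periods and patient players, the price collapses toward marginal cost, which is Coase's conjecture.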

…At first glance, Coase seems to apply directly: frontier labs can’t sustain monopoly prices because they can’t resist selling more and more inference, buyers anticipate this, and prices unravel.

This, of course, is incomplete in that every inference customer can choose to buy inference from cheaper open source models. It turns out the existence of open-source alternatives protects the monopolist’s pricing power by giving customers a reason to exit the frontier market rather than wait for discounts…

…In cases where buyers have an Outside Option––where they can defect from the monopolist’s market and buy some alternative––the Coasian monopolist unraveling doesn’t happen. The monopolist can sustain the monopoly price indefinitely.

Empirically, this appears to be happening in the inference market…

…Effectively, the outside option is a self-selection device that relieves the monopolist from price-sensitive waiters who’d pressure prices downward over time. The monopolist loses some customers but gets to keep pricing power. This is broadly what we see today…

…There are clear extensions to this setting in inference markets. Suppose you’re considering developing new software using AI: for you, waiting for Anthropic to lower prices could prove costly. A competitor who pays full price today could lock in customers before you enter the market. This dynamic is likely what explains today’s inference market structure: buyers would prefer to pay full price or defect to Minimax M2.5 or GLM 4.7 today than wait and let competitors eat their lunch.

The other extension, of course, is that Outside Options keep getting better. Open source models are improving every quarter: A buyer who defects today to a mediocre alternative might have waited for a better one in a quarter––returning us to the original Coase setting…

…Suppose now that the monopolist wins on all counts: open source improvement is slow enough that buyers don’t bother to wait. Open-source capability might even plateau. The Board and Pycia result holds and the monopolist charges its optimal price at equilibrium.

Is our beloved monopolist now safe?

So far we’ve only discussed pricing power, but what about market capture? Even if the monopolist preserves its pricing power, it could be that so much of the market defects to the Outside Option that pricing power is practically irrelevant.

Consider the buyer’s problem. The inference buyer only pays the monopolist’s pricing premium if the frontier model offers enough additional value over the open-source alternative to justify the price. When open source closes the gap, it reduces the set of buyers for whom the frontier premium is worth the price. These two dynamics compound: a shrinking premium (reflecting the lower marginal benefit of the frontier model over the Outside Option) combined with a shrinking customer base means the monopolist’s total revenue erodes faster than the capability gap closes.
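The compounding claim can be illustrated with a toy model (my own stylized assumptions, not the article's): buyer i values the frontier model at gap × wᵢ over the open-source alternative, with wᵢ uniform on [0, 1], and pays a fixed integration cost F to adopt the frontier; the monopolist then sets the revenue-maximizing premium.

```python
# Toy model of compounding revenue erosion (stylized assumptions, not the
# article's): buyer i gains gap * w_i from the frontier model over open
# source, w_i ~ Uniform[0, 1], and pays a fixed integration cost F to
# adopt at premium p. Buyer i adopts iff gap * w_i - p - F >= 0.

F = 0.1  # assumed fixed adoption cost

def optimal_revenue(gap):
    # The adopting mass at premium p is 1 - (p + F) / gap, so revenue
    # p * (1 - (p + F) / gap) is maximized at p* = (gap - F) / 2.
    p_star = (gap - F) / 2
    return p_star * (1 - (p_star + F) / gap)

for gap in (1.0, 0.5, 0.25):
    print(f"capability gap {gap:.2f} -> optimal premium revenue {optimal_revenue(gap):.4f}")
```

In this sketch, halving the gap from 1.0 to 0.5 cuts revenue from 0.2025 to 0.08 (a drop of about 60%), and halving it again cuts revenue to 0.0225 (a further drop of about 72%): the premium and the adopter base shrink together, so revenue falls faster than the gap closes.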

Of course, this argument depends on inference buyers actually connecting their buying decisions to value actually delivered.

The market may not be doing this today––many prefer to build Tool Shaped Objects. In fairness, model capabilities are jagged, and it’s a reasonable strategy for firms to keep buying frontier models, irrespective of the underlying value proposition, while the technology matures. On the other hand, as the technology matures and firms begin to connect their inference consumption to value delivered, demand shifts from “just buy the best” to “maximize margins” or “buy what’s worth paying for.” In this world, the monopolist’s value proposition reduces to its incremental value over the Outside Option. And that shrinks even as open-source improves…

…Board and Pycia explain why margins are high: outside options remove the price sensitivity of buyers. High margins are an artifact of the Coase selection mechanism, not evidence of a durable business.

The labs clearly can charge, and are charging, high margins today. That’s not the question. The question is whether they will still be charging high margins in three years.

If open source keeps closing the gap, the answer from Board and Pycia––and from Ronald Coase––is probably not.

5. Biggest AI Prediction & Why I’m Allocating $200,000 to it – ContraTurtle

I categorize the AI stack into six levels:

  • Level Zero: Energy (GE Vernova, Cameco Corp, Constellation Energy, etc.)
  • Level One: Chips (TSMC, Nvidia, AMD, ASML, Broadcom, etc.)
  • Level Two: Infrastructure & Data Centre (Equinix, Arista Networks, Vertiv, Amazon, Google, Microsoft, etc.)
  • Level Three: AI Foundation Model Companies (OpenAI, Anthropic, Google DeepMind, Mistral, etc.)
  • Level Four: AI Software Infrastructure (Amazon Web Services, Google Cloud Services, Microsoft Azure, Palantir, Snowflake, Databricks, etc.) – Enterprise platforms enabling AI deployment, orchestration, and data pipelines
  • Level Five: AI Applications, Apps and Services (Meta, Google, Microsoft, Amazon, ServiceNow, Shopify, Axon, Netflix, etc.) – Companies delivering end user value and capturing economic surplus from AI optimisation

I will be focusing on Level Five in this article because this is where economic validation happens.

You can have:

  • The most advanced GPUs
  • The cheapest energy
  • The largest data centres
  • The most powerful foundation models

None of it matters if end users do not generate ROI that justifies capex deployed upstream.

Level 5 determines whether the entire AI stack earns an adequate return on capital.

Over the long term, the bulk of economic surplus accrues to the layer closest to the customer. Historically in technology cycles, infrastructure enables value creation, but applications capture pricing power.

This layer is still early…

…But there is one use case where AI ROI is already direct, measurable, and immediate, and that is advertising.

Let me explain.

Ads share two structural traits with coding (a use case that has shown the most promise in enterprise):

  • Low cost of failure from hallucinations, yet high ROI
  • Built-in verification mechanisms

In coding, hallucinated outputs are caught through testing frameworks. Unit tests, integration tests, and runtime checks validate whether the generated code works. If it fails, it does not ship.

Advertising works similarly.

An advertiser can generate five variations of an AI-created image, headline, or video and deploy them simultaneously. Performance is verified empirically through A/B testing across metrics such as:

  • Click-through rate
  • Conversion rate
  • Return on ad spend

Poor-performing creatives are automatically filtered out by the market. Strong performers scale.

Advertising is therefore a near-perfect commercial application of probabilistic AI.
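The verification loop described above can be sketched in a few lines (hypothetical creatives and click-through rates; in practice the true rates are unknown, and the A/B test is what reveals them):

```python
import random

random.seed(0)  # reproducible sketch

# Hypothetical true click-through rates for five AI-generated creatives.
# In a real campaign these are unknown; the test reveals them empirically.
true_ctrs = {"A": 0.010, "B": 0.014, "C": 0.008, "D": 0.022, "E": 0.011}
impressions_per_creative = 50_000

observed = {}
for name, ctr in true_ctrs.items():
    clicks = sum(random.random() < ctr for _ in range(impressions_per_creative))
    observed[name] = clicks / impressions_per_creative

# The market's filter: poor performers are retired, the strong performer scales.
winner = max(observed, key=observed.get)
print("observed CTRs:", {k: round(v, 4) for k, v in observed.items()})
print("scale creative", winner, "and retire the rest")
```

Nothing here depends on the creative being AI-generated: the same loop filters hallucinated or weak outputs automatically, which is exactly the built-in verification mechanism the argument relies on.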


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, ASML, Meta Platforms, Microsoft, Netflix, Shopify, and TSMC. Holdings are subject to change at any time.

Even More Of The Latest Thoughts From American Technology Companies On AI (2025 Q4)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q4 earnings season.

Last month, I published More Of The Latest Thoughts From American Technology Companies On AI (2025 Q4). In it, I shared commentary in earnings conference calls for the fourth quarter of 2025, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2025’s fourth quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

With that, here is the latest commentary, in no particular order:

Coupang (NYSE: CPNG)

Coupang’s management is not concerned about disintermediation by AI because they believe that consumers will still want to shop where they can find the best combination of selection, service, and savings; management thinks there’s tremendous potential for AI to amplify the value Coupang brings

[Question] AI seems to be destroying a lot of things and agentic AI impact on e-commerce has been hotly debated in recent periods. So I was hoping you can talk about how you view platforms such as Coupang will not be somehow disintermediated by some chatbot or AI agent somewhere from somebody else.

[Answer] Ultimately, we believe customers care about selection, service and savings. And they’ll shop where they can find the best combination of all three. And as I mentioned in the call earlier, we’re a business that involves not only technology and software; it’s not just a business made of electrons: we have real retail, real infrastructure, and people to move physical inventory. There’s tremendous potential for AI to amplify the value that we deliver across all three of the pillars that we strive to improve: selection, service and savings. And we believe AI will be a powerful means of us doing those jobs better over time, delivering the best experience at the lowest cost, and we intend to make a strong effort in the coming years to capture those opportunities.

MercadoLibre (NASDAQ: MELI)

MercadoLibre’s management introduced an AI-enhanced search experience in the marketplace business in 2025 Q4; the new search experience expands product discovery in a personalised way; the new search experience includes an AI assistant that can help users refine broad searches; management has introduced a Seller Assistant in the marketplace business to scale onboarding and support for sellers; Seller Assistant helps sellers improve listing quality, create videos of their products, and handle customer queries; management wants to embed AI across the marketplace to improve discovery, increase relevance and conversion and deepen engagement; the Seller Assistant in MercadoLibre’s marketplace already advises 20% of the company’s GMV

In Q4’25, we introduced an AI-enhanced search experience in Argentina that uses insights from individual buyer behavior to expand product discovery from a single search term – for example, a search for “ball” will show tennis balls for tennis players, and footballs for football players.  It may also show specific brands or premium / value products, depending on our knowledge of the buyer from their extensive search and transaction history. An interactive assistant can refine broad searches, such as “smartphone”, into more personalized results by guiding users through key product attributes.

On the supply side, our Seller Assistant is helping us scale onboarding and support. It accelerates sellers’ progression to higher reputation tiers, improves listing quality through targeted recommendations, creates short-format videos from a single product photo, and handles inquiries that were previously managed by customer service teams. 

These initiatives are early steps in a broader effort to embed AI across the marketplace to improve discovery, increase relevance and conversion and deepen engagement…

…I think it’s worth highlighting the fact that we have a seller assistant today running in our platform, basically 20% of our GMV is somehow advised by our assistant.

Improvements in MercadoLibre’s advertising technology are driving higher adoption and spend; AI tools in MercadoLibre’s advertising business are supporting account managers for large advertisers, and are engaging directly with long-tail sellers

In Q4’25 we launched tools such as “budget orchestrator” and one-click campaigns, which performed well during peak season. In parallel, AI tools are supporting account managers for big brands and top sellers, while engaging directly with long-tail sellers to stimulate demand. We also launched our DSP for advertisers in China, which contributed to growth in Q4’25, and should support monetization of our growing CBT business.

MercadoLibre’s management launched the Mercado Pago AI Assistant in October 2025; the Mercado Pago AI Assistant handled more than 9 million conversations in 2025 Q4 and resolved 87% of these conversations without human intervention; management plans to expand the Mercado Pago AI Assistant’s capabilities in the coming months to handle more use cases; management sees potential to use the Mercado Pago AI Assistant for cross-selling fintech products in the future

In October, we launched the Mercado Pago AI Assistant, and early results are encouraging. In Q4’25, the Assistant handled more than 9mn conversations, with nearly 90% resolved without human intervention. There are dozens of use cases, including general inquiries, making transfers and paying bills, and in the coming months, we plan to expand the Assistant’s capabilities to make it increasingly proactive…

…Our Mercado Pago AI assistant is solving 87% of interactions without the need of human support…

…So far, we have been mostly dealing with these interactions that are initiated by users and the vast majority of them are responded by the agent without any kind of human intervention. But I would say, so far, we have not yet started using the agent for cross-sell, but it’s something that we will start doing. Given that you are in a conversation, you can, for example, tell the consumer that she has a credit offer or a credit card offer and the benefits of the credit card. We are not doing that yet, but we believe the opportunity there is significant and the system will become more proactive. And beyond cross-sell, it will also become more proactive in terms of acting like a personal banker. So helping you, I don’t know, allocate your portfolio or make the recommendations of what kind of credit is better for you.

AI tools are helping MercadoLibre’s Merchant Acquiring business’s sales teams by identifying new customers and deepening relationships with existing customers; the Merchant Acquiring business has 25% FX-neutral TPV growth in Brazil in 2025 Q4; the Merchant Acquiring business has 50% FX-neutral TPV growth in Mexico in 2025 Q4; the Merchant Acquiring business’s base of active POS (point of sales) is nearly equal to all of the incumbents combined

AI tools are improving the effectiveness of our sales teams by helping identify new customers and deepen relationships with existing merchants. In Brazil, this has supported higher TPV per merchant and shortened payback periods. This contributed to strong FX-neutral Acquiring TPV growth of 25% YoY in Brazil in Q4’25. In Mexico, growth is being driven by onboarding long-tail and SMB merchants, many of whom are accepting digital payments for the first time. Momentum remains strong with FX-neutral Acquiring TPV growing 50% YoY in Q4’25. As adoption continues to rise, our installed base of active POS devices is approaching that of all incumbents combined.

MercadoLibre’s management thinks MercadoLibre has the best features for agentic commerce and these features go beyond merely searching for an item; management’s focus with agentic commerce is on developing MercadoLibre’s own agentic experience inside the company’s marketplace; management believes MercadoLibre has the first-party data to create the best search, recommendation, and discovery engines for an agentic experience; management thinks the emergence of agentic commerce will mean an even faster transition from offline to online retail; management thinks MercadoLibre is well-positioned to capture advertising revenue from agentic commerce because it is the go-to place for online shopping; management thinks MercadoLibre’s advertising revenue also stands to benefit from agentic commerce activity that happens outside of MercadoLibre because the company has a unique set of data, customer knowledge, and attribution capabilities; management thinks there are many unknown aspects of agentic commerce today such as what hardware and AI models consumers will use, but there are also known aspects, such as what consumers value; management is cognisant of the risk of disintermediation of the MercadoLibre platform when it comes to agentic commerce, but they are confident that the company is coming from a position of strength

Let me start with the idea of Agentic commerce and how that will play out for us and potentially disintermediating, which is something that I’ve been asked over and over. So I think it’s still a bit early in the game, but we don’t think that solving one part of the value chain will actually change the rules of the game, meaning that we still think that the key is to provide the best end-to-end experience for our customer. So we know that searching for an item is one important task, but reading reviews, making sure the package arrives on time, offering the widest selection, having the best prices, the best financing, preventing fraud, having the best customer support and so on are also key parts of the end-to-end job that we need to solve and that drive the decisions on where buyers will end up buying…

…Where we’re putting most of our efforts is in developing our own agentic experience inside MercadoLibre. We think and we are convinced that we have the first-party data to create the best search, best recommendation, best discovery engine on which we can personalize and lay over the agentic experience that the new technology drives…

…If you believe that there is a world of agentic commerce, that could mean that retail will move even faster from the offline to the online world. So all this to say that I do think that we are well-positioned to actually capture ad revenues in the future, because we still think that MercadoLibre will be the go-to place for demand to do shopping online…

…What happens with all the agentic commerce that will occur outside of MercadoLibre because for sure, we will not have 100% market share. And we think that, that also represents an incremental opportunity for many, right? So today, we are providing with our tech stack advertising services to third parties, we do that with Google Ad Manager, with Disney. We do that with Roku with HBO Max. And the reason behind that is that we have a unique set of data, customer knowledge, attribution capabilities that we think are very hard to match…

…[Question] Essentially, how these independent agent systems could introduce new forms of disintermediation and engage clients directly, right, leading to potential changes in — the most obvious one we can think of and discuss a lot is the dollar pool of advertising. So I really want to hear how you view these risks and how you’re approaching them strategically.

[Answer] There are things that we know and there are things that we don’t know. So we don’t know which hardware people will use in 10 years to buy. We don’t know whether the winning model will be X, Y, or Z, and so on. We do know that consumers do value or do look for the best end-to-end experience. We do know — and that means not only searching for products, but also getting products fast, having the wider selection, pricing, the best financing alternatives, post-purchase support and so on. We also know there’s a technology today that can dramatically improve the product discovery process. And for that reason, we are putting all of our efforts and deploying lots of engineers in building our own agents and our own shopping assistant within MercadoLibre. It’s early to know what will happen with other shopping assistants. I take your point that it might present a risk. I understand where you’re coming from. But we are confident that we are playing this one from a position of strength: we have the relationship with consumers, we have a brand that Latin America loves, and we have information and data about past purchases that allow us to offer them a great shopping assistant. And we are betting and putting our efforts on what we can control, which is building the best assistant possible.

MongoDB (NASDAQ: MDB)

AI is not yet a material driver of MongoDB’s results, but management is encouraged by the growth in customers leveraging the company’s AI capabilities; the number of customers using Vector Search doubled year-on-year in 2025 Q4 (FY2026 Q4); the number of customers using Voyage embedding models has doubled since February 2025; management is seeing customers expand their use of MongoDB as a strategic data platform for both foundational and next-generation AI workloads; management thinks AI and agentic applications require memory, state, and high-quality retrieval capabilities, and these are all native to MongoDB’s OLTP (online transaction processing) platform without the need for ETL (extract, transform, load) or bolt-on systems

While AI is not yet a material driver to our results, we are encouraged by the growth we are seeing with customers leveraging our AI capabilities. The number of customers leveraging Vector Search has nearly doubled year-over-year, and the number of customers using Voyage embedding models has also doubled since the acquisition last February. This growth is across a diverse range of customers, AI natives, digital natives and large enterprises…

…Large enterprises are increasingly standardizing on MongoDB to power a wide spectrum of portals, including both core mission-critical applications and emerging agentic AI applications. Rather than treating AI as a stand-alone initiative, many are expanding their use of us as a strategic data platform that supports both foundational workloads and their next generation of intelligent applications…

…MongoDB is increasingly recognized as the architectural foundation powering innovation for frontier model companies, leading digital natives expanding into AI, and AI-native organizations scaling globally. The database layer has endured through multiple technology shifts over the past 60 years, and it is even more critical in this AI shift. AI and agentic applications require memory, state and high-quality retrieval capabilities native to our modern OLTP [online transaction processing] platform, which powers real-time applications, without ETL or bolt-on systems, through integrated search, vector search and embeddings. In this platform shift, OLTP is the high ground and MongoDB is purpose-built to win.
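As a rough illustration of what “integrated search and vector search without ETL or bolt-on systems” looks like in practice, here is a minimal sketch of an Atlas `$vectorSearch` aggregation pipeline that combines vector retrieval with a filter on operational data in one query. The index name, field names and vectors are illustrative placeholders we chose for the example, not anything from the transcript:

```python
# Sketch: one MongoDB aggregation combining vector retrieval with an
# operational-data filter in a single pipeline -- no separate ETL job or
# bolt-on vector store. Index name, field names and vectors are
# illustrative placeholders.

def build_retrieval_pipeline(query_vector, user_id, k=5):
    """Build an Atlas $vectorSearch pipeline scoped to one user's documents."""
    return [
        {
            "$vectorSearch": {
                "index": "embeddings_idx",       # hypothetical index name
                "path": "embedding",             # field holding stored vectors
                "queryVector": query_vector,
                "numCandidates": 20 * k,         # oversample before final ranking
                "limit": k,
                "filter": {"user_id": user_id},  # operational filter, same database
            }
        },
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

pipeline = build_retrieval_pipeline([0.1, 0.2, 0.3], user_id="u42", k=3)
print(pipeline[0]["$vectorSearch"]["limit"])  # prints 3
```

In a live deployment this list would be passed to `collection.aggregate(...)` through a driver such as PyMongo; here the pipeline is only constructed so its shape is visible.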

MongoDB signed a $90 million deal with a large tech company for the tech company to expand both core and AI workloads on Atlas

We signed several large deals in the quarter, including an approximately $90 million transaction with a large tech company that plans to expand both core and AI workloads on Atlas.

Axon Networks, a global leader in telecom network management, is using Enterprise Advanced to power its operator-as-a-service platform which delivers a real-time digital twin and API-first architecture; Axon Networks’ operator-as-a-service platform is AI-first; management is hopeful that the Enterprise Advanced business can accelerate in the future; management is seeing a trend of companies wanting to keep critical data on-premise because of issues related to AI for mission-critical applications

Axon Networks, a global leader in telecom network management, serving 32 telcos and over 90 million homes and enterprises selected EA as the foundation for its operator-as-a-service platform. This platform delivers a real-time digital twin and API-first architecture designed to handle massive data peaks and high-volume time series workloads. EA provides the flexibility to run across mission-critical environments including hyperscalers and bare metal, along with the enterprise-grade security and operational tooling required to support Axon’s AI-first autonomous networking platform at scale…

…We are actually investing in EA to bring it to parity to Atlas. So certainly, our expectation and hope is that we continue to grow that and can even accelerate it in the future…

…Over a large set of very important customers that is definitely the trend that I’m speaking from our customers is, number one, that because of a variety of issues related to also AI that for mission-critical application, there is this trend I’m seeing where they do want to keep their critical data estates on-prem. And this is not just only in financial services, we are seeing that in health care and other verticals like government. But when I was in Europe and even in Asia, I’m also seeing there that there is a preference for those industries to also use MongoDB potentially with EA and only certain workloads in the cloud

Indian vibe coding startup, Emergent Labs, selected Atlas over PostgreSQL for agentic coding workloads; Atlas is helping Emergent Labs power 6 million applications across 190 countries

Emergent Labs, a leading AI vibe coding platform in India that just crossed a $100 million run rate, selected Atlas over PostgreSQL to power AI agents that build production-ready applications from natural language prompts. They power nearly 6 million applications built across 190 countries and handle applications that average 35,000 lines of code, with some reaching 300,000, all made possible with Atlas’ flexible document architecture and reliable scale.

AI startup Eleven Labs is using Atlas Search and Vector Search to power the long-term memory and knowledge base of their agents, and to deliver highly personalized interactions in real time, globally 

We are also fueling innovation at AI-native customer Eleven Labs, which is redefining conversational AI with its new enterprise agentic platform. Eleven Labs selected Atlas to power the critical long-term memory and knowledge base for their autonomous agents. By leveraging Atlas Search and Vector Search, they enable their agents to retain complex context and deliver highly personalized interactions in real time and at global scale, supporting the rapid expansion to $330 million of ARR and an $11 billion valuation.
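The retrieval mechanic behind such agent “long-term memory” can be sketched in a few lines: embed past interactions, then surface the stored snippets most similar to the embedding of the current query. The toy 3-dimensional vectors below stand in for real embedding-model output and are purely illustrative:

```python
# Toy sketch of embedding-based memory retrieval for an agent. Real systems
# use high-dimensional embeddings and an indexed vector store; the 3-d
# vectors and memory snippets here are invented for illustration.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Stored "memories": text snippet -> its (toy) embedding.
memory = {
    "user prefers concise answers": [0.9, 0.1, 0.0],
    "user is based in Singapore":   [0.1, 0.9, 0.1],
    "user asked about invoicing":   [0.0, 0.2, 0.9],
}

query_vec = [0.85, 0.15, 0.05]  # toy embedding of the current user turn
best = max(memory, key=lambda text: cosine(memory[text], query_vec))
print(best)  # → user prefers concise answers
```

The agent would then inject the retrieved snippet into its context window, which is how “retaining complex context” across sessions is typically achieved.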

Adobe recently expanded its long-term commitment with MongoDB; Adobe now uses Atlas Vector Search to power its agentic experiences, and will soon include Voyage embeddings; Adobe also uses Enterprise Advanced 

A marquee example of the platform in action is Adobe, which expanded its strategic partnership and long-term commitment with us to accelerate AI-driven innovation. MongoDB now underpins a range of Adobe’s key initiatives, including agentic experiences powered by Atlas Vector Search and, soon, Voyage embeddings. Adobe leverages Atlas to manage large fleets of always-on database deployments at global scale, while also continuing to partner with us for support of self-managed business-critical workloads on EA, highlighting our ability to operate seamlessly across both cloud and on-prem environments.

MongoDB’s management does not expect AI native companies to contribute much to MongoDB’s revenue in 2026 (FY2027)

In terms of AI, we remain optimistic regarding our opportunity and are seeing encouraging trends with a number of AI natives. While this subset of customers has significant potential, many of them remain early in their MongoDB journey and are not yet meaningful drivers of revenue…

MongoDB’s management has found that the key to winning in an era where agents, not humans, will be spinning up databases is to get agents to love MongoDB as much as human developers do; this view is validated by an AI-native customer of MongoDB that chose to build on the company’s database; management thinks that the way to get agents to love MongoDB is to ensure MongoDB has all the right integrations in place; a key focus area for management is to build MongoDB’s database in a way that would make agents love it; management has an ambitious roadmap, spanning 2026 (FY2027), to build MongoDB’s database for agents

[Question] How is your product and go-to-market strategy changing, if at all, given the growing reality that agents, and not humans, are going to be spinning up most databases in the future?

[Answer] I have a very simple philosophy here. And the philosophy was also validated by one of the AI-native companies that has completely built on MongoDB. They had many choices in many clouds, and they chose MongoDB. And my initial intuition was the same as you outlined: MongoDB’s success over the many years since the company was founded in 2007 was that builders or developers love MongoDB. And if that’s the premise, a lot of work was done in the product to ensure it works in a very natural, flexible way while keeping the database, and therefore the business, agile so that it can move with the business.

We want to do exactly the same thing for agents. Agents also need to love MongoDB. That requires us to ensure that we have all the right integrations in the right places, whether it’s MCP, or our APIs, or how we auto-scale and how we perform during the peaks and valleys. All of that truly needs to be autonomous and driven by machines. And that requires absolute focus from the engineering team on how machines would look at this if they want to provision an additional node, or if they want to manage a cluster for resiliency across multiple clouds. So that will be the North Star for us: that agents will love MongoDB as much as human developers love MongoDB today…

…We do have an ambitious road map, of course. Today, we are already leveraged by some of the AI-native companies, some of which I outlined this time and also last time. And we are learning a lot from them. So we have an ambitious road map in terms of truly machine-friendly APIs, making sure that we integrate across the variety of protocols that machines demand, and how we auto-scale and auto-shard. All of that will come throughout this coming year.

MongoDB has high-profile AI startup Anthropic as a customer, but MongoDB does not have any customer-concentration among AI natives; management thinks AI natives choose MongoDB for performance, scale, and security; management is seeing some AI natives make their initial database decisions without considering the database’s ability to scale; reads and writes are important with AI applications, and MongoDB is able to scale both for reads and writes; MongoDB can scale reliably with any AI native’s growth

[Question] Great to hear about Anthropic as a customer at the MongoDB.local event. I’d love to hear how you think about the opportunity for Mongo to grow within large AI natives from here. It was also mentioned at the event that agentic workflows have heavier storage and memory requirements. I’d love to hear why you think MDB is architecturally well suited for these growing types of AI use cases.

[Answer] The entire cohort, AI natives, frontier model companies, others, many of them choose MongoDB for performance, scale, security and other things. And I would say that the good news here from my standpoint is that we are not concentrated in any one customer when it comes to AI native cohort. So that’s number one. And as they scale, we will scale with them, but we are not concentrated. Even when I look at the growth as a percent of total, we were not concentrated…

…People in these AI-native companies are initially making database decisions without realizing that they will run into scale issues. For instance, one of the choices that AI-native founders could have gone with had a massive security concern over the weekend, where a couple of governments blocked it from being used. So what I find is that what matters is a truly enterprise-class database that can scale. And when I say scale, I mean specifically that as these AI-native companies’ weekly active users or monthly active users continue to grow, like the examples we had with Emergent or Eleven Labs and so on, they find that MongoDB scales better with them. Write performance as well as query performance really matters, and us being natively JSON, with search, vector search and embeddings in one rather than multiple moving pieces, is, if I have to simplify it, the strength: it’s an integrated platform that scales both for reads and writes, so as AI-native companies scale, they can rely on MongoDB scaling with them.

MongoDB’s management is seeing large enterprises from many different industries still wanting to pursue the modernisation of their technology stack; the enterprises that want to modernise want to shift to MongoDB, but they cannot do it all with AI tools and still require MongoDB’s help, which is why management still sees a huge opportunity in modernisation tooling

I was talking to a large financial institution in the U.K., and the Head of Transformation told me, “Hey, CJ, I have 50% of real estate that I want to modernize. I know that some of the AI tools can get me to some level, but I really, really need your help and your team’s help to make sure that for these mission-critical applications, we take help from MongoDB to land them, once you prove this out for the first workload, a very critical workload that is moving to MongoDB.” The same thing happened, Alex, with a large customer in Spain when I was there a couple of weeks ago. This individual said, “Hey, we are relying on MongoDB as we are modernizing. This is an extremely critical workload. Once you do that, we are going to open up the aperture. I know that AI will help us modernize, but we still need your help because the destination we want is absolutely MongoDB.” So what I’m seeing from the feedback is that the need for modernization is still very much relevant at the high end of the enterprise, whether it’s a health care company, financial services, or even government for that matter.

Number two, they know that AI tools can help to some extent, but they definitely want to get there on a modern database to be AI-ready, and they want help from MongoDB to be on MongoDB. And then the last thing I would say is that even with some of the use cases they try, they find that sometimes it is too hard to assure the reliability, security and all of those things for the application they build.

So I consider this as an opportunity in early stages, and this is definitely a top-down work that we have to do as MongoDB with the CTO and Head of Transformation, but the opportunity still exists and is massive.

MongoDB’s management is seeing that Fortune 500 companies across nearly all verticals are not scaling their agentic workloads into production right now; management thinks that it’s only a matter of time before enterprises scale their agentic workloads into production

I would tell you it’s not if but when…

…I ask them that simple question: where are you on your agentic workloads? And I’m talking about Fortune 500 companies, big retail companies, health care companies, pick one, and ask them, where are you on agentic workloads? And are they really scaling? And the answer is still not yet. Yes, they have done a few productivity-type apps internally, but nothing of scale that is customer-facing, even including with a large retailer on agentic commerce and so on. So my first thing is, one day it is going to hit in a positive way, where you will have agents making a meaningful difference to the growth of our customers for either new product lines or existing product lines. We are not seeing that today in the large enterprises across pretty much most of the verticals that we speak to, because, as you know, MongoDB is across every vertical.

Nu Holdings (NYSE: NU)

Nu Holdings’ foundational AI model, nuFormer, is now in production for credit decisioning in Brazil; management is testing nuFormer in other use cases; management wants to expand nuFormer to more lending in Brazil and to credit cards in Mexico; Nu Holdings’ credit operations see a significant lift when using nuFormer

Our foundation model, nuFormer is now in production for credit decisioning in Brazil and in testing across additional use cases…

…We will expand nuFormer to lending in Brazil and credit cards in Mexico and continue putting AI directly into customers’ hands, moving closer to our long-term vision of an AI-powered personal banker in every customer’s pockets…

…We’ve discussed a few times over the past year, the significant lift that we’re seeing when we’re using our own foundation model on credit

Nu Holdings’ management’s long-term vision for AI is to have each customer have an AI-powered personal banker that they can access through their smartphones 

We will expand nuFormer to lending in Brazil and credit cards in Mexico and continue putting AI directly into customers’ hands, moving closer to our long-term vision of an AI-powered personal banker in every customer’s pockets.

Nu Holdings’ management sees AI as both an opportunity and a risk for Nu Holdings, but sees AI as more of an opportunity; management thinks there is one common denominator across every technology transformation and that is businesses that simply move bits from Point A to Point B get hurt the fastest; in financial services, management thinks that the movement of money from one point to another has the higher risk of being disrupted by AI and that providing credit is the most sustainable activity; management thinks Nu Holdings is well-protected against AI disruption because of its strength in credit and the proprietary data on credit it has; management thinks that AI will significantly enhance many aspects of a bank’s business; management thinks that Nu Holdings is well positioned to take advantage of AI to grow revenue and reduce costs

[Question] Do you see a risk that Nu could be disrupted by AI? Or do you see Nu as a potential winner in this transformation?

[Answer] It is both a challenge and has potential for disruption as well as significant opportunity. Net-net, we think it’s more opportunity than challenge for us…

…I think there is one specific trend or one common denominator across every technology transformation, and this goes all the way back to the internet era: any business model that relies on simply moving bits from point A to point B, where you’re effectively a broker, tends to be hurt the quickest, because one of the things that technology does is remove a lot of the friction in those processes. So to some of the commentary that has been around in the market about financial services: I think businesses in financial services that are simply moving money from one point to another will have the higher risk of potential disruption. You need to be able to add more value than that. And from that angle, we have always believed that credit, specifically credit revenue, is actually the most sustainable type of revenue in financial services because of the capital intensity, the regulatory nature of it, the balance sheet aspect, and the proprietariness of the data, where AI plays a role and ultimately allows you to make a better decision. So from one angle, there is potential for challenges around the business model, but I think we’re very well positioned given the way we are set up and the strength around credit that we have…

…I think every single company really might benefit from that, where every function that you do, especially as a bank from customer service to compliance to regulatory to AML will be significantly enhanced or being significantly enhanced through AI…

…When you think about the fact that 95% of the world’s financial services profits are still concentrated in incumbent banks that still have significantly larger cost structures, it means that we’re very well positioned to take advantage of AI as a technology enabler for revenue and cost, and ultimately to be one of the winners in this technology shift.

In 2025, Nu Holdings’ management deployed new AI technologies for credit underwriting to increase credit limits in Brazil, under CLIP (Credit Limit Increase Policy); the full benefits of CLIP, especially in driving net income growth for Nu Holdings, have not appeared yet, because there are a few stages for CLIP’s effects to flow through, although the early signs are very promising; management thinks CLIP will continue growing credit limits for Nu Holdings in 2026 and beyond; management wants to deploy CLIP beyond Brazil; management wants to take the predictive AI technologies behind CLIP and apply them to other areas of Nu Holdings’ business

This was a year in which we deployed these new technologies and this new approach to credit underwriting very successfully, so far in allowing our customers to increase their credit limits, especially in Brazil. And the best way for me to illustrate the magnitude of this increase, Jorge, is maybe to refer you to explanatory note #32 of our financial statements, in which we are starting to provide what I call the unused credit limits. You can see that unused credit limits went from about $18 billion to $29 billion, an increase of about $11 billion, which accounts for about a 60% increase in unused credit limits. It’s a big one. And I think it wouldn’t be possible for us to do so if we hadn’t been leveraging the entirety of the predictive AI credit underwriting tools that have been developed by us over the past 18 to 24 months.

Have we seen all of those benefits translated into net income? The answer is no, not yet. I see credit limit increases playing out in three steps. First, you have to offer the additional credit limits; then the credit limit translates into purchase volume; and then you have to see whether the purchase volume translates into IBB. We are starting to see the first step, Jorge: in the fourth quarter of 2025, our market share in purchase volume in Brazil went up by about 50 basis points. It was the biggest market share gain that we’ve seen at Nubank over the past 10 to 11 quarters. There are two more steps to come, and then we still have to see all of those purchase volumes reflected in IBB.

Even though 2025 was, I think, a big sign of the magnitude of this ability to increase CLIP, I don’t think it will stop there. You will continue to see this unfolding in new models and new improvements throughout 2026, 2027 and onwards. And I would also say that the advent of this predictive AI technology will not stop at CLIP Brazil. It will be, and is being, exported to CLIP Mexico and CLIP Colombia, and then it’s going to go to acquisition in Brazil, acquisition in Mexico, to fraud, and to deposits, pricing and design.
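As a trivial arithmetic check, the unused-credit-limit figures quoted above do work out to roughly the stated 60% increase:

```python
# Sanity-checking the unused-credit-limit figures quoted above
# ($18 billion to $29 billion, described as "about 60%").
before, after = 18e9, 29e9
increase = after - before
pct_change = increase / before * 100
print(f"increase: ${increase / 1e9:.0f}B ({pct_change:.0f}%)")  # → increase: $11B (61%)
```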

NVIDIA (NASDAQ: NVDA)

NVIDIA’s management is seeing continued strong demand for Blackwell; NVIDIA’s Data Center revenue again had very strong growth in 2025 Q4 (FY2026 Q4), driven primarily by chips from the Blackwell family; there are currently 9 GW (gigawatts) of Blackwell systems deployed; management expects sequential revenue growth throughout 2026, exceeding what was included in the $500 billion Blackwell and Rubin revenue opportunity management shared last year

Demand for our Blackwell architecture, extreme co-designed at data center scale, continues to strengthen as inference deployments grow in addition to training…

…Q4 data center revenue of $62 billion increased 75% year-over-year and 22% sequentially, driven primarily by sustained strength in Blackwell and the Blackwell Ultra ramp…

…Nearly a year has passed since the release of our Grace Blackwell NVL72 systems. Today, nearly 9 gigawatts of infrastructure on Blackwell are deployed and consumed by the major cloud service providers, hyperscalers, AI model makers and enterprises… 

…As we look ahead, we expect sequential revenue growth throughout calendar 2026, exceeding what was included in the $500 billion Blackwell and Rubin revenue opportunity we shared last year. We believe we have inventory and supply commitments in place to address future demand, including shipments extending into calendar 2027.

NVIDIA’s management is seeing the continued transition to accelerated computing and the infusion of AI across the hyperscalers’ workloads; management thinks the company’s hyperscaler customers are producing evidence of strong ROI; Meta Platforms’ GEM model drove a 3.5% increase in ad clicks on Facebook and a 1% gain in conversions on Instagram

The transition to accelerated computing and the infusion of AI across existing hyperscale workloads continue to fuel our growth…

…Strong evidence of ROI as hyperscalers upgrade massive traditional workloads to generative AI, including search, ad generation and content recommender systems, is encouraging our largest customers to accelerate their capital spending. For example, at Meta, advancements in their GEM model drove a 3.5% increase in ad clicks on Facebook and a more than 1% gain in conversions on Instagram, translating into meaningful revenue growth.

NVIDIA’s management is seeing agentic and physical AI starting to drive the company’s business; NVIDIA’s management recently introduced Alpamayo, the world’s first open portfolio of reasoning Vision Language Action models; Alpamayo enables vehicles to think; the Mercedes-Benz CLA will be the first passenger car featuring Alpamayo; physical AI contributed $6 billion to NVIDIA’s revenue in 2025 (FY2026); management is seeing robotaxi rides grow exponentially; management thinks robotaxi fleets will scale to millions of vehicles over the next 10 years, driving demand for orders of magnitude more compute from NVIDIA; management continues to advance robotics development; NVIDIA recently announced new partnerships to bring NVIDIA AI infrastructure, Omniverse digital twins, World Models and CUDA-X libraries to millions of researchers, designers and engineers; OpenAI’s GPT-5.3-Codex agentic system was trained with, and will run inference on, NVIDIA’s systems; management thinks Anthropic’s Claude Cowork agent has ushered in the ChatGPT moment for agentic AI; management is certain that agentic AI has reached an inflection point and the tokens generated are productive for users and profitable for the cloud service providers; in agentic AI, inference equates to revenue; all of NVIDIA’s engineers are using agentic coding tools

Agentic and physical AI applications built on increasingly smarter and multimodal models are beginning to drive our financial performance…

…At CES, we introduced Alpamayo, the world’s first open portfolio of reasoning Vision Language Action models, simulation blueprints and data sets, enabling vehicles that can think. The first passenger car featuring Alpamayo, built on NVIDIA DRIVE, will be on the road soon in the new Mercedes-Benz CLA.

Physical AI is here, having already contributed north of $6 billion in NVIDIA revenue in fiscal year 2026. Robotaxi rides are growing exponentially, with commercial fleets from Waymo, Tesla, Uber, WeRide, Zoox and many others expected to scale from thousands of vehicles in 2025 to millions over the next decade, creating a market poised to generate hundreds of billions of dollars of revenue. This expansion will demand orders of magnitude more compute, with every major OEM and service provider developing on NVIDIA’s platform.

We continue to advance robotics development with the new NVIDIA Cosmos and Isaac GR00T open models and frameworks, powering robots and autonomous machines for leading companies including Boston Dynamics, Caterpillar, Franka Robotics, LG Electronics and NEURA Robotics. To accelerate industrial physical AI adoption, we also announced new and expanded partnerships with Dassault Systemes, Siemens and Synopsys to bring NVIDIA AI infrastructure, Omniverse digital twins, World Models and CUDA-X libraries to millions of researchers, designers and engineers building the world’s industries…

…We recently celebrated OpenAI’s launch of GPT-5.3-Codex trained with and inferencing on Grace Blackwell NVLink 72 systems. GPT-5.3-Codex can take on long running tasks that involve research, tool use and complex execution…

…Anthropic’s Claude Cowork agent platform is revolutionary and has opened the floodgates for enterprise AI adoption. Between Claude Cowork and OpenClaw, Anthropic’s compute demand is skyrocketing, and the ChatGPT moment of agentic AI has arrived…

…I am certain at this point, with the productive use of Codex and Claude Code, the excitement around Claude Cowork, the incredible enthusiasm about OpenClaw and the enterprise versions of them, and all of the enterprise ISVs now working on agentic systems on top of their tool platforms, that we have reached the inflection point and we’re generating profitable tokens that are productive for customers and profitable for the cloud service providers…

…It’s really important to realize that inference now equals revenues for our customers, because agents are generating so many tokens and the results are so effective. When the agents are coding, they’re off generating thousands, tens of thousands, hundreds of thousands of tokens, because they’re running for minutes to hours. And these agentic systems are spawning off different agents working as a team. The number of tokens being generated has really gone exponential. And so we need to inference at a much higher speed. And when you’re inferencing at a much higher speed and each one of those tokens is dollarized, it directly translates into revenues. And so inference performance equals revenues for our customers…
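The “tokens are dollarized” point reduces to simple arithmetic: at a given serving throughput and price per token, inference speed converts directly into top line. The numbers below are illustrative placeholders, not NVIDIA’s or any provider’s actual figures:

```python
# Back-of-envelope version of "inference equals revenue": throughput times
# price per token gives revenue. All figures are hypothetical.

tokens_per_second = 50_000          # aggregate throughput of one deployment
price_per_million_tokens = 2.00     # hypothetical $ per 1M output tokens
seconds_per_day = 86_400

daily_tokens = tokens_per_second * seconds_per_day
daily_revenue = daily_tokens / 1_000_000 * price_per_million_tokens
print(f"${daily_revenue:,.0f} per day")  # → $8,640 per day
```

Under these assumptions, doubling inference speed (or halving cost per token at the same price) doubles the deployment’s revenue (or margin), which is the economic logic behind the “inference performance equals revenues” claim.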

…Coding is obviously supported by agentic systems now, and all of our coders here at NVIDIA are using these systems enormously: either Claude Code or OpenAI Codex, oftentimes both, and Cursor, oftentimes all three, depending on the use case. But they have agents as co-design partners, engineering partners, to help them solve problems.

NVIDIA’s data center revenues have grown 13x since the introduction of ChatGPT; management is seeing NVIDIA’s business broaden beyond chat bots, driven by a few forces, namely, (1) a fundamental platform shift from classical machine learning to generative AI from the hyperscalers, (2) skyrocketing adoption of agentic systems, and (3) the growth of sovereign AI

We have now scaled our data center business by nearly 13x since the emergence of ChatGPT in fiscal 2023…

…Our demand profile is broad, diverse and expanding beyond just chatbots. First, there is a fundamental platform shift from classical machine learning to generative AI. Strong evidence of ROI as hyperscalers upgrade massive traditional workloads to generative AI, including search, ad generation and content recommender systems is encouraging our largest customers to accelerate their capital spending…

…Frontier agentic systems have reached an inflection point. Claude Code, Claude Cowork and OpenAI Codex have achieved useful intelligence. Adoption is skyrocketing and tokens are profitable, driving extreme urgency to scale up compute. Compute directly translates to intelligence and revenue growth…

…Every country will build and operate some parts of its AI infrastructure, just as with electricity and the Internet today. In fiscal year 2026, our sovereign AI business more than tripled year-over-year to over $30 billion, driven primarily by customers based in Canada, France, the Netherlands, Singapore and the U.K. Over the long run, we expect our sovereign opportunity to grow at least in line with the AI infrastructure market, as countries spend on AI in proportion to their GDP.

Research firm SemiAnalysis recently declared NVIDIA the Inference King; NVIDIA’s latest-generation Blackwell system, GB300 NVL72, has 50x performance per watt and 35x lower cost per token compared to the Hopper systems for inference; optimisation of CUDA software has helped the GB200 NVL72 perform 5x better on inference than just 4 months ago; management sees NVIDIA as having the lowest cost per token for inference; management thinks data centers using NVIDIA systems will generate the highest revenues; NVIDIA’s next generation of chips, the Rubin family, was recently unveiled at CES; the latest Rubin family of chips consists of 6 different chips; the Rubin chips can train MOE (mixture of experts) models with a quarter of the GPUs and reduce inference token costs; management has shipped the first Rubin samples to customers; management expects Rubin to have better resiliency and serviceability compared to Blackwell; management expects every cloud provider to deploy Rubin

SemiAnalysis declared NVIDIA the Inference King, as recent results from InferenceX reinforced our inference leadership, with GB300 NVL72 achieving up to 50x performance per watt and 35x lower cost per token compared with Hopper; continuous optimization of CUDA software helped deliver up to 5x better performance on GB200 NVL72 within just 4 months. NVIDIA produces the lowest cost per token, and data centers running on NVIDIA generate the highest revenues…

…We unveiled the Rubin platform last month at CES, comprising 6 new chips: the Vera CPU, Rubin GPU, NVLink 6 Switch, ConnectX-9 SuperNIC, BlueField-4 DPU and Spectrum-6 Ethernet Switch. The platform will train MOE models with 1/4th the number of GPUs and reduce inference token costs by up to 10x compared to Blackwell. We shipped our first Vera Rubin samples to customers earlier this week, and we remain on track to commence production shipments in the second half of the year. Based on its modular cable-free tray design, Rubin will deliver improved resiliency and serviceability relative to Blackwell. We expect every cloud model builder to deploy Vera Rubin.

NVIDIA’s management will use a $20 billion R&D budget, and the company’s strong system-design capabilities, to deliver X-factor performance leaps per watt for each new generation of AI chip systems

Our pace of innovation, particularly at our scale, is unmatched, fueled by an annual R&D budget approaching $20 billion and our ability to do extreme co-design across compute and networking, across chips, systems, algorithms and software. We intend to deliver X-factor leaps in performance per watt every generation and extend our leadership position over the long term.

NVIDIA’s management is seeing even older generations of the company’s AI chips being sold out in the cloud; NVIDIA’s older generation of chips continue to work well because all of the company’s GPUs are compatible, so the ongoing optimisation of NVIDIA’s software stack also benefits the older generation of chips

With NVIDIA infrastructure in high demand, even Hopper and much of the 6-year-old Ampere-based products are sold out in the cloud…

…All of our GPUs are architecturally compatible, which means that when I’m working on optimizing models today for Blackwell, all of that work and all of that dedication to optimizing software stacks and new models also benefit Hopper and also benefit Ampere. It’s the reason why A100 continues to feel fresh and continues to stay performant years after we’ve deployed it into the world. Architecture compatibility allows us to do that. It allows us to invest enormously in software engineering and optimization, knowing that our entire installed base in the cloud, on-prem, everywhere, from generations of architectures and GPUs, will all benefit.

NVIDIA’s networking revenue had very strong sequential as well as year-on-year growth in 2025 Q4 (FY2026 Q4) (networking revenue was $8.2 billion in 2025 Q3), driven by strong demand across NVLink, Spectrum-X Ethernet, and InfiniBand on a sequential basis, and by strong demand for NVLink 72 scale-up switches on a year-on-year basis; management thinks NVLink scale-up fabric has revolutionised computing; management recently announced that NVLink will be able to integrate with custom silicon from AWS (Amazon Web Services); management is seeing strong momentum with Spectrum-X Ethernet; NVIDIA’s networking revenue exceeded $31 billion in 2025 (FY2026), up more than 10x compared to 2020 (FY2021); NVIDIA is now the largest networking company in the world, and is also now, or soon, the largest Ethernet networking company in the world; management has built an Ethernet capability that is powered by AI; NVLink 72 was really hard to develop

 Networking, a cornerstone of our data center scale infrastructure offering, was a standout this quarter, generating $11 billion in revenue, up more than 3.5x year-over-year. Demand for our scale-up and scale-out technologies reached record levels, both growing double digits sequentially, driven by strong adoption of NVLink, Spectrum-X Ethernet and InfiniBand. On a year-over-year basis, growth was driven primarily by NVLink 72 scale-up switches as Grace Blackwell systems accounted for roughly 2/3 of data center revenue in the quarter.

NVLink scale-up fabric has revolutionized computing and demonstrates the power of extreme co-design across all of the chips of the supercomputer and the full stack. In Q4, we announced that we will enable AWS with NVLink to integrate with their custom silicon.

Momentum is strong with our Spectrum-X Ethernet scale up and scale across networking as customers work to unify distributed data centers into integrated gigascale AI factories.

For the full year, our networking business exceeded $31 billion in revenue, up more than 10x compared to fiscal 2021, the year we acquired Mellanox…

…We’re also now the largest networking company in the world. And if you look at Ethernet, we entered the Ethernet switching market a couple of years ago, and I think that we’re probably the largest Ethernet networking company in the world today, and surely will be soon…

…We created an Ethernet capability that extends Ethernet with artificial intelligence, a way of processing in the data center, and we’re incredibly good at that…

…NVLink 72 has enabled us to deliver generationally 50x more performance per watt. It’s just an incredible leap. And it’s sensible. NVLink 72 is a great invention. It was hard to do. The creation of the switching technology, disaggregating the switches, building the system racks, all of that, we did it all in plain sight, and everybody knew how hard it was for us to do. But the results are incredible.

NVIDIA’s management is seeing the company’s major customers (the hyperscalers) significantly increase their AI-related capex; the hyperscalers make up 50% of NVIDIA’s data center revenue; management thinks the hyperscalers’ revenues and cash flow will grow and this will generate more demand for NVIDIA’s systems because compute equals revenue; management is certain that agentic AI has reached an inflection point and the tokens generated are productive for users and profitable for the cloud service providers; management thinks inference tokens per watt translates directly into revenue for the CSPs (cloud services providers); management thinks all of NVIDIA’s major customers understand that without investing in compute, there can be no revenue growth

Analyst expectations for 2026 CapEx across the top 5 cloud providers and hyperscalers who collectively account for a little over 50% of our data center revenue are up nearly $120 billion since the start of the year and approaching $700 billion…

…[Question] When you look at your top cloud customers, cloud CapEx close to $700 billion this year, many investors are concerned that it would be harder for this level to grow into next year. And for several of them, their cash flow generation capability is also getting compressed. So I know you’re very confident about your road map, right, and your purchase commitments and whatnot, but how confident are you about your customers’ ability to continue to grow their CapEx?

[Answer] I am confident in their cash flow growing. And the reason for that is very simple. We have now seen the inflection of agentic AI and the usefulness of agents across the world and enterprises everywhere. You’re seeing incredible compute demand because of it. In this new world of AI, compute is revenues. Without compute, there’s no way to generate tokens. Without tokens, there’s no way to grow revenues. So in this new world of AI, compute equals revenues…

…I am certain that at this point, with the productive use of Codex and Claude Code, the excitement around Claude Cowork, the incredible enthusiasm about OpenClaw and the enterprise versions of them, and all of the enterprise ISVs who are now working on agentic systems on top of their tools platforms, we have reached the inflection point, and we’re generating tokens that are productive for customers and profitable for the cloud service providers…

…For the data centers, inference tokens per watt translates directly to the revenues of CSPs. And the reason for that is because everybody is power limited. No matter how many data centers you have, each data center, 100 megawatts or 1 gigawatt, has power limits. So the architecture that has the best performance per watt wins, because each token is dollarized. Tokens per watt translates to dollars per watt, which in a gigawatt translates directly to revenues…
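The chain of reasoning here, tokens per watt to dollars per watt to revenue, can be sketched in a few lines of arithmetic. All of the constants below (power budget, tokens per second per watt, price per million tokens) are hypothetical placeholders for illustration, not figures from the call:

```python
# Sketch of the tokens-per-watt -> dollars-per-watt chain under a fixed power budget.
# All constants are illustrative assumptions, not NVIDIA or CSP figures.
POWER_BUDGET_W = 1e9            # a 1-gigawatt data center; power is the binding constraint
TOKENS_PER_SEC_PER_WATT = 2.0   # assumed inference efficiency of the deployed architecture
PRICE_PER_M_TOKENS = 1.0        # assumed revenue in dollars per million tokens served

SECONDS_PER_YEAR = 3600 * 24 * 365
tokens_per_year = TOKENS_PER_SEC_PER_WATT * POWER_BUDGET_W * SECONDS_PER_YEAR
revenue_per_year = tokens_per_year / 1e6 * PRICE_PER_M_TOKENS

print(f"annual revenue: ${revenue_per_year:,.0f}")
# Because the power budget is fixed, doubling tokens per watt doubles annual
# revenue from the same site, which is why performance per watt is the metric
# that gets "dollarized".
```

Under these toy numbers the gigawatt site would generate roughly $63 billion a year; the only lever that scales that figure without more power is the tokens-per-watt term.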

…Without investing in capacity today, without investing in compute, there cannot be revenue growth. And that, I think, everybody understands.

NVIDIA is yet to generate revenue from China and management does not know if the company’s AI chips will ever be allowed into China; management thinks China’s AI companies could disrupt the structure of the global AI industry over the long term

While small amounts of H200 products for China-based customers were approved by the U.S. government, we have yet to generate any revenue. And we do not know whether any imports will be allowed into China. Our competitors in China, bolstered by recent IPOs, are making progress and have the potential to disrupt the structure of the global AI industry over the long term. To sustain its leadership position in AI compute, America must engage every developer and be the platform of choice for every commercial business, including those in China.

There was strong growth in NVIDIA’s gaming segment in 2025 Q4 (FY2026 Q4) driven by the AI-capabilities of the company’s gaming systems; management thinks the memory supply for NVIDIA’s gaming systems is very tight

Gaming revenue of $3.7 billion increased 47% year-on-year, driven by strong Blackwell demand and improved supply. GeForce RTX is the leading platform for PC gamers, creators and developers. In Q4, we added several new technologies and advancements, including DLSS 4.5, which uses AI to bring game visuals to a new level; G-SYNC Pulsar, bringing incredibly clear graphics even in motion; and 35% faster LLM inference across leading AI PC frameworks…

…As much as we would love to have additional more supply, we do believe for a couple of quarters, it is going to be very tight. If things improve by the end of the year, there is an opportunity to think about what that is from a year-over-year growth. But it’s still too early for us to know at this time, and we’ll get back to you as soon as we can.

NVIDIA’s management expects tight supply for its advanced chip systems to persist

While we expect tightness in the supply for our advanced architectures to persist, we remain confident in our ability to capitalize on the growth opportunity ahead with our scale, expansive supply chain and the long-standing partnerships continuing to serve us well.

NVIDIA’s management is working on a partnership agreement with OpenAI and are thrilled with it

We continue to work with OpenAI toward a partnership agreement and believe we are close. We are thrilled with our ongoing partnership with OpenAI, a once-in-a-generation company that we’ve had the pleasure of partnering with since its first days.

NVIDIA’s management recently announced that Meta will be deploying Blackwells and Rubins, and NVIDIA’s networking systems, for training and inference

Meta Superintelligence Labs is scaling up at lightning speed. Last week, we announced that Meta is deploying millions of Blackwell and Rubin GPUs, NVIDIA CPUs and Spectrum-X Ethernet for training and inference.

High-profile AI startup Anthropic recently announced a partnership with NVIDIA, and will run training and inference workloads on NVIDIA’s systems

This quarter, we announced a partnership with Anthropic, and a $10 billion investment in their company. Anthropic will train and run inference on Grace Blackwell and Vera Rubin systems.

NVIDIA’s management recently entered into a non-exclusive licensing agreement with Groq for low latency inference technology (as part of the agreement, Groq’s top leaders have joined NVIDIA); management intends to extend NVIDIA’s chip architecture with Groq’s technologies as an accelerator

We recently entered into a nonexclusive licensing agreement with Groq for its low latency inference technology and welcome the team of brilliant engineers to NVIDIA. As we did with Mellanox, we will extend NVIDIA’s architecture with Groq’s innovations to enable new levels of AI infrastructure performance and value…

…What we’ll do with Groq, you’ll come to see at GTC, but we’ll extend our architecture with Groq as an accelerator, in very much the way that we extended NVIDIA’s architecture with Mellanox.

NVIDIA has made strategic investments across its ecosystem because management thinks the investments will help to expand and deepen NVIDIA’s reach into its ecosystem

[Question] You talked about some of the strategic investments that you’ve made into Anthropic and potentially OpenAI, CoreWeave as well but also partners, Intel, Nokia, Synopsys. You’re clearly at the center of everything. Can you talk about the role of those investments?

[Answer] We used to be largely a computing platform on GPUs, but now we’re an AI infrastructure company, and we have computing platforms on, well, every aspect of that. Everything from computing to AI models to networking to our DPU, all of that has computing stacks on top of it. And as I mentioned before, whether it’s in enterprise or in manufacturing, industrial or science or robotics, each one of these ecosystems has different stacks. And we want to make sure that we continue to invest into our ecosystem. So our investments are focused very squarely, strategically on expanding and deepening our ecosystem reach.

NVIDIA’s management thinks the use of dielets (chiplets) should be pushed out for as long as possible

Everybody should want to extend, push out dielets as long as they can. And the reason for that is because every time you cross a dielet boundary, you have to cross an interface. Every time you cross an interface, you add latency, you add power unnecessarily. We’re not allergic to dielets. We use dielets already, but we try to use dielets only when we absolutely have no choice but to do so. And so if you look at the Grace Blackwell architecture and the Rubin architecture, we use 2 giant reticle-limited dies and we abut them, and that reduces the amount of architecture crossing. The dielet tax shows up in the architecture effectiveness of the competitors.

The strategy of NVIDIA’s management is to deliver an entire AI infrastructure in each year

Our strategy is to deliver an entire AI infrastructure every single year.

NVIDIA’s management thinks the economics for data centers in space is currently poor, but will get better over time; management thinks the heat-dissipation methods used on Earth will be different from those used in space; NVIDIA’s Hopper is already the world’s first GPU in space; management thinks one of the best use cases of GPUs in space is for imaging; management sees very interesting applications for AI in space

[Question] I’d like to ask about space data centers, which some of your customers are considering. How feasible do you think that is and what kind of horizon? And what do the economics look like today? And how do you think that could evolve over time?

[Answer] The economics are poor today, but it’s going to improve over time. As you know, the way that space works is radically different from how it works down here. There’s an abundance of energy; solar panels are large, but there’s plenty of space in space. As for heat dissipation, it’s cold in space, but there’s no airflow, so the only way to dissipate heat is through conduction, and the radiators that you need to create are fairly large. Liquid cooling is obviously out of the question because it’s heavy and freezes. And so the methods that we use here on Earth are a little different from the way we would do it in space. But there are many different computing problems that really want to be done in space. And so NVIDIA is already the world’s first GPU in space; Hopper is in space.

And one of the best use cases of GPUs in space is imaging: to be able to image at extremely high resolutions using, of course, optics and artificial intelligence; to do the computation of reprojection from different angles, up-res, and noise reduction; to image at very large scales, at very high resolutions, and very, very fast. It’s hard to do that by sending petabytes and petabytes of imaging data back here on Earth and doing the work there. It’s easier just to do it out in space, and then ignore all of the data collected and processed until you see something interesting. And so artificial intelligence in space will have very good, very interesting applications.

All 1.5 million AI models on Hugging Face run on NVIDIA’s CUDA software

There’s 1.5 million AI models on Hugging Face, all of it runs on NVIDIA CUDA.

NVIDIA’s management has designed CPUs for AI data centers that are very different from the CPUs designed by other companies; NVIDIA’s CPU is the only data center CPU that supports LPDDR5, and it is designed to have very high data processing capabilities; in agentic AI, the tools used by agents often run in CPUs only; NVIDIA’s Vera CPU was designed to be an excellent CPU for the post-training process of an AI model; some use cases in AI require a lot of CPUs; at the current phase of development of AI technologies, really fast, single-threaded CPUs are required; NVIDIA’s Grace and Vera CPUs are both great at single-threaded performance, but Vera is much better

At the highest level, we made fundamentally different architecture decisions about our CPUs compared to the rest of the world’s CPUs. It’s the only data center CPU that supports LPDDR5. It is designed to be focused on very high data processing capabilities. And the reason for that is because most of the computing problems that we’re interested in are data-driven, artificial intelligence being one. And the single-threaded performance and its ratio with bandwidth is just off the charts.

And we made those architectural decisions because of the different phases of AI: before you even do training, you have to do data processing. So you have data processing, pre-training, and, in post-training now, the AIs are learning how to use tools. And many of those tools run in CPU-only environments, or they run in CPU with GPU-accelerated environments. And Vera was designed to be an excellent CPU for post-training. And so some of the use cases in the entire pipeline of artificial intelligence include using a lot of CPUs. We love CPUs as well as GPUs. And when you accelerate the algorithms to the limit as we have, Amdahl’s Law would suggest that you need really, really fast single-threaded CPUs, and that’s the reason why we built Grace to be extraordinary, to be great at single-threaded performance, and Vera is off the charts better than that.
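Amdahl’s Law is the reasoning step behind that last point: once the accelerable fraction of a pipeline runs very fast on GPUs, the remaining serial CPU work caps the overall speedup, so single-threaded CPU performance becomes the bottleneck. A minimal numeric sketch (the 95% and 100x figures are illustrative assumptions, not numbers from the call):

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup when a fraction p of the work is accelerated by factor s (Amdahl's Law)."""
    return 1.0 / ((1.0 - p) + p / s)

# Suppose 95% of an AI pipeline runs on GPUs at 100x speed.
print(round(amdahl_speedup(0.95, 100), 1))      # 16.8 -- far below 100x
# Making the GPU another 10x faster barely moves the needle; the serial 5% dominates.
print(round(amdahl_speedup(0.95, 1000), 1))     # 19.6
# A 2x faster single-threaded CPU halves the serial term instead:
print(round(1.0 / (0.05 / 2 + 0.95 / 100), 1))  # 29.0
```

Doubling single-threaded CPU speed helps more here than a further 10x on the GPU side, which is the case for building CPUs like Grace and Vera around single-threaded performance.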

Salesforce (NYSE: CRM)

This is not the first SaaSpocalypse Salesforce’s management has been through; management thinks a SaaSquatch will be eating the SaaSpocalypse because the SaaS companies will be getting a lot better by also providing AaaS (agents-as-a-service)

This is not our first SaaSpocalypse; we have been through many SaaSpocalypses. I remember the horrible SaaSpocalypse of 2020, when not only was the software industry dying, but we were all dying. But we made it through that. And now everyone is back, doing great. So we’re so grateful to have made it through that, and we’re going to make it through this as well…

…If there is a SaaSpocalypse, I think it might be eaten by the SaaSquatch, because there are a lot of companies using a lot of SaaS, because SaaS just got a lot better with agents-as-a-service.

In Agentforce’s 1st 15 months, Salesforce has closed 29,000 deals, up 50% sequentially; customers in production with Agentforce are up nearly 50% sequentially in 2025 Q4 (FY2026 Q4); Agentforce and Data 360 reached nearly $2.9 billion in ARR (annual recurring revenue) in 2025 Q4 (FY2026 Q4), up 200% year-on-year (was $1.4 billion in 2025 Q3, up 114% year-on-year), including Informatica; more than 75% of Salesforce’s top 100 wins in 2025 Q4 (FY2026 Q4) had both Agentforce and Data 360; management thinks Agentforce has the potential to be similar in size to Salesforce’s current software business; Agentforce ARR reached $800 million in 2025 Q4 (FY2026 Q4), up 169% year-on-year (was $540 million in 2025 Q3, up 330%); Salesforce’s most premium SKUs related to Agentforce saw new bookings nearly triple sequentially in 2025 Q4 (FY2026 Q4); more than 60% of Agentforce and Data 360 bookings in 2025 Q4 (FY2026 Q4) came from existing customers expanding their commitments; all of Salesforce’s top 10 wins in 2025 Q4 (FY2026 Q4) included Agentforce and Data 360; Informatica was in 6 of Salesforce’s top 10 wins; management thinks Agentforce brings incremental value to Salesforce’s software

We’re seeing incredible demand for Agentforce. In its first 15 months, we closed 29,000 deals, up 50% quarter-over-quarter. Customers in production have increased as well, nearly 50% in Q4…

…Our Agentforce and Data 360 ARR, including Informatica, now exceeds $2.9 billion. I heard ARR doesn’t matter anymore. But in case it does, we have $2.9 billion, up 200% year-over-year. More than 75% of our top 100 wins in Q4 included both Agentforce and Data 360…

…Agentforce is like about an $800 million business now. So I can’t tell you exactly when Agentforce will be a $46 billion or $30 billion business. But it has the potential to go just like… 46×3 is 120 plus 18…

…Agentforce and Data 360 ARR, inclusive of Informatica Cloud ARR, reached $2.9 billion. That’s up over 200% year-over-year. This includes Informatica Cloud ARR of $1.1 billion and Agentforce ARR of approximately $800 million, which is up 169% year-over-year. New bookings for Agentforce 1 Edition and Agentforce for Apps, or as we call it, A4X, our most premium SKUs, nearly tripled quarter-over-quarter…

…In the quarter, more than 60% of Agentforce and Data 360 bookings came from existing customers expanding their commitments…

…Every single one of our top 10 wins included Agentforce, Data, sales, service, platform and analytics. Our newest addition to our portfolio, Informatica, landed in 6 of those top 10 wins, proving it is a critical component of us building the data foundation for the agentic enterprise…

…What we see now with Agentforce, with the system that you laid out, the system with the agents, et cetera, is that we’re just seeing incremental value to our software.

Salesforce’s management launched Agentforce for life sciences in 2025 (FY2026) and it has won many global pharma companies, including existing customers of Veeva Systems

We built an amazing new life sciences product this year, Agentforce for Life Sciences, and since we launched, so many of the global pharma companies (I’ve met with so many of the CEOs myself) are leaving Veeva, the purgatory of Veeva, including AstraZeneca, Novartis, Takeda and, of course, Albert at Pfizer. They’re all saying that they are going to Salesforce Life Sciences, which is a product that has apps and agents. And this is amazing. They are the most regulated businesses in the world, and they’re choosing Salesforce.

When a Slack customer turns on Slack Bot, the bot will be able to look at the customer’s data and understand the customer’s business and provide advice and support; Salesforce is able to bring all the LLMs (large language models) from the AI labs into Data 360 to activate AI agents, and have Slack be the Salesforce layer that engages with, manages, and orchestrates AI agents; 90% of Forbes’ top 50 AI companies are using Salesforce and Slack; Slack handles 1 billion messages a day, and they are all about work; Slack Bot is able to orchestrate other agents; management thinks Slack Bot has the #1 AI ecosystem in the world with its partner marketplace having more than 350 AI apps and agents; the high-profile AI start-up Anthropic runs its business and products on Slack; management thinks every AI company runs their businesses on Slack; management sees the UI (user interface) of apps changing in the AI era because of the combination of humans and agents working together, and Slack is the best place to get work done between humans and agents with this changed-UI environment; management thinks Slack might be the most important piece of data Salesforce possess in the AI era

Customers tell me that they want to basically kind of get to that next level. And the way to do that is by including this context, the ability for the AI, the data, to know you. No better example of that than Slack Bot: immediately as you turn it on as a Slack customer, it looks at all your Slack. It looks at your DMs, it looks through Salesforce, it looks through Google. It even looks at Microsoft Teams, as hard as that is for some agents to go and do, but we’ve told them how to do it. And then it says, I understand your business, and I can give you help, advice, support…

…We love all of our children equally, and down below here, whether it’s Anthropic or OpenAI or Mistral or Llama, all of them, and there’s more coming. They’re amazing. World models are coming. They’re amazing. They’re all down below here, and we’re using them. And then, of course, we bring them into Data 360, and that lets you harmonize your data, integrate your data and federate, which means connect into other data sources throughout your company and grab it. Other data repositories: you might be using Snowflake or Databricks, you might be using BigQuery or anything, even IBM mainframes, and you can bring it into Data 360. You activate your data and then it comes up into your apps. So if you’re using the service app, and you want to have an experience like Help at salesforce.com for your company, now the service app has that agentic capability. The data is coming up, and it comes up to the next level to Agentforce, and you can build your agents, train your agents, put the guardrails in your agents, give them voice. They can talk now, they’re talking. And then all of a sudden, you can even manage and orchestrate and collaborate from Slack. So this is our architecture…

…Agentforce has the tooling to build, to manage, to orchestrate the agents, to make them talk, to give them determinism, to give them the capabilities if they want. And then we have the engagement layer to deliver agentic enterprises, where work happens in Slack across our apps…

…Nearly 90% of Forbes top 50 AI companies, Forbes top 50 AI companies, use Salesforce and Slack…

…Slack is hosting 1 billion messages a day. And remember, every one of them is about getting work done…

…Slack Bot can access all of those messages as well as your files, your calendar, your Salesforce, your Google, your Microsoft Teams. Slack Bot goes around, pulls it all together, and then it knows your business. So then it’s able to orchestrate with other agents. It has an incredible partner marketplace, really the #1 AI ecosystem in the world, with more than 350 AI apps and agents already. There is no other AI ecosystem like it…

…We love Anthropic. We love Dario and Daniela. I tweeted about what they did yesterday, incredible demo. Just yesterday, Dario demonstrated how he is doing something amazing with Salesforce in the enterprise. Every single one of their demos, whether it was for HR, engineering or investment banking, started and ended in Slack. Pretty awesome…

…Anthropic runs its whole global operation on Salesforce and Slack. I think actually every AI company does…

…Everybody through the past few years has been so enamored with the model, of course, it’s this brand new thing, this intelligence layer that we never had, but also the data. But what’s really happening around us is the apps are changing. The UI is changing, as Miguel is alluding to. And that’s really what we’re seeing, because these old apps with these point-and-click buttons were designed for human beings to interact with. But what happens when you have human beings and agents in the same place? Suddenly, a lot of those interactions, those UI paradigms, kind of get thrown away. You don’t need all of this complex UI anymore. And that’s what makes Slack powerful, and I think that’s what Anthropic knows. I think that’s what we saw in their demos yesterday, right? Ultimately, that work is getting done because some person or some agent is asking for it, and then you need to give it back to that person or that agent. And where do you do that? You do that in Slack. And that’s what makes Slack Bot so unbelievably powerful: you never have to leave. And of course, it’s powered by Claude. We love our partners at Anthropic. But it knows all of the context of your business, not just the context of your systems of record as we think about it, but all of the conversations happening inside of Slack, and it has access to all of that, and the knowledge that it gains from that is truly unmatched. It might be our most important piece of data that we have.

Salesforce’s management thinks companies will be deploying hundreds or thousands of different types of agents, and many of them will be from Salesforce; management thinks the deployed agents will need a home, and that home is Salesforce

We already know now, our customers aren’t going to deploy just 1 agent. There are going to be many agents, many capabilities, the ability to automate many different types of work, and they’re going to deploy hundreds or thousands. Many are going to be from us…

…But these agents can’t work in isolation. Each one of them needs a home. And that home is Salesforce. They are calling us through the MCP server, or maybe even just through one of our core platforms, and the more agents that a company deploys, ours or anyone else’s, the more essential our platform becomes.

Salesforce’s management sees the company as one of the largest consumers of tokens in the world with 19 trillion tokens consumed to-date; management has introduced a new metric, Agentic Work Unit (AWU) that measures how much work agents have performed; Salesforce has delivered 2.4 billion AWUs to-date and 771 million AWUs in 2025 Q4 (FY2026 Q4), up 57% sequentially; AWUs came about because management wanted to really look at the ratio of tokens consumed to effective work produced

We are one of the largest consumers of tokens in the world, to date now over 19 trillion tokens. We continue to show you that because we want you to see that we’re actually doing what we say. I know that there have been some enterprise software companies who say they’re doing agents or they’re doing AI, but then they’re not showing up in the token rankings from the language model companies…

…Today, we’re introducing an additional metric: the agentic work unit, created by our very own Patrick Stokes sitting here at the table. The AWU, not to be confused with our customer AWS, represents one unit of AI work, an agentic work unit. We’re rolling this out to see how you like it, actually, here in earnings. It’s a record updated, a workflow triggered, a decision made, an MCP call. And to date, AI agents on the Salesforce platform have delivered 2.4 billion agentic work units. That is where AI isn’t just thinking or calling things, it’s getting work done, transactions. And in Q4 alone, we delivered about 771 million of them…

…When we started looking at that across our customers, we could start to see, okay, our top 10 customers are consuming this many tokens. We know how many tokens Salesforce is consuming internally. But it begs the question: are they doing anything? Are they working? Are they providing any value? Or is it just input and output of intelligence? You can ask it a question, it can write you a poem, but that’s not really all that valuable in the enterprise world. What’s valuable is creating a document for you, or updating a record, or helping us right here at this table. We all use Slack Bot to prepare our notes here, our customer stories; we’re all preparing that with Slack Bot. So what we did is we said, what if we could count those individual work units? And then what if we could look at those work units relative to the tokens? And we said, oh, there’s a relationship between the two. We can start to see a ratio of tokens being consumed and work coming out…

…The tokens are kind of a leading indicator, but the work unit we think is a much more valuable indicator in terms of where the value is actually coming from for our customers and for our own transformation into an agentic enterprise.
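As a back-of-envelope illustration of that tokens-to-work ratio — our own sketch, not a formula Salesforce disclosed — the two cumulative figures quoted on the call (about 19 trillion tokens and 2.4 billion AWUs) imply roughly eight thousand tokens of model consumption per completed unit of agentic work:

```python
# Back-of-envelope sketch (ours, not Salesforce's disclosed methodology):
# relate cumulative tokens consumed to Agentic Work Units (AWUs) delivered.
def tokens_per_awu(tokens: float, awus: float) -> float:
    """Average tokens consumed per unit of completed agentic work."""
    return tokens / awus

# Figures quoted on the call: ~19 trillion tokens, 2.4 billion AWUs to date.
ratio = tokens_per_awu(19e12, 2.4e9)
print(f"~{ratio:,.0f} tokens per AWU")  # roughly 7,900 tokens per work unit
```

Tracked over time, a falling ratio would suggest agents are converting intelligence into finished work more efficiently — which is presumably why management calls tokens the leading indicator and work units the value indicator.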

Salesforce’s service organisation did well in 2025 (FY2026) because it used its own Service Cloud with an omnichannel supervisor deployed with Agentforce; Salesforce’s sales organisation did well in 2025 Q4 (FY2026 Q4) because it deployed multiple agents; Salesforce used Agentforce in 2025 Q4 (FY2026 Q4) to call back 50,000 customers it otherwise would not have called back

Our service is so much better this year because we’re using our new Service Cloud with our omnichannel supervisor deployed with Agentforce. Our sales — Miguel just hit record sales numbers, you can see them. We’ve never sold or had so much ACV in our history in the fourth quarter because not only does he have 15,000 account executives, but he has all these agents who are out there doing this amazing work…

…Because we believe there’s 20 million, 30 million — we don’t even know, maybe 100 million — people we didn’t call back in the last 26 years. But Miguel called back 50,000 people with agents last week that we would not have gotten to. Even though he’s got all these reps, he still doesn’t have the ability to call everybody back.

Salesforce has invested a total of $330 million in Anthropic, for about 1% of the company and management wishes they had invested more; management thinks the AI models could become platforms in the future, but the reality of today is that software companies are needed to get humans and agents to work together to deliver the desired outcomes of organisations; management thinks that SaaS is needed to convert the raw intelligence of AI models into reliable, accurate enterprise work, and Salesforce is in a great position because it is the system of work and system of agency, and it is already proven in 4,000 production customers

We’re so thrilled with our relationship with Dario, and I think we just put another $100 million into the new round. We’re up to about $330 million invested in Anthropic. It is about 1% of Anthropic. And believe me, I wish we had invested a lot more, John. I don’t know why we didn’t do more…

…Could those models themselves become platforms? Could OpenAI then also be a platform? Could Anthropic be a platform? Can Gemini be a platform, can DeepSeek be a platform, can Mistral be a platform, can Llama be its own platform? So that in the way that we have Windows and Mac, or HTML, or different things as platforms where applications all of a sudden appear, will an application come in within one of those platforms and then use some of those services? Absolutely. Those could be new platforms; there will also be other new platforms. I have a platform right here as well – iOS. There are many platforms.

And our job as a software company is to help our customers to create success and to take that and help them connect with their customers in a whole new way. So we’ll deliver our products, our capabilities, our value proposition with our customer relationships, of course, we have over 150,000, I think, customers on our core, 1 million on Slack. We have 15,000 sales reps who are out there. Their job is to work with customers to help architect their future success with these ideas.

And our primary vision today — because this, in the current reality, is about humans and agents working together. And these customers, like you saw today with Wyndham, with SharkNinja, even SaaStr, even Salesforce — our job is to take what’s available today and make it successful. And that isn’t where those platforms are today, as you know. And in your business, you work for an amazing company. Keith works for an amazing company. And these large banks where we are providing a lot of automation for the sales professionals, the service professionals — there is a lot to do, to not only automate those call centers, those contact centers, the sales forces, the employees with Slack, but then to also unleash the agents in a way that is compliant, that is secure, that is available, that is scalable, that is reliable, that is able to operate hand in hand…

…SaaS is more important than ever. In the world of LLMs, I mean, we are so happy that this raw intelligence exists, but to convert raw intelligence into reliable, accurate, scalable enterprise work, you need an infrastructure like the one that Marc described with our four layers — the system of context, the system of work; this is our big differentiator… We are the system of work. We have the system of agency, very sophisticated. Some companies are building it, whatever, but we have the best because we are proven in 4,000 production customers, 23,000 total customers. Nobody has that at the scale and the complexity, because our agents are connected to the data, able to trigger actions. And then we have the system of engagement, which is Slack.

Consumer products company SharkNinja used Salesforce to build a guided shopping agent in 8 weeks right before the holiday season; the shopping agent brought tremendous value to consumers; SharkNinja launched with Salesforce in 2025 Q4 (FY2026 Q4) and Salesforce agents have already participated in 0.25 million consumer engagements; the Salesforce agents have helped SharkNinja provide a better service experience for customers while lowering customer service costs

[SharkNinja executive] We set up with you and your team, a guided shopping agent in 8 weeks right before the holiday season. I was nervous about it as I went to my team and I said, we’re putting this in place in October. There’s generally kind of a cutoff in our business where after October 1, you don’t really do anything. And we launched this in 8 weeks, and it brought tremendous value to the consumer. I mean, it helped them with researching and buying and troubleshooting really all in one seamless conversation. So it was a great success for us this holiday season…

…[SharkNinja executive] Since we launched Salesforce in Q4, I mean, agents have participated in 0.25 million consumer engagements during that period of time… We put so many products out into the market, and sometimes that many products creates complexity for the consumer. And so whether they’re calling about a service issue or a troubleshooting issue or a where-is-my-order issue, it’s allowed our customer service agents to focus on the really challenging issues, and it’s freed up an enormous amount of time for them. It’s a win for the consumer because the consumer is getting their questions answered quickly — they’re not waiting. And it’s a win for us because it’s driving down cost. And it’s, in the end, just a better service experience.

Hotel company Wyndham deployed Agentforce a year ago and now has 5,000 agent deployments across its 8,300 hotels; Agentforce is a crucial part of Wyndham’s agentic platform and Wyndham is starting to roll out the agentic experience internationally; Wyndham has used Salesforce’s products to build a single source of truth about each customer, called Wyndham Guest 360; Wyndham Guest 360 is a key enabler of Wyndham’s agentic experience; Wyndham’s management thinks agents are (1) saving significant labour costs in Wyndham’s operations and (2) driving higher revenue; before Wyndham was integrated with Salesforce, Wyndham had to spend time gathering basic information about every guest; Wyndham saw a 200 basis point increase in direct bookings from AI voice agent conversions; Wyndham’s guest satisfaction scores are up 400 basis points because of its agentic experience

[Wyndham executive] When you think about just how far we’ve come in the last year — today, we have over 5,000 deployments of Agentforce across our over 8,300 hotels. It is a huge, huge part of our agentic platform, and we are really just getting started. We’re starting to roll out to Canada and internationally.

But with Salesforce tools like MuleSoft and Data 360, we have built a single source of truth, unifying all of our guest reservation information and data, all of their loyalty information, and all of their CRM data, so that all of our agents now are operating with the same trusted and real-time guest and hotel information, which they weren’t before. We’re calling it Wyndham Guest 360. It is a key enabler for our agent foundry. And it is delivering better guest experiences, improving those experiences and building increased loyalty engagement.

But most importantly, Marc, you’ve talked a lot about labor, which is agentic — it is taking millions of dollars of labor costs from our small business owners in the front office out of their operation, and it is driving millions of dollars of increased revenue for these franchisees…

…Before our integration with you all, our agents had to spend time gathering basic guest information on who Marc Benioff was before he checked in tonight. And that was not easily at their fingertips — or even worse, asking Marc for his information that we should have had. And our agents now have encyclopedic knowledge — think about it — of all of your guest history, all of your booking behavior, all of your loyalty status, because we tied it all together, giving us an ability to answer any question imaginable that any guest like you might have before you check in tonight, before your stay. In moments, not minutes. And we’re booking you into your preferred room based on our knowledge — our guest, Salesforce knowledge — of your past stay history. We are successfully working now, I hope, to upsell you a suite upgrade if we haven’t already, an early check-in — sounds like you’re getting in late — and a late checkout tomorrow if you’d like one. I don’t know if you’re bringing — if you have pets, but if you were, those agents would be selling you a pet fee or an F&B. This is all being done autonomously, which small business owners and operators would not have had time to do before.

We have been working so hard. It is generating so much money. We’re seeing faster average speeds of answer, zero hold times. I’ve heard you talk a lot about why no customer should wait, and that’s why we’re doing it. We’re removing, more importantly, millions and millions of dollars, as I said, in the front office, but we’re generating millions of dollars of increased ancillary revenues for these small business owners. It’s not costing anything…

…We’re also seeing, which is really, really important, a 200 basis point increase in direct bookings from AI voice agents and AI voice agent conversion, versus having to get those bookings through expensive third-party online travel agencies. That is increasing guest satisfaction. Our guest satisfaction scores are up 400 basis points; they’ve never been higher. And this customer experience that we’ve created is more efficient. Again, humans with agents driving customer success — we’re agent first, and we’re very proud of it.

The management of SaaS community builder company SaaStr thinks agents-as-a-service is good for Salesforce; SaaStr used Agentforce to close $2.7 million in contracts, and $3.5 million more is in the pipeline; management of SaaStr thinks complex agentic work was simply not possible a year ago because of hallucinations, but the situation has changed; with the help of Agentforce, SaaStr recently called back 3,000 customers that it previously had not been able to reach; SaaStr’s management thinks Agentforce has the potential to be similar in size to Salesforce’s current software business

[Salesforce executive] Now here you are as agents-as-a-service as well. You have your vision there now as well. So I guess once a visionary, always a visionary — but give us your vision then. Where are we going? Because you’ve heard about the SaaSpocalypse. And you know that this isn’t our first SaaSpocalypse. We’ve had a few of them. But now where are we going over the next couple of years?

[SaaStr executive] I think this is good for Salesforce, but I think we’re underestimating how powerful these agents are. Look, for most people, AI is confusing, the media is confusing — what the hell is going on? Let me simplify this. I was just looking at our numbers on Agentforce this morning. So far — and again, we’re a small organization — we went from 15 humans to 2.5, and 20 agents, okay? That’s a lot of change. But in Agentforce alone, as a tiny organization, we closed $2.7 million. That’s not the army contract you got, but that’s a lot for us — $2.7 million with an agent — and we have $3.5 million more in the pipeline…

…[SaaStr executive] Not only was this not really possible a year ago — the problem, all of us, we were using ChatGPT in the early days, and it was all hallucinations. It was hard to believe this stuff would work even 18 months ago, wasn’t it? It was hard to believe, but everything got okay last summer, and then at the end of the year, it got great. And there are reasons it got great; but to be nerdy, even Anthropic, your customer — when they rolled out these 4-dot models, 4 and 4.5, for B2B stuff like we do, it wasn’t a little bit better. It was jaw-droppingly better. The hallucinations are now fewer than the mistakes a human makes, and on the productivity side…

…[SaaStr executive] We did 3,000 with Agentforce. And for one — I was just looking at a couple of examples. We closed a $250,000 customer this week. But the first one with Agentforce was Freshworks. You know Freshworks — they do support and a bunch of other stuff. But they’ve changed. Girish isn’t the CEO anymore. The marketing team has turned over. We don’t know anybody. The agent found the right person and closed the deal. That’s sort of magical. That wouldn’t have really been possible without agents…

…[SaaStr executive] I think Agentforce — and I’m not being facetious — I think it will be $150 billion on the table, because I think the value is about 3x the software.

Salesforce’s management thinks token prices are going to decrease over time and commoditise; management thinks Salesforce’s gross margin will not be affected even with all the agentic work Salesforce is doing

Tokens, those prices, we’re working with our various partners, those are going to start to go down over time and commoditize…

…Short term, we don’t see gross margins getting worse — fairly neutral, long term. We’re doing everything in conjunction with our FY ’27 framework and our overall operating margin improvement to continue to get efficiencies in gross margin and operating margin.

Salesforce’s management has 3 ways of monetising AI, which are (1) upgrading seats to premium SKUs, (2) giving customers access to seats that they couldn’t get before because the agentic experience provides good ROI to customers, and (3) sale of flex credits

We have found the formula to monetize AI. There are three distinct ways, and these are the main ones that we are using to monetize AI. Number one is our large installed base of 100 million seats, which we are upgrading to our premium SKUs that already contain embedded AI and unlimited agentic access for employee use cases. Number one…

…The second way to monetize — this is very particular — is that now our apps, Agentforce Sales, Agentforce Service, all of them, are agentic. So now the ROI that companies generate by implementing our apps has increased. So now we have access to new seats where, before, companies couldn’t afford to roll out Salesforce or any of our apps.

And the third way is for customer-facing agent use cases: we sell through credits — flex credits. And if you look at the bookings of Agentforce in Q4, 50% were flex credits and 50% were higher SKUs.

The Trade Desk (NASDAQ: TTD)

Nearly 100% of Trade Desk’s clients are running through Kokai today (was 85% in 2025 Q3); management thinks Kokai is the most advanced AI-powered advertising-buying platform for the open internet; Kokai has enhanced every unique function in the advertising valuation process with AI; Kokai is an upgrade over Solimar in every aspect; Cheerios used retail data and Kokai to achieve 88% more conversions and 7x better CPA (cost per acquisition); Deal Desk is a new innovation in Kokai that allows advertisers to centralise their deal creation, management, and analysis; prior to Deal Desk, advertisers had increasingly sought one-to-one deals, which has led to inefficiency in the supply chain; Deal Desk uses AI to forecast the performance of a deal; early results of Deal Desk are promising, with Deal Desk deals meaningfully outperforming legacy deals; more suppliers are signing up with Deal Desk, and 2 of the biggest SSPs (supply side platforms) in Germany recently announced an integration with Deal Desk; IKEA used Kokai to achieve a 17% reduction in CPA; Best Western used Kokai and achieved an 89% improvement in incremental reach

Almost 100% of our clients are running through Kokai today. We think Kokai is the most advanced AI-fueled buying platform ever pointed at the open internet. Kokai broke advertising into the basic elements of an advertising campaign and enabled every unique function in the valuation process to be enhanced with AI. From identity probabilities to valuing impressions to predicting performance to forecasting spend to predicting the right clearing price to detecting auction manipulation or even fraud to generating creatives to supply path optimizations or to even surfacing insights that could once easily be buried in a mountain of data. Kokai and AI enhanced and upgraded nearly every part of Solimar…

…Cheerios ran a display campaign in the U.K. recently using retail data for audience targeting on Kokai. They saw 88% more conversions and 7x better CPA…

…I want to share one more innovation built on Kokai, and that’s Deal Desk. Complexity has brought many advertisers to seek out one-to-one deals as a means of simplifying supply chains, much like they used to in a nondigital world. But in that process, some buyers have inadvertently given up buy-side decisioning power, especially in CTV. They have also given rise to inefficient supply chains or inadvertent oxygen to some bad players that a more efficient supply chain would not allow for. Deals can be a way to leverage size and get a better deal, but measuring the deal’s outcomes becomes very important. It is easier to do a bad deal than ever, especially when pursuing cheap cost. Historically, 90% of deal IDs never scaled, either because they were set up poorly, hard to troubleshoot or simply didn’t perform. Deal Desk centralizes the way buyers create, manage and analyze their deals. It uses AI to forecast how a deal is likely to perform relative to the open market and then highlights where things may go off track. Early results are encouraging. So far, deals that are set up and managed through Deal Desk are performing meaningfully better than those managed the legacy way. More suppliers are signing up for Deal Desk every week. Deal Desk is in early stages, but it is rolling out around the world. Most recently, the 2 biggest SSPs in Germany announced that they are integrating with it…

…IKEA, for example, is using Kokai to get a more intelligent perspective on how their ads perform across all channels. Thanks to Kokai’s AI-fueled omnichannel optimization, they saw cost per acquisition decrease by 17%, while also gaining valuable new insights on the effectiveness of different channel activations at different stages of the customer journey…

…Best Western saw their booking rate double when using Kokai to target live sports opportunities, thanks to an 89% improvement in incremental reach with Kokai.

Every engineer at Trade Desk is using AI coding tools; management has injected AI tools across Trade Desk, resulting in higher productivity

The most obvious of AI’s features is that it is a productivity enhancer. As one example, every engineer at TTD is using AI tools to write and/or test code. We’ve injected AI tools across the company and productivity is going up.

Trade Desk’s management thinks the company’s business will benefit more from AI than any of its competitors; Trade Desk’s scaled competitors are selling their owned and operated (O&O) inventory, whereas Trade Desk does not, and this aligns Trade Desk’s interests with ad buyers; management thinks the buying platform with the most objectivity and the most trust is the one most likely to win; Trade Desk is trying to make millions of complicated decisions every second and AI can help with this

We think our business model is more conducive and will benefit more from AI than any of our competitors. Every scaled competitor we have is first and foremost, selling their owned and operated inventory, O&O. We don’t have O&O. We have aligned our interest with buyers, and that is even more valuable in the AI-fueled ecosystem. AI makes it easier to make better decisions for advertisers and match the best ad opportunities. Valuable data like advertisers’ first-party data is way more valuable in an AI world. Retail data is more valuable in an AI world. The buying platform with the most objectivity and the most trust is the one most likely to create the most scale and win the most market share. At The Trade Desk, we have built the industry’s most advanced, trusted and objective data set, which is based on factors like these, 20 million ad opportunities every second, each with thousands of data variables and each valued objectively. 

Our clients’ valuable first-party data, which they trust us with, that we will never jeopardize. The industry’s most scaled data marketplace, including most of the world’s leading retailers, close integrations with thousands of suppliers and publishers across channels. In short, we are trying to make millions of complicated decisions every second based on massive data sets. This assignment can obviously be enhanced with AI…

…I don’t think there’s any company in our industry that is better positioned to take advantage of advances in AI.

Trade Desk’s management thinks that platforms with scaled, unique, data, and that are trusted, are in a great position to leverage AI, including agentic AI

There is an emerging narrative that AI will compress software value or disintermediate platforms altogether. That might be true for some SaaS businesses, especially those that deal in generic process or low-grade data. However, for platforms that have earned the trust of their clients and partners and have amassed data that is scaled, unique, refined and actionable, they are in the perfect position to leverage advances in AI to add more value…

…We are convinced that Agentic AI will ultimately accrete the most value to companies that already have deep customer trust that have scaled, refined and objective data sets and that prioritize objectivity, not by companies with limited data hoping an AI framework becomes their business model.

Trade Desk’s management recently introduced Audience Unlimited, a new data marketplace; management thinks Audience Unlimited will benefit the entire digital advertising ecosystem in the AI-era; management thinks 3rd-party data and retail data have been massively underutilised since the advent of programmatic advertising because of a lack of price discovery of the data; Audience Unlimited helps advertisers use the most relevant data for a given campaign at an all-in cost; Audience Unlimited was not possible to build before the arrival of agentic AI; management has already seen very positive results with early adopters of Audience Unlimited; the roll out of Audience Unlimited will enable Trade Desk’s partners to use more agentic AI; management sees data as being more powerful when it can be used in AI instead of in a simple algorithm

Audience Unlimited is one of our biggest innovations ever. This will change the usage and value of the data marketplace for both buyers and sellers, and we think that agencies, advertisers, data providers and retailers will all benefit from this innovation — it is essential in this new AI-fueled world. There has been massive underutilization of third-party data and retail data, in particular, since the advent of programmatic about 20 years ago. I have argued that the data marketplace is anemic for one primary reason: there is no price discovery for data. The cost has really been complicated for marketers, so generally, they don’t use it. We can see, though, that the value is obvious, especially leveraging AI. And using a flat cost structure, Audience Unlimited helps advertisers use a wider range of the most relevant data for any given campaign at an all-in cost, where value and impact are clearly understood. This innovation wasn’t possible before advances in AI, particularly Agentic AI in this case, which allows us to surface the right data segment at the right moment. Of course, Audience Unlimited is completely optional. Clients can use it or continue to buy third-party data a-la-carte. We are already seeing very positive results with early adopters, and I’m excited for more advertisers to get access as this year progresses…

…The Audience Unlimited rollout is part of a much bigger effort to reform measurement and enable our partners to use more agentic AI as well…

…In this AI-fueled world, the third-party data ecosystem that we power using things like Audience Unlimited and other things, all these innovations are meant to make it easier to bring data onto the platform and make it more powerful in an AI-fueled world. It just inherently is more powerful when you can use it in AI instead of a simple algo or a basic bid factor.

Trade Desk’s management thinks agentic AI is the best thing to happen to programmatic advertising because it makes it easier to make decisions in a very complex environment

Agentic AI, I believe, will be the best thing that ever happened to programmatic advertising. And it’s because it makes decisioning in a very complicated environment easier. And when I say easier, I don’t mean that the nature of the market is getting less complex. I mean that the power of man and machine together can reason through this really complicated decision that is in front of an advertiser, which is should I buy this ad that literally you’re deciding in milliseconds. So Agentic is just an amazing tool to use in that environment.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet (parent of Google), Amazon (parent of Amazon Web Services), Coupang, MercadoLibre, Meta Platforms, Microsoft, MongoDB, Nu Holdings, and Salesforce. Holdings are subject to change at any time.

What We’re Reading (Week Ending 01 March 2026)


Here are the articles for the week ending 01 March 2026:

1. OpenAI Boosts Revenue Forecasts, Predicts $111 Billion More Cash Burn Through 2030 – Sri Muppidi and Stephanie Palazzolo

As revenues climb, rising computing costs will weigh on OpenAI’s bottom line. Last year, the company burned $8 billion in cash, about $500 million less than it forecast in the summer. However, the company expects to burn $25 billion this year and $57 billion next year, about $30 billion more in total than previously predicted.

The company still expects to turn cash flow positive in 2030, when it expects to generate nearly $40 billion in cash…

…OpenAI has told investors the costs of running its AI models, a process known as inference, quadrupled in 2025. As a result, the company’s adjusted gross margin—defined as revenue minus the costs of inference—fell to 33% from 40% the year prior. That’s lower than the gross margin expectations of 46% it had set for itself for 2025. It’s also below half the 70%-plus gross margins of best-in-class software companies.

OpenAI has lowered its gross margin forecasts for the next five years, as its inference costs increase. In that period, the measure will range between 52% and 67%, according to the forecasts; previously the company had expected margins to hit 70% by 2029…

…OpenAI’s revenue more than tripled last year to $13.1 billion, $100 million more than its prior projection.

The new forecasts show OpenAI now expects revenue to rise to $30 billion this year and about $62 billion next year, slightly higher than prior forecasts, with its ChatGPT consumer business the largest driver…

…Last year, OpenAI spent more than $8 billion on the costs of running its AI models for its users, with roughly $4.5 billion on inference for paying users. Its inference costs are expected to rise to roughly $14 billion this year and $26 billion next year, or about $8 billion more in total than was earlier predicted.

The company expects to spend even more on computing costs to train its models. Last year, OpenAI spent $8.3 billion, about a billion less than it expected from its summer forecast. It plans to increase its training costs to $32 billion this year and $65 billion next year, or about $44 billion more than previously expected. These training costs add up, totaling nearly $440 billion through 2030.
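The “adjusted gross margin” metric in the excerpt is simple arithmetic on the figures quoted: revenue minus inference costs, as a share of revenue. A small sketch of our own (the $8.8 billion inference figure below is our back-solve to match the stated 33%, since the article only says “more than $8 billion”):

```python
# Sketch of the article's "adjusted gross margin" metric:
# revenue minus inference costs, as a share of revenue (figures in $B).
def adjusted_gross_margin(revenue_b: float, inference_b: float) -> float:
    return (revenue_b - inference_b) / revenue_b

# 2025: ~$13.1B revenue; inference "more than $8 billion" per the article.
# $8.8B is our assumption, back-solved to reproduce the stated 33%.
m_2025 = adjusted_gross_margin(13.1, 8.8)
print(f"{m_2025:.0%}")  # ~33%, down from 40% the year prior

# 2026 forecast figures from the article: ~$30B revenue, ~$14B inference.
m_2026 = adjusted_gross_margin(30.0, 14.0)
print(f"{m_2026:.0%}")  # ~53%, consistent with the forecast 52%-67% range
```

Note this measure ignores training costs entirely, which is why the company can report a positive adjusted gross margin while burning tens of billions in cash.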

2. Software bear case push back (and the real risk that I see) – Drew Cohen

There is a lot of talk of the competitive pressures on SaaS companies, but what about the AI model businesses?

I think the key thing to remember is that the AI models have their own competition and they are all fighting for market share right now.

Partnering with existing incumbents is an easy way for them to win distribution…

…Users of these SaaS companies are already becoming a source of revenue for the AI companies. This greatly reduces the benefit of creating a product AND business support to specifically go after each vertical…

…I think in some specific cases there is a risk of internal IT departments creating their own software, but I don’t think that will be standard practice. We are already seeing that the AI companies themselves all use a variety of software vendors…

…The transition over the past decade has been for companies to outsource server maintenance to the cloud because they can’t run it as efficiently or introduce new features as quickly. It doesn’t make sense for them to run this internally, just as they often outsource facility maintenance. Unless a business has a benefit from maintaining its own software (which I can’t see), it will want to outsource this…

…I think what AI really does is allow software companies to enter new verticals adjacent to theirs, which increases competition—I don’t think the competition is going to come directly from the AI companies, though.

This is similar to the newspaper industry 20 years ago. The increase in competition didn’t come from “the internet”, but rather what the internet enabled, which was many new ways to get news.

The other risk is pricing pressure and the seat model collapsing. I think as long as the value these companies give their customers is as good as, or higher than, before, they will be able to manage this transition.

3. A Level Headed Look at State of Software – DB

Business software as an industry is small in China and India because labor is a direct competitor to packaged software. Historically, in these lower-cost labor markets with exceptional technical talent, DIY has been the go-to solution. Most western company leaders would be shocked to find that technically savvy Asian tech companies build not only their own business applications in-house, but even their own databases, BI, and infrastructure technology.

Is this the direction the world is headed? When token costs decrease 100x, it’s tempting to think that the math becomes:

Tokens to generate code + 2 SWEs < annual cost of CRM license

But in reality, the decision is a trade-off of management bandwidth. If a vendor CRM breaks, a customer can expect an SLA for it to be brought back up. If there is a security vulnerability, that’s the vendor’s responsibility. In fact, the extreme examples of DIY are only found in the most sophisticated technology companies in Asia. I fully expect AI Labs to experiment with DIY everything, but with IT at ~5% of US GDP, I would consider this an edge case…

…I think the barrier to agentic success today is primarily that companies simply don’t know how to implement the tools available. This is an area where AI Labs will find collaboration with traditional software businesses to be in their best interest.

This is a long way of saying AI Labs will be selective about which first-party applications they themselves will go build. But they have a distinct advantage in that they know the billions of questions being prompted each day. Personal health, personal finance, coding, improving writing skills/education, etc. are at the top of that list. And if I were to bet, the focus of first-party apps will be in these areas…

…Yes, agents will be transformational, but I’d bet a good portion of the agents will come from the boring old companies you already know today.

Oh wait, there’s more to a business process than code.

The reality of a regulated industry is that the value proposition is the sheer volume of dirty work that needs to be done in the background to present a customer with something simple. While it may be true that a payment portal can be generated in hours vs. months now, the moat of a payment company is obtaining bank licenses and putting in place an AML/KYC program with adequate controls for SARs and fraud detection (just ask CZ at Binance). The same can be said of healthcare, telcos, and a variety of industries. Not only is there no value in DIY, the risk of doing so far outweighs the reward…

…Several things can be true at once:

  • Software companies need to be able to adapt, and some will do it exceptionally well while others won’t
  • New companies will be created
  • Pricing may be compressed
  • Most software companies are too bloated

At the end of the day, what the capital market is doing is applying a higher discount rate to the interim 10 year likelihood of previously forecasted cash flows and the terminal value after those 10 years.

4. Blue Owl Fouls the Nest for AI Financing – Ken Brown

Private market lender Blue Owl is living through the downturn part. The struggles of the firm, which has been a big funder of the AI build-out, could affect the flow of capital into data center developers and cloud providers that need to raise cash…

…Last year, it made at least $5.6 billion of equity investments into data centers and raised $64 billion in debt for those projects, according to internal figures…

…The firm’s effort to manage rising redemptions in one of its smaller funds backfired and appears to have tainted the whole firm. Private lenders live and die on their access to capital and deal flow, both of which are at risk of drying up for Blue Owl.

The firm’s troubles are significant because it sits at the nexus of two important funding sources for the AI build-out—private capital and individual investors. If worries about Blue Owl spread, some projects will be funded at a higher cost—or might not get funded at all…

…Last year, a $1.6 billion private fund that it runs for small investors was facing redemption requests. The firm decided to address the issue by merging the fund with a $16.5 billion publicly traded fund it also runs.

The problem was, the bigger fund was trading at a 20% discount to the value Blue Owl was placing on its assets. The smaller fund, because it wasn’t publicly traded, was priced at the value of its assets. That meant investors in the smaller fund would see the value of their investments fall by 20% when the deal got done. That didn’t make them happy…

…Blue Owl called off the merger, but the damage was done. The deal drew attention to the perennial problem of valuing private assets…

…Fast-forward to last week, when Blue Owl came up with another flawed solution to its problems. It would sell $1.4 billion of assets to three big institutional investors and to an insurance company that it has a deep financial relationship with. That money would fund investor redemptions.

One problem is that when a fund with illiquid holdings sells assets, investors assume it is selling the highest-quality and most liquid ones, meaning what’s left will be harder to sell. That makes further redemptions tougher and gives investors a signal to get out…

…Another issue: Blue Owl selling assets into the insurer, Kuvare Holdings, could indicate that there were no other buyers and that it stuck Kuvare with bad assets…

…That became clear on Friday, when Business Insider reported that Blue Owl had trouble raising funds for a $4 billion data center in Pennsylvania.

The project is relatively speculative as these things go, so there could be other reasons why Blue Owl couldn’t raise the cash. The firm said it has considered outside funding and ultimately didn’t need it.

5. History Rhymes: Large Language Models Off to a Bad Start? – Michael Burry

While mining old newspapers on a quiet Saturday – a hobby of mine – I came upon a story from June 19, 1880, that I found relevant to our modern anxieties about AI.

It is the story of Melville Ballard, who, as a child without language, spied with his eyes a tree stump and asked himself if the first man rose out of it.

This 144-year-old case study – presented at the Smithsonian Institution no less – provides a potentially devastating critique of today’s Large Language Models and the spending behind them. With a simple human story, it boldly announced that complex thought exists in the silence before words…

…There are actually two stories of interest in that old newspaper. Let’s start with the one in the middle. This is Page 3 of this edition of the New York Times, and I see a story called Thought without Language…

…The story concerns one Professor Samuel Porter, of the National Deaf-Mute College at Kendall Green, who presented a paper at the Smithsonian Institution. The paper was titled “Is There Thought Without Language? Case of a Deaf Mute.”

At first, the discussion of deaf-mutes and children having no form of mental action that distinguishes them from brutes put me off; understanding has changed a lot, and I was ready to dismiss it.

The case study is of a teacher at the Columbia Institute for the Instruction of the Deaf and Dumb. This particular teacher, Melville Ballard, is also a deaf mute and a graduate of the National Deaf Mute College.

Mr. Ballard says that in his infancy he communicated with his parents and brothers by natural signs or pantomime. His father, believing that observation would help to develop his faculties, frequently took him riding.

He continues that it was during a ride two or three years before he was initiated into the rudiments of written language that he began to ask himself the question, “How came the world into being?” and his curiosity was awakened as to what was the origin of human life, its first appearance, the cause of the existence of earth, sun, moon, and stars. At one time, seeing a large stump, he asked himself the question, “Is it possible that the first man that ever came into the world rose out of that stump? But that stump is only a remnant of a once magnificent tree; and how came that tree? Why, it came only by beginning to grow out of the ground, just like these little trees now coming up;” and he dismissed from his mind as absurd the connection between the origin of man and a decaying old stump…

…One of the presentation’s attendees notes, significantly, how Ballard’s eyes conveyed meaning perfectly, without misunderstanding, above all else.

One of the most interesting features of this meeting was Mr. Ballard, by signs, explaining how his mother informed him that he was going a long way to school, where he would read from a book, write and fold a letter, and send it to her, &c., and also, by pantomime, reciting how a hunter, after killing a squirrel, accidentally shot and killed himself. Mr. Ballard’s signs and gestures, with the expression of the eyes and face, conveyed his meaning perfectly to the audience, and, in the words of a member, the expression of the eye was language which could not be misunderstood.

Let us consider these two statements:

  • “That by which we understand all things must be essentially superior to anything else that is understood by it.”
  • “…in the words of a member, the expression of the eye was language which could not be misunderstood.”

In sum,

  1. Language without the Capacity for Reason fails at Understanding.
  2. Only with Capacity for Reason does Language unlock Understanding.
  3. Understanding, fully realized, transcends Language.

By putting language first, LLMs build a primitive form of reason purely through logical inference, but this form of reason has been shown flawed and prone to hallucination due to limitations at the many ragged edges of knowledge.

The capacity for reason never existed. Therefore, language cannot scale through reason to understanding.

The professor suggests, in his work with deaf and mute people, he has discovered that a capacity for true reason must exist first, before language, so language can unlock understanding — the product of that capacity for true reason and language.

“The expression of the eye is the language which cannot be misunderstood.”

To wit, expression of the eye is what flawless understanding looks like, without the need for language.

Large Language Models, by putting language first, before the capacity for true reason, can never attain understanding…

…The original approach to AI was to generate a true capacity for reason first, but it was never realized, and the field pivoted to language first because it was easier.

This ‘bad start’ has led to a “parameter trap,” where brute-force language processing powered by zillions of power-hungry chips has become an incredibly ironic bottleneck.

As my conversation with Klarna’s Sebastian Siemiatkowski highlighted, the future lies in compression—leveraging ‘System 2’ reasoning-first to work off the redundancy of information and the relatively finite query sets produced by humans to drastically reduce compute needs.

This new line rejects singularity through language models talking to each other in an infinite mirror as a directionless waste of resources made impossible by lack of a basis in economic realities.

While frontiers like Google’s AlphaGeometry and Meta’s Coconut are finally moving toward this ‘reason-first’ architecture, they are essentially rediscovering what was presented at the Smithsonian 144 years ago: that language is the output of understanding, not the engine of reason…

…I mentioned there was another story of interest, and it is on the same page. More relevant to the first story than anyone in the 1880s may have guessed it would be in 2026.

This article is “San Francisco’s Wealth, A Population of Bonanza Speculators.”

This story was written June 1 in San Francisco, and only published in the New York Times on June 19th…

…California was pre-eminently the paradise of the man of small capital. To satisfy the craving for speculation, the peculiar open-board system was adopted, whereby the man who had $50 to invest, by purchasing a share therein, could acquire a small interest in a mine at a dollar a share, or two shares at 50 cents, or any number at varying prices.

A “boom” existed here in certain stocks, seemed not to reach beyond the desire to do so “just once more” it seemed to excite the same gambling fever in San Francisco, and for lines lost by the bonanza firm was eagerly grasped by the people of San Francisco, and of the “boom” having been accompanied and by speculative losses on the part of the people, the “boom” disappeared and stocks fell to their normal condition.

The story’s closing hits hard for today’s reality.

The People of San Francisco seem to have become educated to the idea that they must leap into fortune at once, and their big bonanza at Virginia City having failed, they do not appear to be willing to exert themselves to hunt for wealth in other directions, such as the development of manufacturing, trade, and agricultural interests. Almost the entire population is imbued with the passion for speculation, and if a new bonanza as big as the one in Nevada were to be discovered either there or near here, stocks would mount again to absurd figures, and San Francisco would again pass through the period of flush times to again suffer as she has during the past two years.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google) and Meta Platforms (parent of Facebook). Holdings are subject to change at any time.

How Brilliant Managers Still Make Shareholders Poorer   

There are capital allocation mistakes that even the smartest management teams commit.

The primary role of management in a company is simple: Provide shareholders with the best rate of return. 

This requires operational excellence, profitability, and most importantly, the prudent allocation of capital.

Reaching the helm of a listed company is the pinnacle of professional ambition for many people, and the fierce competition for these roles means they tend to be filled by exceptionally capable individuals.

Yet despite the undeniable talent that leaders must exhibit to get into the role, some leaders still tend to do things that are not optimal for shareholders.

In my years of investing, I’ve identified two areas where even the “smartest” consistently fail shareholders.

Buybacks

One of the most common value-destroying “habits” of management is to buy back shares at bad prices.

Although buybacks are not inherently bad for shareholders, their value is extremely price-sensitive.

When a company buys back shares, it reduces the share count, increasing the ownership of remaining shareholders. 

The goal of buybacks is to be able to provide remaining shareholders with higher future dividends per share, eventually. Sounds great, but there’s a catch. 

If buybacks are done at a very high valuation, the number of shares that the company can retire drops. This blunts their impact.

And buybacks have a cost: they are paid for out of the company’s cash coffers. That war chest can be used for many things, such as acquiring another business, paying a dividend, or simply sitting on the balance sheet to be deployed at opportune times. All of these could be better uses of cash than buying back shares at high prices.
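To make the price sensitivity concrete, here is a small sketch of the arithmetic. All figures are hypothetical, chosen purely for illustration:

```python
def buyback_effect(budget, price, shares_outstanding, net_income):
    """Shares retired and resulting earnings per share for a buyback at a given price."""
    retired = budget / price
    remaining = shares_outstanding - retired
    return retired, net_income / remaining

# Hypothetical company: 100M shares, $500M net income, $1B buyback budget.
retired_cheap, eps_cheap = buyback_effect(1_000_000_000, 50, 100_000_000, 500_000_000)
retired_dear, eps_dear = buyback_effect(1_000_000_000, 200, 100_000_000, 500_000_000)

# At $50 per share, the budget retires 20M shares; at $200, only 5M.
# The same cash outlay buys remaining shareholders a far smaller EPS uplift
# when shares are expensive.
```

The point of the sketch is simply that the dollar cost is identical in both cases; only the price paid determines how much ownership the remaining shareholders actually gain.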

Too often I notice companies guide toward a certain amount of buybacks for the year. This means that the company plans to use its cash to buy back shares during the year no matter the price of the shares at that time. This is lazy and can be detrimental to shareholders if share prices rise to an unsustainably high price.

A better way to do it would be to only buy back shares when shares are cheap or trading close to “intrinsic value”. 

A rare example of a leader that bucks this trend is August Troendle of Medpace (NASDAQ: MEDP). Under his guidance, Medpace has only repurchased shares at prices that August deems cheap. He has even directed Medpace to take on debt to buy back shares when prices are cheap. He has let cash pile up on the balance sheet when share prices rose too high.

Stock-based compensation

Stock-based compensation is the practice of paying employees in stock. 

I’ve written in depth about some of the drawbacks of stock-based compensation here. While often marketed as aligning incentives, stock-based compensation can be very dilutive.

This is especially true when stock prices are depressed. 

When stock prices are low, the number of shares that a company needs to grant is higher in order to satisfy an employee’s wage demands.

Yet, management teams seem indifferent to this, actively using stock-based compensation despite these nuances. 
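The dilution arithmetic is worth seeing directly. Here is an illustrative sketch with hypothetical figures:

```python
def sbc_dilution(comp_value, price, shares_outstanding):
    """Fraction of the company given away to fund a fixed-dollar stock grant."""
    granted = comp_value / price
    return granted / (shares_outstanding + granted)

# Hypothetical: $50M of annual stock compensation, 100M shares outstanding.
dilution_high = sbc_dilution(50_000_000, 100, 100_000_000)  # stock at $100
dilution_low = sbc_dilution(50_000_000, 25, 100_000_000)    # stock at $25

# The same dollar value of compensation dilutes existing shareholders
# roughly four times as much when the share price is depressed.
```

In other words, a fixed-dollar compensation promise transfers a variable slice of the company, and the slice is largest exactly when the stock is cheapest.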

I believe stock-based compensation has a role, especially as a way to incentivise top leaders of a company.

But using stock-based compensation across the entire company and through different cycles of the company is lazy management.

Leaders need to rethink their total rewards framework. If they must give employees stock-based compensation even when shares are cheap, it is important that leaders find a way to manage dilution, perhaps through opportunistic buybacks. 

Constellation Software (TSE: CSU) has a great compensation structure for executives that avoids this conflict. It does not provide traditional stock-based compensation. But to align leaders with shareholders, executives need to invest 75% of their bonus into Constellation Software’s common shares. These shares are held in escrow for four years so that executives’ wealth is also inextricably tied to shareholders.

Bottom line

Top executives are paid top dollar to run a company to maximise shareholder value. Yet elite leaders continue to do things that erode investor wealth. As shareholders, we can’t control what managers do but we can choose where we deploy our capital. 

When we see a management team unwittingly destroying shareholder value, whether through “lazy” buybacks or broad-based stock-based compensation, we should take it for what it is – a massive red flag.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have a vested interest in Medpace. Holdings are subject to change at any time.

What We’re Reading (Week Ending 22 February 2026)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 22 February 2026:

1. Google Is Exploring Ways to Use Its Financial Might to Take On Nvidia – Raffaele Huang, Kate Clark, and Berber Jin

The company’s chips are gaining wider adoption for AI workloads, including with startups such as Anthropic, but Google is dealing with myriad challenges as it seeks to grow. The issues include bottlenecks at manufacturing partners and limited interest from cloud-computing rivals that are among the largest buyers of Nvidia processors, according to people familiar with the matter.

To expand its potential market, Google is increasing its financial support to a network of data-center partners that can provide computing power to a broader swath of customers, people familiar with its plans said.

The company is in talks to invest around $100 million in cloud-computing startup Fluidstack, part of a deal that values it at around $7.5 billion, people familiar with the discussions said. Fluidstack is one of a growing number of so-called “neocloud” companies that offer computing services to AI companies and others…

…Google has also held discussions about expanding its financial commitments to other data-center partners that could lead to additional TPU demand, people familiar with the talks said. Google has backstopped financing for projects involving Hut 8, Cipher Mining and TeraWulf, which are former crypto-mining companies that are now developing data centers. Cipher Mining declined to comment. Hut 8 and TeraWulf didn’t respond to requests for comment.

Some managers at Google’s cloud-computing division recently refreshed a longstanding internal debate about restructuring the TPU team into a stand-alone unit, people familiar with those discussions said. Such a plan could potentially allow Google to expand its opportunities to invest, including with outside capital.

One challenge for any potential stand-alone unit is that Google’s cloud business relies heavily on Nvidia chips, some of the people said…

…In 2018, Google started selling access to TPUs through its cloud services. The company has traditionally signed up TPU users through its cloud-computing unit, but it is also selling the TPU chips directly to external customers, according to industry research group SemiAnalysis…

…However, interest from major cloud-service providers appears to be tepid, partly because they consider Google a competitor, according to industry participants. Amazon Web Services, Amazon.com’s cloud unit, has also developed its own chips for AI.

2. 10 Years Building Vertical Software: My Perspective on the Selloff – Nicolas Bustamante

Vertical software is software built for a specific industry. Bloomberg for finance. LexisNexis for legal. Epic for healthcare. Procore for construction. Veeva for life sciences, etc.

These companies share a defining characteristic: they charge a lot and customers rarely leave. FactSet charges $15,000+ per user per year. Bloomberg Terminal costs $25,000 per seat. LexisNexis charges law firms thousands per month. And retention rates hover around 95%.

I would say that there are ten distinct moats. LLMs are attacking some of them while leaving others intact…

…Knowledge workers pay to not relearn a workflow they’ve spent a decade mastering. The interface IS a big part of the value prop…

…LLMs collapse all proprietary interfaces into one Chat…

…Vertical software encodes how an industry actually works. A legal research platform doesn’t just store case law. It encodes citational networks, Shepardize signals, headnote taxonomies, and the specific way a litigation associate builds a brief.

This business logic took years to build. It reflects thousands of conversations with domain experts. When I built Doctrine, the hardest part wasn’t the technology. It was understanding how lawyers actually work: how they research case law, how they draft documents, how they build a litigation strategy from intake to trial. Encoding that understanding into working software was a huge part of what made vertical software valuable—and defensible.

LLMs turn all of this into a markdown file…

…A massive portion of vertical software’s value proposition was making hard-to-access data easy to query. FactSet makes SEC filings searchable. LexisNexis makes case law searchable. These are genuine services. SEC filings are technically public, but try reading a 200-page 10-K in raw HTML. The structure is inconsistent across companies. The accounting terminology is dense. Extracting the actual numbers you need requires parsing nested tables, following footnote references, reconciling restated figures.

Before LLMs, accessing this public data required specialized software and significant engineering scaffolding. Companies like FactSet built thousands of parsers, one for each filing type, each company’s idiosyncratic formatting. Armies of engineers maintained these parsers as formats changed. The code to turn a raw SEC filing into queryable data was a genuine competitive advantage…

…LLMs make this trivial. Frontier models already know how to parse SEC filings from their training data. They understand the structure of a 10-K, where to find revenue recognition policies, how to reconcile GAAP and non-GAAP figures. You don’t need to build a parser. The model IS the parser. Feed it a 10-K and it can answer any question about it. Feed it the entire corpus of federal case law and it can find relevant precedent…

…At Doctrine, hiring was brutal. We didn’t just need good engineers. We needed engineers who could understand legal reasoning: how precedent works, how jurisdictions interact, what grounds for appeal to the supreme court look like. These people barely existed. So we built our own. Every week, we held internal lectures where lawyers taught engineers how the legal system actually worked. It took months before a new engineer was productive. The talent scarcity was a genuine barrier, not just for us, but for anyone trying to compete with us.

At Fintool, we don’t do any of that. Our domain experts (portfolio managers, analysts) write their methodology directly into markdown skill files. They don’t need to learn Python. They don’t need to understand APIs. They write in plain English what a good DCF analysis looks like, and the LLM executes it. The engineering is handled by the model. The domain expertise, which was always the abundant resource, can now become software directly without the engineering bottleneck.

LLMs make the engineering trivially accessible, which means the scarce resource (domain expertise) is suddenly abundant in its ability to become software. This is why the barrier to entry collapses so dramatically…

…Vertical software companies expand by bundling adjacent capabilities. Bloomberg started with market data, then added messaging, news, analytics, trading, and compliance. Each new module increases switching costs because customers now depend on the entire ecosystem, not just one product. S&P Global’s acquisition of IHS Markit for $44B was exactly this strategy. The bundle becomes the moat…

…LLM agents break the bundling moat because the agent IS the bundle…

…Some vertical software companies own or license data that doesn’t exist anywhere else. Bloomberg collects real-time pricing data from trading desks worldwide. S&P Global owns credit ratings and proprietary analytics. Dun & Bradstreet maintains business credit files on 500M+ entities. This data was collected over decades, often through exclusive relationships. You can’t just scrape it. You can’t recreate it.

If your data genuinely cannot be replicated, LLMs make it MORE valuable, not less…

…The test is simple: Can this data be obtained, licensed, or synthesized by someone else? If no, the moat holds. If yes, you’re in trouble…

…The irony is that LLMs accelerate the bifurcation. Companies with proprietary data win bigger. Companies without it lose everything…

…HIPAA doesn’t care about LLMs. FDA certification doesn’t get easier because GPT-5 exists. SOX compliance requirements don’t change because Anthropic released a new plugin…

…In fact, regulatory requirements may slow LLM adoption in exactly the verticals where compliance lock-in is strongest. A hospital can’t replace Epic with an LLM agent because the LLM agent isn’t HIPAA certified, doesn’t have the required audit trails, and hasn’t been validated by the FDA for clinical decision support…

…Some vertical software becomes more valuable as more industry participants use it. Bloomberg’s messaging function (IB chat) is the de facto communication layer for Wall Street. If every counterparty uses Bloomberg, you have to use Bloomberg. Not because of the data. Because of the network.

LLMs don’t break network effects. If anything, they might make communication networks more valuable. The information flowing through these networks becomes training data, context, signal…

…Some vertical software sits directly in the money flow. Payment processing for restaurants. Loan origination for banks. Claims processing for insurance companies. When you’re embedded in the transaction, switching means interrupting revenue. Nobody does that voluntarily.

If your software processes payments, originates loans, or settles trades, an LLM doesn’t disintermediate you. It might sit on top of you as a better interface, but the rails themselves remain essential…

…LLMs don’t directly threaten system of record status today. But agents are quietly building their own.

Here’s what’s happening: AI agents don’t just query existing systems. They read your SharePoint, your Outlook, your Slack. They collect data on the user. They write detailed memory files that persist across sessions. And when they perform key actions, they store that context. Over time, the agent accumulates a richer, more complete picture of a user’s work than any single system of record.

The agent’s memory becomes the new source of truth. Not because anyone planned it, but because the agent is the one layer that sees everything. Salesforce sees your CRM data. Outlook sees your emails. SharePoint sees your documents. The agent sees all three, and remembers…

…The real threat isn’t the LLM itself. It’s a pincer movement that vertical software incumbents didn’t see coming.

From below, hundreds of AI-native startups are entering every vertical. When building a credible financial data product required 200 engineers and $50M in data licensing, markets naturally consolidated to 3-4 players. When it requires 10 engineers and frontier model APIs, the market fragments violently. Competition goes from 3 to 300…

…From above, horizontal platforms are going deep into vertical territory for the first time. Microsoft Copilot inside Excel now does AI-powered DCF modeling and financial statement parsing. Copilot inside Word does contract review and case law research. The horizontal tool becomes vertical through AI, not through engineering…

…For any vertical software company, ask three questions:

1. Is the data proprietary? If yes, the moat holds. If no, the accessibility layer is collapsing.

2. Is there regulatory lock-in? If yes, LLMs don’t change the switching cost equation. If no, switching costs are primarily interface-driven and dissolving.

3. Is the software embedded in the transaction? If yes, LLMs sit on top of you, not instead of you. If no, you’re replaceable.

Zero “yes” answers: high risk. One: medium risk. Two or three: you’re probably fine.
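The three-question screen above can be expressed as a toy scoring function. This is purely illustrative; the function name and risk labels are my own shorthand for the article's buckets:

```python
def vertical_moat_risk(proprietary_data: bool,
                       regulatory_lockin: bool,
                       in_transaction: bool) -> str:
    """Map the three yes/no moat questions to a risk bucket."""
    yes_count = sum([proprietary_data, regulatory_lockin, in_transaction])
    if yes_count == 0:
        return "high risk"
    if yes_count == 1:
        return "medium risk"
    return "probably fine"

# A data vendor with irreplaceable data but no regulatory or transaction moat:
vertical_moat_risk(True, False, False)  # "medium risk"
```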

3. Rebuttal to Nicolas – Unemployed Capital Allocator

I used to work for a relatively large long only shop.

We switched from Factset to Bloomberg + CapIQ.

We spent approximately 0 seconds discussing the UI change…

…Where does learned UI really matter? Tools with tons of degrees of freedom, and where actions per minute actually matter. Professional workflow tools. Modelling software. Video editing software. Ones where knowing the shortcut is a decent part of the job.

A text box isn’t replacing this.

The idea is quite alluring – to those that don’t know the UI. Look! You can just tell it to do something and … it does it!

Until you need to do it multiple times. Then you start to go – man, I wish there was a quick way for me to send this prompt, to do this exact thing I want it to do. Oh, and remember all the info I’m supposed to provide so that I get back exactly what I want. Maybe I can map it to a button and a keyboard shortcut…

Oh wait – that’s UI.

Text is amazing because it’s universal. Text is also absolutely horrible because it has infinite degrees of freedom, and introduces another level of abstraction. This is not what you want when you need to do a lot of specific things, quickly.

Oh and btw – these ‘legacy providers’ with pesky, hard to learn UI and custom codes? They can very easily tack on a text box to help new users – or power users that are doing a new workflow. While providing the flexibility of getting shit done when you need to…

…There’s zero chance that a complex web of markdown files is going to replace business logic entirely.

The reason is quite simple. You do not want to introduce a layer of unpredictability and degree of freedom to your core business logic. This is stuff of nightmares even at simple levels. When you introduce complexity and interdependency, it’s straight line to system failure and bankruptcy…

…I am not sure why an agent would choose one vendor for alerts functionality and another for watchlist and 3rd for news – or how it would even go about doing this – or why this would save money. Maybe these will all be new providers? Maybe the model will just vibe code point solutions as needed? Maybe there will be perfect interoperability between all the modules? Or maybe LLM will learn to translate them all perfectly? I don’t know…

…SoRs exist as the core, singular database of truth that the whole org agrees is the truth.

Why are we splitting this across thousands of markdown files???? With no way to audit, reconcile, track … basically all the things we need a SoR to do????

4. The Golden Age of Software – Unemployed Capital Allocator

There’s a classic CS exercise: write instructions for making a PB&J sandwich, then watch someone follow them literally. “Put peanut butter on the bread” — and they place the sealed jar on top of the loaf. The lesson: every instruction you write is full of assumptions the other person doesn’t share.

This is what’s happening every time you prompt an LLM. You say “build me a user dashboard” and the model fills in hundreds of implicit assumptions about the world that you never specified. And here’s the thing: it’s really good at this. Good enough that the code runs, the demo looks great, and you feel like a genius. But those decisions are educated guesses. The model built you a PB&J. It doesn’t know that you’re allergic to peanuts.

When you’re vibe coding a demo or a small CRUD app, none of this matters. You’re on the happy path, everything works, nobody cares about code quality. It’s beautiful. But enterprise software in the real world is about every path but the happy one — a world where failure on one of those paths means losses that dwarf annual costs…

…So what happens when the market gets carpet-bombed with new products and DIY builds — in a market where customers ask “who else uses this?” as a standard question?

Decision fatigue. Procurement asking, “Who even are these guys?”

In a world where production becomes free, the existing distribution relationship becomes the chokehold. And this is what every incumbent has. Yes — this is the tired old distribution vs. product debate. But I’d argue the current moment makes it more true than it’s ever been, precisely because the supply explosion makes trust, brand, and existing relationships much more valuable…

…While existing relationships hold the line, incumbents also get to play offence.

Your development team now has a new source of leverage. Properly harnessed, everything from research to product creation to debugging and maintenance gets faster. “Where is this logic?” stops being a week-long archaeology expedition. You simply do more with the same team.

In addition, the value ceiling of software today is dramatically higher than it was two years ago. Stuff that was “too expensive,” “too custom,” or “not worth the engineering time” suddenly becomes shippable. LLMs and VLMs have unlocked capabilities that were science projects two years ago…

…What about agents taking over corporate workflows and becoming a key user of software products? Doesn’t that leave a lot of products open to disintermediation?

I have three pushbacks.

First — a lot of workflow shifting to agents is not the same as all workflow shifting to agents. The gap between those two things is enormous, and the bear case tends to hand-wave right past it.

Second — agentic workflow is still a pipeline. And when you have a working production pipeline, you don’t rip out a key component to save a couple thousand bucks. But this isn’t just an inertia argument — it’s a structural one. The agent replacing that component needs to match the accumulated production knowledge baked into the existing solution: every edge case, every integration quirk, every failure mode discovered over years of real-world use. That’s not a matter of writing code. It’s a matter of replicating hard-won context that doesn’t exist in any training set. The idea that agents will vibe code an alternative for a critical piece of a high-speed production system isn’t just unlikely because of switching costs — it’s unlikely because the agent literally doesn’t know what it doesn’t know.

Third — non-humans using software is not a new thing. There’s a whole class of software that is mostly consumed by other software, and these still make amazing businesses. The identity of the user changing from human to agent doesn’t inherently destroy the value of the product.

5. How will OpenAI compete? – Ben Evans

“Jakub and Mark set the research direction for the long run. Then after months of work, something incredible emerges and I get a researcher pinging me saying: ‘I have something pretty cool. How are you going to use it in chat? How are you going to use it for our enterprise products?’”

– Fidji Simo, head of Product at OpenAI, 2026

“You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try to sell it”

– Steve Jobs, 1997

It seems to me that OpenAI has four fundamental strategic questions.

First, the business as we see it today doesn’t have a strong, clear competitive lead. It doesn’t have a unique technology or product. The models have a very large user base, but very narrow engagement and stickiness, and no network effect or any other winner-takes-all effect so far that provides a clear path to turning that user base into something broader and durable. Nor does OpenAI have consumer products on top of the models themselves that have product-market fit. 

Second, the experience, product, value capture and strategic leverage in AI will all change an enormous amount in the next couple of years as the market develops. Big aggressive incumbents and thousands of entrepreneurs are trying to create new features, experiences and business models, and in the process try to turn foundation models themselves into commodity infrastructure sold at marginal cost. Having kicked off the LLM boom, OpenAI now has to invent a whole other set of new things as well, or at least fend off, co-opt and absorb the thousands of other people who are trying to do that.

Third, while much of this applies to everyone else in the field as well, OpenAI, like Anthropic, has to ‘cross the chasm’ across the ‘messy middle’ (insert your favourite startup book title here) without existing products that can act as distribution and make all of this a feature, and to compete in one of the most capital-intensive industries in history without cashflows from existing businesses to lean on. Of course, companies that do have all of that need to be able to disrupt themselves, but we’re well past the point that people said Google couldn’t do AI.

The fourth problem is expressed in the quotes I used above…

…There are something like half a dozen organisations that are currently shipping competitive frontier models, all with pretty-much equivalent capabilities. Every few weeks they leapfrog each other…

…There is no equivalent of the network effects seen at everything from Windows to Google Search to iOS to Instagram, where market share was self-reinforcing and no amount of money and effort was enough for someone else to break in or catch up.

This could change if there was a breakthrough that enabled a network effect, most obviously continuous learning, but we can’t plan for that happening…

…The one place where OpenAI does have a clear lead today is in the user base: it has 8-900m users. The trouble is, they’re only ‘weekly active’ users: the vast majority, even of people who already know what this is and know how to use it, have not made it a daily habit. Only 5% of ChatGPT users are paying, and even US teens are much more likely to use this a few times a week or less than they are to use it multiple times a day. The data that OpenAI released in its ‘2025 wrapped’ promotion tells us that 80% of users sent less than 1,000 ‘messages’ in 2025. We don’t know how that changed over the year (it probably grew) but at face value that’s an average of less than three prompts per day, and many fewer individual chats. Usage is a mile wide but an inch deep…
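The arithmetic behind that "less than three prompts per day" claim is quick to verify: even at the 1,000-message ceiling reported for 80% of users, a full year of use averages out under three per day:

```python
# Sanity check on the figure quoted above: 1,000 messages
# spread over 365 days is fewer than 3 prompts per day.
messages_in_2025 = 1_000  # the ceiling for 80% of users, per OpenAI's "2025 wrapped"
days = 365
average_per_day = messages_in_2025 / days
print(round(average_per_day, 2))  # 2.74
```

And 1,000 is the upper bound for that cohort, so the typical user's actual average sits well below it.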

…OpenAI’s ad project is partly just about covering the cost of serving the 90% or more of users who don’t pay (and capturing an early lead with advertisers and early learning in how this might work), but more strategically, it’s also about making it possible to give those users the latest and most powerful (i.e. expensive) models, in the hope that this will deepen their engagement. Fidji Simo says here that “diffusion and scale is the most important thing.” That might work (though it also might drive them to pay, or drive them to Gemini). But it’s not self-evident that if someone can’t think of anything to do with ChatGPT today or this week, that will change if you give them a better model. It might, but it’s at least equally likely that they’re stuck on the blank screen problem, or that the chatbot itself just isn’t the right product and experience for their use-cases no matter how good the model is.

In the meantime, when you have an undifferentiated product, early leads in adoption tend not to be durable, and competition tends to shift to brand and distribution. We can see this today in the rapid market share gains for Gemini and Meta AI: the products look much the same to the typical user (though people in tech wrote off Llama 4 as a fiasco, Meta’s numbers seem to be good), and Google and Meta have distribution to leverage. Conversely, Anthropic’s Claude models are regularly at the top of the benchmarks but it has no consumer strategy or product (Claude Cowork asks you to install Git!) and close to zero consumer awareness…

…So: you don’t know how you can make your core technology better than anyone else’s. You have a big user base but one that has limited engagement and seems really fragile. The key incumbents have more or less matched your technology and are leveraging their product and distribution advantages to come after the market. And, it looks like a lot of the value and leverage will come from new experiences that haven’t been invented yet, and you can’t invent all of those yourself. What do you do?

For a lot of last year, it felt like OpenAI’s answer was “everything, all at once, yesterday”. An app platform! No, another app platform! A browser! A social video app! Jony Ive! Medical research! Advertising! More stuff I’ve forgotten!  And, of course, trillions of dollars of capex announcements, or at least capex aspirations…

…As we all know, OpenAI has been running around trying to join the club, claiming a few months ago to have $1.4tr and 30 gigawatts of compute commitment for the future (with no timeline), while it reported 1.9 gigawatts in use at the end of 2025…

…But, again, does that get you anything more than a seat at that table? TSMC isn’t just an oligopolist – it has a de facto monopoly on cutting edge chips – but that gives it little to no leverage or value-capture further up the stack. People built Windows apps, web services and iPhone apps – they don’t build TSMC apps or Intel apps.

Developers had to build for Windows because it had almost all the users, and users had to buy Windows PCs because it had almost all the developers (a network effect!). But if you invent a brilliant new app or product or service using generative AI, or add it as a feature to an existing product, you use the APIs to call a foundation model running in the cloud and the users don’t know or care what model you used. No-one using Snap cares if it runs on AWS or GCP. When you buy an enterprise SaaS product you don’t care if it uses AWS or Azure. And if I do a Google Search and the first match is a product that’s running on Google Cloud, I would never know…

…As I’ve written this essay, I’ve returned again and again to terms like platform, ecosystem, leverage and network effect. These terms get used a lot in tech, but they have pretty vague meanings. Google Cloud, Apple’s App Store, Amazon Marketplace, and even TikTok are all ‘platforms’ but they’re all very different.

Maybe the word I’m really looking for is power. When I was at university, a long time ago now, my medieval history professor, Roger Lovatt, told me that power is the ability to make people do something that they don’t want to do, and that’s really the question here. Does OpenAI have the ability to get consumers, developers and enterprises to use its systems more than anybody else, regardless of what the system itself actually does?…

…Foundation models are certainly multipliers: massive amounts of new stuff will be built with them. But do you have a reason why everyone has to use your thing, even though your competitors have built the same thing? And are there reasons why your thing will always be better than the competition no matter how much money and effort they throw at it? That’s how the entire consumer tech industry has worked for all of our lives. If not, then the only thing you have is execution, every single day. Executing better than everyone else is certainly an aspiration, and some companies have managed it over extended periods and even persuaded themselves that they’ve institutionalised this, but it’s not a strategy.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google and Google Cloud), Amazon (parent of AWS), Apple, Meta Platforms, Microsoft (parent of Azure), Salesforce, and TSMC. Holdings are subject to change at any time.

More Of The Latest Thoughts From American Technology Companies On AI (2025 Q4)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q4 earnings season.

Earlier this month, I published The Latest Thoughts From American Technology Companies On AI (2025 Q4). In it, I shared commentary in earnings conference calls for the fourth quarter of 2025, from the leaders of US-listed technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2025’s fourth quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

With that, here is the latest commentary, in no particular order:

Adyen (OTC: ADYYF)

 Adyen’s platform now has Dynamic Identification, which enables real-time decisioning that improves conversion, reduces cost, and manages risk with greater precision; Dynamic Identification enables agentic commerce; 95% of Black Friday Cyber Monday shoppers were recognised by Dynamic Identification across online and in-store channels; Dynamic Identification was created to address the challenges AI was posing to document-based approaches to identity and risk; Dynamic Identification uses AI to draw insights from trillions of interactions across Adyen’s online and in-person flows, instead of performing static checks; Dynamic Identification powers Adyen Uplift, which makes payments decisions that balance conversion, cost, and risk; Dynamic Identification is the foundation for the Personalise module in Adyen Uplift that was developed in 2025 H2; Dynamic Identification helps merchants deal with policy abuse that includes exploitation of returns, promotions, and refunds; Dynamic Identification helped a global luxury group and a large sports and entertainment company identify highly problematic shoppers that were previously undetected; Dynamic Identification is not a product itself

Dynamic Identification adds an intelligence layer to our platform, enabling real-time decisioning that improves conversion, reduces cost, and manages risk with greater precision as our customers scale across channels…

…This new foundational layer also addresses policy abuse and enables emerging models such as agent-led commerce. Peak events validate the strength of this new layer, with ~95% of Black Friday Cyber Monday shoppers recognized across online and in-store channels…

…Advances in AI, increasingly sophisticated fraud, and the growing misuse of digital systems are exposing the limits of static, document-based approaches to identity and risk. Designed for a different era, traditional controls add friction for legitimate businesses and shoppers, while struggling to prevent abuse at scale. To address this, we have integrated a third foundational layer: Dynamic Identification. Moving beyond static checks, we designed this layer to draw on trillions of interactions across our online and in-person flows. By embedding this intelligence directly into our stack, we assess risk dynamically and adapt decisions in real time, enabling us to eliminate friction while tightening security with surgical precision…

…The most immediate impact of Dynamic Identification is visible across our optimization and risk products. It is the intelligence layer that powers Adyen Uplift, enabling decisions that balance conversion, cost, and risk across the full payment flow rather than in isolation. 

Building on this foundation, we introduced the newest module within Adyen Uplift in 2025: Personalize. It was developed and validated through pilots with a select group of enterprise customers in the second half of the year, focusing on one of the most common trade-offs merchants face as they scale across channels: how to lower payment costs without negatively impacting conversion. Lower-cost payment methods are often available, but encouraging shoppers to choose them indiscriminately can increase checkout abandonment and degrade the customer experience. Dynamic Identification allows this trade-off to be managed intelligently. By understanding who the shopper is and how they behave across both online and in-person touchpoints, we can personalize the payment experience in real time, guiding shoppers toward preferred and lower-cost options only when the data indicates they are likely to complete the transaction…

…For our customers, an underestimated share of losses comes not only from traditional payment fraud, but also from policy abuse: repeated exploitation of returns, promotions, and refunds that often appear legitimate in isolation but compound into material cost over time. Without visibility into repeat behavior, merchants are left to rely on manual reviews or broad policy restrictions, increasing friction for legitimate customers while failing to address the underlying problem.

In the second half of 2025, we applied Dynamic Identification to this challenge through targeted pilots with enterprise customers. By linking refund activity at the identity level, rather than viewing transactions in isolation, we were able to surface patterns that had previously remained hidden.

The pilots showed strong engagement, with merchants using these insights on a daily basis, rather than only for ad hoc investigations. More importantly, they reported a step change in confidence: they were able to identify abuse clearly, measure its true scale, and pinpoint its sources. This replaced fragmented, manual processes with shared, data-driven visibility. Capabilities such as identifying top refund contributors at the shopper level were consistently cited as materially reducing investigation time and operational overhead…

…One global luxury group identified individual shoppers each receiving up to €5k in refunds, in some cases up to twenty times their average basket value, revealing potential material losses that had gone unnoticed. In another case, a large sports and entertainment customer identified a shopper with roughly 70% of transactions refunded over several years, exposing a long-standing abuse pattern that had not been visible through traditional transaction-level analysis…

…Dynamic Identification is our way of applying AI to the large data set we have…

…Dynamic Identification is in itself not a product. So one of the product suites that is built upon Dynamic Identification is Uplift.

Adyen’s management sees Dynamic Identification as an enabler of agentic commerce; management thinks merchants see clear potential in agentic commerce, but merchants also want to retain ownership of the customer relationship, control over payments and data, and confidence that no new risks are introduced; Dynamic Identification enables verification of shopper intent, adaptive authentication, and identity-informed risk decisions even without a human in the loop; Adyen is engaged with the broader ecosystem in enabling agentic commerce; agentic commerce currently has immaterial volume on Adyen; management is not including agentic commerce in Adyen’s 2026 guidance, but thinks it will be a growth driver in the long term; management sees trust as a really important component in agentic commerce, and that’s where Dynamic Identification helps; management thinks it’s really important for Adyen to work with key players in the agentic commerce ecosystem, such as OpenAI and Google, to develop protocols

Dynamic Identification is also a critical enabler of emerging models such as agentic commerce. As this evolution unfolds over time, traditional identity signals are likely to fall away. Transactions initiated by agents will require new trust frameworks, relying on infrastructure, behavioral context, and adaptive risk models rather than direct human interaction.

In H2, we focused on understanding our customers’ needs and how we can best build to meet them. We held extensive conversations with enterprise merchants across retail, luxury, travel, entertainment, and platforms to understand both their ambitions and their concerns. While merchants see clear potential in agent-led commerce, they are equally clear about what must not change: ownership of the customer relationship, control over payments and data, and confidence that new channels can be adopted without introducing new risk…

…Rather than building isolated agent experiences, we are extending our existing platform so that agent-initiated transactions become another channel within a merchant’s existing workflows, governed by the same principles of control, security, and interoperability. Dynamic Identification plays a central role here, enabling verifiable shopper intent, adaptive authentication, and identity-informed risk decisions even when a human is no longer directly in the loop…

…We deepened our engagement with the broader ecosystem by collaborating with partners including OpenAI, Google, Cloudflare, Visa, and Mastercard, and joining the Agentic AI Foundation. Together, we are contributing to the development of open standards that allow agent-led commerce to scale safely and interoperably, without locking merchants into closed systems or fragmenting the ecosystem…

At the moment, the number of transactions is still immaterial on our platform. We started with it. I think that’s very important, so we started with Agentic Commerce. It’s an additional sales channel, and the beauty of having a single platform globally is that we basically have all the building blocks to cater it and to start growing this sales channel with our customers…

Take agentic commerce as one example. It’s not gonna drive short-term revenues, right? So it’s not a big part of our 2026 revenue expectations, but if it’s a top priority for your customer, you want to be there, and you want to support them with it, and that’s where we’re well-positioned to do it, and it will help us drive growth over a longer period of time, right?…

…In this new world, we need to know who is the consumer behind the agent, and how do we know that we can trust the agent, that he’s indeed acting on behalf of the consumer? And that’s where Dynamic Identification really helps. So it helps to look at the signals that we get and compare that to the signals that we have in our system, and then come up with the right outcome or decision, whether this can be trusted or not…

…It’s also very important to shape the protocols with OpenAI, with Google, to make sure that that information does not get lost, and to make sure that our merchants do not lose the connection with the consumer behind the agent. Because that’s one of the key elements that our merchants find important, and we want to make sure that that connection is not lost.

In pilot tests, Personalise, which is powered by Dynamic Identification, helped merchants improve conversion by 6% while lowering transaction costs by 3%; mobility provider Hoppy used Personalise and achieved 2% payment cost savings while maintaining a locally relevant checkout experience as it expanded into new cities; Personalise was able to dynamically prioritise the payment methods riders were most likely to use for Hoppy

Insights from the H2 pilots demonstrate the value of this adaptive approach. Merchants observed conversion improvements of up to 6%, alongside transaction cost reductions of up to 3%, achieved through personalized optimization rather than static, rule-based, and generic logic…

…Mobility provider Hoppy realized 2% payment cost savings while maintaining a locally relevant checkout experience as it expanded into new cities. By dynamically prioritizing the payment methods riders were most likely to use, while favoring cost-efficient options where possible, Hoppy protected margins without compromising conversion. Together, these results show how moving beyond static checkout logic enables businesses to better align shopper preferences with cost-efficient payment methods, turning checkout into a scalable driver of growth and profitability. This is the power of Dynamic Identification: translating real-time intelligence into decisions that drive tangible results.

Airbnb (NASDAQ: ABNB)

Airbnb’s management chose to deploy AI for customer support as the first use case within the company; Airbnb built an AI agent trained on millions of support interactions; Airbnb’s AI agent is now resolving 1/3 of support issues, and resolution times are now much faster; Airbnb’s AI agent is live in the US, and management plans to roll it out globally; management’s vision for the customer support AI agent is for guests to be able to call and talk to the agent; management thinks that an AI agent that can converse with guests via voice will (1) lower customer support costs for Airbnb, and (2) improve the quality of customer support

The final piece that accelerates everything we do is AI. Now we’ve taken a really intentional path here. While other companies rush to build chatbots into their existing apps, we started by solving the hardest problem, customer support. We built a custom AI agent trained on millions of our support interactions. It’s already resolving 1/3 of the support issues without needing a live specialist and resolution times are significantly faster. It’s live across North America, and we’re planning to roll it out globally…

…Right now, nearly 30% of tickets in North America that are English-based are handled by an AI agent. A year from now, if we’re successful, significantly more than 30% of tickets will be handled by a customer service agent in many more languages, in all the languages where we have live agents and AI customer service will not only be chat, it will be voice. You can actually call and talk to an AI agent. We think this is going to be massive because not only does this reduce the cost base of Airbnb customer service, but the kind of quality of service is going to be a huge step change. Not only can you get responses in seconds, but the agents using AI are going to be significantly more productive.

Airbnb’s management is building an AI-native experience within the app that knows guests and hosts and will help (1) guests plan their entire trip, and (2) hosts run their businesses better; management will build the AI-native experience without spending significant sums of money on data centers; management will build the AI-native experience without building AI models; management thinks Airbnb’s investments into AI will not affect the company’s profit; management thinks AI will help personalise the user-experience for guests on Airbnb 

We’re building an AI-native experience where the app doesn’t just search for you. It knows you. It will help guests plan their entire trip, help hosts better run their businesses and help the company operate more efficiently at scale…

…We don’t operate experiences, and we’re not building data centers. What we’re doing is finding small wins and scaling them profitably…

…I think one of the great things about Airbnb is that we have a very, very cost-efficient innovation model. So unlike other companies, we’re not building models. We do not have a huge CapEx cost base. So our investment in AI will not affect the P&L. I don’t think you’ll see it in the P&L…

…AI allows us to personalize. Some people come to Airbnb and all they want to see are unique homes. And before AI, like, personalization was a little more primitive. So if they saw a hotel, it might be jarring. Now we can really personalize. So people who just want to see Airbnbs can see Airbnbs. People just want to see hotels, we can eventually personalize, they can just see hotels. If people want to see both, we can know if you’re booking last minute, 1 night, then we’re going to show you a hotel. If you’re booking a family of 5 in Italy, we’re going to show you a home. So it really goes back to personalization.

Airbnb’s management believes that LLM (large language model) chatbots cannot disintermediate Airbnb because they lack access to the unique data and functionality that Airbnb has; management believes that adding an AI layer onto the Airbnb app will create something that is impossible to replicate; management thinks LLM chatbots will be very similar to online search in being good top-of-funnel discoveries for guests and this will be positive for Airbnb; management has seen that traffic from LLM chatbots converts at a higher rate than Google traffic; management sees AI models as being available for use by anyone; management thinks specialisation will win in travel with AI because Airbnb can use any leading AI model and customise it based on Airbnb’s millions of interactions, and hook up the model to important contact points; management does not think that one model builder will end up owning everything

This approach is also our strongest defense against disintermediation. A chatbot can give you a list of homes, but it can’t give you the unique points you find in Airbnb. A chatbot doesn’t have our 200 million verified identities or our 500 million proprietary reviews, and it can’t message the host, which 90% of our guests do. It can’t provide global payment processing, customer support or insurance. By layering AI over the entire Airbnb experience, we believe we’re building something that’s impossible to replicate…

…I think these chatbot platforms are going to be very similar to search. They’re going to be really good top-of-funnel discoveries. And in fact, what we’ve seen is, I think, they’re going to be positive for Airbnb. And I’m very, very deep in this space. And what we see is that traffic that comes from chatbots converts at a higher rate than traffic that comes from Google. But the other thing to know, and this is the most important point, is that these models are not proprietary. The models in ChatGPT, the models in Gemini, the models in Claude and the models like Kiwi are available to every single company. And so pretty soon, every company becomes an AI platform if they make the shift. We will be able to build everything everyone else will have if we use their models. And we believe specialization will win in travel because if somebody wants to find an Airbnb or have a trip, we can take their model, the same model they use, we can post-train it and tune it based on our millions of interactions. We can connect it to our customer support agents. We can connect it to our hosts. And that’s fundamentally what we think…

…I don’t think that one company is going to own everything. I think we’re going to be able to work together. And these companies will be very helpful top-of-funnel traffic generators for Airbnb just like Google was.

Airbnb’s management wants to nail down AI search for Airbnb first and then apply the AI search form factor to sponsored listings; Airbnb is currently conducting small-scale tests on AI search; management can’t pin down a concrete timeline for building AI search; management thinks AI search is a difficult problem to solve for e-commerce because it is multi-modal; management thinks a chat interface for AI search for e-commerce (and travel) is not ideal, and Airbnb needs to innovate on the user interface

One of the things that’s been really clear after the launch of ChatGPT was that traditional search was going to become essentially conversational AI search. And what we wanted to do is really design AI search, really see how that works. And then if we are going to do sponsored listings, we design that ad unit in that form factor. So we’re focused, first and foremost, on the most perishable opportunity, which is AI search. Actually, funny enough, we are doing tests as we speak. So AI search is live to a very small percent of traffic right now. We’re doing a lot of experimentation. The way we do things with AI is much more rapid iteration, not big launches. And over time, we’re going to be experimenting with making AI search more conversational, integrating it into more of the trip. And eventually, we will be looking at sponsored listings as a result of that. But we want to first nail AI search…

…AI search will eventually — I can’t put a time line on it because AI is obviously highly unpredictable. But we want to be — we would love to be the first company in e-commerce that really nails AI search, conversational search. I think it’s really hard not just in travel, but all e-commerce. One of the reasons that chatbots are really hard for commerce is because they’re very visual. They’re photo forward. You need to be able to compare. You need to be able to open different tabs. So a text-forward chatbot interface is not ideal. So we have to actually innovate on the user interface.

Airbnb’s management thinks AI will significantly improve productivity for all Airbnb employees; more than 80% of Airbnb engineers are currently using AI tools

It’s going to make our engineers and everyone at Airbnb significantly more efficient. More than 80% of engineers are now using AI tools. That soon will be 100%.

Arista Networks (NYSE: ANET)

Arista Networks has exceeded its goal of earning $1.5 billion in AI center networking revenue in 2025; management has raised their AI center revenue goal for 2026 and now expects Arista Networks’ AI center revenue in 2026 to be double that of 2025’s; management’s target for AI center revenue in 2026 includes both front-end and back-end networking

As expected, we have exceeded our strategic goals of $800 million in campus and branch expansion as well as $1.5 billion in AI center networking…

…With our increased visibility, we are now doubling from 2025 to 2026 to $3.25 billion in AI networking revenue…

…We maintain our 2026 campus revenue goal of $1.25 billion and raise our AI centers goal from $2.75 billion to $3.25 billion…

…3 years ago, we had no AI. We were staring at InfiniBand being deployed everywhere in the back end. And we pretty much characterized our AI as only back end, just to be pure about it, right? 3 years later, I’m actually telling you we might do north of $3 billion this year and growing, right? That number definitely includes the front end as it’s tied to the back-end GPU clusters, and it’s an all Ethernet, all AI system for agentic AI applications.

Arista Networks’ products can interoperate with NVIDIA, but management sees Arista Networks emerging as the gold standard network for running training and inference models that process tokens at teraflops speed; Arista Networks is co-designing AI rack systems with 1.6T (1.6 terabits per second) switching coming in 2026

We interoperate with NVIDIA, the recognized worldwide market leader in GPUs, but also realize our responsibility to broaden the open AI ecosystem, including leading companies such as AMD, Anthropic, ARM, Broadcom, OpenAI, Pure Storage and VAST Data, to name a few, that create the modern AI stack of the 21st century. Arista is clearly emerging as the gold standard terabit network to run these intense training and inference models processing tokens at teraflops…

…We are codesigning several AI rack systems with 1.6T switching emerging this year.

Arista Networks’ management recently launched its flagship 7800 R4 spine product for routing use cases that include AI spines

In Q4 2025, Arista launched our flagship 7800 R4 spine for many routing use cases, including DCI, AI spines with that massive 460 terabits of capacity to meet the demanding needs of multiservice routing, AI workloads and switching use cases.

In 2025, Arista Networks participated in Ethernet-based industry standards for AI scale-up and scale-out networking; Arista Networks’ networking portfolio is successfully deployed in scale-up, scale-out, and scale-across AI networks; management thinks AI networking architectures need to handle both training and inference frontier models to ease congestion; the key metric when handling training is job completion time, while the key metric when handling inference is time to first token; management sees Arista Networks’ portfolio as having the features to handle the fidelity of AI and cloud workloads; management’s strategy for AI networking is based on Autonomous Virtual Assist, which helps instrument customers’ networks for enhanced security, observability and agentic AI operations

In 2025, we are a founding member of the Ethernet-based standards for both scale-up with ESUN as well as completing the Ultra Ethernet Consortium 1.0 Specification for scale-out AI networking. These AI centers seamlessly connect the back-end AI accelerators to the front-end of compute storage, WAN and classic cloud networking. Our AI accelerated networking portfolio consisting of 3 families of EtherLink spine-leaf fabric are successfully deployed in scale-up, scale-out and scale-across networks.

Network architectures must handle both training and inference frontier models to mitigate congestion. For training, the key metric is obviously job completion time, the amount of time taken between admitting a job, training job to an AI accelerator cluster and the end of a training run. For inference, the key metric is slightly different. It’s the time taken to a first token, basically the amount of latency it takes for a user submitting a query to receive their first response. Arista has clearly developed a full AI suite of features to uniquely handle the fidelity of AI and cloud workloads in terms of diversity, duration, size of traffic flow and all the patterns associated with it.

Our AI for networking strategy based on AVA, Autonomous Virtual Assist, curates the data for higher-level functions. Together with our published subscribed state foundation in EOS, NetDL, or Network Data Lake, we instrument our customers’ networks to deliver proactive, predictive and prescriptive features for enhanced security, observability and agentic AI operations. Coupled with the Arista validated designs for network simulation, digital twin and validation functionality, Arista platforms are perfectly optimized and suited for Network as a Service.
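
The two latency metrics described above (job completion time for training, time to first token for inference) can be expressed directly. This is a minimal sketch with hypothetical timestamps, not Arista code:

```python
# Minimal sketch of the two AI-network metrics described above,
# computed from hypothetical timestamps in seconds.

def job_completion_time(admit_ts: float, finish_ts: float) -> float:
    """Training: time from admitting a job to the cluster to the end of the run."""
    return finish_ts - admit_ts

def time_to_first_token(query_ts: float, first_token_ts: float) -> float:
    """Inference: latency from submitting a query to receiving the first token."""
    return first_token_ts - query_ts

print(job_completion_time(0.0, 3600.0))   # 3600.0 -> a 1-hour training run
print(time_to_first_token(10.0, 10.35))   # ~0.35 -> 350 ms to first token
```

Congestion in the network inflates both numbers, which is why the passage frames network architecture around them.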

Arista Networks’ purchase commitments at the end of 2025 Q4 were $6.8 billion, up 42% sequentially; the sequential increase in purchase commitments was for chips related to new products and AI deployments, and was affected by the supply constraint on DDR4 memory chips; pricing for memory chips has gone up significantly for Arista Networks; management sees memory chips as the new gold in the AI sector

Our purchase commitments at the end of the quarter were $6.8 billion, up from $4.8 billion at the end of Q3. As mentioned in prior quarters, this expected activity mostly represents purchases for chips related to new products and AI deployments. We will continue to have some variability in future quarters due to the combination of demand for our new products, component pricing such as the supply constraint on DDR4 memory and the lead times from our key suppliers…

…Our peers in the industry have been facing this probably longer than we have because I think the server industry probably saw it first because they’re more memory intensive. Add to that, that we’re expecting increases from the silicon fabrication that all the chips are made, as you know, essentially with one company, Taiwan Semiconductor. So Arista has taken a very thoughtful approach, being aware of this since 2025 and frankly absorbed a lot of the costs in 2025 that we were incurring. However, in 2026, the situation has worsened significantly. We’re having to smile and take it just about at any price we can get and the prices are horrendous. They’re an order of magnitude exponentially higher. So clearly, with the situation worsening and also expected to last multiple years, we are experiencing shortages in memory. Thankfully, as you can see reflected in our purchase commitments, we are planning for this. And I know that memory is now the new gold for the AI and automotive sector.

The demand for Arista Networks’ networking products in AI data centers comes only after the data centers are built and after the GPUs and other AI chips are purchased; management sees demand for Arista Networks’ products as being very good, but the exact timing for shipments is harder to pin down

That’s an important thing to understand, that we don’t track the CapEx. The first thing that happens in the CapEx is they got to build the data centers and get the power and get all of the GPUs and accelerators and the network comes — lags a little. So demand is going to be very good, but whether the shipments exactly fall into ’26 or ’27, Todd, you can clarify when they really fall in, but there’s a lot of variables there.

Arista Networks was initially working with only a small handful of model builders and AI chip designers, but the company is now working with many more of such entities; NVIDIA had essentially 100% market share just a year ago, but Arista Networks’ management now sees AMD AI chips as having about 20%-25% market share; Arista Networks is the preferred provider for AI data centers that utilise AMD AI chips

If you look at us initially, we were largely working with 1 or 2 model builders and 1 or 2 accelerators, NVIDIA and AMD, and OpenAI was the primarily dominant one. But today, we see that there’s really multiple layers in a cake where you’ve got the GPU accelerators…

…Arista needs to deal with multiple domains and model builders and appropriately whether it is Gemini or xAI or Anthropic Claude or OpenAI and many more coming. These models and the multiprotocol algorithm or nature of these models is something we have to make sure we build a network correctly for. So that’s one…

…A year ago, it was pretty much 99% NVIDIA, right? Today, when we look at our deployments, we see about 20%, maybe a little more, 20% to 25% where AMD is becoming the preferred accelerator of choice. And in those scenarios, Arista is clearly preferred because they’re building best-of-breed building blocks for the NIC, for the network, for the I/O and they want open standards as opposed to full-on vertical stack from one vendor.

Arista Networks’ management thinks AI model builders will be working with multiple cloud providers, and Arista Networks will be working with all the cloud providers

I think the biggest issue is not only the model builders, but they’re no more in silos in one data center, and you’re going to see them across multiple colos and multiple locations and multiple partnerships with our cloud titan customers that we’ve historically not worked with. So I think you’ll see more copilot versions of it, if you will, with a number of our cloud titans. So we expect to work with them as AI specialty providers, but we also expect to work with our cloud titans in bringing the cloud and AI together.

Arista Networks’ management is careful about going into business with some AI neoclouds (the ones that converted oil money or crypto money into AI) because their businesses and financial health are questionable

There are a set of neoclouds that we watch more carefully because some of them are oil money converted into AI or crypto money converted into AI. And over there we are going to be much more careful because some of those neoclouds are looking at Arista as the preferred partner, but we would also be looking at the health of the customer or they may just be a onetime. We don’t know the exact nature of their business and those will be smaller.

Arista Networks’ management does not believe that AI is eating software; management believes that AI enables better software to be built

I don’t think, Ken, any of us believe that AI is eating software, but AI is definitely enabling better software.

Arista Networks’ management thinks that the rise of agentic AI will increase demand for all kinds of XPUs

The rise of agentic AI will only increase, not just the GPU, but all gradations of XPU that can be used in the back end and front end.

Arista Networks’ 4 major AI customers are all deploying AI with Ethernet; 3 of the 4 customers have deployed 100,000 GPUs each, and they are growing; the remaining customer is migrating from InfiniBand and is still below 100,000 GPUs

We are in all 4 customers deploying AI with Ethernet. So that’s the good news. 3 of them have already deployed a cumulative of 100,000 GPUs and are now growing from there. And clearly migrating now into beyond pilots and production to other centers, power being the biggest constraint. Our fourth customer is migrating from InfiniBand, so it’s still below 100,000 GPUs at this time, but I fully expect them to get there this year, and then we shall see how they get beyond that. 

Arista Networks has extended the ability to stream the state of a network into AI clusters

The EOS architecture is based on state orientation. This is the idea that we capture the state of the network and then stream that state out from the system database on the switches into whatever, the CloudVision or whatever system can then receive it. And we’re extending that capability for AI with a combination of in-network data sources related to flow control, RDMA counters, buffering and congestion counters, and also host-level information, including what’s going on in the RDMA stack on the host, what’s going on with collectives, latencies, any flow control problems or buffering problems in the host NIC. Then we pull those — that information all together in CloudVision and give the operator a unified view of what’s happening in the network and what’s happening in the host.
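
The kind of merge described in the quote, combining streamed in-network counters with host-level counters into one view per flow, can be sketched as follows. All identifiers and numbers below are invented for illustration; this is not Arista's EOS or CloudVision code:

```python
# Toy sketch of unifying two telemetry sources, per the quote above.
# All flow IDs, counter names and values are hypothetical.

network_state = {  # counters streamed from switches (flow control, buffering)
    "flow-42": {"congestion_events": 3, "buffer_occupancy_pct": 71},
}
host_state = {     # counters from the host RDMA stack / NIC
    "flow-42": {"rdma_retransmits": 1, "nic_pause_frames": 12},
}

def unified_view(flow_id: str) -> dict:
    """Merge both sources so an operator sees network and host state together."""
    return {**network_state.get(flow_id, {}), **host_state.get(flow_id, {})}

print(unified_view("flow-42"))
```

The design point is that neither source alone can tell an operator whether a slowdown lives in the fabric or in the host.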

Cloudflare (NYSE: NET)

A leading AI company expanded its relationship with Cloudflare, and Cloudflare is now the AI company’s only long-term infrastructure provider with 100% traffic allocation; Cloudflare’s management is seeing a trend of AI companies choosing Cloudflare as their infrastructure platform

A leading AI company expanded their relationship with Cloudflare, signing a 2-year $85 million pool of funds contract for our full platform, selecting Cloudflare as their single long-term infrastructure provider with 100% traffic allocation. Following a rigorous RFP, they selected Cloudflare over major hyperscalers not just for our unified stack and rapid innovation, but also for our strategic neutrality. This win underscores a growing trend, the most sophisticated AI companies are choosing Cloudflare as their mission-critical, independent platform to connect, protect and build the future of the AI-driven Internet.

A leading AI company expanded its relationship with Cloudflare; this AI company chose Cloudflare in a build versus buy scenario; Cloudflare enables the AI company to manage global traffic with 99.999% availability 

Another leading AI company expanded their relationship with Cloudflare, signing a 1-year $5.4 million contract for our Workers developer platform and application services. What’s most compelling about this win is that it was a classic build versus buy scenario against the hyperscalers. In an industry where being first matters, our ready-to-deploy developer platform provided the agility and speed to market they couldn’t find elsewhere. With Cloudflare, this customer is now able to manage heavy global traffic with 99.999% availability. This deal is a testament to our shift from being just a vendor to instead being a strategic co-innovation partner for the world’s most sophisticated AI companies.

A Fortune 100 company that is also a leader in AI expanded its relationship with Cloudflare; the Fortune 100 company requires zero downtime and chose Cloudflare not because of price, but because of performance

A Fortune 100 technology company expanded their relationship with Cloudflare, signing a 3-year $5.8 million contract, representing a notable upsell from their initial engagement with us in mid-2025. As a leader in AI, this customer operates under a strict mandate for global resiliency requiring a multi-vendor architecture to ensure zero downtime for their application performance. We beat out the competition not on price but rather on performance and engineering innovation.

A European Global 2000 technology company expanded its relationship with Cloudflare, and is in discussions with Cloudflare about AI Crawl Control 

A European Global 2000 technology company expanded their relationship with Cloudflare, signing a 3-year $5.8 million pool of funds contract to provide seamless access to our entire platform. We signed our first deal with this customer back in February. After quickly realizing the power of Cloudflare’s platform, they came back to us looking to move from a small variable commitment to a deep strategic partnership. Unlike their legacy incumbents, our combination of best-of-breed security and our Workers developer platform enables sophisticated automation to manage their global infrastructure and greater flexibility to innovate at scale. It’s early days with this customer, and we’re already in discussions regarding AI Crawl Control.

A US media company signed a contract with Cloudflare for AI Crawl Control; the media company was facing a massive increase in AI scraping and chose Cloudflare to gain visibility into which AI models are consuming their data; with visibility into the AI models, the media company can better monetise its content

A U.S. media company signed a 3-year $3.1 million contract for AI Crawl Control, along with application services and Workers. This customer was facing a massive increase in AI scraping, which was crushing their network and driving up infrastructure costs. They chose Cloudflare to gain visibility into which AI models are consuming their data, allowing them to protect and eventually monetize their unique content. By leveraging Cloudflare Workers to replace years of complex technical debt from an incumbent, they were able to migrate massive Internet properties into production in just 2 weeks. This deal proves that as AI accelerates, Cloudflare is the partner of choice for companies looking to protect their IP while improving performance, reducing operational costs and enhancing their security postures.

Cloudflare’s management is seeing the shift to AI and agents driving more demand for the company’s services; management thinks AI agents (1) look at significantly more sites when making decisions, (2) allow for a much greater degree of software customisation, and (3) never need to rest, unlike humans; management thinks AI agents are changing the economics of software from a seat-based model to one where the importance lies with providing the compute, connectivity, and guard rails for agents; management thinks Cloudflare is able to capture value on both sides of agentic interactions; most vibe coding platforms are either built on Cloudflare Workers or have it as their preferred deployment target; human developers are using Cloudflare Workers to manage inference with caching, rate limiting and observability; usage of AI is driving adoption of Cloudflare’s Zero Trust platform; management is seeing agentic workloads generate an order of magnitude more outbound requests to the web than traditional user-driven apps; management sees Cloudflare, which has 20% of the web sitting behind its network, as the global control plane for the agentic internet; management thinks the agentic internet is creating new growth opportunities for Cloudflare; a Fortune 500 pharmaceutical company is using Cloudflare to build AI tools; a technology company is using Cloudflare Containers to allow its customers to deploy AI tools in a secure isolated environment; a leading financial services company used Cloudflare to launch an MCP (model context protocol) server for AI agents to interact directly with its payment services; management thinks companies like using Cloudflare for deploying AI because it offers (1) a complete tool kit, (2) a modern architecture that fits agentic work, and (3) cost-efficient scalability; management sees AI as a pure tailwind for Cloudflare’s business

Second, we are seeing the shift to AI and agents drive more demand for Cloudflare services. What we’re witnessing is a fundamental replatforming of the Internet. AI is driving a paradigm shift in how software is both created and consumed, and that is turning out to be the biggest tailwind for Cloudflare’s network and Workers developer platform. If you look at the last 30-plus years of the Internet and software ecosystem, they were built for human consumption, people in seats and clicks. Now the agentic Internet is emerging, and we can already see its trends. If humans looked at 5 sites when they were making a decision, agents might look at 5,000. If humans had to fall back on generalized software and interfaces, agents allow for infinite customizability of every software application for every need. If humans follow a common circadian rhythm to work, agents never need to sleep. Agents, in other words, are the ultimate infrastructure multiplier. In turn, they are reshaping the very economics of software. The industry is transitioning from a business model defined by seat licenses to one where the winners are those providing the compute, connectivity and rails and guardrails for these new digital workers at scale. Cloudflare was built for this moment. We are uniquely architected to capture value on both sides of the agentic interactions. That means we win when AI applications are built on Cloudflare Workers, but we also win just from the increased usage of all of our products that an agentic Internet drives…

…When the cost of generating code drops to near 0, the volume of new applications explode. It’s not a coincidence that most so-called vibe coding platforms are either built on Cloudflare Workers or have us as their preferred deployment target. We exited 2025 with more than 4.5 million human developers active on our platform. It’s a lot more if we count their agents. Developers are using Workers to run autonomous logic across our global network, containers for sandboxes and AI gateway to manage inference with caching, rate limiting and observability. AI usage is even driving adoption of our Zero Trust platform to ensure that data is compartmentalized and access granted in limited and controlled ways…

…We’re seeing agentic workloads generate an order of magnitude more outbound request to the web than traditional user-driven applications. Over the month of January alone, the number of weekly requests generated by AI agents more than doubled across the Cloudflare network. This is driving increased demand for our whole platform. This is where Cloudflare’s scale becomes our moat. With more than 20% of the web already sitting behind Cloudflare’s network, we are effectively the global control plane for the agentic Internet. That’s creating a number of new growth opportunities, both with our traditional business as well as what we’ve begun calling Act 4, helping invent the future business model of the Internet. If AI agents are the new users of the Internet, Cloudflare is the platform they run on and the network they pass through. This creates a virtuous flywheel, more agents drive more code execution on our Workers development platform, which in turn drives more demand for Cloudflare’s performance, security and networking services…

There’s a Fortune 500 pharmaceutical company that literally built a vibe coding platform on Cloudflare where their internal developers are using Workers AI and Durable Objects to build AI-assisted tools…

…Another publicly traded technology company is migrating their plug-in sandbox infrastructure to Cloudflare Containers for secure isolated execution of code at scale, which let their customers then prompt deployments directly to their system, but do it in a way which is secure because one of the things that’s really scary sometimes about deploying AI tools, especially to customer-facing applications is there can be a lot of damage that they do if one of these agents goes rogue or something goes wrong, the way that we’ve architected sandboxes allows them to — and containers allows them to do this secure isolated code deployment. And again, it all comes as part of the toolkit of Cloudflare Workers, which is allowing them to go really quickly…

…A leading financial services company has partnered with us to launch an official MCP server designed to allow AI agents like Claude, Cursor or OpenAI to interact directly with the company’s payment services. The whole thing is built on Cloudflare Workers. And this allows merchants to manage commerce tasks, such as creating invoices, checking transactions, processing and payments using natural language command and using things that are running on Cloudflare…

…I think what they like about us is, first, you get a complete toolkit. Second, that toolkit has been architected in a modern way to build exactly what you need for agents and AI applications. And then third, you get it in a way that can scale up infinitely if it becomes wildly popular and can scale down instantly to zero. So you don’t blow the budget if somebody is not actually using the system. That’s very different than the hyperscalers, which in order to be able to get access to a GPU at a hyperscaler, anything close to a competitive price, you also have to commit leasing that server for an entire year, which, again, if the project that you’re leasing it for doesn’t go well, that’s out of your budget…

…I know that AI is putting pressure on some companies that are out there. It’s not putting pressure on Cloudflare. We are seeing it as nothing but a tailwind for us, both for our developer tools and kind of the Act 4 stuff that we’re working on, but actually for even our legacy products like application services and Zero Trust as well.

Cloudflare’s management thinks the hyperscalers have no incentive to figure out how to run AI workloads more efficiently, unlike Cloudflare; management thinks Cloudflare can get up to 10x the amount of work done off the same GPU compared to a hyperscaler; because of Cloudflare’s efficiency, its capex has not increased significantly to handle AI workloads; management thinks Cloudflare’s infrastructure offers much higher levels of flexibility to users when it comes to scaling up or down AI compute consumption when compared to the hyperscalers; management thinks Cloudflare is increasingly shifting AI compute-spend away from the hyperscalers

Cloudflare is in the business of getting work done. And so what we are constantly doing is having research teams inside of Cloudflare figure out how you can run AI workloads significantly more efficiently. The hyperscalers actually have no incentive to do that. They don’t want AI workloads to be more efficient because that just means you have to lease fewer machines from them. Whereas we — because we only charge you for the actual work that’s getting done, that means that we’re just getting oftentimes as much as 10x the amount of work off of the same GPU that you might get with a hyperscaler. That advantage is part of how we’re able to just bring much more out of the CapEx that we spend than others are. Our CapEx has ticked up a little bit, and I think that that’s in response to the fact that we’ve seen an increase in terms of workers, but it’s nowhere close to what we’re seeing from the hyperscalers…

…And then third, you get it in a way that can scale up infinitely if it becomes wildly popular and can scale down instantly to zero. So you don’t blow the budget if somebody is not actually using the system. That’s very different than the hyperscalers, which in order to be able to get access to a GPU at a hyperscaler, anything close to a competitive price, you also have to commit leasing that server for an entire year, which, again, if the project that you’re leasing it for doesn’t go well, that’s out of your budget…

… I think that the work that we’re doing to really embed with customers is driving success there. And again, we’re still not to a point where we’re going to be doing a $100 million deal a quarter, but we will get to that point. And I think we’ve seen an enormous total addressable market for the Cloudflare Workers platform. And I think that will shift more and more spend away from what people are using the hyperscalers for.
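
The scale-to-zero economics described in the quotes above can be made concrete with a back-of-the-envelope comparison. Every number below is invented for illustration; none come from Cloudflare or any hyperscaler:

```python
# Back-of-the-envelope sketch: leasing a GPU year-round vs. paying only
# for work actually done. Every number here is invented for illustration.

HOURS_PER_YEAR = 8760

lease_cost_per_year = 30_000.0   # hypothetical annual cost to lease one GPU
utilization = 0.10               # the project only keeps the GPU 10% busy

# With a year-long lease, idle hours are still paid for, so the effective
# cost per hour of useful work is the lease spread over busy hours only:
effective_cost_per_busy_hour = lease_cost_per_year / (HOURS_PER_YEAR * utilization)

# With pay-per-work pricing, only busy hours are billed:
pay_per_work_rate_per_hour = 5.0  # hypothetical serverless GPU price

print(round(effective_cost_per_busy_hour, 2))  # 34.25
print(pay_per_work_rate_per_hour)              # 5.0
```

At low utilization the lease works out many times more expensive per unit of useful work, which is the dynamic the passage attributes to year-long hyperscaler GPU commitments.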

Cloudflare’s management thinks that the predominant business model of the internet in the AI era will shift away from advertising and subscriptions; Cloudflare’s recent acquisition, Human Native, will have an important role in helping the company come up with the next business model for the internet; Cloudflare is able to rewrite internet content that flows through its infrastructure, so it will be able to rewrite internet content in the best way for AI agents to consume; management thinks Cloudflare’s business is incredibly durable because it is able to automatically bring along the part of the internet that sits behind the company into whatever comes next in the AI era; management thinks 2026 will be the year where the future business model of the internet, based on AI Crawl Control, will emerge

In Human Native’s case, they’re really helping us think through what the next business model of the Internet is going to look like. It’s going to move, I think, away from advertisement. It’s going to move away from subscriptions. It’s going to move to something else. And Human Native, who came out of Google, are just extraordinary in thinking about what that future business model looks like. I think that you’re going to see extraordinary things from them; they fit right in at Cloudflare and we’re excited to have them…

…But then because our application services sit in front of people, and one of the things that people don’t understand (this is a lot different from what people think of as traditional CDNs or other things like that) is that we’re actually able to rewrite the content that flows through us as it flows through. So if it turns out that agents are better at speaking, I don’t know, Latin than at speaking English, we can literally rewrite the content that’s behind Cloudflare in Latin rather than in English. Now that’s not going to be what agents are good at, but they are probably going to be better at speaking code than at speaking other things that we might invent. So I think that what we’re able to do, and part of the reason we think that our legacy business is going to be incredibly durable, is that it’s going to be able to automatically bring along all of the rest of the Internet that already sits behind us into whatever comes next. And I think we’re going to figure that out…

So I think 2026 will be the time that we start really talking about what this future business model looks like and how that is going to impact us financially.

Cloudflare’s management thinks that agentic commerce could put a lot of pressure on small businesses, and management is figuring out how they can bring all these small businesses along in an incredibly intuitive and easy way for the small businesses to adopt; management does not have the solutions yet, but they’re confident they can figure it out

One of the things I’m thinking a lot about is what happens to small businesses in an agentic commerce world. There’s a lot of ways where agents could be very consolidating and actually put a lot of pressure on small businesses. And so I think us in combination with great companies that we’re working with, like a Shopify or a Visa or PayPal or Mastercard, we’ve got to figure out how we make sure that we bring all of these small businesses along and give them the right tools. And that’s exactly the sort of thing that we’re thinking about as we think about Act 4, and it’s not going to require you to go in and rebuild things. We want to make it one-click simple, where as soon as we figure out what really works, you push a button and whatever you had as your old shopping marketplace just comes along with it and gets to support whatever agents are going to be providing in the future. I don’t know exactly what all those things are going to look like, but we’ve got an incredible team.

AI companies are looking to Cloudflare’s traditional products to help them differentiate between human and non-human users of their services; non-AI companies are also looking to Cloudflare’s traditional products to help them differentiate between human and non-human users of their services because the non-human users were generating an order of magnitude more volume than the human users

The first place that we saw just demand was actually from a lot of the AI companies, where the AI companies would say to us, we can’t continue to operate our systems unless we can have the security and ability to deal with the load, which Cloudflare provides by default. Every time you run a query against an AI company, it’s pretty expensive to deal with those queries. And so being able to sort out who’s a human and who’s not a human, which is something we’re the best in the world at, is really important for the AI companies, and that’s driven actually just a lot of those initial relationships that are there.

What really took off in Q4, though, was where we saw other companies, media companies, e-commerce companies, companies that were just doing more traditional things online, seeing such an enormous uptick in how agents were interacting with their systems. I mean if any of you have used a tool like a ChatGPT or a Grok or a Claude, and you just watch how many different things it is looking at for every query that you send out, that’s just an order of magnitude increase in the volume of queries that are coming to the Internet. And so the people who are providing what is that Internet that they’re querying against, they need ways to do that in a way which is efficient and able to continue to scale. And Cloudflare is — and again, those application services functions that we have, the kind of Act 1 products that we have, are really critical to being able to deliver that.

Cloudflare’s newer but still-legacy Zero Trust products are helping users to secure AI agents

If you look at something like the new agents that people are running on their own machines often, the amazing thing is that people are waking up very quickly. We’re sort of speedrunning all of the security challenges that are out there, where all of a sudden you say, I’ve just given my agent access to everything in my life, what could go wrong? People are very quickly figuring out a lot could go wrong and so you got to put controls in place. And that’s exactly where our Act 2 or Zero Trust products come into play, where we’ve actually seen a real uptick even in a self-service business of the Zero Trust products.

Content publishers have been overwhelmingly positive towards Cloudflare’s Crawl Control product; Cloudflare’s management has been positively surprised by the reaction from research teams in the finance industry towards Crawl Control; AI companies may not necessarily like Crawl Control, but Cloudflare’s management thinks the AI companies understand why Crawl Control needs to exist; large technology companies have tried to establish content marketplaces, but Cloudflare’s management thinks that content publishers have higher trust in Cloudflare as a neutral 3rd party; management thinks 2026 will be the year where the future business model of the internet, based on Crawl Control, will emerge

[Question] Just double-clicking into Act 4, particularly in light of the wins, like the media company signing that $3.1 million contract for AI Crawl Control. So as you’re engaging with publishers, can you share early feedback around adoption of these opt-out controls to block scraping, and also the evolution of a structured marketplace model here?

[Answer] We’ve been sort of that neutral honest broker between the 2 sides that can come together and say, okay, like in order for this to all work, the Internet needs to have a business model, like people who create content deserve to get paid. And one of the things that actually surprised me to some extent, which might be relevant to a lot of you listening in, is we’ve actually been getting called not just from like the Associated Press and BBC and New York Times, but we’ve been getting calls increasingly from banks, where their research teams are saying, we’re actually seeing fewer people subscribe to and read our research because people are just turning to the AI companies, which are slurping all the data down and taking that intellectual property. Again, I think journalists deserve to get paid, but so do research analysts…

…The reaction from the content creator side has been just overwhelmingly positive. And we come back to something pretty simple, which is just if you create content, it should be up to you who gets access to it and who doesn’t, and we can provide the tools to do that. On the AI company side, they also — again, nobody wants to pay for something that they were getting for free. But I think that they understand that we’re a fair broker. And when we walk them through what happens if we don’t create some healthy ecosystem here, they say, we get it. We just want to make sure that everyone is treated fairly…

…Microsoft and Amazon have announced content marketplaces. And they may be successful, but what we’re hearing from both the AI companies and from the content creators is that because Cloudflare is that trusted neutral third-party that we can be that honest broker between them, they would rather we be the one to figure out what that future business model looks like as opposed to one of the hyperscalers, which is out there creating their own foundational models themselves and might have very different incentives. So I think 2026 will be the time that we start really talking about what this future business model looks like and how that is going to impact us financially.

Datadog (NASDAQ: DDOG)

Datadog’s management sees a positive demand environment, driven by cloud migration; management is seeing strong growth from both non-AI native companies and AI-native companies; in particular, the AI-native companies have very high growth and are going into production

We continue to see broad-based positive trends in the demand environment. With the ongoing momentum of cloud migration, we experienced strength across our business, across our product lines and across our diverse customer base. We saw a continued acceleration of our revenue growth. This acceleration was driven in large part by the inflection of our broad-based business outside of the AI-native group of customers we discussed in the past. And we also continue to see very high growth within this AI-native customer group as they go into production and grow in users, tokens and new products.

Datadog’s management sees the company’s AI initiatives as being split into 2 buckets; one bucket is AI for Datadog, where management is building AI products to make Datadog better for customers; in AI for Datadog, management made Bits AI SRE (site reliability engineering) Agent, which does root cause analysis, generally available in December 2025 and it had 2,000 trial and paying customers in January 2026; Datadog has other AI products, such as Bits AI Dev agent, Bits AI Security Agent, and the Datadog MCP (Model Context Protocol) server; Datadog MCP server saw an 11-fold increase in tool calls in 2025 Q4 compared to 2025 Q3; the other bucket is Datadog for AI, where management is building capabilities for end-to-end observability across the entire AI stack; management is seeing an acceleration in growth for the LLM (large language models) Observability product; LLM Observability has over 1,000 customers and the number of LLM spans customers are sending to Datadog is up 10x over 6 months; management will soon release AI Agent Console to monitor AI agents; management is working on GPU monitoring; management is seeing Datadog’s overall customer base increase their usage of GPUs; management is improving the ability of Datadog’s products to secure the AI stack against attacks; management continues to see customer interest grow for next-gen AI observability; 5,500 customers are sending AI data to one or more of Datadog’s AI integrations (was 5,000 in 2025 Q3); management recently launched Feature Flags, which could be the foundation for automatically validating applications written by AI agents; management thinks that observability products for LLMs are currently undifferentiated but will become differentiated in the future; management thinks observability tools for LLMs should be the same as for the rest of an organisation’s systems because LLMs do not work in isolation

We are executing relentlessly on our very ambitious AI road map, and I will split our AI efforts into 2 buckets: AI for Datadog and Datadog for AI.

So first, let’s look at AI for Datadog. These are AI products and capabilities that make the Datadog platform better and more useful for customers. We launched Bits AI SRE Agent for general availability in December to accelerate root cause analysis and incident response. Over 2,000 trial and paying customers have run investigations in the past month, which indicates significant interest and shows great outcomes with Bits AI SRE. And we’re well on our way with Bits AI Dev agent, which detects code level issues, generates fixes in production context and can even help release and monitor a fix. And Bits AI Security Agent, which autonomously triages SIEM signals, conducts investigations and delivers recommendations. The Datadog MCP server is being used by thousands of customers in preview. Our MCP server responds to the AI agent and user prompts and uses real-time production data and rich Datadog context to drive troubleshooting, root cause analysis and automation. And we’re seeing explosive growth in MCP usage, with the number of tool calls growing 11-fold in Q4 compared to Q3.

Second, let’s talk about Datadog for AI. This includes capabilities that deliver end-to-end observability and security across the AI stack. We are seeing an acceleration in growth for LLM Observability. Over 1,000 customers are using the product and the number of spans sent has increased 10x over the last 6 months. In 2025, we broadened the product to better support application development and iteration, adding capabilities such as LLM Experiments and LLM Playground, LLM Prompt Analysis and custom LLM-as-a-judge. And we will soon release our AI Agent Console to monitor usage and adoption of AI agents and coding assistants. We are working with design partners on GPU monitoring, and we are seeing GPU usage increase in our customer base overall. And we are building into our products the ability to secure the AI stack against prompt injection attacks, model hijacking and data poisoning among many other risks…

…We continue to see increased interest among our customers in next-gen AI. Today, about 5,500 customers use one or more Datadog AI integrations to send us data about their machine learning, AI and LLM usage…

…In software delivery, in January, we launched Feature Flags. They combine with our real-time observability to enable canary rollouts, so teams can deploy new code with confidence. And we expect them to gain importance in the future as they serve as a foundation for automating the validation and release of applications in an AI agentic development world…
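To make the canary-rollout idea concrete, here is an illustrative sketch (not Datadog’s actual Feature Flags API; all names are hypothetical) of how a percentage-based flag can deterministically bucket users, so a new code path is exposed to only a small, stable slice of traffic while real-time metrics are watched:

```python
import hashlib

def in_canary(user_id: str, flag: str, rollout_pct: int) -> bool:
    """Deterministically bucket a user into a percentage rollout.

    Hashing (flag, user_id) gives each user a stable bucket in 0-99,
    so the same user always sees the same variant, and widening the
    rollout percentage only ever adds users, never flips existing ones.
    """
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_pct

# Start by exposing 5% of users to the new code path; widen the
# percentage only if error and latency metrics stay healthy.
if in_canary("user-42", "new-checkout", 5):
    pass  # serve the new code path
else:
    pass  # serve the stable code path
```

The key design point is determinism: because the bucket is derived from a hash rather than a random draw, a canary can be rolled forward (or instantly back to 0%) without users flapping between variants.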

…We mentioned our LLM Observability product. There are a few other products in the market for that. I think it’s still very early for that part of the market, and that market is still relatively undifferentiated in terms of the kinds of products they are, but we expect that to shake out more into the future. We think, in the end, there’s no reason to have observability for your LLM that is different from the rest of your system, in great part because your LLMs don’t work in isolation. The way they implement their smarts is by using tools, the tools being your existing applications or new applications you build for that purpose. And so you need everything to be integrated in production, and we think we stand on a very strong footing there.

Example of an 8-figure land deal with a high-profile AI foundation model builder (most likely Anthropic); the model builder’s observability stack was fragmented; the model builder will consolidate more than 5 observability tools into Datadog; the model builder wants to focus on building its own products; this model builder is the 2nd high-profile model builder that Datadog has as a customer (with the other being OpenAI); every customer of Datadog is also using some in-house or open-source observability tools and the same goes for the AI companies; management is seeing AI model builders having the same reasons as non-AI companies for adopting Datadog, and that is Datadog is able to prove its value very quickly

We landed an 8-figure annualized deal and our biggest new logo deal to date with one of the largest AI foundational model companies. This customer has a fragmented observability stack and cumbersome monitoring workflows leading to poor productivity. This is a consolidation of more than 5 open source, commercial, hyperscaler and in-house observability tools into the unified Datadog platform that has returned meaningful time to developers and has enabled a more cohesive approach to observability. This customer is experiencing very rapid growth. Datadog allows them to focus on product development and supporting their users, which is critical to their business success…

…[Question] It’s now the second one after the other very big model provider. So clearly, that whole debate in the market of “oh, you can do that on the cheap somewhere” is not quite valid. Could you speak to that, please?

[Answer] Every customer we land has had some homegrown tooling. They have some open source. They might still run some open source; that’s typically what we see everywhere. The “it’s cheaper to do it yourself” is usually not the case. Your engineers typically are very well compensated and are a big part of the spend in these companies. Their velocity is what gates just about everything else in the business. And so usually, when we come in, when customers start engaging with us, we can very quickly show value that way. So it’s not any different from what we see with any other customer. And within the AI cohort, it’s not original at all; the AI cohort in general is a who’s who of the companies that are growing very fast and that are shaping the world in AI, and they’re all adopting our product for the same reasons, sometimes at different volumes because those companies have different scales, but the logic is the same.

Datadog’s management continues to believe that digital transformation, cloud migration, and AI adoption are long-term growth drivers of Datadog’s business; management thinks that agentic coding is beneficial for Datadog because it leads to more coding volume to observe, and the need for observability in areas where it was not necessary before; Datadog’s management thinks it’s very hard to tell what level of model-inferencing will happen because of the gargantuan amount of capex from the hyperscalers, but they think it’s likely to lead to more complexity in the technology ecosystem, which will benefit Datadog’s business

There is no change to our overall view that digital transformation and cloud migration are long-term secular growth drivers for our business. So we continue to extend our platform to solve our customers’ problems from end to end across their software development, production, data stack, user experience and security needs. Meanwhile, we’re moving fast in AI, by integrating AI into the Datadog platform to improve customer value and outcome and by building products to observe, secure and act across our customers’ AI stack…

…[Question] In the context of a lot of advancements when it comes to agentic frameworks, agentic deployments, the stuff that we’ve seen from Anthropic and new frontier models from OpenAI, just in terms of like what this means for observability as a category, defensibility of it in terms of can customers use these tools to build homegrown solutions for observability?

[Answer] There’s a few different ways to look at it. One is there’s going to be many more applications than there were before. Like people are building much more and they are building much faster. We covered that in previous calls, but we think that this is nothing but an acceleration of the increase of productivity for developers in general, so you can build a lot faster. As a result, you create a lot more complexity because you build more than you can understand at any point in time. And you move a lot of the value from the act of writing the code, which now you actually don’t do yourself anymore, to validating, testing, making sure it works in production, making sure it’s safe, making sure it interacts well with the rest of the world, with end users, making sure it does what it’s supposed to do for the business, which is what we do with observability. So we see a lot more volume there, and we see that as basically where observability can help. The other part that’s interesting is that a lot more happens within these agents and these applications. And a lot of what we do as humans now starts to look like observability. Basically, we’re trying to understand what the machine does. We’re trying to make sure it’s aligned with us. We’re trying to make sure the output is what we expected when we started, and that we didn’t break anything. And so we think it’s going to bring observability more widely into domains that it didn’t necessarily cover before…

…[Question] I’m wondering if you’ve collected enough signal from the last couple of years of CapEx, that trend to estimate how much of that is training related and when it might convert to inferencing where Datadog might be required? In other words, are you looking at this wave of CapEx and able to say it’s going to create a predictable ramp in your LLM observability revenue?

[Answer] I think it’s too reductive to peg that on LLM observability. I think it points to way more applications, way more intelligence, way more of everything into the future. Now it’s kind of hard to directly map the CapEx from those companies into what part of the infrastructure is actually going to be used to deliver value 2 or 3 or 4 years from now. So I think we’ll have to see what the conversion rate is on that. But look, it definitely points to very, very, very large increases in the complexity of the systems, the number of systems and the reach of the systems in the economy. And so we think it’s going to be of great help to our business, let’s put it this way.

Datadog experienced adoption growth in AI native customers in 2025 Q4 that significantly outpaced non-AI customers; Datadog now has more than 650 AI native companies (was 500 in 2025 Q3), of which 19 are spending more than $1 million (was 15 in 2025 Q3); 14 of the top 20 AI-native companies globally are Datadog customers; management chose not to share the percentage of revenue coming from AI native customers in 2025 Q4 (was 12% in 2025 Q3); the AI native companies are not dilutive for Datadog’s gross margin; the large AI native customers get the same kind of volume discount as the large non-AI customers

We are seeing continued strong adoption amongst AI-native customers with growth that significantly outpaces the rest of the business. We see more AI-native customers using Datadog with about 650 customers in this group. And we are seeing these customers grow with us, including 19 customers spending $1 million or more annually with Datadog. Among our AI customers are the largest companies in this space, as today 14 of the top 20 AI-native companies are Datadog customers…

…[Question] Can you give us the percent of revenue of the AI cohort this quarter?

[Answer] We didn’t — have not put it in there…

…[Question] On margin, are the large AI-native customers significantly dilutive to gross margin?

[Answer] On a weighted average, they’re not. As we’ve always said, for larger customers, it isn’t about the AI-natives or non-AI-natives, it has to do with the size of the customer. We have a highly diversified customer base. So I would say we’re essentially expecting a similar type of discount structure in terms of size of customer as we have going forward. And there are consistent ongoing investments in our gross margin, including data centers and development of the platform. So I think it’s more or less what we’ve seen over the past couple of years, not really affected by AI or non-AI native.

Datadog’s management’s basis for guidance is to have conservative assumptions on usage growth trends observed in recent months; in setting guidance, management made the conservative assumption that Datadog’s core business is growing faster than the business from its large AI customer (OpenAI)

Our guidance philosophy overall remains unchanged. As a reminder, we based our guidance on trends observed in recent months and apply conservatism on these growth trends…

…We noted that with the guidance being 18% to 20% and the non-AI or heavily diversified business being 20% plus, that would imply that the growth rate of that core business assumed in the guidance is higher than the growth rate of the large customer. It doesn’t mean the large customer is growing any which way. It’s just that in our consumption model, we essentially don’t control that. And so we took a very conservative assumption there.

Datadog’s management thinks that as agentic developers proliferate, there will be a lot more automation in observability workflows, but there will still be a need for UIs (user interfaces) for human developers to interact; to prepare for the rise in automation in observability workflows, management is exposing a lot of Datadog’s functionality directly to agents; management thinks it’s likely that Datadog’s MCP (Model Context Protocol) server will be part of how agents interact with Datadog’s products

[Question] In a world where there’s a greater mix between human SREs and agentic SREs, is there any sort of evolution that we need to think about in terms of whether it’s UI or how workflows work in observability and how maybe Datadog sort of tries to align with that evolution that’s likely to come in the next couple of years?

[Answer] There’s going to be an evolution, that’s certain. There’s going to be a lot more automation. We see it today, like we see the — all the signs we see point to everything moving faster, more data and more interactions, more systems, more releases, more breakage, more resolutions of those breakages, more bugs, more vulnerabilities, everything. So we see an acceleration there. At the end of the day, the humans will still have some form of UI to interact with all that. And a lot of the interaction will be automated by agent. So we’re building the products to satisfy both conditions. So we have a lot of UIs, and we are able to present the humans with UIs that represent how the world works, what their options are, give them familiar ways to go through problems and to model the world. And we also are exposing a lot of our functionality to agents directly. We mentioned on the call, we have an MCP server that is currently in preview and that is really seeing explosive growth of usage from our customers. And so it’s a very likely future that part of our functionality is delivered to agents through MCP servers or the likes. Part of our functionality is directly implemented by our own agents, and part of our functionality is delivered to humans with UIs.

Datadog’s management thinks that LLMs (large language models) are getting better all the time; management sees 2 parts to Datadog’s defensibility against LLMs; the 1st part is Datadog understands how all the data fits together; the 2nd part is Datadog has the foundation to provide proactive, real-time anomaly detection and solutions as Datadog is embedded in an organisation’s data plane; management thinks that the world of observability is shifting towards one where it’s important for observability providers to provide proactive, real-time anomaly detection and solutions; management is developing Datadog’s ability to provide proactive, real-time anomaly detection and solutions; the data planes in a typical organisation Datadog works with are real time and many orders of magnitude larger in volume than what an LLM typically sees; management is not seeing any change in the intensity of competition for Datadog’s business from LLMs; management thinks it’s only rational for all AI native customers to use Datadog’s products

We definitely see that LLMs are getting better and better, and we’ll bet on them getting significantly better every few months as we’ve seen over the past couple of years. And as a result, they are very, very good at looking at broad sets of data. So if you feed a lot of data to an LLM and ask for an analysis, you’re very likely to get something that is very good and that is going to get even better.

So when you think of what we have that is fundamentally our moat here, there’s 2 parts. One is how we are able to assemble that context, so we can feed it into those intelligence engines. And that’s how we aggregate all the data we get, we parse out the benefits. We understand how everything fits together and we can feed that into the LLM. That’s in part what we do, for example, today, we expose these kinds of functionality behind our MCP server. And so customers can recombine that in different ways using different intelligence tools.

But the other part that we think where the world is going for observability is that right now, the SDLC [software development life cycle] is accelerating a lot, but it’s still somewhat slow. And so it’s okay to have incidents and run post-hoc analysis on those incidents and maybe use some outside tooling for them. Where the world is going is you’re going to have many more changes, many more things. You cannot actually afford to have incidents to look at for everything that’s happening in your system. So you need to be proactive. You’ll need to run analysis in stream as all the data flows through, you’ll need to run detection and resolution before you actually have outages materialize. And for that, you’ll need to be embedded into the data plane, which is what we run. And you also need to be able to run specialized models that can act on that data as opposed to just taking everything and summarizing everything after the fact 10, 15 minutes later. And that’s what we’re uniquely positioned to do.

We are building that. We’re not quite there yet, but we think that a few years from now, that’s what the world is going to run, and that’s what makes us significantly different in terms of how we can apply anomaly detection, intelligence and preemptive resolution into our systems…
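To illustrate what “running detection in stream as the data flows through” can mean in its simplest form, here is a toy sketch (not Datadog’s actual anomaly-detection engine; the class and thresholds are assumptions for illustration). A rolling z-score judges each point against only a fixed window of recent values, so it can flag a spike the moment it arrives without storing the full history:

```python
from collections import deque
import math

class RollingZScore:
    """Flag anomalous points in a metric stream using a rolling z-score.

    Only a fixed window of recent values is kept, so each point is judged
    in stream as it arrives rather than in a batch after the fact.
    """

    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.values = deque(maxlen=window)  # old values fall off automatically
        self.threshold = threshold

    def observe(self, x: float) -> bool:
        """Return True if x deviates sharply from the recent window."""
        anomalous = False
        if len(self.values) >= 2:
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = math.sqrt(var)
            if std > 0 and abs(x - mean) / std > self.threshold:
                anomalous = True
        self.values.append(x)
        return anomalous

detector = RollingZScore(window=30, threshold=3.0)
# A steady stream of ~100ms latencies, then a sudden 10x spike:
stream = [100.0 + (i % 5) for i in range(30)] + [1000.0]
flags = [detector.observe(x) for x in stream]  # only the spike is flagged
```

Production systems use far more sophisticated models, but the structural point is the same one made in the quote: the detector sits in the data path and passes judgment per point, rather than summarizing everything minutes later.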

…The data planes we’re talking about are very real time, and there are many orders of magnitude larger in terms of data flows, data volumes than what you typically feed into an LLM. So it’s a bit of a different problem to solve…

…[Question] I wanted to ask you about competition and how the LLM rise is impacting share shifts. Just talk about that and how Datadog will be impacted?

[Answer] There hasn’t been any particular change in competition in that we see the same kind of folks and the positions are relatively similar. And we are pulling away. We’re taking share from anybody who has scale. And I know there’s been noise. There were a couple of M&A deals that came up, and we got some questions about that. The companies in there were not particularly winning companies, nothing that we saw in deals, nothing that had a large market impact. And so we don’t see that as changing the competitive dynamics for us in the near future…

…At the end of the day, it should be irrational for customers — for all customers in the AI cohort not to use our product…

…I think as you look at being in-stream looking at 3, 4, 5 orders of magnitude, more data, looking at the data in real time, and passing judgment in real time on what’s normal, what’s anomalous and what might be going wrong doing that hundreds, thousands, millions of times per second. I think that’s what is going to be our advantage and where it’s going to be much harder for others to compete, especially general purpose AI platforms.

Datadog’s management thinks the best way to justify the existence of Datadog in an environment where observability bills are going up because of AI usage is to prove the cost-savings to customers

[Question] Tell us a little bit about how some of those conversations evolve when the customer sees that in order to do observability for more AI usage, perhaps that Datadog bill is going up.

[Answer] There’s only 2 reasons people buy your product: to make more money or to save money. So whatever you do, when customers use a new product, they need to see a cost savings somewhere or they need to see that they’re going to get to customers they wouldn’t get to otherwise. So we have to prove that. We always prove that. Any time a customer buys a product, that’s what is happening behind the scenes. In general, when customers add to our platform as opposed to bringing another vendor in or another product in, they also spend less by doing it on our platform.

Datadog’s management is seeing great productivity gains when employing AI internally

In terms of AI, to date, we are using it in our internal operations. So far, the first signs of what we’re seeing are productivity and adoption…

…We see great productivity gains with AI there, and at this point it helps us build more, faster, and solve more problems for our customers. And we’re very busy adopting AI across the organization.

Paycom Software (NYSE: PAYC)

IWant allows anyone to become an expert in the system without training; Forrester found that organizations with more than 500 employees that use IWant experienced an ROI of over 400%; with IWant, managers save up to 600 hours, executives up to 60 hours, HR teams up to 240 hours, and employees 3,600 hours, on an annual basis; the leaders of organisations using IWant get immediate value out of the product without any training; IWant usage in January 2026 is up 80% from 2025 Q4; IWant’s functionality is continuously being improved

Our most advanced AI solution, IWant, is designed to accelerate the speed to value by allowing anyone to become an expert in the system without any training. Forrester’s recent analysis of a composite organization with more than 500 employees found that organizations using IWant experienced an ROI of over 400%, driven by productivity gains at every level. Managers save as many as 600 hours per year, executives up to 60 hours, HR teams up to 240 hours and employees across the organization collectively reclaim 3,600 hours annually.

Leaders describe IWant as a catalyst for deeper insight, and one CEO remarked, “I get immediate value. Without any training or knowledge of Paycom, I can go in and immediately understand more about my business”…

…IWant usage is up 80% in January alone, and that’s from the fourth quarter…

…We continue to build out the IWant system. We continue to add more and more functionality to it. It continues to get stronger and stronger.

Paycom’s management thinks that AI is not a threat to Paycom; management thinks AI will give Paycom the opportunity to enter adjacent industries that it was not able to in the past

I think there’s a little misjudgment about the AI thesis materializing as a threat weapon that will be used against us. I mean, AI is our friend at Paycom. And I’ve worked very hard to ensure that the misunderstanding of AI’s impact on us isn’t on our end.

And I just believe as you look into the future, we have opportunities now that we didn’t have in the past, right? Like the speed of development has increased, the pace of the user buyer being able to digest it might lag a little bit, but we can develop a lot more today than what we’ve been able to in the past. We’re in this age of software development and in some instances, replacement of specific software. Paycom can get into every adjacent industry now within weeks or months. And I’ll remind everybody that I was the first Bob [ coater ] back in 1998. So there are several easy-to-displace industries that don’t just sit ancillary to our industry, but they’re dependent upon our industry of where the data starts. And so now that we can develop anything very quickly and use all these technologies to replace other industries in a matter of weeks or months, we’re excited about how that — what that looks like for our future as well.

Paycom’s management is currently not seeing any impact on overall employment from AI, but is not dismissing impacts in the future; management thinks that Paycom still has ample growth opportunities even if AI does lead to lower overall employment

[Question] The AI impact to overall employment. How do you see that impacting Paycom business?

[Answer] I’d say we’re not seeing it. I’m not going to dismiss potential impacts on us in the future. I would say that we are not overexposed to any one industry, any one client, or any one client size. And again, we only have 5% of the market. And so you could do some calculations, and we’re the most automated product in the industry and the best product for the best value that someone is going to achieve throughout the industry. And so when you look at that, I think that you could see some adjustments in employment, which again, we have not seen. But I mean, even if you did, I still think our opportunity is intact.

Shopify (NASDAQ: SHOP)

Shopify has been building for AI shopping for some time; orders coming to Shopify stores from AI search have increased 15x since January 2025, albeit from a small base; management thinks AI shopping helps connect smaller merchants to the right buyers who might otherwise never have discovered them; management thinks AI shopping benefits consumers because they gain access to a personal shopper; management thinks AI shopping will increase e-commerce penetration faster than it would have otherwise; management thinks it’s important that AI shopping is at least as good as shopping at a merchant’s digital storefront; Shopify has introduced Shopify Agentic Storefronts, which lets all major AI platforms access billions of products from Shopify merchants accurately and in an up-to-date way; AI platforms are plugged into the best commerce source of truth with Shopify, and this translates to better experiences for consumers; through the Agentic plan, brands not already using Shopify will soon be able to sell through the same AI platforms as Shopify merchants; Shopify built the Universal Commerce Protocol (UCP) with Google as the common rails to support agentic commerce; UCP is payments-agnostic and keeps merchants’ essential checkout logic intact; UCP is the only protocol that covers the full commerce journey end-to-end; leading retailers are already using UCP; agentic commerce does not bypass Shopify’s checkout; management has no opinion on which LLM platform will be the dominant one for agentic commerce and just wants to allow merchants to sell through agentic commerce; management sees merchants’ economics remaining the same between agentic commerce and selling directly from their stores

We’ve been building for this new era of AI shopping for a long time, and it’s now here. In fact, since January 2025, orders coming to Shopify stores from AI search are up 15x. Now that’s on a small base, but that’s still a really big jump in 12 months. For our merchants, it matters because it powers the long tail of commerce, surfacing smaller merchants to the right buyers who might otherwise have never discovered them. This is merit-based discovery at scale. For buyers, it matters because it’s like having a personal shopper in your pocket, someone who really understands them, their taste, their preferences, their size…

…For Shopify, it matters because we believe it can bend the curve of e-commerce penetration by stripping out friction, pulling late adopters in and moving more everyday purchases online…

…It is critical that shopping in an AI conversation is at least as good as shopping at the merchant’s online store…

…Shopify Agentic Storefronts syndicates billions of products through our catalog to all major AI platforms: Google AI Mode and Gemini, ChatGPT, Microsoft Copilot. One click, and our merchants get instant access to millions of potential buyers who are actively looking for their products. We’ve already seen huge brands like Vuori, Glossier, Steve Madden and SPANX sign up and start selling. Plus, through the catalog, our partners get the most accurate, up-to-date data for billions of products from millions of the best brands on the planet. And this is really important because when they tap into our catalog, they’re not just ingesting another feed, they’re plugging into the best commerce source of truth. And that source of truth means cleaner matching and fresher data, which translates directly into faster and more trustworthy experiences.

The new Agentic plan means that any brand not already using Shopify will soon be able to sell through the same AI platforms as our merchants as well as on the Shop app. Why? Because frankly, when commerce flows freely across agents, everybody wins…

…We built the Universal Commerce Protocol, or UCP. UCP is infrastructure. It’s not a product. It’s the common rails Agentic commerce runs on. Shopify co-developed this with Google because we know commerce better than anyone. It’s an open standard for any agent to connect with any brand on the Internet. UCP is built to flex to the many ways commerce happens. It’s payment agnostic by design. It keeps merchants’ essential checkout logic intact without forcing them to rebuild their customizations over and over again to fit our system. UCP is the only protocol that covers the full commerce journey end-to-end, from search to cart, then checkout to post order, and it’s already being used by the world’s leading retailers…

…LLMs do not bypass Shopify’s Checkout. Checkout is really 2 parts. Think of it this way. You have a front end that’s the user interface that buyers interact with and the back end that processing everything server to server. So if you think about a Shopify store today, Shopify runs both the front end and the back end. And under UCP, Shopify still powers the overall experience, but the merchant gets to keep their own checkout system on the back end. Now with something like ChatGPT, for example, OpenAI will run the front end, which is sort of the screens and the forms that the buyer uses. But Shopify still runs the back end. And so things like order processing and payments through Shopify Payments, that all runs through Shopify’s infrastructure…

…We want to make sure that whatever surface, whatever permutation is the one that actually becomes the mainstay in Agentic, it reflects exactly the experience that the merchants want, similar to what they have in the online store. And so the economics for Shopify merchants are the same as if the transaction happened in the online store as well. There should be no difference there.

Shopify’s on-platform AI assistant, Sidekick, proactively helps merchants prioritise and execute tasks; Sidekick’s usefulness is enhanced because Shopify powers a merchant’s store, checkout data, and apps; in the 3 weeks since Sidekick’s latest edition was released, it has generated almost 4,000 custom apps, created over 29,000 automations, built almost 355,000 task lists, and edited over 1.2 million photos; Sidekick Pulse is a new feature in Sidekick that surfaces tailored advice for merchants; Sidekick Pulse recently recommended that a Shopify jewelry merchant bundle 4 products because it knew the 4 products were best sellers and that bundles tend to convert better

Our on-platform AI assistant, Sidekick, has come a long way in a year. Sidekick is effectively a co-founder for our merchants. It uses everything it knows about your business, and it proactively tells you which tasks to prioritize. And it will even help you execute those tasks. Because Shopify powers the store, checkout data and apps, Sidekick can see the entire picture and do the work in one place…

…In just 3 weeks after our latest edition dropped, Sidekick generated almost 4,000 custom apps, created over 29,000 automations with Shopify Flow, built almost 355,000 task lists and edited over 1.2 million photos. So it’s clear that Sidekick is doing real heavy lifting for our merchants…

…Sidekick Pulse is our new feature that proactively helps merchants grow their business. It works in the background to surface tailored advice that’s grounded in each merchant’s business, powered by over 2 decades of data…

…Last week, Sidekick Pulse made a recommendation to one of our jewelry brands. It suggested bundling 4 separate products and selling them together as a stack. Why? Because it knew that those 4 products were already best sellers, and it also knew that bundles tend to convert better and drive up cart value. Personalized data analysis paired with intelligence gained from hundreds of millions of other transactions. This is where our AI assistant really becomes the AI co-founder. It’s bespoke, it’s intuitive.

Shopify’s new app SimGym simulates real buyer behavior to provide feedback on store changes before they are shipped

Our new app SimGym simulates real buyer behavior to give you feedback on changes to your store before you even ship them.

0.5 million merchants have used AI within Shopify’s online store editor to create 6.5 million custom elements; Shopify’s online store editor allows anyone to design without code

Within our online store editor, more than 0.5 million merchants have used AI to create 6.5 million custom elements. Now anyone can design without code. This is really Shopify at its best: massive complexity transformed into a tool for anyone with imagination, no technical skills required.

Shopify’s management believes AI advances will make Shopify even more essential for merchants

As AI advances, Shopify becomes even more essential. AI transforms interfaces and accelerates the pace of change, but it doesn’t alter the underlying architecture of commerce. Commerce will always require speed, reliability and trust at a global scale. When I say scale, consider the billions of transactions that we facilitate. But it’s not just about the volume. It’s the comprehensive commerce experience we support. When an AI agent surfaces a product in any interface, merchants still need a reliable, secure and compliant path to purchase and post purchase. They still need our ecosystem of buyers, developers and partners. We help merchants be everything everywhere all at once. Representing over 14% of U.S. e-commerce today and rapidly growing percentages in many geographies across the globe, we have an unparalleled view of commerce. Simply put, we are the experts at commerce. AI will be a force multiplier. It will help us achieve our goals of democratizing entrepreneurship, inspiring more merchants, driving more transactions and creating more commerce channels.

Shopify was able to accelerate product development in 2025 without growing the size of the team because of the use of AI

Throughout 2025, we achieved operating leverage in each of R&D, sales and marketing and G&A, largely due to disciplined headcount management. By leveraging AI, automation and our proprietary project management and talent management systems, we’ve been able to accelerate our product development capabilities without growing the size of the team.

Shopify’s management sees the Agentic plan as an on-ramp for non-Shopify merchants to enter the Shopify ecosystem, similar to how Commerce Components works

The Agentic plan opens our infrastructure to all brands. And I think this idea that we’re bringing Agentic Commerce to every brand, whether or not they’re on Shopify, we think will be — I mean, it certainly has already been an incredible way for us to start conversations with brands who might not be ready to migrate or have not anticipated a full forklift migration just yet, but they don’t want to miss out on this incredible opportunity that might be Agentic Commerce. And so in a similar vein to how we created Commerce Components a couple of years ago, where non-Shopify merchants can use things like Shop Pay or simply use Shopify Checkout as a component, that allowed us to start conversations with brands that we weren’t otherwise talking to. In some cases, some of those brands who came to us initially just for Shop Pay are now entirely on Shopify. So certainly, we think this could be an incredible on-ramp, just like the Commerce Components play was.

The Catalog is important for Shopify’s agentic commerce ambitions because it is a source of truth for agents, and agents do not have to rely on scraping information from the internet

Tobi said something incredibly important recently about Catalog. He said that everyone else has to scrape the Internet; we actually have the source of it. We have structured billions of products so agents can surface the most relevant items in seconds, and products are going to be surfaced based on relevance, so this sort of merit-based discovery is going to happen. I think that every retailer and every merchant on the planet is thinking about how they can get in front of as many buyers and consumers as possible on Agentic. If they continue down that path and do the math, more and more, they realize that Shopify is the company that is front and center.

Shopify’s management appears to see UCP (Universal Commerce Protocol) as being the significantly more important rails for agentic commerce compared to OpenAI’s ACP (Agentic Commerce Protocol)

[Question] Can you help us understand the UCP versus ACP, the other standard that OpenAI and Stripe are putting forward. Are these overlapping standards? Do they compete? Are they complementary in any way?

[Answer] Yes. Look, the goal is simple with UCP. It’s one common language for agents and retailers. The idea is that merchants can keep the brand, the attributions, buyers get these incredibly trustworthy experiences, and Agentic Commerce can scale. UCP is specifically geared towards being a protocol that covers the full commerce journey end-to-end, from search to cart, then checkout. It includes post order. It keeps merchants’ essential checkout logic intact.

It doesn’t force them to rebuild customizations over and over again. It’s payment agnostic by design. It’s built to flex in many ways. I mentioned a couple of examples in my prepared remarks. I mean you think about ButcherBox or you think of AG1, for example, those — that subscription logic is really complex because sometimes you want to skip a month, sometimes you want to double up. If you’re on vacation, you want to do a hold or some of the larger furniture companies on Shopify that do this incredible white glove delivery where you can set the exact time and date for your couch being delivered.

These things need to be ported over into the Agentic world, and UCP does that. So in our view, UCP covers the full commerce journey end-to-end. And we think — we have 20 years of doing this. Commerce is very complex. It is easy to get it wrong. And I think that it’s more than just a transaction. It’s an entire experience and UCP covers all of that. And we’re really proud of what we did with our friends at Google. It was an incredible experience to work on it with them, but it works, and we think we’re already seeing incredible adoption from some of the largest retailers on the planet.

Shopify’s management is not seeing a competitive threat develop in terms of companies choosing to replace or bypass Shopify’s solutions with vibe-coded tools

[Question] About the feedback from merchants having discussions at the Board level about moving to Shop. Specifically, AI, the feedback that you’re getting from companies in terms of the AI road map, is that — I imagine it’s influencing decisions. Are you also seeing merchants evaluate custom solutions in light of what they can do with AI tools?

[Answer] I think a lot of the largest retailers, certainly the ones I’m meeting with, I mentioned brands like General Motors or L’Oreal or SuitSupply or Amer Sports, who runs Wilson and Salomon. What we hear from them is they’re looking — if they’re not on Shopify already, usually, they come to us with a particular problem. In some cases, it’s — we want to make sure we don’t miss out on Agentic. In other cases, they’re coming to us because they want to replace their homegrown system that they built many years ago for e-commerce. They don’t want to have 400 engineers anymore. They want to effectively come to Shopify because they want to go back to what they do best, which is they want to build furniture. They want to be a cosmetics company. They don’t necessarily want to have this massive engineering team… I think the days of let’s just build everything ourselves in-house is long gone. And I think that gives Shopify an incredible opportunity.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adyen, Datadog, Mastercard, Paycom Software, Shopify, and Visa. Holdings are subject to change at any time.

What We’re Reading (Week Ending 15 February 2026)


Here are the articles for the week ending 15 February 2026:

1. Before the market declared SaaS dead, it should have tested Anthropic’s new tools first. We did – Jim Wagner

Lawyers are not early adopters by temperament, and they don’t grade on a curve. A tool that reviews a contract and misses a material protection doesn’t get classified as “promising but incomplete.” It risks being shelved. Permanently. The standard is binary: either the tool is reliable enough that I can build a workflow around it, or it isn’t. There is no middle ground where a legal team says “it caught seven out of ten critical issues, so let’s use it for now.”

This is especially true in regulated environments — clinical trials, financial services, healthcare — where a missed clause isn’t an aesthetic problem. It’s a liability exposure, a regulatory finding, or a damaged institutional relationship. The question isn’t whether AI can review contracts. It can. The question is whether it can do so at the threshold required for a professional to rely on it…

…A clinical trial agreement is a different animal. It’s longer, more technically complex, and touches regulatory frameworks — HIPAA, FDA reporting obligations, IRB oversight, 21 CFR Part 54 financial disclosure — that require genuine domain expertise. The provisions interact with each other in ways that matter: a change to monitoring visit procedures can impact confidentiality obligations; a publication review period needs to account for patent deferral timelines; a subject injury provision needs to include a safe harbor for protocol deviations made to protect patient safety.

Once again, we gave Claude the identical playbook TCN uses — one specifically structured for AI consumption, with clear logic and well-defined positions — and ran both systems against the same clinical trial agreement.

The gap didn’t narrow. It widened.

TCN made 101 insertions of required protective language and 62 targeted deletions — 163 substantive changes in total. Claude made 7 insertions and 4 deletions. Tellingly, Claude’s changes were largely find-and-replace-level revisions: substituting “immediately” with “promptly,” replacing “sole” with “reasonable,” increasing an insurance figure, and adding pandemic language to a force majeure clause. These are real edits. They are also the edits a first-year associate would make in the first twenty minutes of review…

…These results are not a reflection of Claude’s quality as a language model. Claude is an extraordinarily capable general-purpose AI, and we use it daily in our own work. The gap is a reflection of architecture and ambition.

Claude’s legal plugin reads an entire agreement and an entire playbook, then attempts to produce all of its analysis and redlines in a single pass. This is analogous to asking a lawyer to read a thirty-page contract and a fifty-topic playbook simultaneously, then dictate every markup from memory in one sitting. Issues inevitably get lost — not because the lawyer lacks ability, but because the task exceeds what any single-pass process can reliably accomplish.

A purpose-built system works differently. Each playbook position is matched against the agreement independently and analyzed in a dedicated step with only the relevant clause text and guidance in front of it. Nothing competes for attention. Every position in the playbook is programmatically guaranteed to be evaluated. The system doesn’t need to “remember” to check a provision — it cannot skip one.

This also explains why the gap widened on the longer, more complex clinical trial agreement. The more provisions, the more playbook positions, and the more regulatory context a single-pass system must hold in working memory simultaneously, the more it drops. A purpose-built pipeline scales linearly. A single-pass approach degrades…
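The architectural contrast described above can be sketched in a few lines. This is an illustrative sketch, not TCN's or Anthropic's actual code; the function names, the stub "model", and the toy playbook are all hypothetical. It shows why a purpose-built pipeline is programmatically exhaustive: the loop over playbook positions cannot skip one, and each model call sees only the relevant clause text.

```python
def single_pass_review(agreement: str, playbook: list[dict], llm) -> str:
    """Single pass: one model call must hold every provision and every
    playbook position in working memory at once, so issues can silently drop."""
    prompt = f"Apply all {len(playbook)} positions to this agreement:\n{agreement}"
    return llm(prompt)

def pipeline_review(agreement: str, playbook: list[dict], llm, retrieve) -> list[str]:
    """Purpose-built pipeline: one dedicated step per playbook position,
    so every position is guaranteed to be evaluated against its clause."""
    edits = []
    for position in playbook:                           # cannot skip a position
        clause = retrieve(agreement, position["topic"])  # only the relevant text
        prompt = (f"Clause:\n{clause}\n\nRequired position:\n"
                  f"{position['guidance']}\n\nPropose a redline, or NONE.")
        edit = llm(prompt)
        if edit != "NONE":
            edits.append(edit)
    return edits

# Toy demo with a stub retriever and a stub "model" (both hypothetical).
def stub_retrieve(agreement, topic):
    return next(line for line in agreement.split("\n") if topic in line)

def stub_llm(prompt):
    return "insert 30-day notice period" if "termination" in prompt else "NONE"

agreement_text = "confidentiality: 5 years\ntermination: at will\nvenue: Texas"
playbook = [
    {"topic": "confidentiality", "guidance": "cap at 3 years"},
    {"topic": "termination", "guidance": "require 30-day notice"},
    {"topic": "venue", "guidance": "prefer Delaware"},
]
edits = pipeline_review(agreement_text, playbook, stub_llm, stub_retrieve)
# edits == ["insert 30-day notice period"]
```

The pipeline's cost scales linearly with the number of positions (one narrow call each), while the single-pass prompt grows with the whole agreement and the whole playbook at once, which matches the degradation the article observed on the longer clinical trial agreement.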

…The stock market’s reaction treated Anthropic’s announcement as if a general-purpose model with a vertical plugin is architecturally equivalent to purpose-built vertical software. It isn’t — and the evidence is now available for anyone willing to run an actual test.

But there’s a more fundamental point. Nothing Anthropic announced addresses multi-document congruence, multi-party collaboration, or institutional workflow orchestration. A Claude user reviewing a clinical trial agreement operates in a single chat window with a single document. The protocol, consent form, budget, and coverage analysis — all of which must be internally consistent with the contract — exist nowhere in that workflow. Imagine five users with five separate skills in five disconnected chat windows, each trying to keep their work coordinated, cross-checked, and accurate. There is no shared data model. No audit trail. No collaboration layer. No mechanism to ensure that a change to the protocol ripples correctly through the budget, the consent form, and the contract.

The natural counterargument is that agentic AI frameworks — autonomous agents that chain tasks, manage state, and coordinate across documents — will close this gap. They will have an impact; we use them ourselves, and we take that seriously. But agentic frameworks don’t arrive pre-built with plug-and-play domain solutions. They are tools, not answers. An agent orchestrating clinical trial study startup still needs a deep contextual understanding of the subject matter, the stakeholder requirements, and the interconnectedness of every document and every party involved. It needs to know that a change to a protocol’s schedule of events must ripple through the budget, the consent form, and the coverage analysis — and it needs to know how. That’s not something you install. It’s something you build — substantial work that relies on deep expertise with respect to the subject matter and AI implementation, refined across thousands of agreements. The same architectural principles that separate a plugin from a platform will separate a generic agent from a team of purpose-built ones.

2. As AI enters the operating room, reports arise of botched surgeries and misidentified body parts – Jaimi Dowdell, Steve Stecklow, Chad Terhune and Rachael Levy

In 2021, a unit of healthcare giant Johnson & Johnson announced “a leap forward”: It had added artificial intelligence to a medical device used to treat chronic sinusitis, an inflammation of the sinuses. Acclarent said the software for its TruDi Navigation System would now use a machine-learning algorithm to assist ear, nose and throat specialists in surgeries.

The device had already been on the market for about three years. Until then, the U.S. Food and Drug Administration had received unconfirmed reports of seven instances in which the device malfunctioned and another report of a patient injury. Since AI was added to the device, the FDA has received unconfirmed reports of at least 100 malfunctions and adverse events.

At least 10 people were injured between late 2021 and November 2025, according to the reports. Most allegedly involved errors in which the TruDi Navigation System misinformed surgeons about the location of their instruments while they were using them inside patients’ heads during operations…

…In May 2023, Dean was using TruDi in another sinuplasty operation when patient Donna Fernihough’s carotid artery allegedly “blew.” Blood “was spraying all over” – even landing on an Acclarent representative who was observing the surgery, according to a lawsuit Fernihough filed in U.S. District Court in Fort Worth against Acclarent and several manufacturers. One of Fernihough’s carotid arteries was damaged. She suffered a stroke the day of the surgery, according to her suit.

Acclarent “knew or should have known that the purported artificial intelligence caused or exacerbated the tendency of the integrated navigation system product to be inconsistent, inaccurate, and unreliable,” the suit alleges.

Acclarent has denied the allegations in both suits, which are ongoing, according to court filings. The company says it did not design or manufacture the TruDi system but only distributed it, according to court filings. Acclarent’s owner, Integra LifeSciences, told Reuters there’s no evidence of a link between the AI technology and any alleged injuries…

…Reuters found that at least 1,401 of the reports filed to the FDA between 2021 and October 2025 concern medical devices that are on an FDA list of 1,357 products that use AI. The agency says the list isn’t comprehensive. Of those reports, at least 115 mention problems with software, algorithms or programming.

One FDA report in June 2025 alleged that AI software used for prenatal ultrasounds was misidentifying fetal body parts. Called Sonio Detect, it uses machine learning techniques to help analyze fetal images.

“Sonio detect software ai algorithm is faulty and wrongly labels fetal structures and associates them with the wrong body parts,” stated the report, which does not say that any patient was harmed. Sonio Detect is owned by Samsung Medison, a unit of Samsung Electronics. Samsung Medison said the FDA report about Sonio Detect “does not indicate any safety issue, nor has the FDA requested any action from Sonio.”…

…The FDA requires clinical trials for new drugs, but medical devices face different screening. Most AI-enabled devices coming to market aren’t required to be tested on patients, according to FDA rules. Instead, makers satisfy FDA rules by citing previously authorized devices that had no AI-related capabilities, says Dr. Alexander Everhart, an instructor at Washington University’s medical school in St. Louis and an expert on medical device regulation.

Positioning new devices as updates on existing ones is a long-established practice, but Everhart says AI brings new uncertainty to the status quo.

“I think the FDA’s traditional approach to regulating medical devices is not up to the task of ensuring AI-enabled technologies are safe and effective,” Everhart told Reuters. “We’re relying on manufacturers to do a good job at putting products out. I don’t know what’s in place at the FDA represents meaningful guardrails.”

3. Clouded Judgement 2.13.26 – Build vs Buy – Jamin Ball

The cost of creating software is going to zero. The risk isn’t that someone will vibe code an internal CRM replacement…The risk is that 10 companies could now create a new CRM, from the ground up, built with a new end user in mind (agents vs people), with a business model for the AI world (consumption / usage vs seats), and now all of a sudden the market is flooded with offerings and the legacy space commoditizes.

This, to me, is the real risk. Software broadly commoditizes, with a new crop of software / value emerging. A big constraint to the development of software is engineering resources. Before the cloud, a constraint was how quickly could you stand up racks of servers to support user growth. In the cloud era that was commoditized, and engineering resources became the constraining factor (how quickly could you develop software). With AI, that constraining resource (engineering velocity) is going away.

So what happens from here…The world is about to be flooded with software. For companies that can’t innovate and capture this next S-Curve of innovation, they will slowly fade to irrelevance. They will be valued as companies in a post-growth industry, and receive a post-growth valuation multiple (see ya revenue multiples…). For those who can, a new vector of growth lies ahead of them…

…If we bring this back to the “is software dead” conversation, many are pointing to the recent Q4 earnings reports (we’re in the middle of earnings season right now) as “evidence” that AI isn’t eating software. For the most part, earnings have been good! Retention figures don’t seem to show any sign of cracking. However, I found an awesome graphic floating around X this week (copied below). It showed an index of newspaper companies’ stock performance and earnings over time (starting in 2002). What you’ll see is that the voting machine of the market saw the disruption coming from the internet, and started to discount the newspaper stocks right away. From 2002 to 2009 those stocks basically went down in a straight line. However, if you look at earnings estimates for that same set of companies, they actually grew for about 5 straight years! During that time, the stocks continued to drop. It wasn’t until 2007 that the earnings really started to get disrupted. Earnings then fell off a cliff. All of this to say – don’t take too much comfort in the short term quarterly results 🙂 Disruption generally takes a bit longer

4. Earnings Drive Stocks – Matt Cerminaro

Below I’m showing you the net income share vs the market cap share of each sector within the S&P 500 since 2005…

…Each color represents a sector. Net income share is on the left and market cap share is on the right.

Let’s start on the left.

See how the Technology Sector’s net income share has grown over time? It’s the light blue shade at the bottom of the chart.

Now look at the chart on the right.

That same light blue shade rising over time is the market cap share of Tech growing concurrently with the net income share.

Energy, the orange shade, used to command a larger share of the S&P 500’s overall net income, but it has shrunk over time.

Its market cap share has done the same.

5. AI and the Economics of the Human Touch – Adam Ozimek

The player piano, or pianola, was invented by Edwin Votey in 1895. At first it was a stand-alone machine that would be pushed up against an existing piano, like the one shown below.

Within a few years, player pianos could be built into the pianos themselves. The machines “read” music that was encoded onto rolls of paper. The notes were represented as holes in the paper that directed pneumatic airflow, which then pushed down the levers that depressed the piano keys.

The only role for humans to play in the functioning of a player piano was to pump the pneumatic foot pedals to keep the piano playing. No need for a skilled human piano player.

And yet, despite the technology to fully automate the job having been invented more than a century ago, people still make a living playing the piano today.

The job is not just limited to piano players performing in ticketed concert events, which of course are quite common. Hotels, bars, and restaurants continue to hire live piano players to provide background music as if it was 1894, the year before the invention of the pianola, which itself is hardly ever used anymore.

Listeners simply prefer music from a piano player rather than a player piano…

…In 2007, a restaurant entrepreneur named Jack Baum was teaching an executive MBA program at Southern Methodist University. He challenged the class to come up with a way to help restaurant customers pay their bill faster than simply waiting for the server to bring the check. Three students arrived at such a compelling answer that the four of them turned it into a company called Ziosk.

Ziosk’s tabletop ordering system provides customers with a tablet that allows them to order, pay, play games, enter coupons, and much else. Thus was born the ability to automate away the job of waiter.

The tablet debuted at 125 Chili’s locations in 2013, and today they are in thousands of restaurants. Ordering devices like this are much more commonplace today, including QR codes that allow customers to order from their own smartphones.

On paper, the job of waiter has been fully automated for over a decade. And yet, today there remain 1.9 million waiters across the US. It’s true that this number has dipped recently, and is slightly below the historical peak. Under the pressure of automation, the BLS forecasts that it will further decline within the next decade… by 1 percent. Is that the worst that full automation can do to this job?…

…Consider first that even some restaurants that have implemented automation nevertheless have wait staff. At Olive Garden, you can order and pay from a provided tablet at any point, but you still have a waiter who greets you, offers to take your order if you don’t want to use the tablet, and checks in on you throughout the meal. If you wait long enough, they will even bring the check. That is a strong signal that the waiter is adding value above and beyond automation…

…If productivity surges from AI, the United States will become a far richer country per capita. It’s not clear whether this will translate into much faster income growth for the median worker. In recent decades, after all, median wage growth has lagged mean wage growth — likely reflecting the trend that overall productivity growth has exceeded the growth in productivity of the typical worker.

Median wage growth has been positive, so it is not true that the typical worker fails to benefit from faster productivity growth. But the benefit for the typical worker is not proportional to the economy-wide growth in productivity, raising the spectre that future productivity growth could be even less proportional.

The result would be rising income inequality — which can straightforwardly be offset with policies that redistribute income. Redistribution might be expensive, but the same AI-driven economic growth that generated the rising inequality would also create the fiscal space needed to offset it. In short, spreading income around is a political challenge, not a policy or economic challenge.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

Company Notes Series (#13): SR Bancorp

Editor’s note: This is the latest edition in the “Company Notes Series”, where we periodically share our notes on companies we’ve studied in the recent past but currently have no vested interest in (we may invest in or sell shares in the companies mentioned at any time). The notes are raw and not updated, and the “as of” date for the data is given at the start of the notes. The first 12 editions in the series can be found here, here, here, here, here, here, here, here, here, here, here, and here. Please share your thoughts on the series through the “Contact Us” page; your feedback will determine if we continue with it. Thanks in advance!

Start of notes for SR Bancorp

Data as of 2025-04-24

General background on SR Bancorp

  • SR Bancorp (ticker symbol “SRBK”) is the holding company for Somerset Regal Bank.
  • Somerset Regal Bank was established in 1887 as The Bound Brook Building and Loan Association; it became Somerset Regal Bank in 2023.
  • Somerset Regal Bank conducted a standard conversion that was completed in September 2023; trading of SR Bancorp shares on the NASDAQ exchange started on 20 September 2023. When the IPO was completed, SR Bancorp had 9.50793 million shares outstanding.
  • A day prior to the conversion, SR Bancorp acquired Regal Bancorp and its subsidiary, Regal Bank. The Somerset Regal Bank of today is thus the combination of Somerset Savings Bank and Regal Bank.
  • SR Bancorp’s branches operate under either the Somerset Savings Bank banner or the Regal Bank banner; all the branches are in the northeastern part of the state of New Jersey in the USA.
Figure 1; Source: IPO prospectus
  • SR Bancorp engages primarily in the lending of fixed-rate and adjustable-rate commercial real estate and residential mortgage loans to individuals. Within commercial real estate, most loans are multi-family loans, which are still related to residential real estate (see Figure 2). Loan-to-value ratios are acceptable: generally no more than 75% for commercial loans, 80% for multi-family loans, and 80% for residential loans (residential mortgage loans granted in excess of the 80% loan-to-value criterion generally require private mortgage insurance). Nearly all of SR Bancorp’s loan portfolio is in New Jersey.
Figure 2; Source: SR Bancorp FY2025 Q2 10-Q

Investing information on SR Bancorp

  • SR Bancorp is a thrift conversion – see here for how to invest in thrifts
  • As of 31 December 2024, SR Bancorp had total assets of US$1.065 billion and shareholders’ equity of US$0.198 billion, giving a total equity to assets ratio of an excellent 18.6%. SR Bancorp’s total assets include securities held-to-maturity at amortized cost of US$148.8 million as of 31 December 2024; these securities have a marked-to-market value of US$122.6 million. If SR Bancorp’s shareholders’ equity is adjusted for the marked-to-market value, it would be US$0.172 billion, which would give a total equity to assets ratio of a still-robust 16%.
  • As of 24 April 2025, SR Bancorp has a stock price of US$13.23. Its latest financials (for the 3 months ended 31 December 2024) show adjusted tangible shareholders’ equity (adjusted for the mark-to-market value of securities and intangible assets) of US$0.144 billion and a share count of 9,255,948, giving an adjusted tangible book value per share of US$15.56, and thus a price-to-tangible book (PTB) ratio of 0.85. If unadjusted tangible shareholders’ equity were used, the tangible book value per share would be US$18.45 and the PTB ratio would be even better at 0.72.
  • On 20 September 2024, SR Bancorp adopted a program to repurchase up to 950,793 shares, which was around 10% of its outstanding share count at the time. Since the adoption of the buyback program, SR Bancorp’s management has bought back 347,067 shares as of 31 December 2024, at an average price of US$11.29 each. Considering SR Bancorp’s low PTB ratio, the buybacks are accretive to shareholder value. Moreover, the adoption of the repurchase program happened exactly on the 1st anniversary of the thrift’s IPO, which is the earliest date on which a converted thrift can start repurchasing shares; this is a sign that management understands capital allocation and is trying to do the right things for shareholders.
  • SR Bancorp has no non-performing assets as of 31 December 2024. Non-performing assets were 0.00% and 0.03% of total assets in FY2024 (fiscal year ended 30 June 2024) and FY2023. This points to well-run lending practices.
  • SR Bancorp’s annualised return on average equity in the first half of FY2025 was a decent (for a thrift!) 2.47%.
  • SR Bancorp’s three senior-most leaders are:
    • William Taylor, CEO of SR Bancorp and Somerset Regal Bank, and Chairman of Somerset Regal Bank; Taylor has been CEO since 2013, and Chairman since 2018; Taylor is already 67
    • Christopher Pribula, President and COO of SR Bancorp and Somerset Regal Bank; Pribula has been COO since 2013; Pribula is already 60
    • David Orbach, Executive Chair of SR Bancorp and Executive Vice Chair of Somerset Regal Bank; Orbach had been Executive Chairman of Regal Bancorp since its formation and of Regal Bank since 2011; Orbach is only 51
  • The compensation of Taylor, Pribula, and Orbach is reasonable, as shown in Figure 3 below. As of 12 February 2024, Taylor, Pribula, and Orbach control 49,269 shares, 30,166 shares, and 133,919 shares respectively; based on SR Bancorp’s share price of US$13.23 as of 24 April 2025, the values of their stakes are US$0.652 million, US$0.399 million, and US$1.77 million, respectively. For Orbach, who has the most shares among the leadership team, his equity value significantly outstrips his annual compensation.
Figure 3; Source: SR Bancorp FY2024 proxy filing
  • Taylor, Pribula, and Orbach have compensation plans that include change in control provisions. In the event that SR Bancorp or Somerset Regal Bank is acquired and the trio’s employment ends, they are each entitled to a severance payment that is equal to 3x the sum of (1) their highest base salary in the three years before their termination, and (2) their average annual total incentive bonus for the three years before their termination. In addition, the terminated executive would also receive a lump sum payment equal to the value of the cost of 36 months of health care.
  • Putting everything together, it appears that SR Bancorp is a thrift with (1) a low valuation, (2) a management team that understands capital allocation, (3) well-run lending operations, (4) a management team with reasonable capability in running a profitable banking operation, and (5) a management team with reasonable compensation and some incentive to sell the bank. SR Bancorp’s standard conversion was completed in September 2023, so the earliest it can sell itself will be September 2026. The ages of Taylor and Pribula suggest that they would be very open to selling SR Bancorp, but Orbach is still relatively young, so his age could be a “risk” of SR Bancorp choosing to remain independent; the saving grace is that Orbach’s equity value significantly outstrips his annual compensation, as mentioned earlier.
  • Assume that SR Bancorp (a) has a return on equity of 2.5% each year, (b) has a P/TB ratio that consistently hovers at 0.7, (c) uses up its repurchase program by April 2026 and subsequently buys back 5% of its outstanding shares annually, and (d) gets acquired at a P/TB ratio of 1.4 eventually. Under such a scenario, the returns we could theoretically earn are shown in Table 1.
Table 1
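The price-to-tangible-book arithmetic in the notes above can be sketched as a quick back-of-envelope check in Python (our own rough check using the figures quoted in the bullets, not part of the original notes):

```python
# Rough check of the PTB math in the notes above
# (figures as of 24 April 2025 / 31 December 2024).
price = 13.23                       # SR Bancorp share price, US$
adj_tangible_equity = 144_000_000   # adjusted tangible shareholders' equity, US$
shares_outstanding = 9_255_948      # latest share count

adj_tbv_per_share = adj_tangible_equity / shares_outstanding
ptb_ratio = price / adj_tbv_per_share

print(round(adj_tbv_per_share, 2))  # 15.56
print(round(ptb_ratio, 2))          # 0.85
```

The same two lines reproduce the 0.72 ratio if the unadjusted tangible book value of US$18.45 per share is used instead.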

Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 08 February 2026)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 08 February 2026:

1. Software Is Dead. Long Live Software – Eugene Ng

SaaS stocks have declined significantly since October 2025 amid broader concerns that software is in decline: disrupted, displaced, and replaced by AI. Companies will use AI to redesign and unbundle their workflows over time, and the markets are effectively pricing in a software apocalypse.

The selloff has been almost indiscriminate, and the market is overly pessimistic…

…Software is a digital tool. It does not make sense to keep reinventing tools (e.g., a calculator or a hammer). If there are new tasks that have not yet been automated and can now be automated with software, now is the best time. Software is a TAM accelerator, and companies can create new and more products in shorter time frames.

The future appears to be agentic, with agents constituting the new digital workforce for humans, working for us and with other agents on exploratory, low-value, and repetitive tasks, thereby allowing us to focus on higher-value creative and strategic tasks.

The fact that everyone has a pen or a keyboard does not mean that we will have a rush of great writers, authors, or coders. The best work will still be done by the select minority, not the vast majority. Writing code is easy. Shipping a basic V1 is just 1% of the work. 99% of building enterprise software is about writing code that actually works and keeps working, maintaining it, iterating on it, securing it, and scaling it, and that is where the real difficulties lie. Vibe coding might be incredible for prototypes, internal tools, and new products, but it is not replacing a proven tool.

It is the same with AI. It does not mean that, if one can code faster with AI assistants, one can write great code or develop a great product. It still requires deep understanding, intent, judgment, and taste. And that’s where the bottleneck lies. Try getting a first-year coder to “vibe-code” and build a massive CRM database, and you will soon realise that it is not as easy. Automation scales whatever structure already exists. Agents tend to work best when intent is explicit and stable, and struggle when it is implicit and judgment-intensive.

SaaS is heterogeneous, not homogeneous. One cannot simply be lazy and lump everything into a single category of thought. The idea that enterprises will dump all software to “vibe-code” their own software with AI agents is wildly optimistic. Larger, more complex SaaS platforms with substantial codebases, deep workflows, extensive API connectors/regulatory licenses, strong network effects, and extensive hardware infrastructure are likely to be more insulated.

Deterministic systems where precision is critical, non-negotiable, requiring it to be 100% all the time, are more likely to be more insulated, as “close enough” is simply unacceptable. Probabilistic systems, conversely, tend to tolerate some errors and accept good-enough performance, and are primarily focused on pattern recognition, content generation, basic automation, and simple decision-making. If an LLM can replicate your probabilistic product with 90% of the quality at 10% of the cost, you are likely not to have a sound business model any longer. Even having a great UI or UX won’t save you.

High-value, mission-critical, must-have software is likely to be more insulated than low-value, non-mission-critical, good-to-have software. Functions such as cybersecurity, payments, and infrastructure are likely to remain robust. Because when these go down, the business stops. Customers should continue to be willing to pay premium prices for quality and peace of mind, remain highly sticky, and rarely switch because the cost of failure is too high. They tend to have high gross retention (customers don’t leave), high net retention (customers spend more over time), and are willing to pay more as their business grows.

2. The Utilities Analyst Who Says The Data Center Demand Story Doesn’t Add Up (Transcript here) – Tracy Alloway, Joe Weisenthal, and Andy DeVries

Tracy: Interesting. One of the reasons we wanted to talk to you is because you have that contrarian take on the data center build-out, and we wrote it up in the Odd Lots newsletter, which everyone should subscribe to. It got a lot of attention. Your analysis, interestingly, is just based on some pretty simple math. So why don’t you, just to start out with, why don’t you walk us through the calculations that you’re actually making to try to analyze how much capacity the utilities are taking on to actually power data centers?

Andy: As you said, it’s pretty simple math here. So data centers now are consuming around 45 GW of power. And you can switch between capacity and throughput – I’m going to stick with capacity. So 45 GW of power. And then there’s lots and lots of third party estimates for where they’re going to be in 2030, and they are centered around this, 90 GW, 95 GW. So you need to add 50 GW. For 2035, there’s a lot fewer estimates. You come around 160 GW. These estimates, they’re all over the place, they come from sell-side banks, they come from consultants, they come from everyone. BNEF has one. They’re I think one of the best out there.

Joe: Thank you.

Andy: We use them a lot. So that’s on the demand side on where you’re going to come out on these. Then you look at the supply – and everyone talks about the demand right – but then you look at the supply and all these tech bros are too cool to actually look at the supply and do utility analysis. Who wants to be a utility analyst? You were making fun of us before. So you look at the supply and these utilities are tracking all these data centers connecting to the grid because they’ve got to do a lot of work. Spend a lot of money on transmission, distribution, new substations, transformers, it’s a lot of work. But it boosts their earnings growth so they’re happy to talk about this. You look at where they’re at and where they see things coming, they’ve got around 140 GW of near-term supply. Kudos to the utilities, they break out what’s firm, committed, signed, contracted, versus pipeline behind it. Because there’s a lot of double, triple, quadruple counting. If you’re going to build a data center in the Southeast, you’re going to tell Duke, you’re going to tell Southern, you’re going to tell Dominion, you’re going to build one. So that’s the pipeline potential. But looking just at the firm, committed, whatever they want to call it, around 140 GW.

Now you got to PUE adjust that. When you connect a data center to the grid, you’ve got lights, you’ve got cooling. Those third party estimates I gave you are just for raw compute.

Tracy: Why did you split those out though? All data centers are going to need to be cooled down, right? What’s the point of splitting it out?

Andy: I’m not splitting it up. I’m just adjusting it downward, because the third-party estimates are just compute. So you’re connecting to the grid, you’re going to ask for the lights, the cooling and everything. I want to go apples to apples versus the third party.

Joe: What does PUE stand for? 

Andy: Power usage efficiency. So they’re at 140 GW. So that power is down to 110 GW on apples to apples. Just to go back, you only need 50 GW on the demand side between now and 2030. The utilities are working at connecting 110 GW, so the utilities are working on already connecting almost as much as you need by 2035. Again, just to make sure we are on the same page, third party estimates 45 GW for data centers now, going to 95 GW. That’s 50 GW. Utilities are working on 110 GW. They don’t give timing for that. Some of it’s going to be past 2030. What I’m trying to say is there is a lot of supply of data centers coming and it’s very unclear if there’s going to be demand for this…
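The GW arithmetic Andy runs through can be sketched as a quick check (our own sketch; the PUE value below is an assumption implied by the 140 GW to roughly 110 GW adjustment he describes, not a figure stated in the transcript):

```python
# Quick check of the supply/demand GW figures in the exchange above.
demand_now = 45         # GW, data center compute demand today
demand_2030 = 95        # GW, central third-party estimate for 2030
incremental_demand = demand_2030 - demand_now   # GW needed by 2030

grid_connections = 140  # GW, firm/committed connections utilities report
pue = 1.27              # assumed, implied by the 140 -> ~110 GW adjustment
compute_supply = grid_connections / pue         # raw compute, apples to apples

print(incremental_demand)     # 50
print(round(compute_supply))  # 110
```

The point of the comparison: utilities are already working on connecting roughly twice the incremental compute demand the third-party 2030 estimates call for.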

…Tracy: The wild card to me seems to be the demand forecasts. We’re already seeing those change pretty wildly. I know you mentioned Bloomberg NEF – they’ve raised their forecast, because of the data center buildout. They’ve raised their forecast of how much energy is actually needed. How much confidence do you have in those demand numbers, and how could they change over time?

Andy: Moderate confidence. Look where we’re right now. OpenAI built all the ChatGPT using 2 GW. All the big tech hyperscalers, they haven’t given their 2025 volumes yet, but if you take their 2024 volumes and then double it – and this is output, so I’m going to transfer it back to capacity – and you assume a 60% capacity factor, all the hyperscalers combine around 15 GW. That’s got to be over half the data center demand. To talk about 95 GW – it’s a staggering number. Then you get more advances in Nvidia chip efficiency – obviously Jevons paradox kicks in, you’ve had numerous guests talk about that – it’s just a lot of power.

Tracy: Can you just remind us 1 GW is enough to power what? I like these comparisons.

Andy: A million homes. It depends if you’re in Florida or the northeast. But generally speaking, that’s where you’re at…

…Andy: But then you don’t need as many new power plants as everyone’s saying.  Constellation’s CEO said on a call the other day. He said, “Use the Texas market.” He said, “87 GW peak market, you could add 10 GW to Texas tomorrow, which would be the equivalent of sending every single Nvidia chip for an entire year to Texas and running them 24/7. That’s 10 GW. You could run it right now, existing grid, existing plants for all but 40-50 hours a year.” We stress tested it. There are some coal plants that could ramp up capacity factor. There’s plenty of gas plants that can. I don’t know if it’s 40 hours, 100 hours, 140 hours, but it makes more sense to pay someone else not to run their chemical company, the refinery company, for 40-50 hours a year, rather than have the utilities go out and spend $10 billion connecting faraway wind farms. That’s the argument. We’ve come in the middle of it, but there is plenty of existing capacity on the grid that could ramp up to meet it. Then other guests have pointed out on Odd Lots, the peak demand of the grid is 850 GW. The overall size of the grid is around 1,200 GW and then you’re adding 50 GW a year of solar, and then you’re going to start adding 20 GW of gas. We’re going to handle it. I’m not really worried about any brownouts or anything.

3. Incentives > Intelligence: The Real Barrier(s) to Agentic AI – Abdullah Al-Rezwan

Such a “disingenuous yet clever” strategy is actually a good glimpse of the barrier to agentic AI’s adoption. While most of us focus too much on the technical capabilities of AI, we may still be underestimating the challenges related to the (lack of) incentives of incumbents as well as the legal frameworks needed for agentic AIs to flourish. “Ghosts of Electricity” had a very good piece explicitly laying out a couple of real headaches:

“we highlight two main obstacles that stand in the way of AI agents becoming true digital partners. The first has to do with the design of the internet itself–the interface of nearly every website was meticulously optimized for humans. But what works for humans does not necessarily work for AI agents. Until AI can truly emulate every aspect of a human being, we will likely need to design a parallel internet for agentic commerce to work. But there’s reasons to suspect that this will not happen soon: some firms have little to gain, and potentially much to lose, from investing and facilitating a machine-readable web. This leads us to the second obstacle, which is even simpler: many use-cases for AI agents are illegal, or at least legally ambiguous. The rights around AI agents need to be clarified and developed in order for agents to participate meaningfully in economic transactions and interactions.”

In the piece, they substantiated these headaches with a couple of examples. Some excerpts below:

“Let’s say you tell your favorite AI tool (ChatGPT Atlas, Perplexity Comet, Claude, Gemini Antigravity) to purchase a concert ticket for you or to shop on Amazon. Take seat selection. The agent reaches the seat map and gets stuck because it can’t tell what’s actually available or what counts as a “good” choice. The map isn’t a simple list: seats change color when you hover, prices only appear after clicking, and availability updates every second as other people buy tickets. While the agent pauses to figure out what to do, the seat disappears, the page refreshes, and it loses its place. Every pause, waiting for pages to load, retrying after errors, handing control back to you, adds friction. What takes a human a few minutes to do turns into a brittle, ten-minute ordeal.”

4. The Slow Singularity – Abdullah Al-Rezwan

To understand why the future might be sluggish, the authors first had to decode the past. In a methodological twist that fits the subject matter perfectly, they employed OpenAI’s Deep Research to dig through economic history and construct a dataset of 150 essential tasks over the last century. This analysis revealed a counterintuitive “Zero Productivity Paradox”: switching a task from labor to capital contributes zero to Total Factor Productivity (TFP) growth at the exact moment it happens, because firms switch exactly when the costs are equal. The growth comes entirely from what happens after the switch: the task is now performed by a machine that improves exponentially faster than a human.

They estimate that while machine productivity on automated tasks grows at a blistering 5% annually, human task efficiency grows at a meager 0.5%, and in some sectors human efficiency appears to be declining. To prove how vital this dynamic is, they calculated a “frozen” counterfactual: if we had stopped automating new tasks in 1950, but allowed computers to keep getting faster at the things they were already doing, US economic growth would have essentially flatlined for the last 70 years…

…The same logic explains why the AI “singularity” is likely to be a slow burn rather than an explosion. The economy operates on a “weak link” principle. Production requires a chain of complementary tasks; you need high-speed coding, but you also need management, legal compliance, physical logistics etc. Because these tasks are interlinked, the economy is constrained by its slowest components. Even if AI automates cognitive tasks with infinite speed, total output remains bottlenecked by the essential tasks that still require slow-improving human labor.
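The “weak link” argument can be made concrete with a toy model: treat output as the minimum of complementary task outputs (Leontief-style complements), with growth rates echoing the roughly 5% machine vs. 0.5% human figures above. This is our own illustrative sketch, not the authors’ model:

```python
# Toy "weak link" model: output is the minimum of complementary task outputs.
ai_task, human_task = 1.0, 1.0
for year in range(70):
    ai_task *= 1.05      # automated task: ~5% annual improvement
    human_task *= 1.005  # human-performed task: ~0.5% annual improvement

# Even though the automated task is now ~30x more productive,
# aggregate output is bottlenecked by the slow-improving human task.
output = min(ai_task, human_task)
print(round(output, 2))  # 1.42 -- aggregate growth tracks the 0.5% task
```

After 70 years the fast task has improved about thirtyfold, yet aggregate output has grown barely 42% — the slowest complementary task sets the pace.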

5. The Hidden Book Value of Community Banks: Why Call Reports Matter More Than Public Financials – Dirt Cheap Banks

Call Reports exist for safety and soundness, not for investors. They are not designed to be friendly, summarized, or marketed. They are designed to tell regulators whether a bank can survive stress, fund itself, and absorb losses. That is exactly why they are so valuable.

The first thing to understand is structure. When you buy stock in a small community bank, you are almost always buying the holding company, not the bank itself. The holding company often has no real operations. It owns one asset, the bank. It might have a little cash, maybe some legal expenses, sometimes holding company debt, but that is it. The bank owns the loans, the deposits, the securities, the real estate, and the earnings power…

…Public financial statements typically show the holding company only, and often only once a year…

…Call Reports are different. They are filed quarterly by the bank itself. They show the full balance sheet, income statement, and capital position of the operating bank. If the bank earns money and retains it, equity goes up in the Call Report immediately, whether or not a dividend is paid to the parent. If securities move and AOCI changes, you see it. If credit costs rise, you see it. If loan growth accelerates, you see it.

When people ask which book value is the real one, the answer from decades of bank investing is simple. The bank level equity in the Call Report is the economic book value. That is what generates earnings. That is what a buyer would pay for in a sale. That is what regulators protect. The parent level equity is just an accounting wrapper…

…West Shore Bank Corporation $WSSH is a textbook case of how public financials can materially misstate economic reality for small community banks, and why Call Reports create an information advantage…

…At December 31, 2024, the consolidated balance sheet shows:

Total stockholders’ equity of approximately $48.2 million.

This is the number scraped by data aggregators. It is the number displayed on OTC Markets. It is the number most investors implicitly anchor to when thinking about book value.

With a current market capitalization of roughly $45 million, West Shore appears to be trading at or near book value based on these public financials. To a casual observer, the stock looks fairly valued. There is no obvious discount screaming off the page…

…In the Call Report, under Total bank equity capital, the number is dramatically higher.

As of the most recent Call Report dated 9/30/2025, total bank equity capital is approximately $73 million.

This is the capital base regulators use to determine whether the bank is well capitalized. It reflects retained earnings, balance sheet growth, and changes in AOCI on a quarterly basis.

Nothing magical happened between these two documents. There was no recapitalization. No asset sale. No accounting maneuver.

The difference exists because the two statements are answering different questions.

The annual report answers:

What does the holding company’s GAAP equity look like at year end?

The Call Report answers:

How much capital does the operating bank have today?

Those are not the same question, and in small community banks, the answers often diverge significantly over time.

Using the same $45 million market capitalization:

  • Based on public financials, West Shore appears to trade at roughly 0.9x to 1.0x book value
  • Based on Call Report data, West Shore is trading at approximately 0.6x bank-level book value

That is the entire disconnect.
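The two price-to-book figures follow directly from dividing the same market capitalization by the two equity numbers (all figures are from the article above):

```python
market_cap = 45.0      # $ millions, current market capitalization
holdco_equity = 48.2   # $ millions, holding-company GAAP equity (12/31/2024)
bank_equity = 73.0     # $ millions, bank-level equity per the 9/30/2025 Call Report

pb_public = market_cap / holdco_equity   # ~0.93x on public financials
pb_call = market_cap / bank_equity       # ~0.62x on Call Report equity

print(f"P/B on public financials:  {pb_public:.2f}x")
print(f"P/B on Call Report equity: {pb_call:.2f}x")
```

Same numerator, different denominator: the apparent fair valuation and the roughly 0.6x discount are both arithmetically correct, depending on which equity number you treat as the economic book value.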


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Gemini) and Amazon. Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI (2025 Q4)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q4 earnings season.

The way I see it, artificial intelligence (or AI), really leapt into the zeitgeist in late 2022 or early 2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

We’re thick in the action of the latest earnings season for the US stock market – for the fourth quarter of 2025 – and I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:

With that, here is the latest commentary, in no particular order:

Alphabet (NASDAQ: GOOG)

The Gemini app now has 750 million monthly active users (was 650 million in 2025 Q3); the Gemini app is seeing significantly higher engagement per user after the launch of Gemini 3 in December 2025; Alphabet offers the most extensive model portfolio in the world and leads across text, vision, and image-to-video model leaderboards; Gemini 3 Pro is state-of-the-art in reasoning and multimodal understanding; Gemini 3 Pro has the fastest adoption of any model in Alphabet’s history; Gemini 3 Pro has consistently processed 3x the number of daily tokens that 2.5 Pro has; Gemini 3 is powering Google Antigravity, a software development platform with more than 1.5 million weekly active users since its launch 2 months ago; Alphabet’s 1st-party AI models, including Gemini, now process 10 billion tokens per minute from direct APIs (was 7 billion in 2025 Q3); Gemini 3 is now integrated into AI Mode and AI Overviews in Google Search; more than 120,000 enterprises are using Gemini today; the vast majority of the top SaaS companies in the world are using Gemini; management is confident of maintaining the innovation momentum with Alphabet’s 1st-party models; management is not seeing Gemini cannibalising Google Search; management is not in any hurry to introduce advertising to the Gemini app

Our Gemini app now has over 750 million monthly active users. We are also seeing significantly higher engagement per user, especially since the launch of Gemini 3 in December…

…We offer the most extensive model portfolio in the world and lead across text, vision and image to video LMArena leaderboards. Gemini 3 Pro drives the state-of-the-art in reasoning and multimodal understanding. It has seen the fastest adoption of any model in our history. Since launch, Gemini 3 Pro has consistently processed 3x as many daily tokens on average as 2.5 Pro.

Our latest model powers Google Antigravity, our new development platform where agents can autonomously plan and execute complex software tasks. It already has more than 1.5 million weekly active users after launching just over 2 months ago.

Our first-party models like Gemini now process over 10 billion tokens per minute via direct API used by our customers, up from 7 billion last quarter…

…We have integrated Gemini 3 directly into AI mode in search. Now search can better understand your query, dive deeper on the web and generate interactive UI experiences. And last week, we upgraded AI Overviews to Gemini 3, giving users a best-in-class AI response at the top of the search results page…

…Today, more than 120,000 enterprises use Gemini, including AI unicorns like Lovable and OpenEvidence and global enterprises like Airbus and Honeywell. 95% of the top 20 and over 80% of the top 100 SaaS companies use Gemini, including Salesforce and Shopify. Gemini is becoming the AI engine for the world’s most successful software companies…

…We are obviously improving these models across many paradigms, right, on pretraining, post-training, test-time compute and so on. And we are bringing multimodal models into the picture. We are bringing agentic capabilities, the coding area is showing a lot of progress. And obviously, integrating all of this together and offering a great customer experience for our — to our products as well as through our APIs to our cloud customers. To me, it feels like there’s a lot of headroom ahead. And as you’ve seen our trajectory over the past 2 years in terms of how we have been making progress. I think we are in a very, very relentless innovation cadence. And I think we are confident about maintaining that momentum as we go through ’26…

…People are obviously using search, experiencing AI Overviews and AI Mode as part of it and Gemini app as well. And the combination of all of that, I think, creates an expansionary moment. I think it’s expanding the type of queries people do with Google overall. And so overall, some of it all is what we see as a growth opportunity, and we haven’t seen any evidence of cannibalization there…

…In terms of the Gemini app today, we are focused on our free tier and subscriptions and seeing great growth, as Sundar discussed. But ads have always been part of scaling products to reach billions of people. And if done well, ads can be really valuable and helpful commercial information. And at the right moment, we’ll share any plans. But as we’ve said, we’re not rushing anything here.

Google Cloud saw accelerating growth in 2025 Q4; Google Cloud backlog grew 55% sequentially to $240 billion in 2025 Q4 (was $155 billion in 2025 Q3); Google Cloud was able to lower Gemini serving unit costs by 78% over 2025; Google Cloud had double the new customer velocity in 2025 Q4 compared to 2025 Q1; the number of Google Cloud deals in 2025 exceeding $1 billion surpassed the past 3 years combined; existing Google Cloud customers are outpacing their initial commitments by over 30%; nearly 75% of Google Cloud customers have used Google Cloud’s end-to-end vertically integrated AI stack; Google Cloud has 14 product lines that each exceed $1 billion in annual revenue; Google Cloud is offering a wide range of Alphabet’s 1st-party leading generative AI models to customers; 350 customers each processed more than 100 billion tokens in December 2025; revenue from products built on Alphabet’s 1st-party AI models was up 400% year-on-year in 2025 Q4; the integration of Gemini and Google Workspace is driving wins for Google Cloud; revenue from AI solutions built by Google Cloud’s partners increased nearly 300% year-on-year in 2025 Q4

Cloud significantly accelerated with revenues growing 48%, now on an annual run rate of over $70 billion. Backlog grew by 55% quarter-over-quarter to $240 billion, representing a wide breadth of customers driven by demand for AI products…

…Google Cloud’s backlog increased 55% sequentially and more than doubled year-over-year, reaching $240 billion at the end of the fourth quarter. The increase in backlog was driven by strong demand for our cloud products, led by our enterprise AI offerings from multiple customers….

…As we scale, we are getting dramatically more efficient. We were able to lower Gemini serving unit cost by 78% over 2025 through model optimizations, efficiency and utilization improvements…

…We are winning more new customers faster. We exited the year with double the new customer velocity compared to Q1…

…We are also signing larger customer commitments. The number of deals in 2025 over $1 billion surpassed the previous 3 years combined…

…We continue to deepen our relationships with existing customers who are outpacing their initial commitments by over 30%.

Nearly 75% of Google Cloud customers have used our vertically optimized AI from chips to models to AI platforms and enterprise AI agents, which offer superior performance, quality, security and cost efficiency. These AI customers use 1.8x as many products as those who do not, enabling us to diversify our product portfolio, deepen customer relationships and accelerate revenue growth. Our product line has multiple monetization levers spanning infrastructure, platform and high-margin AI-powered products and services with 14 product lines each exceeding $1 billion in annual revenue…

…We also offer our leading generative AI models, including Gemini, Imagen, Veo, Chirp and Lyria to cloud customers. In December alone, nearly 350 customers each processed more than 100 billion tokens. In Q4, revenue from products built on our generative AI models grew nearly 400% year-over-year, significantly accelerating from the prior quarter…

…Our integration of Gemini and Google Workspace is driving wins with global brands like Schwarz Group and public sector organizations like the U.S. Department of Transportation. We are also seeing momentum with independent software vendors. Revenue from AI solutions built by our partners increased nearly 300% year-over-year and commitments from our top 15 software partners grew more than 16x year-over-year.

Google Cloud has the widest variety of compute options, from NVIDIA’s GPUs to Alphabet’s own TPUs; Google Cloud will be among the first cloud providers to offer NVIDIA’s latest Vera Rubin GPU; Alphabet has been working on its 1st-party TPUs for 10 years; Alphabet’s TPUs are being used by leading frontier AI labs (likely referring to Anthropic) and organisations in financial services, automotive, and public service; management seems to not be willing to sell TPUs to 3rd-party data centers

We have the industry’s widest variety of compute options. That includes GPUs from our partner, NVIDIA, who announced at CES, that we’ll be among the first to offer their latest Vera Rubin GPU platform, plus our own TPUs that we have been developing for a decade…

…We offer leading infrastructure for AI training and inference to our cloud customers with the industry’s widest variety of compute options from our own seventh-generation Ironwood TPU to the latest NVIDIA GPUs. Our 10-year track record in building our own accelerators with expertise in chips, systems, networking and software translates to leading power and performance efficiency for large-scale inference and training. Our cloud AI accelerators serve the leading Frontier AI labs, capital markets firms like Citadel Securities, enterprises like Mercedes-Benz and governments for high-performance computing applications…

…[Question] How should we think about the potential for TPUs to move outside of Google Cloud and into external data centers and develop as an incremental revenue stream?

[Answer] In terms of TPUs, I would think about it as it’s reflected in our overall part of what makes Google Cloud an attractive choice is the wide choice of accelerators we bring to bear here, and we meet customers in terms of what their needs are and the choice as well as other things we bring as part of Google Cloud, the end-to-end efficiencies in our data centers, all of that comes to bear. And that’s what you see in the strong momentum in Google Cloud. And given the overall investment we are making, we expect to be able to drive that momentum there. 

Alphabet’s management recently launched personal intelligence in AI Mode in Google Search; management recently introduced the Universal Commerce Protocol as a new open standard for agentic commerce; Google Search saw more usage in 2025 Q4 than ever before, with AI being an expansionary force; management has shipped 250 product launches within AI Mode and AI Overviews in Google Search in 2025 Q4; Gemini 3 is now integrated into AI Mode and AI Overviews in Google Search; management has made the transition from AI Overview to AI Mode completely seamless; daily AI Mode queries per user doubled in the US since launch; AI Overviews continue to perform well; queries in AI Mode are 3x longer than traditional searches, and a significant portion of queries in AI Mode lead to a follow-up question; people are searching in new ways beyond text, with 1 in 6 AI Mode queries being in non-text format; users of AI Mode can soon use a new checkout experience to buy directly because of the universal commerce protocol

In January alone, we have launched personal intelligence in AI mode in search and the Gemini app…

…And we laid the groundwork for shopping in the AI era by introducing a new open standard for agentic commerce, the Universal Commerce Protocol built alongside many retail industry leaders…

…Search saw more usage in Q4 than ever before as AI continues to drive an expansionary moment…

…We shipped over 250 product launches within AI Mode and AI Overviews just last quarter. We have integrated Gemini 3 directly into AI mode in search. Now search can better understand your query, dive deeper on the web and generate interactive UI experiences. And last week, we upgraded AI Overviews to Gemini 3, giving users a best-in-class AI response at the top of the search results page. We have also made the search experience more cohesive, ensuring the transition from an AI Overview to a conversation in AI Mode is completely seamless…

…First, once people start using these new experiences, they use them more. In the U.S., we saw daily AI Mode queries per user double since launch and AI Overviews continue to perform very well. Second, people are engaging in longer, more complex sessions. Queries in AI Mode are 3x longer than traditional searches. We are also seeing sessions become more conversational with a significant portion of queries in AI Mode now leading to a follow-up question. Third, people are searching in new ways beyond text. Nearly 1 in 6 AI Mode queries are now nontext using voice or images…

…We are building the era of agentic commerce and working with our partners to introduce the universal commerce protocol in our consumer products and across the web. We’ve received tremendous feedback from the industry. Soon, people can use a new checkout experience to buy directly in AI mode in Gemini from select merchants.

Alphabet’s management is seeing strong demand for its 1st-party enterprise AI agents; Alphabet has sold more than 8 million paid seats of Gemini Enterprise to 2,800 companies; Gemini Enterprise managed over 5 billion customer interactions in 2025 Q4, up 65% year-on-year

Leading enterprises are also driving strong demand for our enterprise AI agents. We have sold more than 8 million paid seats of Gemini Enterprise, our enterprise AI platform to more than 2,800 companies, including BNY and Virgin Voyages to streamline knowledge management and automate processes. Gemini Enterprise managed over 5 billion customer interactions in Q4, growing 65% year-over-year for customers, including Wendy’s, Kroger and Woolworths Group. 

Alphabet is Apple’s preferred cloud provider

We are collaborating with Apple as their preferred cloud provider and to develop the next generation of Apple Foundation Models based on Gemini technology.

1 million channels used Alphabet’s new AI creation tools in December 2025 each day; 20 million viewers used Youtube’s new Gemini-powered Ask tool in December 2025

On average, every day in December, over 1 million channels used our new AI creation tools to supercharge their creativity. During that same month, more than 20 million viewers used our new Ask tool powered by Gemini to learn more about the content they watched.

Waymo recently raised its largest investment round to date; Waymo surpassed 20 million fully autonomous trips in December 2025; Waymo is now providing 400,000 rides per week; Waymo recently launched its 6th market in Miami; Waymo will soon expand to the UK and Japan; Alphabet participated in Waymo’s latest investment round

This week, Waymo raised its largest investment round to date and is well positioned to continue its momentum with safety at the core. In December, we surpassed 20 million fully autonomous trips and are now providing more than 400,000 rides every week. Waymo continues to expand its service territory. Its sixth market, Miami, launched 2 weeks ago, and Waymo will soon expand its service to multiple cities across the U.S. and in the U.K. and Japan. The team has made incredible progress on important capabilities, including opening up public service to airports and freeways…

…Alphabet funded a significant portion of the $16 billion investment round that Waymo announced on Monday, which will allow the business to accelerate its global expansion.

Alphabet’s management is investing in AI to drive improvements across all areas of marketing; management thinks AI gives businesses the ability to reach more customers in more places than before; Gemini improves advertising quality, advertiser tools, and new advertising experiences; Gemini helps Alphabet evaluate advertising relevance with greater accuracy than before; Gemini helps Alphabet deliver ads on longer, more complex searches that were previously challenging to monetize; Gemini helps Alphabet improve understanding of non-English languages, thus helping businesses scale globally; Gemini helps businesses generate new advertising campaigns through a conversational experience; advertisers used Gemini to create 70 million creative assets in AI Max and PMax in 2025 Q4; Aritzia used AI Max to achieve an 80% incremental uplift in conversion value in 2025 Q4; L’Oreal used AI Max in 2025 to increase revenue for DTC (direct to consumer) brands by 23%; management is in the early stages of experimenting with AI Mode monetization, with an example being Direct Offers, which allow advertisers to show exclusive offers to shoppers who buy directly in AI Mode

We’re investing in AI to drive significant improvements across all areas of marketing. We’re expanding the entire playing field that advertisers can compete on. AI gives businesses the ability to reach more customers in more places than ever before. Gemini uniquely positions us to bring the transformational benefits of AI to ads in 3 critical areas for our customers: ads quality, advertiser tools and new AI user experiences.

First, ads quality. We’ve been deploying Gemini models to improve query understanding at a rate of almost a launch per month for the last 2 years. These improvements drive better query matching, ranking and quality, making search ads even more effective. With Gemini across our ads quality stack, we evaluate relevance with greater accuracy than with previous generations of models. This has significantly improved our ability to systematically deliver more helpful high-quality ads, contributing to a meaningful reduction in irrelevant ads served. Gemini’s understanding of intent has increased our ability to deliver ads on longer, more complex searches that were previously challenging to monetize. Gemini models also have a significant impact on query understanding in non-English languages, expanding opportunities for businesses to scale globally.

Second, we’re building more agentic actions into our advertiser tools. Businesses can now leverage Gemini in conversational experiences within Ads and Analytics Advisor to identify and run recommended actions such as generating new campaigns. Advertisers use Gemini as a real-time partner to assemble creatives. In Q4 alone, they used Gemini to create nearly 70 million creative assets via text customization in AI Max and PMax. For instance, Aritzia, Canada’s premier fashion house used AI Max to find new high-value customers that traditional strategies miss, delivering an 80% incremental uplift in conversion value for Q4. L’Oreal, one of the first alpha testers, used AI Max in 2025 across 800 unique campaigns in 23 countries and 30 brands. AI Max enabled the L’Oreal Group to maximize its presence across the full consumer journey, fuel its consumer growth and increase revenue for DTC brands like NYX by 23%.

The third area is how we monetize new AI user experiences in search. We have significantly increased our focus on AI mode and are in the early stages of experimenting with AI mode monetization like testing ads below the AI response with more underway. For example, we announced Direct Offers, a new Google Ads pilot, which will allow advertisers to show exclusive offers for shoppers who are ready to buy directly in AI mode. This new type of sponsored content uses AI to match the right offer provided by the retailer to the right user.

Google Cloud had 48% revenue growth in 2025 Q4 (was 34% in 2025 Q3) driven by growth in GCP; GCP grew at a much higher rate than Google Cloud’s overall growth, driven by enterprise AI products, which have billions in quarterly revenue; the enterprise AI products included enterprise AI infrastructure (i.e. usage of TPUs and NVIDIA’s GPUs) and enterprise AI solutions; the core GCP, non-AI business was also a meaningful contributor to growth; Google Cloud operating margin was 30.1% (was 23.7% in 2025 Q3 and was 17.5% in 2024 Q4)

The Google Cloud segment delivered outstanding results in the fourth quarter as the business continued to benefit from strong demand for our enterprise AI products. Cloud revenue accelerated meaningfully and were up 48% to $17.7 billion. Revenues were driven by strong performance in GCP, which continued to grow at a rate that was much higher than cloud’s overall revenue growth rate…

…GCP’s performance was driven by accelerating growth in enterprise AI products, which are generating billions in quarterly revenues. We had strong growth in both enterprise AI infrastructure, driven by deployment of TPUs and GPUs and enterprise AI solutions, which benefited from demand for our industry-leading models, including Gemini 3. Core GCP was also a meaningful contributor to growth due to strong demand for infrastructure and other services such as cybersecurity and data analytics. We also had double-digit growth in Workspace, driven by an increase in average revenue per seats and the number of seats. Cloud operating income was $5.3 billion, more than doubling year-over-year, and operating margin increased from 17.5% in the fourth quarter of last year to 30.1%.

In terms of Alphabet’s outlook, management notes that Google Cloud is seeing significant demand, and that the demand/supply situation is still tight; management notes that Alphabet’s AI investments have already translated into strong performance in the business; management expects capex of $175 billion to $185 billion in 2026 (nearly double from $91.4 billion in 2025, which was itself up 65% from $55.4 billion in 2024, and 2024’s capex was up 69% from 2023); the capex will be for AI compute capacity to build frontier models, as well as for compute to (1) improve user experiences and drive higher advertiser ROI in Google Services, and (2) meet Google Cloud customer demands; management expects the growth rate in depreciation expense to accelerate in 2026 Q1 and meaningfully increase for the year; Google Cloud’s supply is tight even when it has been ramping up supply, and management expects tight supply throughout 2026; when management makes capex decisions, they go through a rigorous process of assessing the return on the investment; the capex in 2026 will be split 60-40 in terms of servers, and data centers and networking equipment; just over half of Alphabet’s compute capex in 2026 is expected to go towards the cloud business

In Google Cloud, we’re seeing significant demand for our products and services, which we expect to continue to drive strong growth despite the tight supply environment we’re operating in…

…The investment we have been making in AI are already translating into strong performance across the business, as you’ve seen in our financial results. Our successful execution, coupled with strong performance reinforces our conviction to make the investments required to further capitalize on the AI opportunity. For the full year 2026, we expect CapEx to be in the range of $175 billion to $185 billion with investments ramping over the course of the year. We’re investing in AI compute capacity to support Frontier model development by Google DeepMind, ongoing efforts to improve the user experience and drive higher advertiser ROI in Google Services, significant cloud customer demand as well as strategic investments in Other Bets. Keep in mind that the availability of supply, pricing of components and timing of cash payments can cause some variability in the reported CapEx number…

…We’ve been supply constrained even as we’ve been ramping up our capacity…

…I expect the demand we are seeing across the board across our services, what we need to invest for future work for Google DeepMind as well as for cloud, I think, is exceptionally strong. And so I do expect to go through the year in a supply-constrained way…

…We have a highly rigorous framework that we use internally where we look at all the needs for investment, whether it’s from our own organization or from external customers and have an estimate of what that investment could potentially yield, obviously, not just near term but long term as well. So we take that into consideration when we make the following decision. The first one is the total investment that we make across the company. This was, for example, in 2025, the $91 billion we invested in CapEx and our estimate for CapEx investment this year. So what’s the total envelope that we want to invest to ensure that we can drive both near-term and long-term growth for the company. And then the second way we use that framework is to just allocate these funds across the organization, determine where we should make these investments. And throughout the year, as you can imagine, we always look to understand where things are moving, whether it’s external dynamics or internal dynamics, and I’ve mentioned some of the supply chain pressures we’re seeing externally. So we look at this with a highly rigorous framework to make sure that we’re making the right decision.

It was exciting to see the fact that we’re already monetizing and you saw it in the results that we just issued this quarter, the investments that we’ve made in AI. It’s already delivering results across the business. I know it in cloud, it’s very obvious external, but you’ve heard the comments on the success we’re seeing in search, the comments from Sundar and from Philipp and then the Frontier model development that really serves as the foundation for the organization. We then also look at just the cash flow, cash flow generation and the health of our financials and the balance sheet. That’s important as well…

…Approximately 60% of our investment in 2025, and it’s going to be fairly similar in 2026, went towards machines, so the servers. And then 40% is what you referred to as long-duration assets, which is our data centers and network and equipment…

…For 2026, just over half of our ML compute is expected to go towards the cloud business.

In agentic use cases, Alphabet’s management thinks coding is the area where progress was most felt

I’ll take the agentic part first. I definitely think ’25 was more about laying the foundation, getting the models to start being more robust in agentic use cases. And obviously, coding is an area where the progress was the most felt.

The launch of the Universal Commerce Protocol (UCP) in January 2026 has been really well received; management is integrating UCP into all of Alphabet’s AI surfaces; management thinks 2026 is the year where consumers can actually experience agentic commerce; management sees the UCP making it much easier for (1) consumers to complete transactions, and (2) merchants to showcase their offerings

I think the launch of Universal Commerce Protocol at NRF in January with a bunch of partners, founding partners, I think has been super well received. So I’m excited now that we have laid the foundation of interoperability on which agentic commerce can work. And now we are integrating those experiences into Gemini, AI Mode and so on. So I think this is a year where you will see consumers actually being able to use all of this, and I’m excited about the opportunity ahead…

…Part of what’s been good in designing the Universal Commerce Protocol is it makes it much easier for users to complete transactions. But at the same time, it allows merchants to help showcase the range of their offerings, if they want to make promotions, et cetera. So all of that is built into the protocol.

About 50% of code used within Alphabet is written by AI agents that are then reviewed by engineers; Alphabet is employing AI widely within the company

About 50% of our code is written by agents, coding agents, which are then reviewed by our own engineers. But certainly, it helps our engineers do more and move faster with the current footprint. We look at how we run the business across the organization. So using AI within the business to drive daily operations. It can be all the way from the engineering team to small teams within our back office. Even within my finance team, for example, we deployed agents within our treasury organization. We’re deploying agents within how we pay and reconcile invoices, et cetera.

Alphabet’s management is seeing successful SaaS companies incorporate Gemini deeply into their products and internal processes; management thinks SaaS companies that seize the moment with AI can continue growing; management is seeing very robust token consumption growth by SaaS companies in 2025 Q4

[Question] It just seems like there’s a market belief that the software companies are kind of losing seat power, losing pricing power, and it looks like it could be a really terrible customer base. I can’t imagine that that’s actually going to happen. But could you just talk about it? You’re at the forefront of AI and the impact that that’s having on software companies.

[Answer] In terms of Gemini adoption and what this moment means for SaaS, et cetera. Look, at least from my vantage point, I definitely see we have very, very good SaaS customers who are leaders in their respective categories. And what I see the successful companies doing is they are definitely incorporating Gemini deeply in critical workflows, be it on improving their product experience and driving growth or using it to drive efficiency within their organizations. And I think it is an enabling tool, just like it has been an enabling tool for us across our products and services, be it Search, YouTube, et cetera. I think the companies who are seizing the moment, I think, have the same opportunity ahead. And at least we are excited about the partnerships we have there. And the momentum, if I look at it in terms of their token usage, et cetera, the growth has been very robust in Q4.

A major concern of Alphabet’s management at the moment is the ability to build AI compute capacity

I think specifically at this moment, maybe the top question is definitely around compute capacity, all the constraints, be it power, land, supply chain constraints, how do you ramp up to meet this extraordinary demand for this moment, get our investments right for the long term and do it all in a way that we are driving efficiencies and doing it in a world-class way.

Amazon (NASDAQ: AMZN)

AWS grew 24% year-on-year in 2025 Q4 (was 20% in 2025 Q3), and is now growing at its fastest pace in 13 quarters; AWS’s run rate has reached $142 billion (was $132 billion in 2025 Q3); AWS’s chips business, including Graviton and Trainium, is at over $10 billion in annual revenue run rate, and growing triple-digits; AWS is where most companies’ data and workloads reside, and why most companies want to run AI in AWS; AWS’s non-AI workloads are growing faster than expected; management thinks that if companies want to use AI well, their data and applications need to be hosted in the cloud, and this is driving cloud migration; AWS’s backlog is $244 billion in 2025 Q4, up 40% year-on-year (was $200 billion in 2025 Q3)

AWS growth continued to accelerate to 24%, the fastest we’ve seen in 13 quarters, up $2.6 billion quarter-over-quarter and nearly $7 billion year-over-year. AWS is now a $142 billion annualized run rate business, and our chips business, inclusive of Graviton and Trainium, is now over $10 billion in annual revenue run rate, growing triple-digit percentages year-over-year…

…We consistently see customers wanting to run their AI workloads where the rest of their applications and data are…

…If you look at the capital we’re spending and intend to spend this year, it’s predominantly in AWS. And some of it is for our core workloads, which are non-AI workloads because they’re growing at a faster rate than we anticipated…

…If you really want to use AI in an expansive way, you need your data in the cloud and you need your applications in the cloud. Those are all big tailwinds pushing people towards the cloud…

…[Question] Maybe a few parts just on AWS. Can you speak to the current state of your revenue backlog as of Q4?

[Answer] I’ll start with the first one, which is on backlog, our backlog is $244 billion. That’s up 40% year-over-year. I think it’s up 22% quarter-over-quarter.
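For readers less familiar with the jargon, an “annualized run rate” is simply the latest quarter’s revenue multiplied by four. As a rough sanity check (this is just a sketch using the figures quoted above, not anything Amazon disclosed beyond them), the numbers hang together:

```python
# Sanity-checking the quoted AWS figures.
# Inputs are the numbers from the transcript; the formulas are the
# standard definitions of run rate and year-on-year growth.
annualized_run_rate = 142e9                    # "$142 billion annualized run rate"
quarterly_revenue = annualized_run_rate / 4    # run rate = latest quarter x 4
yoy_growth = 0.24                              # "accelerate to 24%"

prior_year_quarter = quarterly_revenue / (1 + yoy_growth)
yoy_dollar_increase = quarterly_revenue - prior_year_quarter

print(f"Implied quarterly revenue: ${quarterly_revenue / 1e9:.1f}B")
print(f"Implied YoY increase: ${yoy_dollar_increase / 1e9:.1f}B")
```

The implied year-on-year increase of roughly $6.9 billion matches the “nearly $7 billion year-over-year” in the quote.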

Amazon’s management is seeing that AI applications tend to use multiple models, as different models are better on different dimensions; Amazon Bedrock, AWS’s fully-managed service for companies to leverage frontier models to build generative AI apps, makes it easy to use multiple models for inference; Amazon Bedrock now has a multi-billion dollar annualised revenue run rate, and customer spend was up 60% sequentially in 2025 Q4

Customers are realizing as they get further into AI that they need choice as different models are better on different dimensions. In fact, most sophisticated AI applications leverage multiple models, whether customers want frontier models like Anthropic’s Claude or open models like Mistral or Llama, Frontier Intelligence with lower cost and latency like Amazon Nova or video and audio models like TwelveLabs or Nova Sonic. Amazon Bedrock makes it easy to use these models to run inference securely, scalably and performantly. Bedrock is now a multibillion-dollar annualized run rate business and customer spend grew 60% quarter-over-quarter.

Amazon’s management sees that a lot of work is needed to post-train and fine tune an AI model before it can be used in an application; AWS’s SageMaker AI service, makes it easy for users to post-train and fine tune AI models

Customers sometimes think if they have a good model, they will have a good AI application. It’s not really true. It takes a lot of work to post-train and fine-tune a model for your application. Our SageMaker AI service, along with fine-tuning tools in Bedrock make this much easier for customers.

Enterprises using AI models are currently infusing their proprietary data into the models late in the process through fine tuning or post-training; Amazon’s management believes enterprises will want AI models to train on their proprietary data earlier in the process, through pre-training; AWS’s NovaForge service allows enterprises to mix their own proprietary data into the pre-training phase of Amazon’s 1st-party frontier Nova models; NovaForge is a first-of-its-kind service

To date, companies have tried to shape models with their own data late in the process, usually with fine-tuning or post-training. There’s a debate in the industry about this, but we believe that enterprises will want models trained on their own data at an early stage of pretraining if possible, so their models have the best possible foundation for what matters most to each enterprise on which to learn and evolve. It’s a little like teaching a child a foreign language early in their life. That becomes part of their learning foundation moving forward, and it makes it easier to pick up other languages later in their life. To solve for this need, we just launched Nova Forge, which gives customers early checkpoints on our Amazon Nova models, allows them to securely mix their own proprietary data with the model’s data in the pretraining stage and enables their own uniquely customized versions of Nova, what we call Novellas, trained with their data early in the process. This will be very useful for companies as they build their own agents on top of the model. There is nothing else out there like this today, and it’s a potential game changer for companies.

Amazon’s management is seeing customers want better price performance from AI chips; Amazon has landed over 1.4 million units of its 1st-party AI chip, Trainium 2; Trainium 2 has 30%-40% better price performance than comparable GPUs (likely referring to NVIDIA’s GPUs); Trainium 2 is currently at a multibillion-dollar annualized revenue run rate; 100,000-plus companies are already using Trainium 2 as it is the majority of Bedrock’s usage; management recently launched Trainium 3, which is 40% more price performant than Trainium 2; management expects nearly all of Amazon’s supply of Trainium 3 to be committed by mid-2026; management is already seeing very strong interest for Trainium 4, which is under development; AI start-up Anthropic is training its next Claude model with Trainium 2 through AWS’s Project Rainier; Project Rainier started with 500,000 chips and is continuing to increase; Trainium 2 is currently fully subscribed; Trainium 4 is expected to launch in 2027; customers are already asking about Trainium 5; Anthropic is pleased with Project Rainier

Customers are starving for better price performance. And typically, and understandably, the dominant early leaders aren’t in a hurry to make that happen. They have other priorities. It’s why we’ve built our own custom silicon and Trainium, and it’s really taken off. We’ve landed over 1.4 million Trainium2 chips, our fastest ramping chip launch ever. Trainium2 is 30% to 40% more price performant than comparable GPUs and is a multibillion-dollar annualized revenue run rate business with 100,000-plus companies using it, as Trainium is the majority underpinning of Bedrock usage today. We recently launched Trainium3, which is up to 40% more price performant than Trainium2. We’re seeing very strong demand for Trainium3 and expect nearly all of our Trainium3 supply of chips to be committed by mid-2026. And though we’re still building Trainium4, we’re seeing very strong interest already…

…You mentioned Project Rainier, Anthropic is building their next — they’re training their next Claude model on top of Trainium2. And that’s what Project Rainier is. So we talked about 500,000 chips there. You’ll see that continuing to increase. They’re also using a fair bit of Trainium2 for other workloads and their own APIs beyond just Project Rainier. But Trainium is a multibillion dollar annualized run rate business at this point, and it’s fully subscribed…

…There’s very substantial interest in Trainium4, which is coming in 2027. And we’re already having conversations about Trainium5…

…The Project Rainier has gone very well. I think Anthropic is quite pleased with it.

Amazon’s management thinks the primary way companies will derive value from AI will be agents; management thinks companies will use both their own agents and those built by others; management thinks it’s difficult to build agents, so they have launched Strands, which helps users build agents from any AI model; AI agents require a secure and scalable way to connect with multiple elements of a company’s tech stack, and management thinks this is a hard problem to solve; management has launched Bedrock AgentCore to help companies connect agents to the elements; customers are excited about Bedrock AgentCore; Amazon has built multiple agents for customers to use, including Kiro for coding, Amazon Quick for analytics, AWS Transform for software migration, and more; the number of developers using Kiro grew 150% sequentially in 2025 Q4; management is seeing customers get excited about fully autonomous agents and have launched such agents, such as Kiro for coding, AWS DevOps for operational problem solving, and AWS Security Agents for application security

The primary way companies will get value from AI is with agents, some their own, some from others, and there are several customer challenges that we’re well positioned to solve. It’s harder to build agents than it should be. For that, we’ve built Strands, a service enabling agents to be created from any model. Once agents are built, enterprises are apprehensive about deploying to production because these agents need to securely and scalably connect to compute, data, tools, memory, identity, policy governance, performance monitoring and other elements. This is a new and hard problem where a solution has not existed until we launched Bedrock AgentCore. Customers are quite excited about AgentCore, and it’s unlocking deployments.

Customers also want to leverage others’ useful agents, and we’ve built several, including Kiro for coding, Amazon Quick for knowledge workers to leverage their own data and analytics, AWS Transform for software migration and Amazon Connect for call center operations. We continue adding new capabilities and usage continues to grow quickly. For example, the number of developers using Kiro grew more than 150% quarter-over-quarter. 

In addition to agents that customers direct, customers are also becoming excited about agents that require less human interaction. They can be fully autonomous, run persistently for hours or days, scale out quickly and remember context. At this past AWS re:Invent, we launched Frontier Agents to do that. Kiro autonomous agents for coding tasks, AWS DevOps agents for detecting and resolving operational issues and AWS Security Agents for proactively securing applications throughout the development life cycle, and they’re already making a big difference for customers.

Rufus, Amazon’s AI shopping assistant, can now research products, track prices and auto buy; Rufus can now shop tens of millions of items in other online stores and make purchases for customers; more than 300 million customers used Rufus in 2025; management thinks agentic commerce will be a great experience for consumers; customers who use Rufus are 60% more likely to complete a purchase; management thinks Amazon will eventually have relationships with 3rd-party agents that have commerce capabilities, but the commerce capabilities need to be a lot better than what’s available now; management thinks that consumers will prefer a commerce agent from a retailer they are familiar with, over a horizontal agent that also has commerce capabilities, and this is why management is optimistic about Rufus

Our Agentic AI shopping assistant, Rufus, has rapidly expanded. Rufus can research products, track prices and auto buy, purchasing a product in our store when it reaches your set price. It can also now shop tens of millions of items in other online stores and make purchases for customers using our Agentic Buy for Me feature. Last year, more than 300 million customers used Rufus…

…I’m very optimistic about the customer experience that will ultimately be what customers use for Agentic shopping. And I think it’s good for customers. I think it’s going to make it easier for them…

…Customers who use Rufus are about 60% more likely to complete a purchase…

…We will have relationships with third-party horizontal agents that can enable shopping as well. We have to collectively figure out a better customer experience. It’s still — these horizontal agents don’t have any of your shopping history. They get a lot of the product details wrong, they get a lot of the pricing wrong. And so we have to try to find a customer experience together that’s better and a value exchange that makes sense for both parties. But I’m very hopeful that we’ll get there over time…

…I think you’re going to have to look at as time goes on, which types of — which shopping agents are consumers going to use. And it kind of reminds me in some ways of the early days of kind of all the search engines that were referring traffic to retailers. And it’s still a relatively small portion of the overall traffic and sales. But of that fraction, you have to ask how many consumers are going to prefer using a horizontal agent where it’s kind of a middle person between the retailer and the consumer versus wanting to use a great agent from that retailer that has all its shopping history and that has all the data right there and makes it easy if you’re just spearfishing for something to shop for it right there or if you want to do discovery, you can do it there, and it’s got the best data on shopping. I think a lot of customers are ultimately going to choose to use a great shopping agent from that retailer. Because if you think about what consumers really want in retail in a retailer, they want really broad selection. They want low prices. They want really fast delivery. And then they want a retailer that they can trust and that takes care of them. And I think horizontal agents are pretty good at aggregating selection, but retailers are much better at doing all 4 of those items. And so I’m very optimistic that people will use our shopping agent.

The usage of AI has helped Amazon deliver highly relevant and useful advertisements for customers; Prime Video ads continued to grow and had meaningful contribution to Amazon’s advertising revenue growth; Prime Video had an average ad-supported audience of 315 million in 2025, up from 200 million in early-2024; management recently launched Ads Agent which helps brands to create and optimize campaigns at scale and target effectively; management recently launched Creative Agent, which creates full funnel ad campaigns for advertisers through a conversational interface, shortening campaign creation from a week to hours

Sponsored products advertising in our store continues to be our largest ads offering and the combination of trillions of shopping, browsing and streaming signals with advanced AI and machine learning led us to deliver highly relevant and useful ads for customers…

…We recently announced our Ads Agent, which lets brands use AI to create and optimize campaigns at scale, implement effective campaign targeting and quickly create actionable insights. And our Creative Agent lets advertisers research, brainstorm and generate full funnel ad campaigns from concept to completion using conversational guidance and Amazon’s retail data, transforming what was a week-long process into just hours.

Amazon’s management expects 2026 capex to be $200 billion (was $128 billion in 2025, and $83 billion in 2024); most of the capex will be for Amazon’s AI needs; management is seeing really high demand for AWS’s core and AI workloads; AWS is monetising compute capacity the moment it is installed; management has deep experience producing high return on invested capital (ROIC) with AWS capex, and they are confident the AI capex will also generate high ROIC; for 2026 Q1, revenue growth is expected to be 11%-15% and operating income growth is expected to be between -10% and 17%; one way AWS’s ROIC for AI capex is already showing up is in the expansion of its operating margin; the vast majority of AWS’s capex has been for compute capacity that is consumed by external customers; AWS, as well as the other cloud providers, could actually grow faster if they had more supply of AI compute

We expect to invest about $200 billion in capital expenditures across Amazon, but predominantly in AWS because we have very high demand, customers really want AWS for core and AI workloads, and we’re monetizing capacity as fast as we can install it. We have deep experience understanding demand signals in the AWS business and then turning that capacity into strong return on invested capital. We’re confident this will be the case here as well…

…Q1 net sales are expected to be between $173.5 billion and $178.5 billion. This guidance anticipates a favorable impact of approximately 180 basis points from foreign exchange rates. As a reminder, global currencies can fluctuate during the quarter. Q1 operating income is expected to be between $16.5 billion and $21.5 billion…

…On the investments we’re making, as Andy said earlier, we are putting into service with customers all capacity that we’re getting and it’s immediately useful. And we’re also seeing a long arc of additional revenue that we see from other customers and backlog and commitments that people are anxious to make with us, especially for AI services. So you can see that’s working its way into our P&L, both through CapEx and also through our operating margin in AWS. AWS is 35% operating margin through Q4, up 40 basis points year-over-year…

…The vast majority of our — the capital that we spend and the capacity that we have is consumed by external customers. We have — Amazon has always been a very large AWS customer, a very helpful AWS customer because they’re very demanding, and they use the services very expansively and stretch the limits as we launch things. So they’ve always been a very important big customer, but always a very small fraction of the total, and that’s true today in AI as well as the overall AWS business…

…So we’re growing at really an unprecedented rate yet, I think every provider would tell you, including us that we could actually grow faster if we had all the supply that we could take. And so we are being incredibly scrappy around that.

Amazon’s management thinks that inference will be the majority of AI workloads in the long run

I think one of the things that you will see over time in the AI space is that inference is going to be the majority of the long-term AI workloads. You’re going to see the inference keep getting optimized.

It appears that Amazon’s management is willing to bring free cash flow to negative to aggressively invest in AI, as they see it as an unusually large opportunity

[Question] Are there any financial guardrails or governors in place that we should think about around the spend just in terms of operating income growth or positive free cash flow?

[Answer] I think this is an extraordinarily unusual opportunity to forever change the size of AWS and Amazon as a whole. I think it also is an extraordinary opportunity for companies to change all their customer experiences and for start-ups to be able to build brand-new experiences and businesses that would have taken much longer to try to accomplish before that they can do right now. And so we see this as an unusual opportunity, and we are going to invest aggressively here to be the leaders because like we’ve been in the last number of years and like I think we will be moving forward.

Market demand for AI compute currently looks like a barbell to Amazon’s management, with AI labs on one end spending a lot on compute for just a handful of applications, and enterprises on the other end using AI for productivity purposes; the middle of the barbell is production AI workloads from enterprises that are under evaluation; management thinks the middle part of the barbell will be the largest and most durable aspect of market demand for AI compute; it has yet to materialise, but management thinks it’s only a matter of time

The way I would describe what we see right now in the AI space is it’s really kind of a barbelled market demand where on one end, you have the AI labs who are spending gobs and gobs of compute right now, along with what I would consider a couple of runaway applications. And then at the other side of the barbell, you’ve got a lot of enterprises who are getting value out of AI in doing productivity and cost avoidance types of workloads. These are things like customer service or business process automation or some of the fraud pieces. And then in that middle of the barbell are all the enterprise production workloads. And I would say that the enterprises are in various stages at this point of evaluating how to move those, working on moving those and then putting them into production. But I think that middle part of the barbell very well may end up being the largest and the most durable. And I would put in the middle of that barbell, too, by the way, I would put just the altogether brand-new businesses and applications that companies build that right from the get-go run in production on top of AI…

…When I look at this and what’s happening, it’s kind of unbelievable if you look at the demand of what you’re seeing already with AI, but the lion’s share of that demand is still yet to come in the middle of that barbell. And that will come over time. It will come as you have more and more companies with AI talent as more and more people get educated with the AI background, as inference continues to get less expensive, and that’s a big piece of what we’re trying to do with Trainium and our hardware strategy. And as companies start to have success in moving those workloads to — further and further success in moving those workloads to run on top of AI.

Almost every conversation Amazon’s management is having with companies regarding AWS starts with AI; management thinks that the AI movement will eventually involve many, many more companies than what’s seen today

There’s a number of AI labs, but almost every company you talk to, almost every conversation we have on the AWS side, starts with AI…

…This AI movement is not going to be a couple of companies. It’s going to be thousands of companies over time.

Amazon has over 1,000 AI applications internally that are in production or being developed, and these applications are used in all areas of Amazon’s business

Internally, we have all sorts of ways that we are using AI. We have over 1,000 AI applications that we’ve either deployed or are in the process of building, and they range from our shopping assistant in Rufus that we were just talking about, to Alexa+, which is a really large-scale generative AI application, to applications in our fulfillment network that allow us to have more accurate forecasting predictions, to how we do customer service and our customer service chatbot, to how we are making it much easier for brands to create advertisements and to optimize all their campaigns across the full funnel of advertising options we have, to live sports — if you watch Thursday Night Football, you can see defensive alerts, which predict which player is going to blitz, or pocket health.

AWS added 3.9 GW of compute capacity in 2025, and that is more than any other company in the world; the 3.9 GW of compute capacity AWS added in 2025 is twice what AWS had in 2022; management expects AWS’s compute capacity to double by 2027; AWS added 1.2 GW of compute capacity in 2025 Q4

In 2025, AWS added more data center capacity than any other company in the world…

…If you look in the last 12 months, we added 3.9 gigawatts of power. Just for perspective, that’s twice what we had in 2022 when we were an $80 billion annual run rate business. We expect to double it again by the end of ’27. We added 1.2 gigawatts of power in Q4 alone.

Apple (NASDAQ: AAPL)

The consumer response to AirPods Pro 3 has been amazing; AirPods Pro 3 has a live translation feature; management has been hearing powerful stories of people using live translation to communicate seamlessly across languages

The response to AirPods Pro 3 has been amazing. Customers are raving about the rich immersive sound quality, the unmatched level of active noise cancellation and the noticeably improved comfort that makes them effortless to wear. Features like live translation are also changing the way people can communicate by helping users connect across languages in real time and making everyday conversations feel more natural and accessible…

…And as I touched on earlier, we are hearing powerful stories of people using live translation to communicate seamlessly across languages.

The majority of enabled-iPhone users were using Apple Intelligence in 2025 Q4 (FY2026 Q1); management has introduced dozens of features in Apple Intelligence since launch; Apple Intelligence now supports 15 languages; one of Apple Intelligence’s most popular features is Visual Intelligence; management thinks Apple’s products are the best platforms in the world for AI because of Apple silicon; Apple is collaborating with Google to build the next generation of Apple’s foundation models that will power Apple Intelligence; management determined that Google’s AI technology was the most capable for building Apple foundation models; even with the collaboration with Google, Apple Intelligence will continue to run on-device and in Private Cloud Compute; management sees both on-device and cloud inference as important; a growing percentage of Apple’s overall iPhone installed base is AI-capable

During the quarter, we were excited to see that the majority of users on enabled iPhones are actively leveraging the power of Apple Intelligence.

Since the launch of Apple Intelligence, we’ve introduced dozens of features, including writing tools and cleanup and made it available in 15 languages. These AI experiences are personal, private, integrated across our platforms and relevant to what our users do every day. We are bringing intelligence to more of what people already love about our products so we can make every experience even more capable and effortless. One of our most popular features is Visual Intelligence which helps users learn and do more than ever with the content on their iPhone screen, making it faster to search, take action and answer questions across their apps. And as I touched on earlier, we are hearing powerful stories of people using live translation to communicate seamlessly across languages.

And these are just some of the many powerful AI features that are enabling our users to do remarkable things with our products, which are far and away the best platforms in the world for AI. That’s in no small part because of the extraordinary power and performance of Apple silicon. 

Building on our efforts in the AI space, we are also collaborating with Google to develop the next generation of Apple foundation models. This will help power future Apple Intelligence features, including a more personalized Siri coming this year. We’re incredibly excited for what’s to come with so many new experiences to unlock…

…We basically determined that Google’s AI technology would provide the most capable foundation for AFM — I’m sorry, Apple foundation models. And we believe that we can unlock a lot of experiences and innovate in a key way due to the collaboration. We’ll continue to run on the device and run in Private Cloud Compute and maintain our industry-leading privacy standards in doing so…

…[Question] When you think about how Apple might manage AI, do you see that evolving towards more edge AI or on device services versus cloud-based AI?

[Answer] We see both being important, the on-device and the private cloud compute. And so we don’t see it as an either/or; we see it as both…

…[Question] Can you speak at all to roughly what portion of your iPhone or overall active device installed base is now AI capable?

[Answer] We don’t provide that specific number, but it is a growing number, as you can imagine in our installed base.

Apple’s management will continue with Apple’s hybrid approach when it comes to capital expenditure for AI data centers (of using its own data centers as well as those of 3rd-parties)

Just speaking of CapEx, in general, as you know, we have a hybrid model for CapEx. And so I think that what happens is our CapEx can be volatile, independent of kind of the volume and the performance of our business.

The use of Apple’s own chips in its products provides both strategic as well as direct value, and has impacted the company’s gross margin in a positive way

As far as impact on gross margin, we have been, as you know, investing in core technologies like our own silicon, our own modem. And certainly, while those do provide opportunities for cost savings and can be reflected in margins, they also importantly provide the differentiation that’s really important for our products as well and give us more control of our road map. So I think there’s a lot of strategic value to it, but also we are seeing investments in our core technologies impacting gross margin in a positive way.

ASML (NASDAQ: ASML)

ASML’s management has seen the company’s customers become more positive in their medium-term outlooks, driven by demand for AI; ASML’s customers, for both Logic and Memory (DRAM) chips, are building capacity, and this has translated into orders for ASML’s EUV systems; ASML’s Logic customers are becoming more comfortable about the long-term sustainability of AI demand; ASML’s Memory (DRAM) customers are ramping up capacity for their advanced nodes, and these nodes require more EUV layers; management sees a strong belief in ASML’s customers that AI demand is real, and these customers are adding major capacity, starting in 2026

If you listen to our customers, both what they say publicly, but also what they told us, it’s pretty clear that customers over the past couple of months have actually become more positive in their assessment of the medium-term market perspectives as they see it. I think it’s primarily on the basis of the more robust view that they have when it comes to demand for AI, which seems to be more sustainable from their vantage point. That recognition has led some of our customers to really invest in capacity and gear up their plans for medium-term capacity expansion…

…The market outlook has notably improved in the last few months. This is especially true when it comes to the build-up of capacity for AI applications, be it data centers or other infrastructure. Now, we start to see that this build-up is also translating into a need for capacity at our advanced customers. This is true for Logic. This is true for DRAM. This starts to translate also into orders for our most advanced technology, especially EUV. So in the last few months we have seen our DRAM customers and our Logic customers starting to accelerate their capacity planning and having these discussions with us. 

If I look at Logic first, so there we see our customers starting to be more comfortable about the sustainability of the long-term AI demand. This means that they are more willing to accelerate their capacity-planning. They are transitioning also from 4nm technology to 3nm technology, which is going to be more demanding in terms of advanced technology. Finally, of course, the ramp of 2nm is going on and I would say is accelerating in order to fulfill the future need of mobile and HPC applications.

When I look at DRAM, there also the demand is very strong for HBM, of course, but also for DDR. This most probably will lead to a very tight supply, at least in 2026 and most probably beyond that. So we see our customers ramping 1b, 1c nodes, which are going to be critical for that demand. And on those nodes, we see them increasing basically the number of EUV layers. We have talked about that in the past. We see that happening very strongly right now…

…I think there is a strong belief that the AI demand is real and a preparation for that, with, in the short term, a major addition of capacity. This will start in 2026 and will last beyond that.

ASML’s management is seeing multi-beam inspection becoming more critical as the use of 3D structures in advanced Logic and Memory chips increase; management expects ASML’s E-beam inspection system to gain more traction in 2026

On E-beam inspection, multi-beam is becoming more and more critical. 2025 was also a good year for this product, allowing us to mature the technology and demonstrate initial value with our customers. We also expect that product to have more traction in 2026…

…With the continuing increase of 3D structures in advanced Logic and Memory, we see more adoption of our multi e-beam inspection system to detect optically non-visible yield-limiting defects.  

ASML’s management continues to see strong growth for the semiconductor market, especially for advanced chips, in the long-term, driven by AI; management sees higher lithography intensity in ASML’s customers’ manufacturing processes; ASML’s management sees AI driving much faster growth in demand for advanced memory and advanced logic chips compared to non-AI memory and non-AI logic chips; AI demand is driving demand for not just more transistors per chip, but more wafers as well

One of the key points we made at our Capital Markets Day, November 2024, was that AI applications will require more advanced technology in DRAM and Logic and will drive basically some of our most advanced products. I think that this is being confirmed as we speak. The last few months have pointed basically exactly to that dynamic. We also see that the progress we continue to make on our cost of technology with EUV is driving for more litho-intensity. And that’s, again, something that has been confirmed in the last few months…

…You see the historical growth of memory and logic, which is about 6%, 7% year-on-year…

…What you see with AI is that when we look at advanced logic, when we look at advanced memory, the growth in those segments is going to be more than 20% year-on-year for the foreseeable future. And this is really what is going to drive basically more demand for lithography. Why is that? So we’ve talked in the past a lot about Moore’s Law, of course. And Moore’s Law is the law that says that every couple of years, we need to double the number of transistors per chip. And that law has been true for many, many years for PC and mobile applications. Now when you look at AI, and this started to happen in 2010, the curve is far more aggressive. When you look at the most advanced AI products today, NVIDIA products, for example, the request is not to grow 2x every 2 years, but in the last few years to grow 16x every 2 years. So you see a major acceleration basically of the need for silicon. And of course, we provide that in 2 different ways. We provide that with scaling: by making transistors smaller, we can put more transistors per chip. And this has been a good way basically to provide more transistors and follow Moore’s Law for many, many years, but that’s not enough anymore. And if you cannot put enough transistors per unit of area per chip, then the only option will be to make more wafers. And that’s a bit what we see happening with AI…

…I’ll pick one example, and I picked it from NVIDIA because all of you are, of course, very much aware of what’s happening there. Today, on the Blackwell system, you need about 2.5 wafers to create the product. If you look at 2027, on the Rubin product, this number will go up to 10 wafers. So to provide the same product to their customer, NVIDIA will need 4x more wafers than today.
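The arithmetic in the two quotes above is worth making explicit. The short sketch below uses only figures from the transcript (the `growth` helper is our own): a 16x-every-2-years demand curve quickly outruns Moore's Law scaling, so the remainder of the silicon demand has to be absorbed by making more wafers.

```python
# Sketch of the growth rates described in the quote (figures from the transcript).

def growth(factor_per_2yr: float, years: int) -> float:
    """Total growth multiple over `years`, compounding every 2 years."""
    return factor_per_2yr ** (years / 2)

# Classic Moore's Law: 2x transistors per chip every 2 years.
moore_6yr = growth(2, 6)     # 8x over 6 years
# AI silicon demand per the quote: ~16x every 2 years.
ai_6yr = growth(16, 6)       # 4,096x over 6 years

# When transistors-per-chip can't close that gap, wafer count must.
# The NVIDIA example from the transcript:
blackwell_wafers = 2.5
next_gen_wafers = 10                            # the 2027 product
wafer_multiple = next_gen_wafers / blackwell_wafers  # 4x more wafers

print(moore_6yr, ai_6yr, wafer_multiple)
```

The gap between an 8x and a 4,096x six-year curve is what the transcript means by "scaling is not enough anymore": the difference shows up as demand for additional wafer capacity, which is what drives lithography orders.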

ASML’s management thinks AI will have a big effect on overall GDP (gross domestic product)

The effect AI can have on the overall GDP is pretty big. In fact, if you look at the U.S., even in 2025, AI was accounting for a very large part of the growth, and we expect that basically to be applied to the entire worldwide GDP.

ASML’s management sees AI demand driving demand for even ASML’s more mature DUV (deep ultra-violet) lithography systems; management continues to drive the roadmap for ASML’s DUV lithography systems

What’s interesting with AI is that it basically touches on all products. Of course, AI is going to require very advanced chips, and this is going to drive EUV, for example. So this year will be a big year for EUV; Roger will talk about that. It’s going to drive advanced inspection tools. But at the same time, AI needs a lot of data generation, a lot of sensors, and these will still be created by the use of more mature technology such as DUV. So AI will also have this effect of really driving our entire product portfolio in the coming years…

…We continue to drive the road map both on Immersion, where we have launched our 2150, which basically gives us sub-nanometer accuracy and more than 300 wafers per hour. Productivity is important; productivity, of course, is a way to get capacity. So we continue to drive that on Immersion. I think the example of the NXT:870B, which is a KrF system, is even more spectacular, because there we have been able to achieve more than 400 wafers per hour. And that tool today is creating a lot of interest among our customers because productivity, again, is capacity.

ASML is making good progress in its partnership with AI startup Mistral (reminder that ASML had invested in Mistral in 2025)

Back at the end of the summer, we announced our collaboration with, and also our investment in, Mistral. The rationale there was to get AI into ASML and to get the very best people, the very best competence into ASML, in order to first strengthen our core competencies (read: putting AI in our products), support the connected market, offer some of those capabilities to our customers, and also create new opportunities moving forward. That’s a project we are going to talk more about in ’26 and ’27. We are making great progress with Mistral, our partner.

ASML’s management thinks memory is more likely to be the bottleneck for AI today

It’s difficult to say if logic or DRAM is the bottleneck for AI today. I would still pick mostly memory at this point in time. And the reason for that is that when it comes to memory, the demand for high bandwidth memory, which is the AI memory, is extremely high. But the demand for DDR memory, which is for mobile and PC, is also very high. And as a result, we have seen basically the price of DRAM going up significantly in the last few weeks. Therefore, there’s a need for capacity. And our memory customers are moving very aggressively.

ASML’s management is already planning for hyper-NA EUV systems (these are systems that are even more advanced than high-NA EUV), but there’s a lot of flexibility when it comes to the timeline for introducing hyper-NA

We talk about Low NA, we talk about High NA, and I think we talked about Hyper NA because we see that in the future, there may be a need for an even more advanced litho system. And we could end up in a world, I’m talking 10 years from now, where customers use basically each one of those 3 systems. Now this being said, when you look 10 years ahead, it’s very difficult to know exactly when this will happen. And in order to not have to answer that question today, what we did is develop a program, which we call the high productivity platform. Roger mentioned it as one of the key programs in EUV, and that program basically consists of defining an EUV platform that will come to the market early next decade, and that will be able to support Low NA, with this major productivity improvement, we are looking at more than 400 wafers per hour; High NA, also with major improvements; and potentially Hyper NA. So we’re designing a platform basically that will be able to receive, ultimately, Low NA optics, High NA optics, or Hyper NA optics. This gives us basically the full flexibility over time to decide exactly when and how we should introduce Hyper NA.

Mastercard (NYSE: MA)

Mastercard’s management sees agentic commerce as being in the early days; Mastercard launched Mastercard Agent Pay in 2025 and has now enabled US card issuers to participate; management will enable Mastercard’s global issuer base to work with Agent Pay by the end of 2026 Q1; management is working with the entire agentic commerce ecosystem across all regions; although agentic commerce is still early, management thinks it will come fast; management thinks agentic commerce can affect tokenisation of payments very positively

For us, Agentic Commerce represents another avenue to enable payment choice with the same trust that we always deliver. It’s early days, but we are ready. You remember, last year, we launched Mastercard Agent Pay, a framework designed to foster trust in Agentic transactions. We have now enabled our U.S. issuers to participate in Agent Pay, and we are working to enable our global issuer base by the end of the first quarter…

…We’re actively working with ecosystem participants to adopt Agentic Commerce across all regions…

…In Asia, we’re partnering with Anthem on card-based tokenized payment solutions for Agentic payments. In the U.K., we’re consulting clients such as Lloyds Banking Group, Elavon and Santander on Agentic Commerce innovations. And in the UAE, we’re piloting Agentic payments with the leading retail and entertainment group, Majid Al Futtaim. And with banks, merchants and digital players, we continue to position them for success in this new era of commerce, whether it be through consulting, security, data-driven insights or new loyalty programs, we are there…

…What an exciting space, and this might be one of those AI-driven use cases that meets reality much faster than other AI use cases out there. So I think Agentic Commerce is going to come fast. So this whole idea of a consumer using an agent to have a better commerce journey, I think that just resonates with people. You get better quality insights, you get better recommendations…

…You could also see the application of services, for example tokenization, take a very different path than it might without Agentic Commerce. So these are all aspects that make me very excited about this.

Meta Platforms (NASDAQ: META)

Meta rebuilt the foundations of its AI program in 2025 and will soon start shipping new AI models and products; management expects Meta to steadily push the frontier over the course of the year as it ships new AI models; management expects to use AI models developed by Meta Superintelligence Labs (MSL) to build compelling AI products; management has already used MSL’s models to build AI dubbing of videos into local languages; Meta now supports 9 different languages and hundreds of millions of people are watching AI-translated videos daily; the translated videos are driving incremental time spent on Instagram; the AI dubbing tool will support more languages throughout 2026; management is pleased with the current progress of Meta Superintelligence Labs, but it’s a long-term effort

In ’25, we rebuilt the foundations of our AI program. Over the coming months, we’re going to start shipping our new models and products. I expect our first models will be good, but more importantly, we’ll show the rapid trajectory that we’re on. And then I expect us to steadily push the frontier over the course of the year as we continue to release new models…

…We expect to use the models developed by Meta Superintelligence Labs to deliver compelling and differentiated AI products. One area we’re already seeing promise is with AI dubbing of videos into local languages. We are now supporting 9 different languages with hundreds of millions of people watching AI translated videos every day. This is already driving incremental time spent on Instagram, and we plan to launch support for more languages over the course of this year…

…We’re about 6 months into building MSL. I’m very pleased with the quality of the team. I think we have the most talent-dense research effort in the industry and some of the early indicators look positive. But look, I think that this is going to — is a long-term effort, right? We’re not here to do this to ship like one model or one product. We’re doing a lot of models over time and a lot of different products.

Meta’s management’s vision with AI is to build personal super intelligence; management thinks what makes agents valuable is the context they can see, and Meta’s agents can provide a uniquely personal experience

Our vision is building personal super intelligence. We’re starting to see the promise of AI that understands our personal context, including our history, our interests, our content and our relationships. A lot of what makes agents valuable is the unique context that they can see. And we believe that Meta will be able to provide a uniquely personal experience.

Meta’s management is merging LLMs (large language models) with the AI recommendation systems of the company’s social media platforms and advertising system; management thinks the introduction of LLMs to the recommendation systems will significantly improve the performance of the already-powerful recommendation systems, and have a positive implication for commerce activity taking place on Meta’s platforms

We’re also working on merging LLMs with the recommendation systems that power Facebook, Instagram, Threads and our ad system. Our world-class recommendation systems are already driving meaningful growth across our apps and ads business, but we think that the current systems are primitive compared to what will be possible soon. Today, our systems help people stay in touch with friends, understand the world and find interesting and entertaining content. But soon, we’ll be able to understand people’s unique personal goals, and tailor feeds to show each person content that helps them improve their lives in the ways that they want. This also has implications for commerce. Our ads today help businesses find just the right very specific people who are interested in their products. New Agentic shopping tools will allow people to find just the right very specific set of products from the businesses in our catalog. We’re focused on making these experiences work across both our feeds and across business messaging, significantly increasing the capabilities of WhatsApp over time.

Meta’s management has simplified Instagram’s ranking architecture to enable more efficient model scaling and this has led to a 30% year-on-year increase in watch time on Instagram Reels in the USA in 2025 Q4; video time on Facebook grew double-digits year-on-year in 2025 Q4, and views of organic feed and video posts were up 7% as a result of optimisations in ranking; Meta is now surfacing 25% more reels published on the day compared to 2025 Q3; the prevalence of original content in the US on Instagram was up 10 percentage points in 2025 Q4, and 75% of recommendations are now from original posts; Threads saw a 20% lift in time-spent from improvements to its recommendation models; management sees a lot of opportunity for Meta to achieve additional gains in engagement on its apps through scaling the complexity and amount of training data in its models, and the introduction of LLMs to the recommendation systems; management is developing Meta’s next generation recommendation systems and the work includes building new model architectures from the ground up on top of LLMs; the improved engagement in 2025 Q4 across Meta’s platforms was from multiple optimisations

Instagram Reels had another strong quarter with watch time up more than 30% year-over-year in the U.S. Engagement is benefiting from several optimizations we made to improve the quality of recommendations including simplifying our ranking architecture to enable more efficient model scaling. This unlocks the ability for our systems to consider longer interaction histories to better identify a person’s interests.

On Facebook, video time continued to grow double digits year-over-year in the U.S., and we’re seeing strong results from our ranking and product efforts on both feed and video surfaces. The optimizations we made in Q4 drove a 7% lift in views of organic feed and video posts on Facebook, resulting in the largest quarterly revenue impact from Facebook product launches in the past two years…

…On Facebook, our systems are surfacing over 25% more reels published that day than the prior quarter. On Instagram, we grew the prevalence of original content in the U.S. by 10 percentage points in Q4 with 75% of recommendations now coming from original posts. 

Threads is also seeing strong momentum again, benefiting from recommendation improvements. The optimizations we made in Q4 drove a 20% lift in threads time spent.

We see a lot of opportunity to drive additional gains. This includes scaling the complexity and amount of training data we use in our models while continuing to make our systems more responsive to people’s real-time interest. We’re also focused on incorporating LLMs to understand content more deeply across our platform, which will enable more personalized recommendations.

Another big area of investment this year is developing the next generation of our recommendation systems. We have several big bets on this front, including building new model architectures from the ground up that will work on top of LLMs, leveraging the world knowledge and reasoning capabilities of an LLM to better infer people’s interests…

…We launched several ranking improvements in Q4 on Facebook and Instagram that drove incremental engagement. And there isn’t really one single launch that is driving most of the gains. It’s multiple optimizations to our recommendation systems that are helping us make more accurate predictions about what will be interesting to each person…

…We’re going to continue to make recommendations even more adaptive to what a person is engaging with during their session. So the recommendations we surface are more relevant to what they’re interested in at that moment.
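The session-adaptive recommendation idea described above can be illustrated with a minimal re-ranking sketch. This is purely illustrative and not Meta's actual system; the embeddings, the blend weight, and the `rerank` helper are all hypothetical. The idea: each candidate carries a base relevance score, and candidates similar to what the user engaged with in the current session get boosted.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rerank(candidates, session_vectors, base_weight=0.5):
    """Re-rank candidates by blending a base score with session similarity.

    candidates: list of (item_id, base_score, embedding)
    session_vectors: embeddings of items engaged with this session
    """
    def score(item):
        _, base, emb = item
        if not session_vectors:
            return base  # no session signal yet: fall back to base ranking
        session_sim = max(cosine(emb, v) for v in session_vectors)
        return base_weight * base + (1 - base_weight) * session_sim
    ranked = sorted(candidates, key=score, reverse=True)
    return [item_id for item_id, _, _ in ranked]

candidates = [
    ("cooking_reel", 0.6, [1.0, 0.0]),
    ("sports_reel", 0.9, [0.0, 1.0]),
]
# The user just watched two cooking videos in this session:
session = [[0.9, 0.1], [1.0, 0.0]]
print(rerank(candidates, session))  # cooking rises above the higher base score
```

With an empty session the higher-base-score sports reel ranks first; after two cooking interactions, the cooking reel overtakes it. Production systems do this with learned models over long interaction histories rather than a fixed blend, but the "more relevant to what they're interested in at that moment" effect is the same.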

Meta’s management thinks AI will enable a surge in new, immersive and interactive media formats, leading to more interactive feeds in the company’s social media platforms

Soon, we’ll see an explosion of new media formats that are more immersive and interactive and only possible because of advances in AI. Our feeds will become more interactive overall. Today, our apps feel like algorithms that recommend content. Soon, you’ll open our apps, and you’ll have an AI that understands you and also happens to be able to show you great content or even generate great personalized content for you…

…Video will continue to be here for a long time. It’s going to continue growing; it’s not going anywhere, just like photos and text, which in many ways continue to grow even as the market grows beyond them. But I don’t think that video is the ultimate, final format. I think we’re going to get more formats that are more interactive and immersive, and you’re going to get them in your feeds. So you can imagine this, I mean there’s obviously a lot of details to fill in on this, but you can imagine people being able to, easily through a prompt, create a world or create a game and be able to share that with people who they care about, and you see it in your feed and you can jump right into it and you can engage in it. And there are 3D versions of that, and there are 2D versions of that, and Horizon, I think, fits very well with the kind of immersive 3D version of that.

Sales of Meta’s AI glasses tripled in 2025; the glasses are some of the fastest-growing consumer electronics products in history; management thinks most people who wear glasses today will switch to AI glasses in the future; management is directing most of Meta’s investments in the Reality Labs segment to AI glasses and wearables; management thinks Reality Labs’ losses in 2026 will be similar to 2025, and will be the peak going forward

Sales of our glasses more than tripled last year, and we think that they’re some of the fastest-growing consumer electronics in history. Billions of people wear glasses or contacts for vision correction, and I think that we’re in a moment similar to when smartphones arrived, and it was clearly only a matter of time until all those flip phones became smartphones. It’s hard to imagine a world in several years where most glasses that people wear aren’t AI glasses.

For Reality Labs, we are directing most of our investment towards glasses and wearables going forward while focusing on making Horizon a massive success on mobile and making VR a profitable ecosystem over the coming years. I expect Reality Labs losses this year to be similar to last year, and this will likely be the peak as we start to gradually reduce our losses going forward while continuing to execute on our vision.

Meta’s management wants Meta to continue investing significantly in AI infrastructure, and has established the Meta Compute division to deliver the infrastructure for the company; management sees long-term investments in silicon and energy as an important part of Meta Compute’s work; management continues to build AI infrastructure for Meta that is flexible; management expects the cost per gigawatt of Meta’s AI infrastructure to decrease significantly over time; management is deploying a variety of chips in Meta’s AI infrastructure; Meta’s ads retrieval engine, Andromeda, can now run on chips from NVIDIA, AMD, and Meta (MTIA); MTIA is currently running inference workloads on Meta’s core ranking and recommendation models, but management will extend MTIA in 2026 Q1 to also cover training workloads; management expects Meta to have sufficient cash flow to fund its infrastructure investments in 2026, but they are also looking for external financing that may lead to net-debt on the balance sheet; with Meta’s current and planned compute capacity, management is exploring businesses beyond ads; management continues to have a robust ROI-driven process when planning investments for its AI models

We will continue to invest very significantly in infrastructure to train leading models and deliver personal super intelligence to billions of people and businesses around the world. I recently announced Meta Compute with the belief that being the most efficient at how we engineer, invest and partner to build our infrastructure will become a strategic advantage. Dina Powell McCormick also joined us as President and Vice Chairman, and she will lead our efforts to partner with governments, sovereigns and strategic capital partners to expand our long-term capacity, including ensuring positive economic impact in the communities that we operate in around the world. An important part of Meta Compute will be making long-term investments in silicon and energy. We will continue working with key partners while advancing our own silicon program. We’re architecting our systems so that we can be flexible in the systems that we use, and we expect the cost per gigawatt to decrease significantly over time through optimizing both our technology and supply chain…

…We’re working to meet our silicon needs by deploying a variety of chips that optimally support each of our different workloads. To that end, in Q4, we extended our Andromeda ads retrieval engine, so it can now run on NVIDIA, AMD and MTIA…

…In Q1, we will extend our MTIA program to support our core ranking and recommendation training workloads in addition to the inference workloads it currently runs…

…As we invest in infrastructure to meet our business needs, we continue to prioritize maintaining long-term flexibility so we can adapt to how the market develops. We’re doing so in several ways, including changing how we develop data center sites, establishing strategic partnerships, contracting cloud capacity and establishing new ownership structures for some of our large data center sites.

We have a strong net cash balance and expect our business will continue to generate sufficient cash to fund our infrastructure investments in 2026, which is reflected in our expectations. Nonetheless, we will continue to look for opportunities to periodically supplement our strong operating cash flow with prudent amounts of cost-efficient external financing, which may lead us to eventually maintain a positive net debt balance…

…[Question] It just seems like you’re going to have a tremendous amount of capacity. How do you think about expanding your opportunities beyond ads, things like subscriptions or licensing cloud models?

[Answer] We are focused on things beyond ads, I think the numbers make it so that for the next couple of years, ads are going to be, by far, the most important driver of growth in our business. So that’s why, as we’re working on this, we have a balance of new things that we’re trying to do, while also investing very heavily and making sure that all of the work that we’re doing in AI improves both the quality and business performance of the core apps and businesses that we run there…

…A year ago on this call, I think I talked about the set of investments we were making in 2025 as part of our 2025 budgeting process across our ads performance and organic engagement initiatives. And those investments have generally paid off, and we feel really good about the process we ran in terms of using projected ROI to stack-rank investments, making sure that we had a robust measurement system, funding things that were positive ROI, and then tracking how they performed over the course of the year. And we’ve just finished running our 2026 budgeting process, and we have funded a similar set of investments, which we expect will enable us to continue delivering strong revenue growth in 2026.

Meta’s management thinks AI will dramatically change how the company works in 2026; management is investing in AI tooling (i.e. agents) for employees; management is seeing AI helping single persons accomplish projects that used to require big teams; management has seen agentic coding tools help increase Meta’s output per engineer by 30% since the beginning of 2025, with even stronger gains seen among power users of AI coding tools; management thinks AI agents will have a profound positive impact on the productivity of the technology sector and the whole economy

I think that 2026 is going to be the year that AI starts to dramatically change the way that we work…

…We’re investing in AI native tooling so individuals at Meta can get more done, we’re elevating individual contributors and flattening teams. We’re starting to see projects that used to require big teams now be accomplished by a single, very talented person…

…A big focus of this is to enable the adoption and advancement of our AI coding tools where we’re seeing strong momentum. Since the beginning of 2025, we’ve seen a 30% increase in output per engineer with the majority of that growth coming from the adoption of agentic coding, which saw a big jump in Q4. We’re seeing even stronger gains with power users of AI coding tools, whose output has increased 80% year-over-year. We expect this growth to accelerate through the next half…

…There’s a big delta between the people who do it and do it well and the people who don’t. And I think that’s going to just be a very profound dynamic for, I think, across the whole sector and probably the whole economy going forward in terms of the productivity and efficiency with which we can run these companies, which I think — my hope is that we can use that to just get a lot more done than we were able to before.

Meta’s year-on-year advertising conversion growth accelerated through 2025 Q4; management expects further gains in 2026, driven by further integration of AI across all layers of the marketing and customer engagement funnel; management continues to scale the complexity and size of Meta’s models for selecting what advertising to show; in 2025 Q4, management doubled the number of GPUs used to train Meta’s ads-ranking GEM model, and adopted a new sequence learning model architecture to process longer sequences of user behaviour; Meta’s recent initiatives to improve its advertising models drove a 3.5% lift in ad clicks on Facebook, and a 1% gain in conversions on Instagram in 2025 Q4; management launched a new run time model across Instagram Feed stories and reels in 2025 Q4 that drove a 3% increase in advertising conversion rates; Meta has continued making progress with Lattice, its unified model architecture for advertising ranking models; management consolidated models for Facebook Stories and other services into the overall Facebook model in 2025 Q4, and this drove a 12% increase in advertising quality; management expects to consolidate more models in 2026 compared to what Meta has done in the previous 2 years; Meta’s ads retrieval engine, Andromeda, can now run on chips from NVIDIA, AMD, and Meta (MTIA); Andromeda’s compute efficiency has tripled in 2025 Q4; Meta does not typically use GEM for inference because it is costly; management thinks Meta’s larger advertising models can benefit from having more compute; management expects to meaningfully scale up the cluster used for training GEM in 2026; management expects to further improve the transfer of knowledge from GEM to run time models; this is the 1st time management has found a recommendation model architecture that scales with similar efficiency as LLMs, and they are hopeful this architecture can be scaled up while preserving an attractive ROI (return on investment)

We’re seeing very strong results from the ad performance investments we made throughout 2025 with year-over-year conversion growth accelerating through the fourth quarter. We expect the set of investments we’re making in 2026 will enable us to drive further gains as we continue to integrate AI across all layers of the marketing and customer engagement funnel.

The first area is our ad system where we’re continuing to scale the complexity and size of our models to better select which ads to show. In Q4, we doubled the number of GPUs we used to train our GEM model for ads ranking. We also adopted a new sequence learning model architecture, which is capable of using longer sequences of user behavior and processing much richer information about each piece of content. The GEM and sequence learning improvements together drove a 3.5% lift in ad clicks on Facebook and a more than 1% gain in conversions on Instagram in Q4. This new sequence learning architecture is significantly more efficient than our prior architectures, which should enable us to further scale up the data, complexity and compute we use in our future ranking models to deliver performance gains.

As we scale up our foundational ads models like GEM, we are also developing more advanced models to use downstream of them at run time for ads inference. In Q4, we launched a new run time model across Instagram Feed stories and reels, resulting in a 3% increase in conversion rates in Q4.

We continue to progress on our model unification efforts under Lattice as well, after seeing strong success with the consolidation of Facebook Feed and video models in the first half of 2025. In Q4, we consolidated models for Facebook Stories and other services into the overall Facebook model. This, along with a series of back-end improvements, drove a 12% increase in ads quality. And in 2026, we expect to consolidate more models than we had in the prior two years as we continue to evolve our systems towards running a smaller number of highly capable models…

… To that end, in Q4, we extended our Andromeda ads retrieval engine, so it can now run on NVIDIA, AMD and MTIA. This, along with model innovations, enabled us to nearly triple Andromeda’s compute efficiency…

…We don’t typically use our larger model architectures like GEM for inference because their size and complexity would make it too cost prohibitive. So the way that we drive performance from those models is by using them to transfer knowledge to smaller lightweight models used at run time. But I would say that we think that there is room for our larger models to benefit from having more compute. And I think as we scale up the compute available to those models, and the foundational models in different areas that power the different stages of ads ranking and recommendation, we expect that we will see gains coming from that…

…In 2026, we’re expecting to meaningfully scale up GEM training to an even larger cluster, increasing the complexity of the model, expanding the data that we train it on, and leveraging the new sequence learning architecture that we began deploying in Q4. And we’re also going to further improve how we transfer the learnings from our GEM foundation models to the runtime models that we’re using…

…This is the first time we have found a recommendation model architecture that can scale with similar efficiency as LLMs. And we’re hoping that this will unlock the ability for us to significantly scale up the size of our ranking models while preserving an attractive ROI.
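The knowledge transfer management describes, from a large foundation model like GEM down to lightweight run-time models, is in broad strokes the technique known as knowledge distillation: the small model is trained to mimic the large model's softened output distribution. A minimal sketch of the idea follows; the temperature, toy logits, and function names are our own illustrative assumptions, not Meta's actual setup.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the
    student's -- the quantity a small run-time model minimises so it
    mimics the large foundation model."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return float(np.sum(p * np.log(p / q)))

# Toy check: a student that tracks the teacher closely incurs a much
# smaller loss than one that ranks the same items in reverse.
teacher = [4.0, 1.0, 0.5]
good_student = [3.9, 1.1, 0.4]
bad_student = [0.5, 1.0, 4.0]
assert distillation_loss(teacher, good_student) < distillation_loss(teacher, bad_student)
```

The appeal of this pattern for ads ranking is that the expensive model runs only at training time, while inference cost stays bounded by the small model's size.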

Meta’s video generation tools have hit a combined revenue run rate of $10 billion in 2025 Q4, and the sequential growth was nearly 3x higher than the growth of Meta’s overall ads revenue; Meta’s latest incremental attribution feature was rolled out in 2025 Q4 and it drove a 24% increase in incremental conversions; the incremental attribution feature is already at a multibillion-dollar annual run rate just 7 months after launch

The combined revenue run rate of video generation tools hit $10 billion in Q4, with quarter-over-quarter growth outpacing the increase in overall ads revenue by nearly 3x. We are also seeing very good results from our incremental attribution feature, which optimizes for incremental conversions in real time. Our latest model rollout in Q4 is driving a 24% increase in incremental conversions versus our standard attribution model, and this product has already achieved a multibillion-dollar annual run rate just 7 months since launching.

Meta’s click-to-message revenue grew more than 50% year-on-year in the US in 2025 Q4; paid messaging within WhatsApp crossed a $2 billion annual run rate in 2025 Q4; management is seeing good early traction with business AI agents in Mexico and the Philippines, with over 1 million weekly conversations between people and business AIs taking place

Click to message ads revenue growth accelerated in Q4 with the U.S. up more than 50% year-over-year, driven by strong adoption of our website to message ads which direct people to a business’s website for more information before choosing to launch a chat. Paid messaging within WhatsApp continues to scale as well, crossing a $2 billion annual run rate in Q4…

…We’re seeing good early traction with our business AIs in Mexico and the Philippines, with over 1 million weekly conversations between people and business AIs now happening on our messaging platforms. This year, we will expand availability of our business AIs to more markets, while also extending their capabilities so they not only answer questions on topics like product availability, but can help people get things done right within WhatsApp.

Meta recently acquired Manus; Manus already has a significant number of businesses paying a subscription fee; management thinks the integration of Manus into Meta’s advertising and business products can have really powerful effects

I don’t think either of us mentioned the Manus acquisition in the upfront comments. I mean, that is a good example of — you have a significant number of businesses that already pay a subscription to basically use their tool to accelerate their business results. Integrating that kind of thing into our ads and business managers, so that we can offer more integrated solutions for the many, many millions of businesses that use and rely on our platforms, is going to be really powerful, both for accelerating their results using the existing products that we have and, I think, for adding new lines as well.

Meta’s management continues to see the company as being capacity-constrained when it comes to AI compute; management expects the capacity constraint to last for most of 2026, but there are efforts within the company to mitigate the impacts of the constraint

[Question] You’ve talked about being capacity constrained internally and not having enough compute to sort of achieve the goals you have on the platform on a product standpoint. I want to know if we can get any update on currently how you think about your own internal needs for compute against that road map?

[Answer] We do continue to be capacity constrained. Our teams have done a great job ramping up our infrastructure through the course of 2025. But demands for compute resources across the company have increased even faster than our supply. So we expect over the course of 2026 to have significantly more capacity this year as we add cloud. But we’ll likely still be constrained through much of 2026 until additional capacity from our own facilities comes online later in the year. With that said, I think we have done a good job internally mitigating the impact of compute constraints on our business. I expect that will continue to be the case in 2026. We’re continuing to focus on increasing our infrastructure efficiency in several ways, including by optimizing workloads, improving infrastructure utilization, diversifying our chip supply and just investing in efficiency improvements as part of our core technology development efforts in areas like content and ads ranking.

Meta’s management continues to think that it is critical for Meta to build its own frontier AI models

I think the question was around how important is it for us to have a general model. The way that I think about Meta is we’re like a deep technology company. Some people think about us as we build these apps and experiences, but the thing that allows us to build all these things is that we build and control the underlying technology that allows us to integrate and design the experiences that we want and not just be constrained to what others in the ecosystem are building or allow us to build. So I think that this is a really fundamental thing where my guess is that Frontier AI for many reasons, some competitive, some safety oriented are not going to always be available through an API to everyone. So I think like it’s very important, I think, to be able to have the capability to build the experiences that you want if you want to be one of the major companies in the world that helps to shape the future of these products. So that I think is — it’s going to be, I think, important from a business perspective.

Microsoft (NASDAQ: MSFT)

Microsoft’s management thinks AI is just starting to diffuse broadly into society, which would then lead to substantial growth in the company’s total addressable market (TAM)

We are in the beginning phases of AI diffusion and its broad GDP impact. Our TAM will grow substantially across every layer of the tech stack as this diffusion accelerates and spreads. In fact, even in this early innings, we have built an AI business that is larger than some of our biggest franchises that took decades to build.

When building Microsoft’s AI infrastructure, management is aware of the heterogeneous nature of different workloads, and is optimising for tokens per watt per dollar to decrease total cost of ownership (TCO); Microsoft has been able to increase throughput by 50% in OpenAI inference, which is one of Microsoft’s highest volume workloads; Microsoft recently connected two GPU clusters through an AI WAN (wide area network) to build a first-of-its-kind AI data center

When it comes to our cloud and token factory, the key to long-term competitiveness is shaping our infrastructure to support new high-scale workloads. We are building this infrastructure out for the heterogeneous and distributed nature of these workloads, ensuring the right fit with the geographic and segment-specific needs for all customers, including the long tail. The key metric we’re optimizing for is tokens per watt per dollar, which comes down to increasing utilization and decreasing TCO using silicon, systems and software. A good example of this is the 50% increase in throughput we were able to achieve in one of our highest volume workloads, OpenAI inferencing, powering our Copilots. Another example was the unlocking of new capabilities and efficiencies for our Fairwater data centers. In this instance, we connected both the Atlanta and Wisconsin sites through an AI WAN to build a first-of-its-kind AI super factory. Fairwater’s 2-storey design and liquid cooling allow us to run higher GPU densities and thereby improve both performance and latencies for high-scale training.
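"Tokens per watt per dollar" is a compound efficiency ratio: inference throughput divided by power draw divided by cost. The tiny sketch below illustrates why a pure software win, such as the 50% throughput gain cited above, improves the ratio one-for-one; every number in it is hypothetical, not a Microsoft figure.

```python
def tokens_per_watt_per_dollar(tokens_per_sec, watts, dollars):
    """Higher is better: more throughput per unit of power per unit of cost."""
    return tokens_per_sec / watts / dollars

# Hypothetical accelerator before and after a 50% throughput gain,
# at unchanged power draw and hardware cost.
before = tokens_per_watt_per_dollar(1_000, 700, 30_000)
after = tokens_per_watt_per_dollar(1_500, 700, 30_000)

# 50% more tokens at the same watts and dollars -> a 50% better ratio.
assert abs(after / before - 1.5) < 1e-9
```

The same ratio can also improve through the denominators, which is why the passage pairs software optimisation with silicon choices and data-center design.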

Microsoft’s AI infrastructure utilises chips from NVIDIA, AMD, and itself (Maia); management recently introduced Maia 200, which delivers over 30% better TCO compared to the latest generation of hardware in Microsoft’s fleet; management will be using Maia 200 for inferencing and synthetic data generation for its AI research team, and for production inference workloads; Microsoft has been building its own chips for a long time; Microsoft’s own AI models will all be optimised for Maia 200

At the silicon layer, we have NVIDIA and AMD and our own Maia chips, delivering the best all up fleet performance, cost and supply across multiple generations of hardware. Earlier this week, we brought online our Maia 200 accelerator. Maia 200 delivers 10-plus petaFLOPS at FP4 precision with over 30% improved TCO compared to the latest generation hardware in our fleet. We will be scaling this starting with inferencing and synthetic data gen for our Superintelligence Team as well as doing inferencing for Copilot and Foundry…

…We’ve been at this in a variety of different forms for a long, long time in terms of building our own silicon…

…We’re obviously round-tripping and working very closely with our own Superintelligence team with all of our models, as you can imagine; whatever we build will be all optimized for Maia.

AI workloads require both AI accelerators (i.e. GPUs) and CPUs; Microsoft’s own Cobalt 200 CPU delivers 50% higher performance compared to the previous version

And given AI workloads are not just about AI accelerators, but also consume large amounts of compute, we are pleased with the progress we are making on the CPU side as well. Cobalt 200 is another big leap forward, delivering over 50% higher performance compared to our first custom-built processor for cloud-native workloads.

Microsoft’s management sees AI agents as the new app platform that comes with the platform shift to AI; management thinks customers will need a model catalog, tuning services, harness for orchestration, and more, to deploy AI agents; more than 80% of the Fortune 500 have built active agents with Copilot Studio and/or Agent Builder; management thinks the proliferation of AI agents will create a new, significant growth opportunity for Microsoft; to meet the new growth opportunity, management has introduced Agent 365 for organisations to extend their existing governance, identity, security and management to agents; many of Microsoft’s technology partners are already integrating Agent 365; Agent 365 is the first product of its kind that offers a cross-cloud agent control plane

Like in every platform shift, all software is being rewritten. A new app platform is being born. You can think of agents as the new apps and to build, deploy and manage agents, customers will need a model catalog, tuning services, harness for orchestration, services for context engineering, AI safety, management, observability and security. It starts with having broad model choice…

…We are also addressing agent building by knowledge workers with Copilot Studio and Agent Builder. Over 80% of the Fortune 500 have active agents built using these low-code/no-code tools.

As agents proliferate, every customer will need new ways to deploy, manage and protect them. We believe this creates a major new category and significant growth opportunity for us. This quarter, we introduced Agent 365, which makes it easy for organizations to extend their existing governance, identity, security and management to agents. That means the same controls they already use across Microsoft 365 and Azure, now extend to agents they build and deploy on our cloud or any other cloud. And partners like Adobe, Databricks, Genspark, Glean, NVIDIA, SAP, ServiceNow and Workday are already integrating Agent 365. We are the first provider to offer this type of agent control plane across clouds. 

Microsoft’s management sees the company’s customers wanting to use multiple AI models; management thinks Microsoft offers the broadest selection of models among the cloud hyperscalers; Microsoft already has more than 1,500 customers using Anthropic and OpenAI’s models on Foundry; management is seeing more customers choosing geographic-specific AI models

Our customers expect to use multiple models as part of any workload that they can fine tune and optimize based on cost, latency and performance requirements. And we offer the broadest selection of models of any hyperscaler. This quarter, we added support for GPT-5.2 as well as Claude 4.5. Already over 1,500 customers have used both Anthropic and OpenAI models on Foundry. We are seeing increasing demand for region-specific models, including Mistral and Cohere as more customers look for sovereign AI choices, and we continue to invest in our first-party models, which are optimized to address the highest value customer scenarios such as productivity, coding and security. As part of Foundry, we also give customers the ability to customize and fine-tune models. 

Microsoft’s management thinks one of the most important considerations for companies when working with AI is their need to capture the tacit knowledge they possess inside of model weights as their core IP; Fabric’s annual revenue run rate was over $2 billion in 2025 Q4 (FY2026 Q2), and quarterly revenue was up 60% year-on-year; the number of customers spending more than $1 million per quarter on Foundry grew nearly 80% year-on-year in 2025 Q4 (FY2026 Q2); more than 250 customers are on track to process over 1 trillion tokens on Foundry this year; Foundry is a great on-ramp for Microsoft’s other cloud computing services, as most of Foundry’s customers are using additional Azure solutions

Increasingly, customers want to be able to capture the tacit knowledge they possess inside of model weights as their core IP. This is probably the most important sovereign consideration for firms as AI diffuses more broadly across our GDP and every firm needs to protect their enterprise value. For agents to be effective, they need to be grounded in enterprise data and knowledge. That means connecting their agents to systems of record and operational data, analytical data, as well as semi-structured and unstructured productivity and communications data. And this is what we are doing with our unified IQ layer, spanning Fabric, Foundry and the data powering Microsoft 365. In the world of context engineering, Foundry Knowledge and Fabric are gaining momentum. Foundry Knowledge delivers better context with automated source routing and advanced agentic retrieval while respecting user permissions. And Fabric brings together end-to-end operational, real-time and analytical data.

2 years since it became broadly available, Fabric’s annual revenue run rate is now over $2 billion with over 31,000 customers, and it continues to be the fastest-growing analytics platform on the market with revenue up 60% year-over-year. And the number of customers spending $1 million plus per quarter on Foundry grew nearly 80%, driven by strong growth in every industry. And over 250 customers are on track to process over 1 trillion tokens on Foundry this year…

…Foundry remains a powerful on-ramp for the entire cloud. The vast majority of Foundry customers use additional Azure solutions like developer services, app services, databases as they scale.

Microsoft’s own consumer Copilot agent experiences span a wide variety of domains; daily users of the Copilot app were up nearly 3x year-on-year in 2025 Q4 (FY2026 Q2); users are able to make purchases directly in the Copilot app because of the Copilot Checkout feature; Microsoft 365 Copilot, which is Microsoft’s agentic experience for enterprises, has unmatched accuracy, and the quality of its responses saw the biggest sequential improvement to date in 2025 Q4 (FY2026 Q2); Microsoft 365 Copilot’s average number of conversations per user doubled year-on-year in 2025 Q4 (FY2026 Q2); Microsoft 365 Copilot’s daily active users were up 10x year-on-year in 2025 Q4 (FY2026 Q2); management is seeing strong momentum with Researcher Agent and Agent Mode; Microsoft 365 Copilot seat additions were up more than 160% year-on-year in 2025 Q4 (FY2026 Q2); there are now 15 million paid Microsoft 365 Copilot seats; the number of customers with >35,000 seats in Microsoft 365 Copilot tripled year-on-year in 2025 Q4 (FY2026 Q2); management is seeing strong growth across GitHub Copilot, with Copilot Pro Plus subs for individual developers up 77% sequentially in 2025 Q4 (FY2026 Q2), and paid Copilot subscribers up 75% year-on-year; Microsoft has the GitHub Copilot SDK and recently added a dozen new and updated Security Copilot agents; Dragon Copilot is a leader in its category and is serving over 100,000 medical providers; Dragon Copilot documented 21 million patient encounters in 2025 Q4 (FY2026 Q2), up 3x year-on-year

In consumer, for example, Copilot experiences span chat, news, feed, search, creation, browsing, shopping and integrations into the operating system, and it’s gaining momentum. Daily users of our Copilot app increased nearly 3x year-over-year. And with Copilot checkout, we have partnered with PayPal, Shopify and Stripe, so customers can make purchases directly within the app.

With Microsoft 365 Copilot, we are focused on organization-wide productivity. Work IQ takes the data underneath Microsoft 365 and creates the most valuable stateful agent for every organization. It delivers powerful reasoning capabilities over people, their roles, their artifacts, their communications and their history and memory, all within an organization’s security boundary. Microsoft 365 Copilot’s accuracy and latency powered by Work IQ is unmatched, delivering faster and more accurate work-grounded results than the competition, and we have seen our biggest quarter-over-quarter improvement in response quality to date. This has driven record usage intensity, with the average number of conversations per user doubling year-over-year. Microsoft 365 Copilot is also becoming a true daily habit, with daily active users increasing 10x year-over-year.

We’re also seeing strong momentum with Researcher Agent, which supports both OpenAI and Claude, as well as Agent Mode in Excel, PowerPoint and Word…

…It was a record quarter for Microsoft 365 Copilot seat adds, up over 160% year-over-year. We saw accelerating seat growth quarter-over-quarter and now have 15 million paid Microsoft 365 Copilot seats and multiples more enterprise chat users…

…The number of customers with over 35,000 seats tripled year-over-year. Fiserv, ING, NASA, University of Kentucky, University of Manchester, U.S. Department of Interior and Westpac, all purchased over 35,000 seats. Publicis alone purchased over 95,000 seats for nearly all its employees…

…Copilot Pro Plus subs for individual devs increased 77% quarter-over-quarter, and all up now, we have 4.7 million paid Copilot subscribers, up 75% year-over-year…

…GitHub Agent HQ is the organizing layer for all coding agents like Anthropic, OpenAI, Google, Cognition and xAI in the context of customers GitHub repos. With Copilot CLI and VS Code, we offer developers the full spectrum of form factors and models they need for AI-first coding workflows…

…And we’re going beyond that with GitHub Copilot SDK. Developers can now embed the same run time behind Copilot CLI, multi-model, multistep planning tools, MCP integration, Ops streaming directly into their applications. In security, we added a dozen new and updated security Copilot agents across Defender, Entra, Intune, and Purview…

…To make it easier for security teams to onboard, we are rolling out Security Copilot to all our E5 customers, and our security solutions are also becoming essential to manage organizations’ AI deployments. 24 billion Copilot interactions were audited by Purview this quarter, up 9x year-over-year…

…In health care, Dragon Copilot is the leader in its category, helping over 100,000 medical providers automate their workflows… All up, we helped document 21 million patient encounters this quarter, up 3x year-over-year.

1/3 of Microsoft’s cloud and AI-related capex in 2025 Q4 (FY2026 Q2) is for long-lived assets that will support monetisation over the next 15 years and more, while the other 2/3 is for GPUs and CPUs, driven by strong AI- and Azure-related demand; Azure is still capacity-constrained, and management wants to balance Azure demand for compute with 1st-party demand for compute; the ROI for Microsoft’s capex sometimes shows up in increased revenue for Microsoft’s software business (i.e. non-Azure business) too; some of Microsoft’s AI compute is also allocated for R&D; the useful lives of Microsoft’s GPUs continue to be well matched with the duration of their contracts; Microsoft becomes more efficient with delivery as its GPUs age, so margins actually improve over time

Capital expenditures were $37.5 billion, and this quarter, roughly 2/3 of our CapEx was on short-lived assets, primarily GPUs and CPUs. Our customer demand continues to exceed our supply. Therefore, we must balance the need to have our incoming supply better meet growing Azure demand with expanding first-party AI usage across services like M365 Copilot and GitHub Copilot, increasing allocations to R&D teams to accelerate product innovation and continued replacement of end-of-life server and networking equipment. The remaining spend was for long-lived assets that will support monetization for the next 15 years and beyond. This quarter, total finance leases were $6.7 billion, and were primarily for large data center sites. And cash paid for PP&E was $29.9 billion…

…As we spend the capital and deploy GPUs specifically (it applies to CPUs too, but GPUs more specifically), we’re really making long-term decisions. And the first thing we’re doing is solving for the increased usage in sales and the accelerating pace of M365 Copilot as well as GitHub Copilot, our first-party apps. Then we make sure we’re investing in the long-term nature of R&D and product innovation. And much of the acceleration that I think you’ve seen from us in products recently is coming because we are allocating GPUs and capacity to many of the talented AI people we’ve been hiring over the past years. Then you end up with the remainder going towards serving the Azure capacity that continues to grow in terms of demand…

…As an investor, I think when you think about our capital and you think about the GM profile of our portfolio, you should obviously think about Azure. But you should also think about M365 Copilot, GitHub Copilot, Dragon Copilot and Security Copilot. All of those have a GM profile and lifetime value. I mean, if you think about it, acquiring an Azure customer is super important to us, but so is acquiring an M365 or a GitHub or a Dragon Copilot customer, which are all, by the way, incremental businesses and TAMs for us. And so we don’t want to maximize just 1 business of ours; we want to be able to allocate capacity while we’re sort of supply constrained in a way that allows us to essentially build the best LTV portfolio…

…You got to think about compute is also R&D…

…When you think about average duration, we need to remember that average duration is a combination of a broad set of contract arrangements that we have. A lot of them, around things like M365 or our BizApps portfolio, are shorter dated, right, 3-year contracts. And so they have, quite frankly, a short duration. The majority then that’s remaining are Azure contracts, which are longer duration. And you saw that this quarter when we saw the extension of that duration from around 2 years to 2.5 years. And the way to think about that is the majority of the capital that we’re spending today, and a lot of the GPUs that we’re buying, are already contracted for most of their useful life…

…To state this in case it’s not obvious: as you go through the useful life, you actually get more and more efficient at delivery. So where you’ve sold the entirety of its life, the margins actually improve with time. And so I think that may be a good reminder to people, as we see that, obviously, in the CPU fleet all the time.

Commercial RPO (remaining performance obligation) is now $625 billion, up 110% from a year ago (the balance was $392 billion in 2025 Q3); the weighted average duration of the RPO is approximately 2.5 years; the RPO has significant customer-concentration risk with OpenAI, as OpenAI accounts for roughly 45% of the RPO; the non-OpenAI part of the RPO was up 28% year-on-year in 2025 Q4 (FY2026 Q2); the average duration of RPOs for Azure is much longer than the average duration for M365 contracts

Commercial remaining performance obligation, which continues to be reported net of reserves increased to $625 billion, and was up 110% year-over-year with a weighted average duration of approximately 2.5 years. Roughly 25% will be recognized in revenue in the next 12 months, up 39% year-over-year. The remaining portion recognized beyond the next 12 months increased 156%. Approximately 45% of our commercial RPO balance is from OpenAI. The significant remaining balance grew 28% and reflects ongoing broad customer demand across the portfolio…

…When you think about average duration, we need to remember that average duration is a combination of a broad set of contract arrangements that we have. A lot of them, around things like M365 or our BizApps portfolio, are shorter dated, right, 3-year contracts. And so they have, quite frankly, a short duration. The majority then that’s remaining are Azure contracts, which are longer duration. And you saw that this quarter when we saw the extension of that duration from around 2 years to 2.5 years. And the way to think about that is the majority of the capital that we’re spending today, and a lot of the GPUs that we’re buying, are already contracted for most of their useful life…
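The RPO figures quoted can be cross-checked with simple arithmetic. The percentages and the $625 billion total are from the call; the dollar breakdown and the implied year-ago balance are our own back-calculations.

```python
# All amounts in $ billions, as stated on the earnings call.
rpo_total = 625

openai_rpo = rpo_total * 0.45            # ~45% of RPO attributable to OpenAI
non_openai_rpo = rpo_total - openai_rpo  # balance from all other customers
next_12_months = rpo_total * 0.25        # ~25% recognised as revenue within a year
implied_year_ago = rpo_total / 2.10      # implied by the stated 110% y/y growth

print(f"OpenAI: ${openai_rpo:.0f}B, other: ${non_openai_rpo:.0f}B, "
      f"next 12m: ${next_12_months:.0f}B, implied year-ago: ${implied_year_ago:.0f}B")
# -> OpenAI: $281B, other: $344B, next 12m: $156B, implied year-ago: $298B
```

Note that the $392 billion figure cited in the summary is the prior quarter's balance, not the year-ago base; the 110% growth rate implies a year-ago balance of roughly $298 billion.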

Azure grew revenue by 39% in 2025 Q4 (FY2026 Q2) (was 40% in 2025 Q3); Azure’s revenue growth was slightly better than expected; Azure was capacity-constrained in 2025 Q4 (FY2026 Q2) and management wants to balance Azure demand for compute with 1st party demand for compute

In Azure and Other Cloud services, revenue grew 39% and 38% in constant currency, slightly ahead of expectations with ongoing efficiency gains across our fungible fleet, enabling us to reallocate some capacity to Azure that was monetized in the quarter. As mentioned earlier, we continue to see strong demand across workloads, customer segments and geographic regions, and demand continues to exceed available supply…

…Our customer demand continues to exceed our supply. Therefore, we must balance the need to have our incoming supply better meet growing Azure demand with expanding first-party AI usage across services like M365 Copilot and GitHub Copilot, increasing allocations to R&D teams to accelerate product innovation and continued replacement of end-of-life server and networking equipment.

Netflix (NASDAQ: NFLX)

Netflix started using new AI tools in 2025 to help advertisers create custom advertising based on Netflix’s intellectual property; Netflix started using AI models in 2025 to speed up advertising campaign planning; management continues to invest in sales and go-to-market for the advertising business

In 2025, we began testing new AI tools to help advertisers create custom ads based on Netflix’s intellectual property, and we plan to build on this progress in 2026. We also introduced automated workflows for ad concepts and used advanced AI models to streamline campaign planning, significantly speeding up these processes. 

Netflix is using AI to improve subtitle localisation; Netflix is using AI to help with merchandising

In content production and promotion, we’re using AI to improve subtitle localization, making it easier for our titles to reach more viewers around the world. Additionally, we’re implementing AI-driven tools to help with merchandising, which improves our ability to connect members with the most relevant titles for them to watch.

PayPal (NASDAQ: PYPL)

PayPal’s management’s vision with agentic commerce is to create a universally trusted catalog for AI agents to access, discover, and transact with; PayPal is already connecting early-adopter merchants to agentic chat platforms through its Store Sync offering; PayPal recently went live with agentic purchasing through Perplexity and Microsoft Copilot; PayPal will acquire Cymbio for its Store Sync technology; management does not expect agentic commerce to move the needle for PayPal in 2026

Let me quickly share some of our latest developments in agentic commerce. Our vision is to create a universally trusted catalog that AI agents can access, discover and transact with safely and securely. Through our Store Sync offering, we are already connecting early adopters like Abercrombie & Fitch, Fabletics, PacSun and Wayfair with agentic chat platforms to allow consumers to discover, evaluate and purchase items within the chat. We went live with agentic purchasing through Perplexity ahead of Thanksgiving, and we are now also live on Microsoft Copilot…

Store Sync is enabled through a partnership with Cymbio, which we have agreed to acquire to bring this technology in-house. Agentic won’t materially impact 2026 growth. But as AI-powered shopping scales, our aim is to become the default payment option. This is only the beginning, and we are collaborating closely with the major AI platforms as we build agentic commerce capabilities together. 

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s management expects capex for 2026 to be US$52 billion to US$56 billion, up 27%-37% from 2025 (2025’s capex was US$41 billion); most of the capex for 2026 will be for advanced process technologies; TSMC’s capital expenditure is always in anticipation of growth in future years; management now thinks a long-term gross margin of 56% and higher is achievable (previously was 53% and higher); TSMC’s capex in the last 3 years was ~US$100 billion, and the next 3 years is expected to be much higher; management thinks TSMC can earn a high-20% ROE through the cycle; management expects TSMC to shoulder greater capex for its customers; management has raised the revenue growth forecast for AI accelerators for 2024-2029 to mid-to-high-50% CAGR (previous guidance was mid-40%); management now expects 25% revenue CAGR in USD-terms for 2024-2029 (previously was 20% CAGR), driven by all 4 technology platforms; the 25% revenue CAGR projection is conservative

At TSMC, a higher level of capital expenditures is always correlated to the high-growth opportunities in the following years. With our strong technology leadership and differentiation, we are well positioned to capture the multiyear structural demand from the industry megatrends of 5G, AI and HPC. In 2025, we spent USD 40.9 billion as compared to USD 29.8 billion in 2024 as we began to raise our level of capital spending in anticipation of the growth that will follow in the future years. In 2026, we expect our capital budget to be between USD 52 billion and USD 56 billion as we continue to invest to support our customers’ growth. About 70% to 80% of the 2026 capital budget will be allocated to advanced process technologies. About 10% will be spent for specialty technologies and about 10% to 20% will be spent for advanced packaging, testing, mask making and others…
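As a quick sanity check on the guidance above, the 2026 capex range can be compared against 2025's spend; this minimal sketch uses only figures stated in the transcript (USD 40.9 billion in 2025, USD 52 billion to USD 56 billion guided for 2026):

```python
# Sanity check of TSMC's 2026 capex guidance against actual 2025 spend.
# All figures (USD billions) are taken from the transcript above.
capex_2025 = 40.9
capex_2026_low, capex_2026_high = 52.0, 56.0

# Year-over-year growth implied by the low and high ends of the guided range
growth_low = capex_2026_low / capex_2025 - 1
growth_high = capex_2026_high / capex_2025 - 1

print(f"2026 capex growth: {growth_low:.0%} to {growth_high:.0%}")
```

The guided range works out to roughly 27% to 37% year-over-year growth, matching the summary above.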

…As a result, in the last 3 years, our CapEx dollars totaled USD 101 billion, but the amount is expected to be significantly higher in the next 3 years…

…We believe a long-term gross margin of 56% and higher through the cycle is achievable, and we can earn an ROE of high 20s percent through the cycle. By earning a sustainable and healthy return, even as we shoulder a greater burden of CapEx investment for our customers, we can continue to invest in technology and capacity to support their growth while delivering long-term profitable growth to our shareholders.

…We expect 2026 to be another strong growth year for TSMC and forecast our full year revenue to increase by close to 30% in U.S. dollar terms…

…We raised our forecast for the revenue growth from AI accelerators to approach a mid- to high-50% CAGR for the 5-year period from 2024 to 2029. Underpinned by our technology differentiation and broad customer base, we now expect our overall long-term revenue growth to approach 25% in U.S. dollar terms for the 5-year period starting from 2024. While we expect AI accelerators to be the largest contributor in terms of our incremental revenue growth, our overall revenue growth will be fueled by all 4 of our growth platforms, which are smartphone, HPC, IoT and automotive, in the next several years…

…I think those fundamentals position TSMC for very good future growth. Let me say that on the 25% CAGR we projected: we tend to be conservative. You know that.
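To make the guidance concrete, here is a small sketch of what those CAGRs compound to over the 2024 to 2029 period. The 55% rate is my assumed midpoint for the "mid- to high-50%" AI-accelerator range, not a company figure:

```python
# Illustrative compounding of TSMC's guided CAGRs, indexed to 2024 = 1.0.
# The 0.55 AI-accelerator rate is an assumed midpoint, not company guidance.
def compound(cagr: float, years: int) -> float:
    """Total growth multiple after `years` of compounding at rate `cagr`."""
    return (1 + cagr) ** years

overall_2029 = compound(0.25, 5)  # overall revenue at the guided 25% CAGR
ai_2029 = compound(0.55, 5)       # AI accelerators at an assumed ~55% CAGR

print(f"Overall revenue multiple by 2029: {overall_2029:.2f}x")
print(f"AI accelerator revenue multiple by 2029: {ai_2029:.2f}x")
```

At these rates, overall revenue roughly triples (~3.05x) by 2029, while AI-accelerator revenue grows to roughly 9x its 2024 level, which is why management expects AI accelerators to be the largest contributor to incremental growth.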

TSMC’s management thinks Foundry 2.0 was up 16% in 2025, and is expected to grow 14% in 2026, supported by robust AI demand

Concluding 2025, the Foundry 2.0 industry, which we define as all logic wafer manufacturing, packaging, testing, mask making and others increased 16% year-over-year…

…We forecast the Foundry 2.0 industry to grow 14% year-over-year in 2026, supported by robust AI-related demand.

TSMC’s management thinks recent developments in the AI market are very positive; AI accelerator revenue accounted for high-teens of total revenue for TSMC in 2025; management sees increasing AI adoption in consumers, enterprises, and sovereigns; management has received very strong demand signals from TSMC’s customers and the customers’ customers; management’s conviction in the AI megatrend remains strong; management is disciplined when planning for capacity; TSMC’s lead-time has now increased to 2-3 years; management is very nervous about AI demand, but they have talked a lot with TSMC’s customers and customers’ customers in recent months to understand AI demand, and management is satisfied with the evidence they show of AI helping their businesses; a hyperscaler active in social media (most probably Meta Platforms) achieved very positive ROI from AI; TSMC is using AI internally to improve productivity and even just 1%-2% of productivity improvement would have paid off for TSMC’s AI investments; management sees AI growing into people’s daily life and they think there’s a real long-term trend

Recent developments in the AI market continue to be very positive. Revenue from AI accelerators accounted for a high-teens percent of our total revenue in 2025.

Looking ahead, we observe increasing AI model adoption across the consumer, enterprise and sovereign AI segments. This is driving the need for more and more computation, which supports the robust demand for leading-edge silicon. Our customers continue to provide us with a positive outlook. In addition, our customers’ customers, who are mainly the cloud service providers, are also providing strong signals and reaching out directly to request capacity to support their business. Thus, our conviction in the multiyear AI megatrend remains strong, and we believe the demand for semiconductors will continue to be very fundamental.

As a foundry, our first responsibility is to fully support our customers with the most advanced technology and necessary capacity to unleash their innovations. To address the structural increase in the long-term market demand profile, TSMC works closely with our customers and our customers’ customers to plan our capacity. This process is continuous and ongoing. In addition, as process technology complexity increases, the engagement lead time with customers is now at least 2 to 3 years in advance. Internally, as we have said before, TSMC employs a disciplined capacity planning system to assess the market demand from both top-down and bottom-up approaches. We focus on the overall addressable megatrend to determine the appropriate capacity to build. Based on our assessment, we are preparing to increase our capacity and stepping up our CapEx investment to support our customers’ future growth…

…Whether the AI demand is real or not, I’m also very nervous about it. You bet, because we have to invest about USD 52 billion to USD 56 billion for the CapEx, right? If we didn’t do it carefully, that would be a big disaster for TSMC, for sure.

So of course, I spent a lot of time in the last 3, 4 months talking to my customers and my customers’ customers. I want to make sure that my customers’ demand is real. So I talked to those cloud service providers, all of them, and I’m quite satisfied with the answers. Actually, they showed me evidence that AI really helps their business. They grow their business successfully and are healthy in their financial returns. I also double-checked their financial status. They are very rich, much richer than TSMC. No doubt, I also asked specifically about the applications. One of the hyperscalers told me that AI helped their social media software, and so their customers continue to increase. So I believe that.

And with our own experience in AI applications, we also use AI in our own fabs to improve productivity. As I mentioned once before, even a 1% or 2% productivity improvement means the AI investment is effectively free to TSMC…

…I believe, from my point of view, that AI is real. Not only real, it’s starting to grow into our daily life, and we believe that is what we call the AI megatrend. We certainly believe that. Another question is, can the semiconductor industry be good for 3, 4, 5 years in a row? I’ll tell you the truth, I don’t know. But when I look at AI, it looks like it’s going to be almost endless, I mean, for many years to come.

All of TSMC’s AI customers in the US are asking for a lot of support from TSMC’s Arizona fab; TSMC’s capacity in the US is very tight, probably going into 2027, and management is working hard to narrow the gap

All my customers, the AI customers in the U.S., ask for a lot of support from the U.S. fab. Because of that, we have to speed up our fab expansion in Arizona…

…The capacity is very tight. We have worked very hard to narrow the gap so far. Probably this year and next year, we have to work extremely hard to narrow the gap, okay? We just bought a second piece of land in Arizona. That gives you a hint of what we plan to do, because we need it. We are going to expand many fabs over there, and this giga-fab cluster can help us to improve productivity, lower the cost and serve our customers in the U.S. better.

TSMC’s management has seen the hyperscalers solve power constraints when building AI data centers through long-term forward planning; TSMC’s customers are telling TSMC that chips are their bottleneck when building AI data centres

[Question] We see that the AI semiconductor growth has seen very strong growth. And I believe all of your customers and customers’ customers very desperate to add more capacity support from TSMC. But I’m just wondering, how does TSMC evaluate the potential power electricity supply for data center?

[Answer] Talking about building a lot of AI data centers all over the world, I’ll use one of my customers’ customers to answer, because I asked them the same question. They told me that they planned this 5, 6 years ago already. So as I said, those cloud service providers are smart, very smart. If I had known that… anyway. They say that they worked on the power supply 5, 6 years ago. So today, their message to me is that silicon from TSMC is the bottleneck, and they ask me not to pay attention to all the others, because they have to solve the silicon bottleneck first.

TSMC’s management thinks it will be 2028 or 2029 when the company can match demand and supply for AI chips

[Question] My question is really on AI. I mean, TSMC has been supply constrained for your AI customers, I think, since 2024, and it sounds like 2026 is another year where we’re going to see challenges. Do you think the CapEx you’ve laid out for this year, USD 52 billion to USD 56 billion, could mean that we start to see supply and demand more in balance in 2027?

[Answer] If you build a new fab, it takes 2 to 3 years. So even as we start to spend the USD 52 billion to USD 56 billion, the contribution to this year is almost none, and to 2027, a little bit. So we are actually looking at 2028, 2029 for supply.

Tesla (NASDAQ: TSLA)

Tesla’s management thinks the growth of AI and robotics will usher in an era of universal high income

With the advent or with the continued growth of AI and robotics, I think we actually are headed to a future of universal high income, not universal basic income, but universal high income. I mean there’s going to be a lot of change along the way, but that is what I see as the most likely outcome.

Tesla will be investing heavily in capex in 2026 as it builds out vehicle autonomy and increases production of Optimus at scale, along with investing in its own AI chips; Tesla’s capex for 2026 is expected to be higher than $20 billion (was slightly below $9 billion in 2025); management’s planned capex amount of $20 billion or more does not include the 1st-party semiconductor fab that they are thinking of developing; management thinks Tesla will be in an investment cycle for some time; Tesla has sufficient cash resources to fund the capex, and it also has recurring-revenue services such as robotaxi that will help grease the wheels with banks when it comes to financing

This year is going to be a huge investment year from a CapEx perspective. And at the moment, we are expecting that CapEx would be in excess of $20 billion. We’ll be paying for 6 factories namely, the refinery, LFP factories, Cybercab, Semi, a new Megafactory, the Optimus factory. On top of it, we’ll also be spending money for building our AI compute infrastructure, and we’ll continue investing in our existing factories to build more capacity. And then also the related infrastructure along with it. And we’ll also further expand our fleet of Robotaxi and Optimus…

…Just keep in mind that we’re not — none of these numbers, which I shared of $20 billion factors in anything to do with the solar fab or the semiconductor chip fab…

…I think we’re getting into this investment phase because we have big aspirations. And when you look at it, some of these aspirations are — I call them as infrastructure play, especially if you have to do a chip fab, and we have to do a solar cell manufacturing fab, those are infrastructure plays and that funding takes a little bit longer. And you would be in an investment cycle for a little bit longer…

…How are we going to fund it? Initially, obviously, we have over $44 billion of cash and investments on the books. So we’ll use our internal resources, but there are ways where we can fund it, especially when we look at the Robotaxi fleet because any time you have a consistent stream of cash flow, you can go and get money from the banks. And we have had conversations with banks about it. And that is something how we’re going to do it.

Tesla will soon be stopping the production of the Model S and X and shift their production spaces to the production of Optimus, with the long-term goal of 1 million units annually; management will unveil Optimus 3 in a few months; management expects the manufacturing ramp for Optimus to be longer than for regular products because the supply chain for Optimus has to be built entirely from scratch; Optimus 3 will be a general-purpose robot that can learn by observing human actions; management thinks Optimus 3 will have a significant positive impact on US GDP; the Optimus robots are currently not used in Tesla’s factories in a material way, and any usage is for learning purposes for the robot; management expects significant volume of Optimus production to come only towards the end of 2026; management thinks that Optimus’s form factor – it looks like a human – will make it very easy to teach Optimus how to handle human tasks; management believes Tesla’s biggest competitors in the humanoid robot market will be from China; management believes Optimus far exceeds the capability of any robot under development in China; management thinks designing a hand with the required dexterity is the hardest engineering challenge with a robot; other major engineering challenges are a real-world AI model, and scaling production; management thinks Tesla is the only company in the world that can solve all 3 engineering challenges

We expect to wind down S and X production next quarter, basically stopping production of Model S and X next quarter. We’ll obviously continue to support the Model S and X programs for as long as people have the vehicles. But we’re going to take the Model S and X production space in our Fremont factory and convert that into an Optimus factory, with the long-term goal of having 1 million units a year of Optimus robots in the current S, X space in Fremont…

…We’ll probably unveil Optimus 3 in a few months. And I think it’s going to be quite surprising to people. It’s an incredibly capable robot…

…There’s really nothing from the existing supply chain that exists in Optimus. Everything is designed from physics first principles. So that means the normal S-curve of the manufacturing ramp will be longer for Optimus than it is for products that have at least some portion of an existing supply chain. When everything is new, the production rate will be proportionate to the least lucky, least confident part of the entire supply chain, and if there are 10,000 things that need to go right, it only takes one being slow to lag everything. So it will be a sort of stretched-out S-curve. But I’m confident that we’ll get to 1 million units a year of Optimus 3 in Fremont…

…Optimus 3 really will be a general-purpose robot that can learn by observing human behavior, so you can like demonstrate a task or literally verbally describe a task or show it a task, even show it a video and it will be able to do that task…

…I think, long term, Optimus will have a very significant impact on the U.S. GDP, like it will actually move the needle on U.S. GDP significantly…

…We have had Optimus do some basic tasks in the factory. But as we iterate our new versions of Optimus, we deprecate the old versions. And so it’s not — I wouldn’t say it’s like — it’s not in usage in our factories in a material way. It’s more so that the robot can learn. We wouldn’t expect to have any kind of significant Optimus production volume until probably the end of this year…

…It looks like a human. People could be easily confused that it’s a human. And this helps our strategy for the AI too because you can learn from how humans do these tasks and it’s very easy to teach the robot in the same way as opposed to previous robots…

…I do think that the — by far the biggest competition for humanoid robots will be from China. China is incredibly good at scaling manufacturing, actually quite good at AI, as you can see from the open source — or not the open source, but the sort of — I guess, some of them are open actually. But basically, the models that China is distributing for free are actually quite good and they keep getting better. So China is very good at AI, very good at manufacturing and will definitely be the toughest competition for Tesla. We — to the best of our knowledge, we don’t see any significant competitors outside of China…

…We think Optimus will be much more capable than any robot that we are aware of under development in China. So we think we’ll be ahead in terms of the real-world intelligence, the electromechanical dexterity, especially the hand design, which is by far the hardest thing in the robot. And in fact, I’d tell you there’s really 3 hard things about humanoid robots. Building an incredible hand that has the same degrees of freedom and dexterity as a human hand is an incredibly difficult engineering challenge. Then there’s the real-world AI and scaling production. Those are the 3 hardest problems by far for humanoid robots. I think we’re — Tesla has — is the only company that actually has all 3 of those components.

Tesla is now able to do its first robotaxi rides in Austin, Texas without a safety monitor; management thinks the amount of fully autonomous rides from Tesla will increase dramatically every month going forward; management thinks there’s substantial economic opportunity for Tesla in the form of existing Tesla vehicle owners (for those who own vehicles with AI4 hardware versions) adding their vehicles to an autonomous fleet; depending on regulations, management expects Tesla to have fully autonomous vehicles in dozens of cities, covering a quarter to half of the US, by the end of 2026; revenue and cost per mile metrics for the robotaxi business are still not meaningful at the moment; management thinks autonomous vehicles will significantly change the global market size for automobiles; management is using Tesla’s vast network of charging and service centers that only the company has to prepare for the demand for robotaxi and autonomous vehicles; Tesla now has over 500 vehicles in the robotaxi fleet between the Bay Area and Austin in the US

We’re able to do our first rides with no safety monitor in the car in Austin. These are paid rides, just randomly selected paid rides with no safety monitor. And as of maybe yesterday or so, we don’t even have a chase car or anything like that. So these are just cars with no people in them, and no one is following the car in Austin. We obviously are being very cautious about this because we want to have no injuries or serious accidents along the way. So I think it makes sense to be very cautious, but you’ll see the amount of autonomy increase dramatically, I think, every month essentially…

…There will also be an opportunity, something we’ve talked about for a long time, for existing owners of Teslas to add or subtract their cars to the fleet, kind of like how Airbnb works, where you can add or subtract your house to the Airbnb inventory. And I think the value of people adding or subtracting their cars to the Tesla autonomous fleet is probably a little underweighted by a lot of people, because we’ve got millions of cars with AI4 that can do this. So I think it will provide an opportunity for a lot of customers to earn more by lending their car to the fleet than their lease cost to Tesla, which means, in that scenario, you basically get paid to own a Tesla…

…We expect to have fully autonomous vehicles in probably, I don’t know, somewhere between a 1/4 and 1/2 of the United States by the end of the year, pending regulatory approval. A big factor would be if there’s some kind of federal preemption for autonomous vehicles. In the absence of that, you kind of have to go on a city-by-city or state-by-state basis. But nonetheless, even if it is city by city, state by state, we expect to be in, I don’t know, dozens of cities, dozens of major cities by the end of the year…

…We’re still in the early phase of our fleet deployment and are still doing a lot of validation testing, so the revenue and cost per mile metrics are not meaningful to discuss at the moment…

…[Question] Today, there are approximately 90 million cars sold globally each year. Does Tesla have a view based on its Robotaxi ambition what this number will be in 5 or 10 years?

[Answer] Obviously, autonomy and Cybercab are going to change the global market size and mix quite significantly. I think that’s quite obvious. General transportation is going to be better served by autonomy as it will be safer and cheaper…

…We’re using our vast network of charging and service centers that really only Tesla has in this space to jump start our infrastructure build-out needs to get ahead of Robotaxi and autonomous vehicle demand. And we expect that because of this network, we are the only company capable of scaling at the rate that is needed for the tsunami of autonomy that is coming…

…One other thing people forget is that we’ve been deliberate on all this, in the sense that we have the supporting infrastructure already in place, whether it’s service centers or charging. Yes, we’ll have to augment as the fleet grows, depending upon the density of where the demand is and whatnot. But it’s not something we just stumbled upon and are starting to do. We’ve been at it for years. Not every city is designed the same way, and same thing, our infrastructure is also not the same in every city. But you have to give us credit that it’s been a journey.

…In terms of Robotaxi vehicles carrying paid customers, I think we’re well over 500 at this point between the Bay Area and Austin.

There are many countries where Tesla is selling vehicles where the latest version of FSD (Full Self Driving) software is not available; FSD now has nearly 1.1 million paid customers globally, and 70% of them paid upfront; in 2026 Q1, Tesla transitioned fully to a subscription model for FSD; a variant of the autonomous software used for robotaxi was recently shipped to customers of Tesla’s consumer vehicles with v14 (version 14) of FSD, and there was a lot of happy feedback from customers

We saw an increase in demand leading to record deliveries in smaller countries like Malaysia, Norway, Poland, Saudi Arabia and Taiwan, alongside continued strength in the rest of APAC and EMEA. We, therefore, ended 2025 with a bigger backlog than in recent years. Note that none of these countries have the latest version of FSD supervised available yet…

…FSD adoption continued to improve in the quarter, reaching nearly 1.1 million paid customers globally. Of these, nearly 70% were upfront purchases. It is important to note that beginning this quarter, we are transitioning fully to a subscription-based model for FSD. Therefore, net additions to this figure will primarily be via subscription model and in the short term will impact automotive margins…

…A variant of the software that’s used for the Robotaxi service was shipped to customers with V14, and customers saw a huge jump in performance, like a lot of happy feedback from customers. So — and since then, we have improved the software significantly as well.

Tesla’s Cybercab vehicles for the robotaxi fleet are designed to accommodate just 2 passengers or less because 90% of vehicle miles travelled are with that number of passengers; the Cybercab model will not have a steering wheel or pedals, so it’s fully autonomous; management expects to start production of Cybercab in April 2026, with a typical S-shape curve for the production ramp; in time, management expects to be producing several times more Cybercabs per year than all of Tesla’s other vehicles combined; management thinks that 1%-5% of miles driven in the future will be performed by humans; the Cybercab has a different design to traditional passenger vehicles and it is super optimised for minimum cost per mile and a much higher duty cycle; management expects a Cybercab vehicle to be used 50-60 hours a week compared to 10-11 hours a week for a human-driven car; management is designing larger vehicles for the Cybercab in the future

And over 90% of vehicle miles traveled are with 2 or less passengers now, which is why we designed Cybercab that way…

…The Cybercab, which is a dedicated 2-seater, is a dedicated Robotaxi. It’s a little confusing with the terms Robotaxi and Cybercab, sorry about the confusion. In fact, in some states, we’re not allowed to use the word cab or taxi, so it’s going to get even more strange; it’s going to be like cyber vehicle or something, cyber car. But the Cybercab, which is a specific vehicle model that we’re making, does not have a steering wheel or pedals. So there’s no fallback mechanism here: this car either drives itself or it does not drive. And we expect to start production in April. As always, the production rate is an S-curve. It starts off very slowly, then grows exponentially, then you hit the linear portion, and then ultimately it asymptotes at whatever your target volume is. Given that 90% of distance traveled (traveled, exactly, no longer driven) is with 1 or 2 people, and I think it’s like 80% with just one, it would mean that long term we would make several times more Cybercabs per year than all of our other vehicles combined…

…The vast majority of miles traveled will be autonomous in the future. I would say, probably less than — I’m just guessing, but probably less than 5% of miles driven will be where somebody is actually driving the car themselves in the future, maybe as low as 1%…

…The whole design of Cybercab was to optimize the fully considered cost per mile of autonomous driving. And it’s a different design problem than if you’re trying to design cars for people who will be driving versus being driven. And — so Cybercab is, like I said, super optimized for minimum cost per mile and also for a much higher duty cycle. So we would expect Cybercab to be used probably 50 or 60 hours a week instead of the 10 or 11 hours a week that a driven vehicle is used. So typically, people might drive their car for 1.5 hours a day on average, so it’s like 10 hours per week out of 168. But I think an autonomous vehicle is likely to be used probably 5x as often, which means that you need to design the vehicle for much more wear and tear per unit time and much more resilience…
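The duty-cycle arithmetic in the quote above can be checked in a few lines, assuming the stated 1.5 hours/day for a human-driven car and the roughly 5x multiplier for an autonomous vehicle:

```python
# Duty-cycle comparison from the Cybercab discussion: a human-driven car
# used ~1.5 hours/day versus a robotaxi used roughly 5x as much.
HOURS_PER_WEEK = 24 * 7  # 168 hours in a week

driven_hours = 1.5 * 7                # ~10.5 h/week, matching "10 or 11 hours"
driven_utilization = driven_hours / HOURS_PER_WEEK
robotaxi_hours = driven_hours * 5     # ~52.5 h/week, inside the "50 or 60" range
robotaxi_utilization = robotaxi_hours / HOURS_PER_WEEK

print(f"Driven car: {driven_hours:.1f} h/week ({driven_utilization:.1%} utilization)")
print(f"Robotaxi:   {robotaxi_hours:.1f} h/week ({robotaxi_utilization:.1%} utilization)")
```

A driven car sits at about 6% utilization of the 168-hour week, which is why a roughly 5x duty cycle forces the much more wear-resistant design Musk describes.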

…We will have larger vehicles in the Cybercab in the future that are designed for full autonomy. And we’ve actually shown pictures of this, and in fact, have shown prototypes. So this is not exactly a secret. In fact, we’ve given people rides in them. So we’re not keeping this — hiding this light under a bushel here.

Tesla’s management thinks getting the design for Tesla’s AI5 chip right is the most important thing for the company at the moment; management is confident that AI5 will be a very good chip; management expects AI6 to follow AI5 in under a year, and for AI6 to be a much better chip than AI5; management’s priority with Tesla’s chips is for internal usage as they believe that chip production will be the key limiting factor for Tesla’s growth in the next few years; Tesla is currently using its own AI4 chips in its data centers and is conducting training of its AI models with both NVIDIA chips and the AI4 chips; management thinks Tesla needs to build its own fab to (1) solve production constraints at the major leading-edge fabs, and (2) reduce geopolitical risk; if Tesla were to build its own fab, it will be in the USA and will include logic chips, memory chips, and advanced packaging capabilities; management thinks that some people are underestimating the geopolitical risks related to advanced fabs (likely referring to the situation involving TSMC, Taiwan, and China); management thinks memory chips will be a bigger limiter to Tesla’s growth than logic chips; there are currently no advanced memory fabs in the USA; management will be making a big announcement on Tesla’s fab in the future; management has sufficient plans to solve for Tesla’s chip supply for the next 3 years, but anything beyond that is fuzzy

I tend to spend time on whatever the most critical issue is for the company and completing the AI5 chip design and having it be a great chip is arguably the #1 most critical thing to get done, which is why I’m spending more time on that than currently anything else at Tesla…

…I do think AI5 will be a very good chip. And I feel quite confident about the design at this point. And then AI6, which will follow that, it will be — aspirationally would follow that in under a year will be yet another big leap beyond AI5…

…In terms of selling it outside of Tesla, we first need to make sure we have enough chips for all of our vehicle production and all of our Optimus production, and then we will actually use the AI5 chips in our data centers…

…When I look ahead at, let’s say, what’s the limiting factor for Tesla growth if you go, say, 3 or 4 years out, I think it actually is chip production. Is there enough AI logic, enough memory, enough RAM for our volume? Right now, I see that as being the thing that probably limits our growth in 3 or 4 years, which probably implies that we’re not selling chips outside of Tesla, because we need them.

…We already use the AI4 chips in our data centers. So when we do training, it’s a combination of the AI4 chips and NVIDIA hardware primarily that we do training with…

…This is definitely going to be a sort of a controversial thing, but I think Tesla needs to build a Terafab. And I mentioned this at the shareholder meeting. But even when you look at the output of — the best case output of all of our key suppliers and I would say even beyond suppliers like strategic partners like Samsung, TSMC and Micron, and we say like what’s the most you could possibly make, then it’s not enough. So we — I think in order to remove the constraint, the probable constraint in 3 or 4 years, we’re going to have to build a Tesla Terafab, a very big fab that includes logic, memory and packaging domestically. And that’s actually also going to be very important to ensure that we are protected against any geopolitical risks. I think people may be underweighting some of the geopolitical risks that are going to be a major factor in a few years…

…I think if we don’t do the Tesla Terafab, we’re going to be limited by supplier output of chips. And I think maybe memory is an even bigger limiter than AI logic. So for example, we have chip supply deals with TSMC in Arizona and Samsung in Texas. But currently, there are no advanced memory fabs at scale in the United States. They are zero, literally zero. Hopefully, Micron will have something going in a few years because they’re all headquartered in Idaho, where they make a lot of potato chips, but we need to make computer chips, too…

…Quite frankly, it would be crazy not to try the Terafab. We’ll have a bigger announcement on this in the future…

…We do have a solution for logic and memory for, let’s say, the next roughly 3 years. But if you start going beyond 3 years, we look at the scaling plans and how many fabs are getting built and especially if you factor in geopolitical uncertainty, there’s always a risk that the chips people were expecting to arrive don’t arrive.

Tesla recently invested in Elon Musk’s AI startup, xAI; Tesla is collaborating with xAI on AI technology and in fact, Tesla vehicles are already utilising xAI’s model, Grok; management believes Grok will be very useful for managing Tesla’s potentially massive robotaxi fleet; management also sees Grok as a model that could be useful for managing a fleet of Optimus robots

On January 16, 2026, Tesla entered into an agreement to invest approximately $2 billion to acquire shares of Series E Preferred Stock of xAI as part of their recent publicly-disclosed financing round. Tesla’s investment was made on market terms consistent with those previously agreed to by other investors in the financing round. As set forth in Master Plan Part IV, Tesla is building products and services that bring AI into the physical world. Meanwhile, xAI is developing leading digital AI products and services, such as its large language model (Grok). In that context, and as part of Tesla’s broader strategy under Master Plan Part IV, Tesla and xAI also entered into a framework agreement in connection with the investment. Among other things, the framework agreement builds upon the existing relationship between Tesla and xAI by providing a framework for evaluating potential AI collaborations between the companies. Together, the investment and the related framework agreement are intended to enhance Tesla’s ability to develop and deploy AI products and services into the physical world at scale. This investment is subject to customary regulatory conditions with the expectation to close in Q1’2026…

…Even today, if you look at Tesla vehicles, we are using Grok in there…

…Grok will be very helpful in, say, maximizing the efficiency of the management of a large autonomous fleet. So I mean, if you’ve got an autonomous fleet that’s in the future 10 million vehicles or tens of millions of vehicles, then optimizing the efficient use of that fleet, Grok will be, I think, way better than any heuristic solution or sort of manually managed solution.

And if you say you’re managing, say, a large team of Optimus robots to build a factory or build a refinery — say, a hypothetical example, a rare earth ore refinery, which we do desperately need in America — then you say, well, what’s going to organize the Optimus robots to build that ore refinery? You kind of need an orchestra conductor. And so then Grok would be kind of the orchestra conductor for the Optimus robots to build it — hypothetically, and it might not be hypothetical in the future. I’m just saying it’s not currently in our plans.

Tesla’s management believes that Tesla’s AI model has the highest intelligence density per gigabyte, by far, in the world

I think one of the metrics to consider for any given AI model is the intelligence per gigabyte — especially when you’re constrained on RAM, having an AI that has very high intelligence density per gigabyte. So you could say, for a given number of gigabytes, how much functionality can you get out of it? I actually think Tesla is ahead of the rest of the world in intelligence density of AI by an order of magnitude or more. This is going to sound like a pretty bold statement, but I kind of know what the intelligence efficiency of the big models is — like Grok and, to be honest, a bunch of the other models. And Tesla AI is, in terms of its memory efficiency, more than an order of magnitude better.

Visa (NASDAQ: V)

Visa’s Intelligent Commerce solution uses Visa Tokens as the foundation for agentic payments; Visa is working with more than 100 partners in the global commerce ecosystem to enable agentic commerce and over 30 partners are already building in Visa’s sandbox; Visa recently expanded into B2B agentic payments with Ramp; Visa recently entered into an agreement with AWS for Visa Intelligent Commerce to help developers build agentic commerce solutions; Aldar is integrating with Visa Intelligent Commerce to provide recurring payments services; Visa’s Trusted Agent Protocol helps bring trust to agentic commerce; Visa recently partnered with Cloudflare and Akamai on Trusted Agent Protocol; Visa is currently building interoperability between Visa Intelligent Commerce and Google’s Universal Commerce Protocol; Visa’s agentic solutions are already live in the US and CEMEA (Central and Eastern Europe, Middle East, and Africa); Visa’s agentic solutions are currently in pilot phase in Asia Pacific and Europe, with Latin America and the Caribbean (LAC) soon to come

One of those [ capabilities ] that is enabled with Visa Tokens is an important area of innovation, agentic commerce. Our Visa Intelligent Commerce solution utilizes tokens and their configurability as the core underlying foundation for agentic payments. We’re working to enable agentic commerce with more than 100 partners across the commerce ecosystem globally. Over 30 partners are actively building in our sandbox with multiple agents and agent enablers running live production transactions and more partners expected in the future.

Just this quarter, we expanded into B2B agentic payments with Ramp, streamlining corporate bill payments, enabling their business customers to capture cash back on card payments and optimizing working capital. We also reached an agreement with AWS to make Visa Intelligent Commerce available on AWS marketplace to support developers building agentic commerce solutions connecting secure, automated payment workflows at scale through blueprints for workflows such as travel bookings or retail purchases. In our CEMEA region, Aldar, a leading real estate developer, investor and manager, is integrating Visa Intelligent Commerce to make recurring payments such as property service charges on their Live Aldar app.

Our Visa Trusted Agent Protocol continues to help define the connectivity and data elements required to bring trust to the agentic environment. In Q1, we announced partnerships with leading Internet security players, first Cloudflare and then Akamai, who collectively serve millions of businesses globally, including 9 of the world’s top 10 retailers. In addition, we are building interoperability between key elements of Visa Intelligent Commerce and Google’s new Universal Commerce Protocol as part of our global effort to help ensure that Visa transactions are securely supported as different protocols evolve. Our agentic solutions are live in the U.S. and CEMEA and we are initiating pilot programs in Asia Pacific and Europe. LAC is soon to follow where we have already begun token enrollment for agentic commerce with issuers. We believe that we are well positioned to be the infrastructure provider and key enabler in agentic commerce so that every agent interaction is trusted and secure.

The AI-powered Visa Account Attack Intelligence solution has scored 60 billion transactions and identified 600 million suspicious transactions in the last 12 months; Visa Account Attack Intelligence has prevented more than $10 billion of fraud in LAC (Latin America and the Caribbean) in the last 6 months

Another AI-powered solution, Visa Account Attack Intelligence, was announced in 2024 in the U.S. to help clients prevent enumeration attacks, which occur when bad actors systematically initiate e-commerce transactions to obtain valid payment credentials. The results of this solution in the U.S. have been impressive, with over 60 billion transactions scored and nearly 600 million suspicious transactions identified in the last 12 months…

…In LAC, for example, in just 6 months, we have almost 90% of clients already activated and have prevented more than $10 billion of fraud.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, Apple, ASML, Mastercard, Meta Platforms, Microsoft, Netflix, PayPal, TSMC, Tesla, and Visa. Holdings are subject to change at any time.