The Latest Thoughts From American Technology Companies On AI (2026 Q1)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2026 Q1 earnings season.

The way I see it, artificial intelligence (or AI) really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E 2 and ChatGPT. Since then, developments in AI have progressed at a breathtaking pace.

We’re thick in the action of the latest earnings season for the US stock market – for the first quarter of 2026 – and I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:

With that, here is the latest commentary, in no particular order:

Alphabet (NASDAQ: GOOG)

Gemini Enterprise has 40% sequential growth in paid monthly active users in 2026 Q1; Gemini 3.1 Pro is pushing the frontier in reasoning, multimodal understanding, and cost; there is now a wide variety of models in the Gemini 3.1 family to meet different developer needs; Gemini 3.1 Flash Live is powering conversational features in search and the Gemini app, and speech-to-text is now available in 70 languages; Gemini 3.1 Pro has delivered a big upgrade to Alphabet’s Deep Research product; the Lyria 3 model has generated over 150 million songs since its launch in the Gemini app; Nano Banana 2 generated 1 billion images in nearly half the time it took Nano Banana 1; management recently launched Gemma 4, Alphabet’s best open model to date, and it has been downloaded more than 50 million times in a few weeks; Nano Banana 2 was recently integrated into the Gemini app to enable personalised image creation; Gemini is now integrated with Google Maps, so users can converse with Google Maps via chat

Gemini Enterprise is seeing tremendous momentum with 40% growth quarter-over-quarter in paid monthly active users…

…Gemini 3.1 Pro continues to push the frontier in reasoning, multimodal understanding and cost. We have quickly expanded the Gemini 3.1 series of models to offer more choices for developers, including our cost-efficient Flash models. 3.1 Flash Live, our latest audio model, has improved precision and reasoning, making voice interactions more natural and intuitive. It’s now powering conversational features in search and the Gemini app. Speech-to-text is now available in 70 languages. And with 3.1 Pro, our Deep Research agent got a big upgrade, including MCP support and native visualizations.

Our generative media models are incredibly popular. Lyria 3 has generated over 150 million songs since launching on the Gemini app. Nano Banana 2 reached 1 billion images in nearly half the time of Nano Banana 1. And Veo 3.1 Lite is our most cost-efficient video model to date.

On top of this, we launched Gemma 4, our most intelligent open model. It’s been downloaded over 50 million times in just a few weeks. In fact, our open models have now been downloaded over 500 million times…

…This month, we integrated Nano Banana 2 to make personalized image creation possible in the Gemini app. Maps recently got its most significant upgrade in over a decade with Gemini. Users can now have a conversation with Maps and get more personalized suggestions and intuitive directions.

Alphabet’s management thinks Google Cloud has the widest variety of compute options with Alphabet’s custom TPUs and Axion CPUs, and NVIDIA GPUs; Google Cloud will be among the first cloud providers to offer NVIDIA’s Vera Rubin NVL72 systems; Alphabet recently introduced the 8th generation of TPUs, which has a training variety and an inference variety; TPU 8t, the training variety, offers 3x the processing power and 2x the performance of the previous generation; TPU 8i, the inference variety, has 80% better performance per dollar in inference compared to the previous generation; Alphabet’s TPUs are powering the company’s AI research, including models and tooling; management will begin to deliver TPUs to select customers in their own data centers to expand the TPU opportunity; management expects to recognise most of the revenue of external TPU shipments in 2027; management does think about the ROIC of external TPU shipments compared to internal deployment

Our custom TPUs, Axion CPUs and the latest NVIDIA GPUs continue to form the industry’s widest variety of compute options. NVIDIA GPUs are a core part of our AI accelerator portfolio and will be among the first to offer NVIDIA Vera Rubin NVL72 in addition to the Blackwell and Hopper-based instances already available.

At Cloud Next, we introduced our 8th-generation TPUs, individually specialized for training and serving and able to take on the most demanding agentic workloads. TPU 8t provides high-performance model training with 3x the processing power of Ironwood and 2x the performance. TPU 8i delivers cost-effective, low-latency inference with 80% better performance per dollar than the prior generation. This exceptional infrastructure powers our world-class AI research that includes models and tooling, which continue to progress really well.

Our TPUs continue our leadership in performance, cost and power efficiency for customers like Thinking Machines Lab, Hudson River Trading and Boston Dynamics. As TPU demand grows from AI labs, capital markets firms and high-performance computing applications, we’ll begin to deliver TPUs to a select group of customers in their own data centers in the hardware configuration to expand our addressable market opportunity…

…We expect to begin recognizing a small percent of the revenues from these agreements later this year with the vast majority of revenues to be realized in 2027. It is important to keep in mind that revenues from TPU hardware sales will fluctuate from quarter-to-quarter, depending on when TPUs are shipped to customers…

…On the second question around TPUs, obviously, I would — we do think about it as what are we doing through Google Cloud to help our customers? And that’s the framework with which we think about it. In that context, there are situations where it makes sense. For example, you take customers like capital markets where they are running this highly performant AI workloads. They wanted TPUs in their data centers. So there are — and those trends are true across a diverse set of industries and in certain cases, frontier AI labs, too. And so we are opportunistic about it. But I do think we step back and think about it overall as the opportunity for Google Cloud. A lot of it is providing infrastructure through cloud. At times, it is direct sales of TPU hardwares to a select group of customers. But again, we do take ROIC approach. And some of it helps us get more economies of scale, scale in our overall compute environment as well. And so helps us invest in the cutting edge, which we need to do in the next generation as well.

Alphabet is using Antigravity, the company’s 1st-party agentic coding solution, to manage fully autonomous digital task forces

With Antigravity, we are shifting to truly agentic workflows. Our engineers are now orchestrating fully autonomous digital task forces and building at a faster velocity. Much more to come here. 

Google Search queries are at an all-time high, driven by AI; AI Overviews is driving overall search growth; AI Mode is seeing strong growth in both users and usage globally; management recently shipped agentic experiences in Google Search, such as restaurant booking, to new countries; management recently shipped the multi-modal capability, Search Live (where users can have voice conversations with AI while sharing their phone’s camera feed to study their surroundings), globally; search latency has been reduced by more than 35% in the past 5 years despite the new AI features introduced in Google Search; management has reduced the cost of AI responses in AI Overviews and AI Mode by more than 30% since they were upgraded to Gemini 3

AI continues to drive search usage and queries are at an all-time high. We continue to invest in improvements to AI Overviews, which are driving overall search growth and we are also seeing strong growth in both users and usage of AI Mode globally…

…We also shipped agentic experiences like restaurant booking to new countries and new multimodal capabilities like Search Live globally…

…Even as we have brought new AI features into our results page, we have reduced search latency by more than 35% over the past 5 years. And since upgrading AI Overviews and AI Mode to Gemini 3, we have reduced the cost of core AI responses by more than 30%, thanks to continued hardware and engineering breakthroughs.

Alphabet’s management thinks a key point of Google Cloud’s differentiation is its 1st-party solutions across the enterprise AI stack; Google Cloud’s enterprise AI solutions became Google Cloud’s primary growth driver for the first time in 2026 Q1; revenue from products built on Alphabet’s GenAI models was up nearly 800% year-on-year in 2026 Q1; new customer acquisition doubled in 2026 Q1 from a year ago; the number of $100 million to $1 billion deals doubled year-on-year in 2026 Q1; Google Cloud customers outpaced initial commitments by 45% in 2026 Q1, accelerating from 2025 Q4; Google Cloud recently introduced new capabilities across its vertically optimized AI stack, including a new Gemini Enterprise Agent Platform that helps users build, orchestrate, govern, and optimize agents; Gemini Enterprise paid monthly active users were up 40% sequentially in 2026 Q1; the partner ecosystem for Gemini Enterprise had 9x year-on-year growth in 2026 Q1 in seats sold by partners and number of partners using Gemini Enterprise internally; 330 Google Cloud customers processed over 1 trillion tokens each over the last 12 months, with 35 processing over 10 trillion tokens each

Google Cloud is differentiated because we are the only provider to offer first-party solutions across the entire enterprise AI stack…

…Our enterprise AI solutions have become our primary growth driver for cloud for the first time. In Q1, revenue from products built on our GenAI models grew nearly 800% year-over-year. We are winning new customers faster with new customer acquisition doubling compared to the same period last year. We are seeing strong deal momentum, doubling the number of $100 million to $1 billion deals year-on-year and signing multiple $1 billion-plus deals…

…Customers outpaced their initial commitments by 45%, accelerating over last quarter.

At Cloud Next last week, we introduced hundreds of new capabilities across our vertically optimized AI stack that are designed to work together for our enterprise customers. We introduced a new Gemini Enterprise Agent Platform that empowers users to build, orchestrate, govern and optimize agents with the controls that enterprise customers need. Along with new capabilities in Gemini Enterprise app like Projects, Canvas, Long-Running agents and Skills, every employee can build agents.

In Q1, Gemini Enterprise paid monthly active users grew 40% quarter-over-quarter. That includes major global brands like Bosch, Citi Wealth, Merck and Mars Inc. Our partner ecosystem plays an increasingly critical role in driving Gemini Enterprise adoption. We saw 9x year-over-year growth, both in seats sold with partners and in the number of partners adopting it for internal use…

…Over the past 12 months, 330 Google Cloud customers each processed over 1 trillion tokens. 35 reached the 10 trillion token milestone.

Gemini is applied in YouTube for better matching and discovery between brands and creators; Gemini now powers YouTube Creator Partnerships; management has made it easier for advertisers to buy premium advertising space in top podcast shows on YouTube; Supergoop! partnered with a YouTube creator for a Shorts and long-form CTV campaign, which led to a 93% lift for a product and a 55% overall brand lift.

We are applying Gemini to drive better matching and discovery between brands and creators of all sizes. And Gemini now powers YouTube Creator Partnerships, a centralized platform integrated directly into YouTube Studio for creators and Google Ads for advertisers. 

We’ve also made it easier to buy premium ad space in top-tier podcast shows by curating the most watched podcasts into popular genres. For example, Supergoop! partnered with YouTube creator, Liza Koshy on a multi-format shorts and long-form CTV campaign, resulting in a 93% lift for their Glowscreen product and a 55% overall brand lift.

Waymo has so far launched in 6 new cities in 2026 and is currently in 11 major US cities; Waymo is now providing 500,000 rides per week (was 400,000 in 2025 Q4)

Waymo is on a great trajectory. It launched in Nashville a few weeks ago, that makes 6 new cities so far in 2026 and operations in 11 major U.S. cities in total. Waymo also surpassed 500,000 fully autonomous rides per week, doubling in less than a year.

Alphabet’s management is accelerating the deployment of Gemini across the company’s entire advertising infrastructure; the deployment of Gemini has led to new performance breakthroughs in advertising quality, advertiser tools, and new AI user experiences; Alphabet is making significant strides in improving relevance even when there isn’t a direct user query; advertising in Discover is getting better aligned with unique user interests; promoted pins in Maps are deeply relevant to user surroundings, location of interest, history and intent; Alphabet’s advertising relevance has increased by nearly 10%; Gemini is now powering Smart Bidding to more accurately match user intent to an advertiser’s product; management launched AI Max to help advertisers adapt to a new conversational way of searching by consumers; AI Max was moved out of beta earlier in April 2026; Hilton EMA used AI Max to capture 33% more clicks at 20% of the spend, and to increase average booking value by 55%; Etsy used AI Max to increase search volume by 10% with 15% of queries being net new; more than 30% of customer search spend now uses AI Max or Performance Max, and advertisers using the tools enjoy more conversions for the same spend; management is reinventing advertising formats for AI-native experiences; direct offers in AI Mode are resonating with users; management is testing a new advertising format in AI Mode that displays retailers who sell recommended products in the AI Mode’s answer to a query; management launched Universal Commerce Protocol (UCP) in January 2026; UCP has new members consisting of major technology companies; brands such as Sephora and Macy’s have joined Ulta Beauty, which is already rolling out UCP; Ulta Beauty recently launched agentic commerce experiences in AI Mode and the Gemini app; management has received great feedback on UCP and they think UCP will power a new checkout experience in AI Mode, Search, and the Gemini app

We are accelerating the deployment of Gemini across our entire ads infrastructure to help businesses reach more customers in more places than ever before. This is driving significant improvements across all areas of marketing and continues to fuel new performance breakthroughs across 3 areas critical for our customers’ success, ads quality, advertiser tools and new AI user experiences.

First, ads quality. AI is boosting our ability to deeply understand user intent for a given search query and to find the most relevant ad. Even when we don’t have a direct user query, we’re making significant strides in improving relevance. In Discover, new AI models and classifiers are driving higher relevance by better aligning ads with unique user interests. In Maps, we’re using Gemini to ensure promoted pins are deeply relevant to user surroundings, location of interest, history and intent. This work is improving ads relevance by nearly 10%, leading to significant increase in user engagement. We’re pairing this strengthened prediction-driven relevance with bottom-of-funnel precision. Over the past year, we’ve made over 20 improvements to search and shopping bid strategies. Smart Bidding now uses Gemini to match user intent to an advertiser’s product and services more accurately and further drive performance. This level of granularity was previously impossible to achieve at scale.

Second, on advertiser tools, where Gemini helps advertisers drive more efficient and effective campaigns. People no longer search in fragments. They search conversationally and share more context. We launched AI Max to help advertisers adapt to this new way of searching. And earlier this month, it moved out of beta with improved performance quality across targeting and creative capabilities. Take Hilton EMA, they captured 1/3 more clicks for 1/5 of the spend while simultaneously increasing the average booking value by 55%. And Etsy saw a 10% search volume uplift with 15% of those queries being net new to their business. We see significant opportunity as advertisers continue to make good progress on AI readiness and the adoption of AI tools. For instance, more than 30% of our customer search spend now uses AI-enabled campaigns, AI Max or Performance Max. And these advertisers are seeing more conversion for the same spend.

Third, how we monetize new AI user experiences in search? We aren’t just bringing existing ad formats into AI experiences. We are reinventing ads for this new era. Direct offers in AI Mode are resonating with users and continue to receive positive customer feedback. Gap, L’Oreal and Chewy are just some of the latest partners who have now signed up to test this Google Ads pilot.

We’re also exploring new formats for retailers. AI Mode already surfaces organic product recommendations based on the user’s query and we’re now testing a new ad format that displays retailers who sell those recommended products. In addition, the retail industry is rapidly coalescing around the open source Universal Commerce Protocol, or UCP, we launched in January in partnership with the ecosystem. Last week, we welcomed Amazon, Meta, Microsoft, Salesforce and Stripe as new members to the UCP Tech Council. They joined founding members, Shopify, Etsy, Target, Wayfair and Google to further accelerate the transition towards an agentic future. Partners like Sephora and Macy’s have joined companies like Ulta Beauty, who are already rolling out UCP and can now redefine consumer journeys from discovery to checkout. Ulta Beauty just last week launched agentic commerce within AI Mode and Search and the Gemini app. Shoppers can now review product recommendations, compare options and complete streamlined checkout for eligible purchases directly within AI Mode and Gemini…

…We’ve received tremendous feedback so far from hundreds of top tech companies, payments partners, retailers, really interested in integrating. And it will help power a new checkout experience in AI Mode, in Search and the Gemini app and allowing shoppers to actually check out from select merchants, right as they’re researching on Google and going through this journey.

Google Cloud had 63% revenue growth in 2026 Q1 (was 48% in 2025 Q4) driven by growth in GCP; GCP grew at a much higher rate than Google Cloud’s overall growth; Google Cloud’s growth was driven by AI solutions and AI infrastructure; Google Cloud operating margin was 32.9% (was 30.1% in 2025 Q4 and was 17.8% in 2025 Q1); Google Cloud backlog grew nearly 100% sequentially to $462 billion in 2026 Q1 (was $240 billion in 2025 Q4); most of Google Cloud’s backlog consists of GCP contracts, and just over 50% of the backlog is expected to be recognised as revenue in the next 2 years; Google Cloud’s impressive margin improvement was driven by leverage from revenue growth, and management’s insistence on running an efficient organisation

 Cloud revenues accelerated across all key areas and were up 63% to $20 billion. Revenue growth was driven by strong performance in GCP, which continued to grow at a rate that was much higher than cloud’s overall revenue growth rate. The largest contributor to cloud’s growth this quarter was AI solutions, driven by strong demand for industry-leading models, including Gemini 3. In addition, we had strong growth in AI infrastructure due to continued deployment of TPUs and GPUs and core GCP continues to be a sizable contributor driven by demand for infrastructure and other services such as cybersecurity and data analytics. Workspace again delivered strong double-digit revenue growth, driven by an increase in the number of seats and the average revenue per seat. Cloud operating income was $6.6 billion, tripling year-over-year and operating margin increased from 17.8% in the first quarter of last year to 32.9%.

Google Cloud’s backlog nearly doubled sequentially, reaching $462 billion at the end of the first quarter. The increase was driven by strong demand for enterprise AI offerings and the inclusion of TPU hardware sales that Sundar referenced earlier. The majority of the backlog is related to typical GCP contracts and we expect to recognize just over 50% of the backlog as revenue over the next 24 months…

…[Question] There’s a thesis out there that AI revenues are a lower margin in general but we are seeing margins improve. So more insights on just the cloud business and what’s driving that margin expansion.

[Answer] There are pushes and pulls across the business, including within cloud specifically. And I would start with the top line. When we see this robust strong revenue growth, both in Cloud and Google Services, it does provide leverage all the way down to the bottom line within the income statement. And you know we’ve been working hard to ensure we have — we’re running a productive and efficient organization. And it’s not just how we operate the business but even in areas such as our technical infrastructure, where we are investing the significant CapEx investments in our data centers and servers, we are looking at how we drive scientific process innovation within that organization. And that is reflected both in Cloud and Google Services as we allocate costs based on consumption. In the past, I did talk about the depreciation associated with these investments that is hitting both Google Cloud and Google Services. Google Cloud expanded margin quite significantly from a year ago, as you’ve seen in our numbers that we’ve just previewed. And a lot of it, again, is the top line growth that Google Cloud is providing or producing as well as an incredibly efficient way of running the business.

Alphabet’s management has raised capex guidance for 2026 to $180 billion to $190 billion (was previously $175 billion to $185 billion; 2025’s capex was $91.4 billion, which was itself up 65% from $55.4 billion in 2024, and 2024’s capex was up 69% from 2023); management is seeing unprecedented demand for AI compute; Alphabet’s investments in AI compute are delivering strong growth; management expects 2027’s capex to be much higher than 2026’s; management is investing in capex based on tangible demand signals and a ROIC framework; Google Cloud remains constrained by supply and would have grown faster in 2026 Q1 if supply was higher

…We will begin to deliver TPU hardware to a select group of customers in their own data centers. We expect to begin recognizing a small percent of the revenues from these agreements later this year with the vast majority of revenues to be realized in 2027. It is important to keep in mind that revenues from TPU hardware sales will fluctuate from quarter-to-quarter, depending on when TPUs are shipped to customers…

…Wiz will be reported in the Google Cloud segment. And second, we expect a low single-digit percentage point headwind to cloud’s operating margin for the remainder of 2026 related to the acquisition…

…We are updating our full year 2026 CapEx guidance range to $180 billion to $190 billion, up from our previous estimate of $175 billion to $185 billion to now include investment related to the acquisition of Intersect, which closed in March.

We are seeing unprecedented internal and external demand for AI compute resources. The investments we are making in AI is delivering strong growth as evidenced by the record revenue and backlog growth in Google Cloud and strong performance in Google Services. Looking ahead, these strong results reinforce our conviction to invest the capital required to continue to capture the AI opportunity. As a result, we expect our 2027 CapEx to significantly increase compared to 2026. In terms of expenses, as we’ve discussed previously, the significant increase in our investment in technical infrastructure will continue to put pressure on the P&L in the form of higher depreciation expense and related data center operations costs such as energy. We also expect to continue hiring in key investment areas such as AI and cloud and are investing in marketing to support our AI products…

…You’ve seen us over the past several years increase CapEx every year. And we have done it very thoughtfully to meet the demand that we are seeing, both from external customers as well as demands across the organization. And you’re seeing the proof point, the ROIC on that in terms of just the growth rate we’re seeing, whether it’s growth rate within search or certainly the cloud business and the opportunity we have within the cloud backlog…

…I do think looking ahead, our ability to invest in this moment and stay at the frontier, I think puts us in a strong position. And I think we are doing it based on tangible demand signals we are seeing. And it’s not just on the revenue side but I’m talking from a ROIC framework and that’s what is helping us navigate this moment responsibly…

…We are compute constrained in the near term. And as an example, our cloud revenue would have been higher if we were able to meet the demand.

Amazon (NASDAQ: AMZN)

AWS grew 28% year-on-year in 2026 Q1 (was 24% in 2025 Q4) and is now growing at its fastest pace in 15 quarters; AWS’s run rate has reached $150 billion (was $142 billion in 2025 Q4); the last time AWS grew at a similar rate, it was half its current size; AI’s growth is unprecedented; AWS’s AI revenue run rate exceeded $15 billion in the 1st 3 years of the AI wave, nearly 260x larger than AWS’s own run rate in its 1st 3 years; management thinks customers are choosing AWS for AI for 4 reasons, namely, (1) AWS’s broader capabilities, (2) customers want their AI inference to be where their other applications and data reside, and this happens to be in AWS, (3) customers want to consume non-AI services as they grow their AI usage, and AWS has a broad set of offerings, and (4) AWS has the strongest security and operational performance; AWS has announced new agreements with many customers since 2025 Q4’s earnings call, including OpenAI, Anthropic, Meta Platforms, and NVIDIA; AWS continues to see strong growth in non-AI workloads as enterprises focus on cloud migrations; management is seeing customers who want to benefit from AI accelerate their migration to the cloud; management is seeing a strong correlation between customers’ AI spend and core growth in AWS; AWS’s AI revenue is growing triple digits year-on-year; AWS operating income in 2026 Q1 was $14.2 billion, reflecting a 37.7% operating margin (was 35.0% in 2025 Q4 and 39.5% in 2025 Q1); AWS’s backlog is $364 billion in 2026 Q1 with significant sequential growth (was $244 billion in 2025 Q4), and the backlog has reasonable breadth and does not include a recent $100 billion deal with Anthropic

AWS growth continued to accelerate, up 28% year-over-year, the fastest growth rate in 15 quarters, up $2 billion quarter-over-quarter, the largest Q4 to Q1 AWS revenue increase ever. AWS is now a $150 billion annualized revenue run rate business. It’s very unusual for a business to grow this fast on a base this large. And the last time we saw growth at this clip, AWS was roughly half the size. We’ve never seen a technology grow as rapidly as AI…

…3 years after AWS launched, it had a $58 million revenue run rate. In the first 3 years of this AI wave, AWS’ AI revenue run rate is over $15 billion, nearly 260x larger.

There are several reasons customers are choosing AWS for AI. First, we’ve built broader capabilities than others…

…Second and another reason customers continue choosing AWS is that as they expand their use of AI, they want their inference to reside near their other applications and data and much more of it resides in AWS than any place else. Third, as customers expand their AI usage, they also want to consume additional non-AI services, and they’re choosing AWS because we’ve built the broadest and most capable core offerings by a wide margin. We offer thousands of features across compute, storage, databases, analytics, security and more, and Gartner consistently recognizes AWS’ leadership across their major cloud evaluation areas. Fourth, AWS is the strongest security and operational performance of any AI and infrastructure provider and start-ups, enterprises and governments continue to choose AWS as the foundation for their most critical workloads…

…Since last quarter’s call, we’ve announced new agreements with OpenAI, Anthropic, Meta, NVIDIA, Uber, U.S. Bank, Fox, Southwest Airlines, U.S. Army, Bloomberg, Cerebras, AT&T, Nokia, Fundamental, The National Geographic Society, PGA TOUR and many more…

…Moving to our AWS segment. Revenue was $37.6 billion and growth accelerated 480 basis points to 28% year-over-year, driven by both core and AI services. We continue to see customers increase cloud migrations and scale their use of AWS core services. Customers seeking the full benefit of AI are accelerating their transition to the cloud. We also see a strong correlation between AI spend and core growth. As customers spend more on AI, we see a corresponding demand increase in core. We expect this to increase over time as customers move more AI workloads into production, strengthening demand for our core services…

…Our AI revenue is growing triple digits year-over-year…

…AWS operating income was $14.2 billion and reflects our strong growth, coupled with our focus on driving efficiencies across the business…

…The backlog for Q1 is $364 billion. That does not include the recent deal that we announced with Anthropic for over $100 billion. There’s reasonable breadth in that as well. It’s not just 1 customer or 2 customers.

AWS’s chips business, including Graviton and Trainium, grew nearly 40% sequentially in 2026 Q1; the chips business is now at a $20 billion annual revenue run rate (was $10 billion in 2025 Q4), and growing triple-digits; if AWS sold its chips as a stand-alone business, its annual revenue run rate would be $50 billion; AWS’s custom silicon business is now one of the top 3 data center chip businesses in the world; Anthropic and OpenAI both recently signed very large multi-year commitments for Trainium; Trainium now has over $225 billion in revenue commitments; Trainium 2 has 30% better price-performance than competitor GPUs and is largely sold out; Trainium 3, which only started shipping at the start of 2026, is 30%-40% more price-performant than Trainium 2 and is nearly fully subscribed; much of Trainium 4 has already been reserved despite being 18 months from broad availability; Amazon Bedrock runs most of its inference on Trainium; Meta Platforms has committed to using tens of millions of AWS’s Graviton CPU cores; Amazon management sees massive demand for CPUs as agentic AI, post-training, and inference scale up; Graviton has up to 40% better price-performance than other x86 CPUs; Graviton is used by 98% of the top 1,000 AWS EC2 customers; AWS is bringing in more Trainium chips than NVIDIA GPUs, but NVIDIA remains an important partner; management expects Trainium to eventually save AWS tens of billions of dollars of capex annually and provide several hundred basis points of operating margin advantage; management believes that people will always want choice in models and chips; management is currently balancing Trainium supply between existing customer demand and potential rack sales, but thinks there is a good chance AWS will sell Trainium racks to 3rd parties over the next couple of years

Our chips business continues to grow rapidly and is larger than what a lot of folks thought. We saw nearly 40% quarter-over-quarter growth in Q1, and our annual revenue run rate is now over $20 billion and growing triple-digit percentages year-over-year…

…If our chips business was a stand-alone business and sold chips produced this year to AWS and other third parties as other leading chip companies do, our annual revenue run rate would be $50 billion. As best as we can tell, our custom silicon business is now one of the top 3 data center chip businesses in the world, the speed at which we’ve gotten here is extraordinary…

…We’ve recently shared very large multiyear, multi-gigawatt Trainium commitments from the 2 leading AI labs in the world in Anthropic and OpenAI as well as an increasing number of companies like Uber betting on Trainium. And we now have over $225 billion in revenue commitments for Trainium. Our Trainium2 chip has about 30% better price performance than comparable GPUs and is largely sold out. Trainium3, which just started shipping at the start of 2026 and is 30% to 40% more price performance than Trainium2 is nearly fully subscribed. And much of Trainium4, which is still about 18 months from broad availability has already been reserved. Amazon Bedrock, which is used expansively by over 125,000 customers, runs most of its inference on Trainium and almost 80% of the Fortune 100 companies are using Bedrock.

We also just announced that Meta is committed to using tens of millions of Graviton cores. Graviton is our industry-leading CPU chip, which allows Meta to run the CPU-intensive workloads behind agentic AI with the performance and efficiency they need at their scale. AI is commonly seen as a GPU story, but the rise of agentic workloads, real-time reasoning, code generation, reinforcement learning and multistep task orchestration is driving massive CPU demand as well. As AI systems shift from answering questions to taking actions and as post-training and inference scale up, the compute required pulls heavily on CPUs. That’s why Meta chose Graviton, which delivers up to 40% better price performance than any other x86 processors and now used by 98% of the top 1,000 EC2 customers…

…While the largest number of AI chips we’re bringing in are Trainium, we continue to have a deep partnership with NVIDIA. We have immense respect for them, continue to order substantial quantities. We’ll be partners for as long as I can foresee, and we’ll always have customers who want to run NVIDIA on AWS, and we will also have a very large chips business ourselves. Customers always want choice. It’s always been true and always will be true…

…At scale, we expect Trainium will save us tens of billions of dollars of CapEx each year and provide several hundred basis points of operating margin advantage versus relying on others’ chips for inference…

…But the one thing you learn over and over again with every technology, it was true in databases, it was true in analytics. It was true in models. It’s true in chips, too, by the way, is that customers want choice. There is not one tool to rule the world, and they want choice…

…On the question about Trainium and the notion of our selling racks over time, I do think that’s very much a possibility. Always, we have to balance — we have such demand right now for Trainium, and we have such demand from various companies who will consume as much as we make that we have to decide how much we’re going to allocate to the existing demand and customers and how much we’re going to save to sell as racks. And for our existing customers that we sell Trainium to, how many will be Trainium plus running on our cloud infrastructure versus just the chips themselves. But I expect over time, there’s a good chance we’re going to sell racks over the next couple of years.

Amazon’s management remains confident in the returns generated by the company’s capex; much of the capex spent in 2026 will be installed in future years; customers have already committed to substantial portions of the 2026 capex; management sees attractive margins and ROIC (return on invested capital) for the 2026 capex; AWS has to spend more short-term capex the faster it grows, since AWS needs to spend on land, power, chips etc 6-24 months in advance of monetisation; AWS’s capex often funds assets with useful lives measured in years and decades; AWS’s capex generates attractive cumulative free cash flow and ROIC a few years after being in service; Amazon’s free cash flow in the early years of high-growth periods for AWS is limited until the early capacity is monetized and revenue growth outpaces capex growth, and management has seen this cycle in AWS’s first big growth wave and expects similar positive outcomes from the current wave; management expects to continue making significant investments in AI; management has no change on Amazon’s 2026 capex plan (original guidance for 2026 was for $200 billion, and this is up from $128 billion in 2025, and $83 billion in 2024); management first saw the trend of rising input prices for capex in 2025 H2 and has been working with suppliers to get supply; management is seeing rising memory prices act as a push-factor for companies to shift from on-premise to the cloud

We continue to be confident in the long-term CapEx investments we’re making. Of the AWS CapEx we intend to spend in 2026, much of which will be installed in future years, we have high confidence this will be monetized well as we already have customer commitments for a substantial portion of it and that it will yield compelling operating margins and ROIC…

The faster AWS grows, the more short-term CapEx we will spend. AWS has to lay out cash for land, power, buildings, chips, servers and networking gear in advance of when we can monetize it, typically 6 to 24 months before we start billing customers depending on the component. However, these CapEx investments fund assets with many-year useful lives, 30-plus years for data centers, 5 to 6 years for chips, servers and networking gear. The free cash flow and ROIC for these investments are cumulatively quite attractive a couple of years after being in service. However, in times of very high growth like now, where the CapEx growth meaningfully outpaces the revenue growth, the early years’ free cash flow is challenged until these initial tranches of capacity are being monetized and revenue growth outpaces CapEx growth. We’ve been through this cycle with the first big AWS growth wave and like the results. We expect to feel similarly about this next wave with much larger potential downstream revenue and free cash flow…

…We will continue to make significant investments, especially in AI, as we believe it to be a massive opportunity with the potential to drive long-term revenue and free cash flow…

…I don’t have an update on — a new update on capital. Our plan is largely the same…

…Everybody knows that the cost of these components, particularly memory has skyrocketed. And we’re just in a stage where there’s just not enough capacity for the amount of demand. We have worked very closely with our strategic partners. We saw this trend happening early in the kind of the middle of the latter part of last year, and we’ve worked with our strategic suppliers here to get a significant amount of supply. And so we’re working very closely with them. I think the team has been very scrappy. I think we’ve done a good job in making sure that we’re not capacity constrained there, but we’re watching that very closely.

One of the interesting things that we see right now with the change in price and in supply on things like memory is that it is a further impetus pushing companies who have on-premises infrastructure into the cloud. And it’s because a meaningful part, these suppliers are prioritizing their very largest customers which cloud providers are. And so we have seen a number of conversations we’ve been having with enterprises for many months where it’s just been slower in getting the transformation plan to move to the cloud accelerate rapidly just because we have a lot more supply than what others have.

SageMaker, AWS’s model-building service, reduces training time of models by up to 40%; Bedrock, AWS’s fully-managed service for companies to build upon frontier models, had 170% sequential growth in customer spend in 2026 Q1; Bedrock processed more tokens in 2026 Q1 than all prior years combined; OpenAI’s latest models are already, or will soon be, available on Bedrock; Amazon management recently added the Amazon Bedrock Managed Agents feature, which helps organizations build generative AI applications and agents at production scale; Amazon Bedrock Managed Agents is powered by OpenAI, and OpenAI is seeing unprecedented demand for the product; Amazon management believes companies will derive the most value from AI through agents; Strands, AWS’s open source AI agents SDK (software development kit), has been downloaded more than 25 million times, with downloads up 3x sequentially in 2026 Q1; AgentCore is used to deploy an agent every 10 seconds; AWS has turnkey agentic solutions, including Kiro and Quick; Kiro, AWS’s coding agent, saw its user count double sequentially in 2026 Q1 and enterprise usage grow nearly 10x; Quick, AWS’s AI assistant, has seen new customers grow 4x sequentially in 2026 Q1; management recently launched the Quick desktop app, which helps improve users’ productivity; Amazon Bedrock now has over 125,000 customers; 80% of the Fortune 100 are using Amazon Bedrock; AWS delivered 4x improvement in Trainium 2’s token throughput for Bedrock, leading to more capacity to serve customers; management thinks having OpenAI’s models on Bedrock is a big deal; Bedrock is already serving 3rd-party models from all the non-OpenAI key players; management believes that people will always want choice in models and chips; management believes that most of the work being done with models in the future will be of the stateful variety; Bedrock Managed Agents is a feature unique to AWS

We’ve built broader capabilities than others. That includes model building with SageMaker, which reduces training time by up to 40%, high-performance inference with the leading selection of frontier models in Bedrock, which saw 170% growth in customer spend quarter-over-quarter and processed more tokens in Q1 than all prior years combined.

We’re excited to make OpenAI’s models available in Bedrock. Yesterday, we added OpenAI’s GPT-5.4 model with 5.5 coming soon. Yesterday, we also started the preview of Amazon Bedrock Managed Agents powered by OpenAI, the Stateful Runtime Environment that enables any organization to build generative AI applications and agents at production scale. We believe that modern agentic applications will be stateful, and this new technology will rapidly accelerate agentic AI adoption. OpenAI has said they’re already seeing unprecedented demand for this new product, and we’re seeing heavy customer interest as well.

Most of the value companies derive from AI will be through agents. In AWS customers can build agents with their proprietary data and Strands, which has been downloaded more than 25 million times and saw 3x more downloads quarter-over-quarter. Customers can deploy agents with enterprise scale, security and reliability with AgentCore, which is being used to deploy an agent as frequently as every 10 seconds. We also offer turnkey agents for coding, software migrations, business operations and knowledge workers in Kiro, Transform, Connect and Quick, and they continue to resonate with customers. The number of developers using Kiro more than doubled quarter-over-quarter and enterprise customer usage increased nearly 10x. Customers have used Transform to save over 1.56 million hours of manual effort when migrating and modernizing their workloads. The number of new customers using Quick has grown more than 4x quarter-over-quarter, and we just announced our Quick desktop app yesterday. It’s very compelling as it can query your e-mail, calendar, Slack, local files and several other applications you use every day to flag important communications, retrieve and summarize information, make recommendations, compose and send communications to others and create agents that highlight or automatically do work that you used to have to do yourself. You can easily keep refining your preferences and Quick’s advanced knowledge graph enables its AI agents to automatically learn from your interactions to become more personalized over time…

…Amazon Bedrock, which is used expansively by over 125,000 customers, runs most of its inference on Trainium and almost 80% of the Fortune 100 companies are using Bedrock…

…Bedrock has been a significant growth driver. In 2025, we delivered 4x improvements in Trainium2’s token throughput. And since the majority of Bedrock’s workloads run on Trainium, these efficiency gains directly translate into more capacity to serve customers…

…The fact that we’re going to have all of the OpenAI models available in Bedrock is a big deal. It’s a big deal for customers. And we have — we obviously have a very large amount of AI being done in Bedrock today on the models we have and this is Anthropic and Llama and Mistral and a host of others. But the one thing you learn over and over again with every technology, it was true in databases, it was true in analytics. It was true in models. It’s true in chips, too, by the way, is that customers want choice. There is not one tool to rule the world, and they want choice…

…Most of the model work and most of the AI has been done in these stateless models, kind of tokens in and tokens out. And while I think there will continue to be lots of work done that way, I think the future of using these models is a stateful model, a stateful API. And that’s because when you’re building agents, you’re building AI applications, you don’t want to start a new every time you interact with the model. You want to store state. You want to store identity, you want to store what the conversation or the actions have been, you want to reach out and do a little bit of compute here. You want to have the tools to be able to reach — the models reach out to the different tools to accomplish different tasks. And that only happens if you’re able to store state. And so the Bedrock Managed Agents that we collaborated with and invented with OpenAI that we just announced a preview of yesterday is also — I think that’s the future of how these agents are going to be built. It’s something that nobody else has, and I think it’s very exciting to our customers.
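
To make the stateless-versus-stateful distinction in the quote above more concrete, here is a minimal, hypothetical sketch in Python. The names (ChatModel, AgentSession, and so on) are invented for illustration and are not Amazon Bedrock’s actual API; the sketch only shows why an agent that stores identity, conversation history, and tool results between calls does not have to “start anew” every time it interacts with a model:

```python
# Hypothetical illustration only: ChatModel and AgentSession are invented names,
# not Amazon Bedrock's real API.

class ChatModel:
    """Stateless usage: every call starts from scratch (tokens in, tokens out)."""

    def complete(self, prompt: str) -> str:
        # A real system would call a hosted model endpoint here.
        return f"[model answer to: {prompt}]"


class AgentSession:
    """Stateful usage: the session keeps identity, conversation history and
    tool results between calls, so the agent never starts from a blank slate."""

    def __init__(self, model: ChatModel, user_id: str):
        self.model = model
        self.user_id = user_id          # stored identity
        self.history: list[str] = []    # stored conversation and tool state

    def ask(self, message: str) -> str:
        # Fold prior turns into the prompt so the model sees accumulated state.
        context = "\n".join(self.history)
        answer = self.model.complete(f"{context}\nUser: {message}")
        self.history += [f"User: {message}", f"Agent: {answer}"]
        return answer

    def call_tool(self, tool, *args):
        # The agent can reach out to a tool and keep the result as state.
        result = tool(*args)
        self.history.append(f"Tool result: {result}")
        return result


# Stateless: two independent calls with no shared memory.
model = ChatModel()
model.complete("What laptops do you sell?")
model.complete("Which one is lightest?")   # the model has no idea what "one" refers to

# Stateful: the session remembers the first question when answering the second.
session = AgentSession(model, user_id="customer-42")
session.ask("What laptops do you sell?")
session.ask("Which one is lightest?")      # history supplies the missing context
```

The argument in the quote is that when this kind of state lives in a managed runtime rather than inside each individual application, agent builders do not have to reimplement the memory, identity, and tool-calling plumbing themselves.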

Amazon is able to deliver items faster while lowering its cost to serve, and management sees meaningful opportunities to further improve the fulfillment network’s productivity; Amazon’s latest generation of robotics offers a step change in efficiency; management is deploying the latest generation of robotics in both new and existing fulfillment facilities, and early results are positive

Overall unit growth of 15% continues to outpace our cost to operate the fulfillment network as outbound shipping costs grew 12% year-over-year and fulfillment expense grew 9% year-over-year, both on an FX-neutral basis. As our network efficiency improves, we’re able to deliver items faster and improve the customer experience while at the same time lowering our cost to serve. Looking ahead, we see meaningful opportunities to further enhance productivity across our global fulfillment network, all while continuing to raise the bar in delivery speed. We will keep optimizing inventory placement to shorten distance traveled, reduce touches per package and improve consolidation rates.

Alongside these efforts, we deploy robotics and automation, which have been integral to our operations for decades. Our latest generation technologies offer a step change in efficiency, which we’re deploying in both new and existing facilities. All of our U.S. large-format fulfillment center launches in 2026 will have this latest generation technology. We’re seeing early positive results with improved site safety, higher productivity and lower cost to serve.

Amazon management recently launched Health AI, a personal health agent

We launched Health AI, a 24/7 AI-powered personal health agent backed by One Medical clinicians that gives U.S. customers instant clinical guidance and takes action with their permission from booking appointments to managing prescriptions to facilitating medical treatment with a real One Medical provider.

Rufus, Amazon’s AI shopping assistant, saw monthly active users grow 115% year-on-year in 2026 Q1, and engagement increase by 400%; Rufus has improved a lot over the past year

Rufus, our agentic AI shopping assistant continues to resonate with customers. Rufus can research products, track prices and auto buy products in our store when they reach a set price. Monthly active users are up over 115% and engagement is up nearly 400% year-over-year…

…If you haven’t checked out Rufus in a while, it’s really substantially improved over the last year.

Amazon’s management recently launched a new AI-powered insights experience in Seller Central, Amazon’s hub for sellers; the initial response to the new experience has been very strong

We recently introduced a new AI experience for sellers in Seller Central that dynamically generates a custom, personalized visualization of data, key insights and scenarios tailored to the sellers’ goals. It’s early, but the initial response and feedback are very strong.

Amazon’s management recently expanded Creative Agent to more countries; Creative Agent is Amazon’s agentic offering that helps advertisers plan and execute the entire advertising creative process; management recently launched sponsored products and brand prompts in Rufus; 20% of shoppers interacting with brand prompts in Rufus carry on the conversation

Our Ads team also continues to invent and deliver for advertisers with AI. For example, we expanded Creative Agent, an agentic partner that plans and executes the entire ad creative process to Canada, France, Germany, India, Italy, Spain and the U.K. And we recently introduced Sponsored Products and Brand Prompts in Rufus that help brands showcase products and customers make more informed buying decisions. It’s early, but we’re seeing nearly 20% of shoppers who interact with the Brand Prompts in Rufus continue the conversation about that brand.

Amazon’s management recently expanded early access to Alexa+ to Mexico, UK, Italy, and Spain; compared to the previous Alexa, users are completing 3x more purchases on device, streaming 25% more music, and using smart home functionality 50% more 

Alexa+ early access expanded to millions more Prime members in Mexico, the U.K., Italy and Spain. Customers are loving Alexa+, talking to Alexa twice as much and for longer durations across a wider breadth of topics, completing purchases on devices 3x more, streaming music 25% more and using smart home functionality 50% more than Alexa classic.

Amazon’s management continues to be very bullish on agentic commerce; management thinks agentic commerce will be very good for customers and Amazon in the long run; referrals from agentic commerce are currently only a small fraction of the referrals Amazon sees from search engines; management thinks the user-experience with agentic commerce from 3rd-party agents is still poor, as pricing and product information are often wrong, and the agents don’t have personalization data and shopping history; management is working with 3rd-party agent providers to improve the experience; management continues to think that the agentic shopping assistant that will prevail will come from existing retailers that customers already have a good relationship with, and management is attempting to build Rufus to be the prevailing agentic shopping assistant; management thinks agentic commerce will be a great thing for Amazon’s advertising business for 2 reasons, namely, (1) agentic AI will drive a greater volume of advertising, and (2) agentic commerce provides multiple opportunities to surface relevant products to customers

We are very bullish on what agentic commerce will look like. I think it’s going to be very good for customers in the long term. I think it will be good for us, too…

…We’ll do a lot of work with third-party horizontal agents to try and make that customer experience better. And by the way, I do think today, it reminds me in some ways the stage we’re in of what we saw in the early days of search engines and they’re trying to refer business to e-commerce. It’s never been a giant part of the referrals to our e-commerce business. But over the years, the experience got better. And what you see with agentic commerce is it’s a small fraction of what we see with the search engine referrals, but the experience just hasn’t gotten great with these third-party horizontal agents yet. They’re not often able to get the pricing right or the product information right. They don’t have any personalization data or any shopping history. And so we do want to see that get better with third-party horizontal agents. We’re having conversations with all those folks to try and make that better and find something that works for customers and all the companies.

And then it will be interesting over time which agents customers choose to use. I happen to think that if you’re going to a particular retailer that you’d like to do business with and you like to shop from, if they have a great agentic shopping assistant, you’re going to often start there because it’s where you’re doing your shopping, it’s easier to — they have better product information. They have better information about what other customers like you are buying. You can make all sorts of changes to how your account and your shipping information is working there. And so that’s what we’re aiming to make Rufus be is we’re aiming to have it be the best shopping assistant anywhere, and I think we’re on that path…

…On the Agentic Commerce and how that impacts advertising, I actually believe that we’re going to like this for advertising. I think it’s going to be good for customers, and it’s going to be good for our business. And I think, first of all, the first thing to remember is the way that our ads team has built tools and agents themselves is making it so much easier to do advertising. If you look at small and medium-sized businesses that had to take weeks and months to do creative and to pick the right audience, all of that is just — it’s so much faster and so much easier because of our advertising agentic tools. And you no longer have to take as much time or spend as much money building the creative.

So I think there are going to be a lot more advertising — advertisers with the rise of what’s happening in AI. And then if you look at the Agentic Commerce experiences, if you look at any of these agentic experiences, they tend to be multi-turn conversations where you’re not interacting with one search and getting an answer. You tend to find that you’re asking questions, you’re narrowing questions, it’s asking you questions on what you want. And in that process of having multi turns, there are multiple opportunities to surface relevant products to customers, many of which will be organic and some of which will be sponsored. And it also gives rise to opportunities like sponsored prompts.

In the 2025 Q4 earnings call, Amazon’s management said market demand for AI compute looked like a barbell with AI labs on one end spending a lot on compute for just a handful of applications, and with enterprises on the other end using AI for productivity purposes; now, management is starting to see enterprises using AI for brand-new experiences

The AI labs are spending an incredible amount of money on compute at this point and in compute, both on the AI side as well as on the core side. And the models that they’re building and the companies that have successful generative AI applications are certainly spending a lot. And there are several of those labs. But we also see quite a bit of enterprise adoption and usage of AI. As I’ve said before, the largest absolute place that we see enterprises having success is in projects that are around cost avoidance and productivities. These are things like automating customer service or business process automation or fraud or things of that sort. But the number of projects that we’re working with across enterprises and that we’re now starting to see to come to production around brand-new experiences, trying to figure out how to reinvent their current experiences, but using inference and AI to be smarter, also very significant. So we’re seeing the adoption in both of those segments.

Amazon’s management sees a giant impact on how AI will shape Amazon’s business internally; management believes AI will completely reinvent Amazon’s current customer experiences in the fullness of time; management is aware of the innovator’s dilemma that can trap Amazon as it reinvents its customer experiences to be AI-native, and is actively avoiding the trap; Amazon swapped out the engine of a service while the service was running at full tilt, with a team of just 5 people using agentic coding tools to build the new engine in 65 days; the rebuild would previously have taken 40-50 people a year

On the use of AI internally and for our current businesses, I think that the shortest first summary I could give you, Colin, is that I do not see a place in any of our businesses or any of the ways that we do work where we’re not going to have giant impact on what we do. I think I’ve long had this belief that while you can add incrementally to a lot of your existing customer experiences, different agentic and AI experiences, I really believe that in the fullness of time, and I don’t know if that’s 3 years from now or 5 years from now or it could be sooner, too, that all of these customer experiences we know are going to be completely reinvented…

…It’s tricky for — if you have an existing business that’s doing well. But you have to look at every single one of your customer experiences and you have to be able to carve off resource for that team to think anew about what would the future customer experience look like if you started from scratch today, and if you had all the technologies like AI available to you when you started. And that is what we’re doing in every single one of our experiences…

…If you look at one of our services, we swapped out the engine of the service while we are also running the service full tilt. And normally, that would have taken 40 or 50 people about a year to do, and we took 5 really smart people, AI forward-thinking people building on agentic coding tools and those 5 people rebuilt it in 65 days. Like that is a very different world of operating. And that’s the world I think we’re heading to over the next few years.

Apple (NASDAQ: AAPL)

The iPhone 17 family contains the A19 and/or the A19 Pro chips, which include neural accelerators to deliver strong AI capabilities

During the quarter, we welcomed iPhone 17E, the newest addition to what is already the strongest iPhone lineup we’ve ever had. It brings outstanding performance and core iPhone experiences at a remarkable value for everyone from enterprise teams to consumers. Across the lineup, this is the most powerful, capable and versatile iPhone family we’ve ever created. That starts with the latest in Apple silicon for iPhone, A19 and A19 Pro, which include neural accelerators in the GPU to deliver a huge boost to AI performance

Apple’s management thinks the Mac is the best platform for AI, with Apple’s in-house chips giving Macs the ability to run advanced AI models on-device; the MacBook Air now comes with the M5 chip, which enables the product to run AI models on device; the MacBook Pro has even more advanced versions of the M5 chip in M5 Pro and M5 Max

From Mac Mini to MacBook Pro and everything in between, Mac is the best platform for AI with Apple Silicon delivering exceptional performance, industry-leading efficiency and the ability to run advanced models locally in ways that simply weren’t possible before…

…We’ve also further improved MacBook Air, already the world’s most popular laptop with M5, making everyday tasks faster and more responsive than ever. MacBook Pro reaches new heights with M5 Pro and M5 Max, delivering extraordinary performance and dramatically advancing what users can do with AI on a portable system…

Apple’s new AirPods Max 2 has Apple’s most advanced active noise cancellation technology; AirPods can now do live translation, thanks to Apple Intelligence

During the quarter, we introduced customers to a new level of audio experience with AirPods Max 2, delivering stunning sound quality and our most advanced active noise cancellation yet…

…AirPods can bridge languages too, thanks to Live Translation powered by Apple Intelligence.

Apple Intelligence now has more powerful capabilities such as visual intelligence and cleanup in Photos; management is looking to launch a more personalised Siri later in 2026; Apple Intelligence is powered by Apple’s self-designed chips; management is not treating AI as a standalone feature but is instead treating AI as an essential experience

In addition to live translation, Apple Intelligence brings together dozens of powerful capabilities from visual intelligence to cleanup and photos that are seamlessly integrated into the moments that matter most to our users every day. And we look forward to bringing a more personalized Siri to users coming this year. What truly sets Apple apart is how Apple Intelligence is woven into the core of our platforms, powered by Apple Silicon and designed from the ground up to deliver intelligence that is fast, personal, and private. This is not AI as a stand-alone feature, but AI as an essential intuitive part of the experience across our devices. It builds on years of innovation from the neural engine to advanced on-device processing, enabling capabilities that are not only incredibly powerful, but also respectful of user privacy.

Reminder that in 2025, management committed to invest $600 billion over 4 years (was a $500 billion commitment in 2025 Q2; Apple has around $190 billion in gross profit per year, for perspective) in the USA in areas such as advanced manufacturing, silicon engineering and artificial intelligence; Apple now has Mac mini production in the USA; in March 2026, management brought 4 new companies to Apple’s American manufacturing program; Apple is on track to buy over 100 million advanced chips from TSMC’s Arizona fab; later in 2026, Apple will open its advanced manufacturing center in Houston to provide hands-on training for students, supplier employees and American businesses

We’re also making great progress in advancing American supply chain innovation. As part of our $600 billion commitment to the U.S., we were pleased to share recently that Mac mini production is coming to America later this year, expanding our factory operations in Houston with a brand-new facility. In March, we were thrilled to welcome 4 new companies to our American manufacturing program to help manufacture essential materials and components for Apple products sold worldwide. These include sensors that support key iPhone features like camera stabilization and integrated circuits essential for features like crash detection and activity tracking. These efforts build on the progress we’ve made in the American manufacturing program, including the work we’re doing to advance an end-to-end silicon supply chain across the U.S. At TSMC’s Arizona facility, for example, Apple is on track to purchase well over 100 million advanced chips.

As we’re accelerating our long-standing support for U.S. innovation, we’re also investing in America’s workforce. We’re looking forward to opening the doors to an all-new advanced manufacturing center in Houston later this year, which will provide hands-on training led by Apple experts and tailor-made for students, supplier employees and American businesses.

The Mac Mini and Mac Studio models are great devices for AI and agentic AI, and so demand from consumers was greater than management expected; management thinks the supply constraints with the Mac Mini and Mac Studio will take a few months to resolve; management’s guidance for 2026 Q2 already embeds significantly higher memory costs; management thinks memory costs will have an increasing impact on Apple’s business

You look forward to the June quarter, the majority of our supply constraints will be on several Mac models given the continued high levels of demand that we’re seeing. And we have less flexibility in the supply chain than we normally would. For Mac, in the June quarter, there’s 2 factors that are driving the constraints. One is that on the Mac Mini and the Mac Studio, both of these are amazing platforms for AI and Agentic tools. And the customer recognition of that is happening faster than what we had predicted. And so we saw higher-than-expected demand. The second reason is that the customer response to Mac Neo has just been off the charts, with higher-than-expected demand…

…We think looking forward that the Mini and the Mac Studio may take several months to reach supply-demand balance…

…I’ll go back to December for a moment and just walk you through the chronology. In the December quarter, we really had a minimal impact due to memory, and you can kind of see that in the gross margin results. We said it would be a bit more in the March quarter, and we did see higher memory costs in the March quarter, and they were partially offset by benefits from carry-in inventory that we had. For the June quarter and what’s embedded in the guidance that Kevan went through earlier, we expect significantly higher memory costs. They are also partly offset by the benefit of carry-in inventory. And then where we don’t give color beyond June, I can tell you that beyond the June quarter, we believe memory costs will drive an increasing impact on our business.

Apple’s management has been investing more in AI in both products and services, and this shows up in the company’s operating expenses, specifically in R&D (research and development); the increased investments in AI include building Apple’s own foundation models, and in the collaboration with Google; Apple’s collaboration with Google on foundation models is going well

[Question] As we think longer term, do you think Apple will invest more? Where will Apple invest more heavily over the next several years? And is this at all related to your net cash comments in terms of perhaps building out more infrastructure as we enter an AI-centric world?

[Answer] We are clearly investing more. You can see that in the OpEx numbers. And if you click down on those a step deeper and look at the R&D area separate than SG&A, you’ll find that R&D is even accelerating much higher than the company is. And so we are clearly investing. We’re investing in products and services, and we see opportunities in both of those…

…We believe AI is a really important investment area for Apple, and we’re going to be doing that incrementally on top of what we normally invest in our product road map…

…[Question] Last quarter, you did talk about Apple foundational models and sort of the two-pronged strategy there of the collaboration with Google as well as continuing to internally sort of work on your own models. Hoping you can sort of give us an update in terms of how you’re able to balance those 2 priorities as well as do you feel like you need to double down and invest more to be able to balance those 2 priorities side by side?

[Answer] We are investing more. You can see that in the OpEx numbers. And as I’ve mentioned before, the R&D, in particular, is — has scaled rather significantly on a year-over-year basis. The collaboration with Google is going well. We’re happy with where things are, and we’re happy with the work that we’re doing independently as well.

ASML (NASDAQ: ASML)

ASML’s management is seeing the semiconductor industry’s growth continue to solidify, driven by AI investments, and this applies to both advanced Memory and advanced Logic; management thinks semiconductor supply will not meet demand for the foreseeable future, and this is creating constraints in end markets, including AI; management is seeing ASML’s Memory customers being asked to ramp supply; ASML’s memory customers are sold out for 2026, with supply constraints extending beyond the year; management is seeing ASML’s Logic customers building capacity, including for the 2nm node to meet AI demand and mobile demand; management is seeing ASML’s customers increasing their capital expenditure to ramp up their capacity, and this capacity is supported by long-term commitments from their customers; management is seeing ASML’s Memory customers and Logic customers increase their adoption of EUV and DUV immersion lithography; the level of demand for ASML’s DUV immersion lithography systems in 2025 was significantly lower than what’s currently seen; besides DUV immersion, management is also seeing health in the DUV dry lithography business; management has seen major adoption of EUV by ASML’s DRAM customers in 2025 because EUV provides better performance; DRAM has been a really good story for lithography intensity in 2025; ASML’s customers have been very open with the company on their expansion plans

We see that the semiconductor industry growth continues to solidify. This is still very much driven by investments in AI infrastructure. So, this translates into a lot of demand for advanced Memory, for advanced Logic. We expect in fact that the supply will not meet the demand for the foreseeable future. So, this is creating a strong constraint in the end markets from AI to mobile and PC. As a result our customers are strongly invited to create more capacity. So if we look at Memory, what our customers tell us is that they are sold out for 2026. And their supply constraints will last beyond 2026. For advanced Logic, we see our customers building capacity for several nodes, while they also continue to ramp 2 nm in order to address the AI products…

…We see our Memory and Logic customers increasing their capital expenditure and trying to accelerate basically their capacity ramp in 2026 and beyond. What’s also very interesting is that a lot of this demand is supported by long-term commitment from their customers. On top of that, we see both Memory customers, DRAM customers and advanced Logic customers continuing to increase their adoption of EUV, but also immersion. So this translates basically into higher lithointensity and a higher litho demand for ASML…

…When it comes to immersion DUV, we actually had a bit of a slow start because in the course of last year, we were looking at a significantly lower demand for immersion. That has now reversed itself…

…I already mentioned what we’re doing on immersion, but also the dry business is doing quite nicely…

… In the Logic business, our customers are adding capacity across multiple advanced nodes to support demand while continuing to ramp the 2-nanometer node in support of next-generation HPC and mobile application…

…We have seen a major adoption of EUV in DRAM in 2025. And you may have noticed that our, I will say, U.S. DRAM customer also made this announcement that they were shifting also pretty strongly on EUV. And the reason for that is, of course, performance, but it’s also capacity because if you are going to use more EUV layers, you are going to need less multi-patterning and multi-patterning takes a lot of space also in the fab. So I think this is also definitely another argument in favor of EUV. I think this was mentioned, by the way, by this U.S. customer in their call. So I would say the first results of that is, first, more adoption of Low NA EUV…

…DRAM has been really a good story when it comes to litho intensity in ’25…

…Customers are very, very open. By the way, that’s also the case on the Logic side. But very — customers are very open to us, and they’re very openly discussing with us also their expansion plans for this year, but also beyond.

ASML’s management does not want EUV systems to be the bottleneck in building compute capacity for AI; EUV systems are not the bottleneck today

We do not want EUV to be the bottleneck. So I think I’d like to say that very, very strongly…

…I know the question of bottleneck comes back very often. I think we don’t feel at all that we are the bottleneck today.

Intel (NASDAQ: INTC)

Intel’s management expects sustained momentum for the company’s Xeon server CPU products in 2026 and 2027, with the Xeon 6 being Intel’s fastest new product ramp in 5 years alongside the Core Series 3 products; Xeon’s momentum is powered by the reinsertion of CPUs as a foundation for AI where the CPU-to-GPU (accelerators) ratio is swinging back in the CPU’s favour; management thinks the CPU’s resurgence in AI is great news for Intel’s x86 CPU ecosystem; Intel saw strong ASIC growth in 2026 Q1 sequentially and year-on-year; Intel’s DCAI (Data Center and AI) segment signed multiple long-term agreements in 2026 Q1; Xeon 6 was recently selected as the host CPU for NVIDIA’s DGX Rubin NVL8 systems; Xeon remains the most deployed host CPU for AI systems; DCAI recently started a multiyear collaboration with SambaNova to design a next-generation AI inference architecture; management’s confidence in the sustained growth of CPUs for AI is growing; management’s outlook for server CPU demand has improved in 2026 Q1; management expects the server CPU industry to have a strong year of double-digit unit growth in 2026, extending to 2027; the long-term agreements signed by DCAI have volume and pricing terms, and last 3-5 years; Intel’s customers are telling the company that CPUs are more important in AI inferencing and agentic AI than AI training, with the ratio of GPUs-to-CPUs flipping from 8:1 to possibly 1:more-than-1; management believes Intel’s CPUs will be very effective competitors to the likes of ARM, AMD, and the hyperscalers

Demand continues to run ahead of supply for all our businesses, especially for Xeon server CPUs, where we expect sustained momentum this year and next. Intel 3-based Xeon 6 and Intel 18A based Core Series 3 products are now in full volume production ramp and each represents the fastest new product ramp in 5 years…

…For the last few years, the story around high-performance computing was almost exclusively about GPU and other accelerators. In recent months, we have seen clear signs that the CPU is reinserting itself as the indispensable foundation of the AI era. CPU now serves as the orchestration layer and critical control plane for the entire AI stack. This is not just our wishful thinking, it is what we hear from our customers, and it is evident in the demand profile for our products. Xeon server demand is seeing strong and sustained momentum. Customers are deploying server CPUs alongside accelerators in a ratio that is moving back towards the CPU. The accelerator remains central to Frontier AI, and we will continue to participate, innovate and partner in that category. Our recent announcement with SambaNova Systems is an example of such partnership on heterogeneous compute architectures. But the backbone of AI computing in production remains a CPU-anchored architecture. That is good news for the x86 ecosystem. It is great news for Intel…

…We also saw strong ASIC growth with revenue up more than 30% sequentially and nearly doubling year-over-year…

…Within the quarter, DCAI signed multiple long-term agreements, including Google, supporting our view that the current business momentum is sustainable. In addition, Xeon 6 was selected as the host CPU for NVIDIA’s DGX Rubin NVL8 systems, and Xeon remains the most deployed host CPU due to its industry-leading memory, security and networking orchestration. Lastly, DCAI also established a multiyear collaboration with SambaNova to design a next-generation heterogeneous AI inference architecture combining SambaNova’s RDUs and Intel Xeon 6 processors…

…Our confidence in the sustained growth of CPUs driven by the AI infrastructure build-out is growing. Our outlook for server CPU demand has improved over the last 90 days, and we expect a strong year of double-digit unit growth for the industry and for us with momentum extending into 2027…

…Most of these agreements are structured with volume and pricing, and they are usually somewhere between 3 and 5 years…

…The feedback from the customer, CPU is very important when you move from training to inference. Inference side, I think in terms of orchestration, control plane and also managing all the different agents with data, CPU is much more efficient. So I think the ratio of CPU to GPU used to be 1:8, and now it’s 1:4 and I think towards parity or even better…

…One statistic that we look at is the ratio of CPUs to GPUs. And if you look at training solutions, they’re generally running in the kind of 7 to 8 GPUs to 1 CPU. As we look into inference, it’s probably getting into like the 3 to 4:1 kind of level. And as you get into agentic and multi-agent, it potentially even flips in the other direction a little bit…

…[Question] On server CPU competition. So both when we look at competition versus x86 against AMD, do you think you are gaining share? Do you expect to gain share against them? And then broader, I think the competition against Arm because NVIDIA is planning to launch a stand-alone Vera CPU Rack. Recently, we heard Amazon talk up their Graviton option. I think Google yesterday said they would launch Axion and connect it with every TPU. So just kind of near term, how do you look at competition versus AMD and x86?

[Answer] The CPU is in great demand right now. I think we all enjoy that. And then in terms of our product road map, we have been fine-tuning the last year… We are laser-focused on execution. Multithreading, I think we are putting in. So we’re going to have Coral Rapid, have the multithreading so that we can compete effectively with AMD. And we try to accelerate that Coral Rapid ahead. And then the other part is we’re also looking at some of the architecture, CPU and GPU architecture… In all, I think we have the team, we have the technology road map. I think we’re going to be — over time, going to be very effective competitors to them.

Intel’s management sees the semiconductor industry’s addressable market approaching $1 trillion, driven by AI demand, and the company is well positioned to benefit

Driven by tremendous demand for AI, the semiconductor industry TAM is now approaching $1 trillion. Intel is well positioned to benefit from this demand with 3 strategically important assets: our x86 CPU franchise, our advanced packaging technology and our vast manufacturing network.

Intel’s management sees AI moving into the real world, with more distributed inference

Artificial intelligence is now moving into the real world towards more distributed inference and reinforcement learning workloads like agentic, physical AI and robots and edge AI.

Intel’s management is pleased with the progress of the company’s foundry technology development, but it will be a long journey; the manufacturing yields of the Intel 3 and Intel 18A process technologies are now running ahead of management’s projections; Intel continues to make progress in advanced packaging technologies, with additional customer backlog growth in 2026 Q1; Intel’s 14A process technology is now at a higher level of yield compared to 18A at a similar point in time, and the company is developing PDKs (process design kits) with multiple customers; management expects to see design commitments for 14A in 2026 H2 and 2027 H1; the progress of Intel Foundry has driven the company to land more of its own future product tiles on the Intel 14A process; Intel Foundry will be supporting TeraFab, the huge semiconductor project undertaken by Elon Musk’s companies; management wants to work with TeraFab to improve the manufacturing efficiency of semiconductors; rising prices for memory chips and other materials are a headwind for Intel Foundry’s gross margin in 2026 H2; management will continue to utilise a multi-foundry approach for Intel; Intel Foundry’s advanced packaging business is seeing demand in the billions of dollars; Intel Foundry’s advanced packaging is a differentiated offering – it allows customers to use larger reticles – and so it’s getting attractive pricing; Intel Foundry’s 18A yields are going to hit management’s end-2026 targets by the middle of the year; most of Intel Foundry’s supply is for internal demand at the moment, but management expects it to win customers over time

The accelerating deployment of AI infrastructure creates a meaningful opportunity for us as we continue to build our external foundry business. I’m pleased with the progress we have made in foundry technology development over the last year, even though I will continue to remind you this will be a long journey for us. We have made steady progress with Intel 4 and Intel 3 and 18A yields are now running ahead of the internal projections, representing a meaningful inflection in our execution and our factory finished good output.

We also continue to make steady progress on our advanced packaging technologies, including additional growth in customer backlog in the quarter.

Intel 14A maturity yield and performance are outpacing Intel 18A at a similar point in time, and we continue to develop PDKs with multiple customers actively evaluating the technology…

…We expect to see earlier design commitments emerge beginning in the second half of 2026 and expanding into the first half of 2027…

…I’m particularly pleased that our progress today has driven us to land more of our own future product tiles on Intel 14A as well. At a time when advanced wafer capacity is in short supply, this enables us to have better control over our supply chain…

…As we look to continue challenging the status quo, I can think of no better partners than Elon Musk. We recently announced our partnership with SpaceX, xAI and Tesla to support Terafab. Elon and I share a strong conviction that global semiconductor supply is not keeping pace with the rapid acceleration in demand. We are excited to explore innovative ways to refactor silicon process technology, looking for unconventional ways to improve manufacturing efficiency that will eventually lead to a dynamic improvement in the economics of semiconductor manufacturing…

…Our foundry team is delivering consistent yield and throughput improvements across all process nodes, which will help gross margins. With that said, Intel 18A is still early in its ramp and rising input costs, especially in memory, present growing headwinds in the second half that we need to overcome…

…I’d say the one cautionary concern I have on gross margin in the back half of the year is just some of the materials have gone up in terms of cost, substrates are going up, T glass. We’ve got memory going up, as you know. So those things offset some of the improvements that we’re having through the year…

…TSMC is a very important partner for us. Morris and C.C. have decades of friendship. And then clearly, with our product group will decide which is the best foundry. So I think we’re going to use a multi-foundry approach, our own internal and also external. And so we really have good relationship, continue to build from both sides to benefit the customer…

…[Question] I would love to kind of level set where we are on the advanced packaging front. You talked about rising backlog. Anything you can share in terms of what that number looks like?

[Answer] We have been really pleased with our traction there. And I think maybe naively, I had thought that these opportunities would come in the hundreds of millions of dollars level. But so far, what we’re seeing is that their demand is more in the billions of dollars per year kind of level. So this is going to be a big part of the foundry revenue as we get through this decade. And the good news is advanced packaging really is a differentiated offering for us, and it does a lot for the customer in terms of allowing them to use larger reticles. So there’s real value to the customer. And as a result, we get very attractive pricing relative to some of the other areas of the foundry business…

…18A yields are somewhat a closely guarded proprietary piece of information for us. So we don’t typically — I would just say Lip-Bu had a target as we came into the year for the end of this year, and we’re probably going to hit that probably the middle of this year…

…[Question] As we think about your capacity tightness, the leading edge foundries are also quite tight as well. Has this driven any near- to medium-term share gains?

[Answer] All the supply right now or the lion’s share of the supply is all internal, but we do expect, obviously, to win customers over time.

Intel’s AI-driven businesses are now 60% of revenue, and were up 40% year-on-year in 2026 Q1

AI-driven businesses now represent 60% of revenue and grew 40% year-over-year.

Intel’s management now expects capital expenditures to be flat in 2026, but the actual dollar-amounts spent on tools will be up 25% in 2026, as management is seeing a lot of demand and wants to catch up on supply

We forecast capital expenditures in 2026 to be flat to last year versus our prior expectation of flat to down, reflecting increased capacity investments to support committed demand and a continued emphasis on improving fab productivity and output. We now expect expenditures to be roughly equal across the year and still to be heavily weighted towards the equipment that directly grows wafer outs to support growth this year and next…

…In the last few years, a lot of our CapEx spending was space. And I think we’re actually in a pretty good position in space. We wanted to have white space available to move into when needed. And I think Lip-Bu and I both feel like we’re in a good place. So we actually will be bringing the space spend down pretty materially, even though the total is flat. And so what that means is the tool spend is actually increasing pretty significantly. In fact, tool spending will be up year-over-year 25% or so. And so that’s, I think, a function of the fact that we just see a lot of demand, and we want to make sure we’re catching up on the supply front.

Intel’s management thinks the ASIC business will be a fast-growing one for the company in the next 5 years; the ASIC business is already at a run rate of more than $1 billion

[Question] On the ASIC business, Dave, I think you said it doubled year-on-year. If you could maybe help us with what is included in that? I believe it’s IPUs, but I just want to get a better sense how big it is.

[Answer] Stay tuned on that one, the next 5 years are going to be fast growing for us…

…One thing that people have been surprised about is how big the business is already. It’s at a run rate that’s north of $1 billion already.

Intuitive Surgical (NASDAQ: ISRG)

The da Vinci 5 captures real-world surgical data at greater scale and fidelity, enabling deeper surgical insights; the surgical insights captured by da Vinci 5 will be used by Intuitive Surgical for AI-enabled capabilities; management expects to add telesurgery and more automation to Intuitive Surgical’s robotic surgery platforms over the long term; management believes that AI will help Intuitive Surgical to move its Quintuple Aim forward; the data captured by da Vinci 5 includes video, kinematic, and force data; the AI-powered insights that management wants to deliver to customers can come in the form of operational guidance, support for the learning of a surgeon or care team, and assistance in the operating theatre

da Vinci 5 captures real-world surgical data at greater scale and fidelity, enabling deeper insight into how procedures are performed in practice. That insight paired with clinical context from connected electronic medical records, provides better understanding of variation, workflow and outcomes, and informs current and planned digital and AI-enabled capabilities…

…Collectively, these efforts are foundational to our long-term digital and AI road map where we expect to add telesurgery, deeper decision support and augmented dexterity, including aspects of future automation, all in pursuit of advancing the Quintuple Aim…

…We believe, yes, that AI will be a contributor to moving the Quintuple Aim forward…

…It starts with high-quality data, and that data will exist in video data from surgeries. It will exist in robotic data streams like kinematic data and force data. It will exist in connected electronic medical records, where we’re working with customers to do so. And once we have that high-quality data set, then the job of our AI and our data scientists is to turn that into meaningful insights…

…So there are, I think, ways in which this will show up to the customer. Some will be as operational guidance and assistance as they look at their hospital robotic program and want to increase efficiencies or understand costs. Some of it may show up in the learning of a surgeon and/or a care team. But a lot of it will show up in the operating room and I think show up in the surgery itself. And an example of this kind of first phase might be AI-enabled anatomy identification where you can see AI showing critical structures in the surgical field, showing tissue planes to help assist the surgeon. Then, over time, what we expect is that many of those same foundations that are being established and built in kind of that first phase, if you will, will support more advanced assistance around augmented dexterity and it will include — likely include aspects of automation. There, an example might be helping to control the camera as the surgeon is focused on the procedure.

Intuitive Surgical’s management thinks the company’s differentiation in AI comes from its installed base of da Vinci 5 systems, and the number of procedures performed by the systems annually which generates unique data

How do we sit, how do we exist within the AI ecosystem and how are we differentiated? I think part of that differentiation is around the installed base of systems that we have out there, including about the 1,500 da Vinci 5 systems, the 3 million and more procedures that are being done on an annual basis. And I believe that gives us the foundation to strengthen the differentiation over the next 3 to 5 years. If you look at the industry and you say, what is broadly available, broadly available to everyone, it’s things like edge and cloud compute, the math that underscores much of this, some of the training algorithms. Our advantage, we believe, lies in the unique data sets that are available to us today through something like Force Feedback and will be increasingly available to us as we add capability to da Vinci 5.

Mastercard (NYSE: MA)

Mastercard is working with key players in the agentic commerce ecosystem, including Google, Microsoft, and OpenAI; Mastercard is partnering with OpenAI on Mastercard Agent Pay, which enables agent-to-agent payments; nearly all Mastercards globally are enabled for Mastercard Agent Pay; Mastercard’s management launched Verifiable Intent in 2026 Q1; Verifiable Intent is a tamper-resistant record of the authorisations a user has given to his/her agent; the FIDO Alliance is using Verifiable Intent as a foundation for security standards in agentic commerce; Crossmint, a leading blockchain infrastructure provider, will integrate Mastercard Agent Pay and Verifiable Intent so that it can enable secure Mastercard transactions for agents; Crossmint’s integrations will be launched initially on OpenClaw; management thinks Mastercard’s network will serve agentic commerce with tokenised credentials; management thinks agentic commerce will bring even more incremental opportunity in transactions and services over time; volumes with Mastercard Agent Pay are still low

On Agentic, the ecosystem continues to evolve. Our payment solutions are ready, and we are engaged, shaping what comes next with key players, including Google, Microsoft, OpenAI, and other partners across the ecosystem. We’re deepening our partnership with OpenAI, reinforcing their use of Mastercard Agent Pay, working to enable agent-to-agent payments and collaborating to embed our services across their solutions while using their tools as an enterprise customer. I’m also happy to share that nearly all Mastercards around the world are now enabled for Mastercard Agent Pay…

…In quarter 1, we launched Verifiable Intent, a tamper-resistant record of what a user authorized when an AI agent acts on their behalf. In fact, the FIDO Alliance is now using it as a foundation for setting security standards in this space. And earlier this month, we announced a partnership with Crossmint, a leading blockchain infrastructure platform. Crossmint will integrate Mastercard Agent Pay and Verifiable Intent to enable secure Mastercard transactions for AI agents in its ecosystem. This will initially launch on the OpenClaw platform with plans to expand…

…But as agent-driven commerce gains traction, our network is there with tokenized credentials, powering the payments, bringing the security, and trust, and reach that everyone is looking for. It’s very clear there is even more incremental opportunity in transactions and in services over time…

…[Question] In Mastercard Agent Pay. Michael, you talked about some of the partners and some of the activity on the ground, but can you just give us a little bit more detail on volumes or any surprises with respect to actual activity or actual demand?

[Answer] In terms of where volumes are, we’re still at an early stage. That is partly because a few things were not quite in place yet.
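Mastercard has not disclosed how Verifiable Intent is built, so the snippet below is only a minimal sketch of the general idea described above: the authorisations a user grants to an agent are captured in a record that is signed, so later tampering can be detected. The field names, the shared-key HMAC scheme, and the create_intent_record/verify_intent_record helpers are all assumptions for illustration, not Mastercard’s actual design.

```python
import hashlib
import hmac
import json
import time

# Hypothetical sketch only: Mastercard has not disclosed how Verifiable Intent
# is implemented. The idea illustrated here is a tamper-resistant record: the
# fields a user authorised are serialised canonically and signed, so any later
# modification by an agent (or anyone else) is detectable on verification.

SIGNING_KEY = b"demo-secret-held-by-the-verifier"  # placeholder, not a real key


def create_intent_record(user_id: str, agent_id: str, merchant: str,
                         max_amount_usd: float, valid_seconds: int = 3600) -> dict:
    """Capture what the user authorised the agent to do, then sign it."""
    record = {
        "user_id": user_id,
        "agent_id": agent_id,
        "merchant": merchant,
        "max_amount_usd": max_amount_usd,
        "expires_at": int(time.time()) + valid_seconds,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_intent_record(record: dict) -> bool:
    """Reject the record if any authorised field was altered or it has expired."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(record.get("signature", ""), expected)
            and time.time() < record["expires_at"])


record = create_intent_record("user-123", "agent-abc", "example-store", 250.0)
assert verify_intent_record(record)
record["max_amount_usd"] = 5000.0        # an agent tampering with the spend limit...
assert not verify_intent_record(record)  # ...is caught at verification time
```

In a real network-scale scheme, an asymmetric signature (so any party can verify without holding the secret) and a revocation mechanism would presumably replace the shared-key HMAC used in this toy version.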

More than 500 customers are already engaged with Mastercard Threat Intelligence, which was launched in 2025 and is powered by Recorded Future’s capabilities (Recorded Future was acquired by Mastercard in 2024 Q4 and it provides AI-powered solutions for real-time visibility into potential threats related to fraud); Mastercard Threat Intelligence has helped customers take down malicious domains responsible for payment card testing impacting over 10,000 e-commerce sites; Recorded Future puts Mastercard in a unique position to provide insights on threats involving state actors

Last year, we launched Mastercard Threat Intelligence, bringing Mastercard and Recorded Future capabilities together. In a short period of time, more than 500 customers are already engaged. Using the product, partners have taken down malicious domains responsible for the payment card test impacting over 10,000 e-commerce sites. That’s tangible value…

…Asymmetrical warfare, state actors, all of that is going on, and Recorded Future puts Mastercard in a very unique position to be a trusted partner to provide those kind of insights.

Mastercard has started to launch Mastercard Agent Suite, where Mastercard will design and deploy AI agents within customer environments; management thinks Agent Suite could be a much bigger opportunity than on the consumer side

You heard us talk about Agent Suite, which we started to launch, where we’re going to get into the business of building agents with our customers in the B2B space, et cetera. So early-stage on B2B, earlier than on the consumer side, but I would think this is a much bigger opportunity, and it fits right into our focus on commercial payments. So early-stage ecosystem building, covering your bases, that’s what we’re doing.

Meta Platforms (NASDAQ: META)

Meta’s AI research lab, Meta Superintelligence Labs (MSL), has released the first model, MuseSpark, in its Muse family of models; MSL has built what management thinks is the strongest research team in the industry; MSL is already training even more advanced models than Muse; management thinks MuseSpark has already made Meta AI a world class assistant for users in many areas; management has heard very positive feedback on MuseSpark; management thinks Meta’s product team is now able to build products on top of the company’s models because the models are now strong, unlike in the past; management thinks models in the future will have to be able to improve themselves in order for them to be considered leading models; management is not focused on building coding capabilities with Meta’s AI models; coding is not the only ingredient needed for models to be self-improving

Our biggest milestone so far this year has been the release of our Muse family of models and our first model, MuseSpark, along with a significantly upgraded new version of Meta AI. This was the first release from Meta Superintelligence Labs, and it shows that our work is on track to build a leading lab. Over the past 10 months, we have built the strongest research team in the industry and established the scientific and technical foundations to scale very advanced models. Spark is just one step on that scaling ladder, and we are already training even more advanced models…

…Spark has already made Meta AI a world-class assistant that leads in several areas related to our vision of personal super intelligence, including visual understanding, health, shopping, social content, local, creating games and more. We’re hearing very positive feedback on it so far…

…We have our product team, and that team is now really unlocked to be able to build things on top of our models because we now have a very strong model. So before this, we have been prototyping a bunch of things using other different models, whether it was our previous older models or kind of using the APIs from other companies. And now we’re unlocked to be able to go build things and get them to scale on top of our own models…

…You’re not going to have leading models in the future if your models can’t improve themselves, right? So you’re getting to a point where today, the models are still able to learn from people — and then I think at some point, the models will have to improve themselves. And that’s how the growth is going to — an improvement in the models is going to happen…

…Does that make us a developer tools company? Not necessarily. I mean, I’m not against having an API or coding tools or anything like that. But it’s not our primary focus. But I actually think people conflate coding with self-improvement more than they should. Coding is one ingredient for the model self improving. It’s not the only thing. And we are focused on all of the parts that are going to be necessary for self-improvement in service of the personal super intelligence vision that we have for people and businesses.

Meta AI has seen large increases in usage since MuseSpark was introduced, with double-digit percent increases in Meta AI sessions per user; the Meta AI app has consistently been near the top in app stores; MuseSpark is now powering Meta AI in chat threads in Facebook, Instagram, WhatsApp, and Messenger, as well as in the standalone Meta AI app

We’ve seen large increases in Meta AI use since releasing the updates, and the Meta AI app has consistently been near the top of the app stores as well…

…We’re seeing encouraging results within Meta AI since we began powering responses with the first model from MSL, MuseSpark. In tests we ran leading up to the launch, we saw meaningful engagement gains that accelerated week-over-week with each new iteration of the model. We’re seeing similar gains within Meta AI following the broad rollout of our new model with double-digit percent increases in Meta AI sessions per user. MuseSpark is now powering Meta AI in direct chat threads across our family of apps as well as the stand-alone Meta AI app and website, giving billions of people globally access to our latest model.

Meta’s management has a very different view on AI from others in the industry; management thinks that AI will help people and improve many aspects of their lives; management wants to build AI agents that empower people and businesses; management thinks there are clear monetisation opportunities for personal superintelligence

My view of AI is very different from many others in the industry. I hear a lot of people out there talk about how AI is going to replace people. Instead, I think that AI is going to amplify people’s ability to do what you want, whether that’s to improve your health, your learning, your relationships, your ability to achieve your personal career goals and more. My view is that human progress has always been driven by people pursuing their individual aspirations. And I believe that this will continue to be true in the future. People will be more important in the future, not less. Meta believes in empowering individuals. And those are the kinds of products that we’re going to build, and I believe that they’re going to be some of the most important and valuable products of all time. We are building a personal agent focused on helping people achieve the diverse goals in their lives. We’re also building a business agent focused on helping entrepreneurs and businesses across the world, use our tools and others to grow their efforts, reach new customers and serve existing customers better. These agents will work together to form an ecosystem…

…The focus is on building personal super intelligence, building a consumer agent that can work for you and help you get things done. That right now is a consumer experience that we’re focused on, but we think there will be clear monetization opportunities over time. You can imagine commission structures or a premium offering.

Meta’s management has been testing business AIs and weekly conversations have 10x-ed (from 1 million to 10 million) since the start of 2026; the Meta AI business assistant was recently fully rolled out to all eligible advertisers on supported Meta buying services and performance has been strong, with common account issues being resolved at a 20% higher rate; the business AIs are being tested with SMBs across Latin America, Indonesia, and Asia Pacific; management will expand access to the business AIs in 2026 Q2; the business AIs are currently free, but management expects to monetise them over time

We’re already testing an early version of business AIs and weekly conversations have grown 10x since the start of this year…

…The Meta AI business assistant has now been fully rolled out to all eligible advertisers on supported Meta buying services, providing personalized recommendations to advertisers, resolving account issues, and surfacing campaign insights to help optimize results. Performance has been strong since we began testing the assistant in Q4 with common account issues being resolved at a 20% higher rate…

…In Q1, we expanded business AIs on WhatsApp to SMBs across Latin America and Indonesia as well as on Messenger in Asia Pacific. We now have more than 10 million conversations each week being facilitated through business AIs, up from 1 million at the start of the year. We’ll further expand access to more countries this quarter while adding more capabilities to the AIs…

…Business AIs today are currently free for most businesses on our messaging apps. But as we make more progress, we expect that we will also work towards establishing a longer-term monetization model.

Meta’s management is working to incorporate MuseSpark in the company’s upcoming models used in its recommendation systems, core apps, and advertising products; the upcoming models will enable Meta to understand more of people’s goals for the first time in the company’s history; in the last few years, Meta has seen an increasing return on the amount that it can improve user-engagement, and this has encouraged management to continue investing heavily in this area 

We’re also working on using Spark in our upcoming models to improve our recommendation systems and core business in Facebook, Instagram and ads. Right now, our apps primarily help people accomplish 3 important goals: connecting with people, learning about the world and entertainment. But we’ve always wanted our apps to understand more of people’s goals so we can help improve their lives in all the ways that they want. These new AI models will let us understand this in more detail. So instead of just looking at statistical patterns of what types of people engage with what content, for the first time in Meta’s history, we’re going to be able to develop a first principles understanding of what you care about and what each piece of content in our system is about — that way, we can show you more useful things for what you’re trying to accomplish. And we’ll also be able to create personalized content specifically for people to help you achieve your goals as well. Since our recommendation systems are operating at such a large scale, we’ll phase in this new research and technology over time.

But the trend over the last few years seems clear that we are seeing an increasing return on the amount that we can improve engagement for people and value for advertisers. This encourages us to continue investing heavily in what we expect will provide increasing value over the coming years as well.

Meta will be rolling out more than 1 GW (gigawatt) of its own custom chips; Meta’s AI compute infrastructure will include a large amount of its own chips and AMD chips, alongside NVIDIA chips; Meta is investing in more compute, partly through multiyear cloud deals; Meta’s contract commitments increased by $107 billion in 2026 Q1; the multiyear cloud deals support both Meta’s training and inference needs; management has consistently underestimated Meta’s compute needs even as the company has been ramping up compute capacity significantly; management expects compute to be even more central for the business going forward

We are rolling out more than 1 gigawatt of our own custom silicon that we’re developing with Broadcom, as well as significant amount of AMD chips to complement the new NVIDIA systems that we’re rolling out as well…

…We’re also signing cloud deals that will come online over the course of this year and 2027, allowing us to scale more quickly. These multiyear cloud deals and our infrastructure purchase agreements drove a $107 billion step-up in our contractual commitments this quarter. Our investments will support our training needs for future models and most importantly, provide us the inference capacity necessary to deliver personal and business agents to billions of people around the world, along with several other AI product experiences we’re developing…

…Our experience so far has been that we have continued to underestimate our compute needs even as we have been ramping capacity significantly as the advances in AI have continued and our teams continue to identify compelling new projects and initiatives. And now, too, there are very compelling internal use cases. So our expectation is that compute will become even more central to the business going forward.

Meta’s AI glasses continue to perform well, with daily users tripling year-on-year in 2026 Q1; the AI glasses continue to be one of the fastest-growing categories of consumer electronics ever; Meta released new glasses for all-day wear in 2026 Q1; Meta has new partnerships and styles for AI glasses coming later this year; all of Meta’s AI glasses are designed to easily update to Meta’s newest AI models and features; Meta’s AI glasses are evolving into a personal agent product; the sales of Meta’s AI glasses have shifted from the prior generation to the latest generation; management is seeing strong interest in the Meta Ray-Ban Display product that comes with neural bands; management thinks the Meta Ray-Ban Display product will be the next generation for how AI glasses evolve

Our AI glasses continue to perform well with the number of people using them daily tripling year-over-year. This continues to be one of the fastest-growing categories of consumer electronics ever. We released Ray-Ban Meta optics this quarter designed for all-day wear rather than primarily as sunglasses. And building on our release of Oakley last year, we have some exciting new partnerships and styles that I think are going to have the potential to reach even more people coming later this year. All of our glasses are designed to easily update to use our newest AI models and features. I’m also really excited to see the glasses evolve from being able to answer questions to being able to be a personal agent that’s with you all day long, helping you remember things and achieve your goals…

…We’re seeing sales shift now from the prior generation of Ray-Ban Meta’s to the latest generation, which I think speaks to the value of the improved features like extended battery life and higher-resolution video capture…

…We see strong interest now in the Meta Ray-Ban displays with the Meta neural bands. So that’s an encouraging sign that there is consumer appetite for display glasses, which is kind of the next generation of how this product evolves.

Ranking improvements made in 2026 Q1 drove a 10% increase in time spent on Instagram Reels, an 8% increase in total video time on Facebook globally, and a 9% increase in video watch time on Facebook in the US and Canada; the ranking improvements are driven by a number of things, including (1) the doubling in the length of user interaction sequences for training on Instagram, (2) increasing the speed of indexing new posts by the ranking models, and (3) applying more advanced content understanding techniques; same-day posts are now more than 30% of recommended posts in Instagram and Facebook, up more than 2x from a year ago; management is now using AI to auto translate and dub videos into a viewer’s local language; more than 500 million users are watching translated videos weekly on each of Facebook and Instagram; management continues to invest in Meta’s recommendation capabilities, and the investments include near term ones such as scaling up models in size and complexity and incorporating LLMs, or large language models, to deepen content understanding, and long-term ones such as building foundation models for organic content and ads recommendations, and LLM-based recommendation systems; management thinks there is still a lot of room to continue improving recommendations on both Facebook and Instagram

We’re continuing to see significant gains from our content recommendation initiatives. On Instagram, the ranking improvements that we made in Q1 drove a 10% lift in Reels time spent. On Facebook, total video time increased more than 8% globally in Q1, the largest quarter-over-quarter gain in 4 years. Within the U.S. and Canada, ranking improvements we made drove a 9% increase in video watch time on Facebook in Q1. 

These gains are benefiting from advances we’re making across the full stack. Starting with data, we doubled the length of user interaction sequences we use for training on Instagram in Q1 and increased the richness of how each user interaction is described, enabling our systems to develop a deeper understanding of user interests. Within our models, we’ve significantly increased the speed with which our ranking models index new posts, which is enabling us to recommend them sooner after they are published. We’re also applying more advanced content understanding techniques, which is enabling us to quickly identify posts that may be interesting to someone even if they haven’t engaged with a lot of similar content. These and other improvements have enabled us to increase the diversity and recency of recommended content with same-day posts now representing more than 30% of recommended reels on both Instagram and Facebook, more than double the level of a year ago.

We’re also using AI to unlock more inventory by auto translating and dubbing videos into a viewer’s local language, enabling us to recommend a more diverse set of content. Over 0.5 billion users on each of Facebook and Instagram are now watching AI translated videos weekly. 

Looking forward, we’re making several investments we expect will deliver more valuable recommendations. This year, we will continue scaling up our models in several dimensions, including their size and complexity, while incorporating LLMs to deepen content understanding across our platform. This will enable us to better match people to a wider variety of content aligned to their interests. At the same time, we are executing on our longer-term efforts to develop the next generation of our recommendation systems. This includes building foundation models that power organic content and ads recommendations as well as developing LLM-based recommender systems. Our focus this year is validating the model architectures and techniques in these domains before we scale them out in future years…

…There is still a lot of room to continue improving recommendations over the rest of the year, and we expect we’ll be able to do that to drive additional engagement on both Facebook and Instagram.

Meta continues to enhance its systems to show advertising to users at the optimal time and location; improvements made to Lattice and GEM (Generative Ads Model) in 2026 Q1 increased conversion rates for landing page view advertising by more than 6%; management expanded coverage of Meta’s new adaptive ranking model, which was rolled out in 2025 H2, to off-site conversions and this drove a 1.6% increase in conversion rates across Facebook and Instagram’s major surfaces; Meta is introducing Meta Ads AI Connectors in open beta and it allows advertisers to connect their Meta advertising accounts directly to an AI agent; more than 8 million advertisers are now using at least one of Meta’s Gen AI advertising creative tools with very strong adoption among SMB advertisers; advertisers using Meta’s video generation feature are seeing 3% higher conversion rates in tests; Meta’s value optimisation suite, which maximises the return on advertising spend for advertisers by prioritising the highest value conversions, has seen strong adoption with the revenue run rate reaching $20 billion in 2026 Q1, more than double from a year ago; Meta’s new adaptive ranking model enables the company to leverage LLM-scale model complexity when it previously couldn’t

We continue to enhance our systems to show ads at the optimal time and location…

…In Q1, enhancements we made to Lattice’s modeling and learning techniques, along with advances in our GEM model architecture, drove a more than 6% increase in conversion rate for landing page view ads. In addition, we’ve been investing in more performant inference models for serving ads. In the second half of last year, we began rolling out our new adaptive ranking model, which is an LLM-scale ads recommender model that we use for inference. This model improves our inference ROI by routing requests to more compute-intensive inference models when it determines there is a higher probability of conversion. In Q1, we expanded coverage of our adaptive ranking model to support off-site conversions, which drove a 1.6% increase in conversion rates across the major surfaces on Facebook and Instagram…

…This week, we’re also introducing Meta ads AI connectors in open beta, providing advertisers the ability to connect their Meta ad account directly to an AI agent. We’ve always supported advertisers both on our platform and through tools like the marketing API. And now we’re extending that to AI. So businesses and agencies can analyze and optimize campaigns with the tools they’re already using.

Usage of our ad creative tools is also scaling with more than 8 million advertisers using at least one of our Gen AI ad creative tools and particularly strong adoption among small- and medium-sized advertisers. These tools are benefiting performance as well with advertisers using our video generation feature seeing more than 3% higher conversion rates in tests…

…We also continue to invest in the value optimization suite, which helps advertisers maximize their return on ad spend by prioritizing the highest value conversions rather than optimizing solely for the most conversions at the lowest cost. Adoption by businesses has been strong following performance improvements we’ve made over the past year with the annual revenue run rate of our value optimization suite now over $20 billion, more than doubling year-over-year…

…The inference models are bound by strict latency requirements since they need to find the right ad within milliseconds, and that has historically prevented us from meaningfully scaling up their size and complexity. But in the second half of last year, we introduced a new adaptive ranking model, which enables us to leverage LLM-scale model complexity of 1 trillion parameters, and we made advances in the model architecture and co-designed the system with the underlying silicon, so it maintains the sub-second speed that is required to serve ads at scale. We also developed an approach that intelligently routes requests to more compute-intensive inference models if it determines that there is a higher probability of conversion, and that lets us drive both better performance and increased inference ROI.
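To make the routing idea in that last quote concrete, here is a minimal sketch of how a request router of this kind could work: a cheap model first estimates the probability of conversion, and only the most promising requests are sent to the compute-intensive ranking model. The model names, threshold, and scores below are my own illustrative assumptions, not Meta’s implementation:

```python
# Illustrative sketch of adaptive request routing between a light and a heavy
# ads-ranking model, gated on an estimated probability of conversion.
# Models, threshold, and scores are hypothetical, not Meta's implementation.

from dataclasses import dataclass
import random

@dataclass
class AdRequest:
    user_id: str
    features: dict

def estimate_conversion_prob(request: AdRequest) -> float:
    """Stand-in for a lightweight model that quickly estimates p(conversion)."""
    return random.random()  # placeholder score

def heavy_model_rank(request: AdRequest) -> str:
    """Stand-in for the compute-intensive, LLM-scale ranking model."""
    return "ad chosen by heavy model"

def light_model_rank(request: AdRequest) -> str:
    """Stand-in for the default, latency-optimised ranking model."""
    return "ad chosen by light model"

ROUTE_THRESHOLD = 0.7  # hypothetical cut-off for "higher probability of conversion"

def route(request: AdRequest) -> str:
    # Spend the extra inference compute only where it is most likely to pay off.
    if estimate_conversion_prob(request) >= ROUTE_THRESHOLD:
        return heavy_model_rank(request)
    return light_model_rank(request)

if __name__ == "__main__":
    print(route(AdRequest(user_id="u1", features={"surface": "feed"})))
```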

Microsoft (NASDAQ: MSFT)

Microsoft’s management has 2 priorities to capture the AI opportunity, namely, (1) build the leading cloud and AI infrastructure, and (2) build high-value agentic systems across core domains

We are at the beginning of one of the most consequential platform shifts that will change the entire tech stack as agents proliferate and become the dominant workload. This will drive TAM expansion and change the value creation equation across the entire economy. To capture this opportunity, we are executing against 2 priorities. First, we are building the world’s leading cloud and AI infrastructure for the agentic computing era. Second, we are building high-value agentic systems across core domains such as productivity, coding and security.

Microsoft’s management is optimising every layer of its technology stack and this is producing operational gains; Microsoft’s dock-to-live times for new GPUs in its biggest regions have been reduced by nearly 20% since the start of 2026; Microsoft has delivered a 40% improvement in inference throughput in Copilot’s most-used models

We’re optimizing every layer of the tech stack, from DC design, to silicon to system software, the model architecture as well as its optimization. This is translating into operational gains. We have reduced dock-to-live times for new GPUs in our biggest regions by nearly 20% since the beginning of the year. Our Fairwater data center in Wisconsin came online earlier this month, 6 weeks ahead of schedule, allowing us to recognize revenue earlier. And we delivered a 40% improvement in inference throughput for our most used models across Copilot, driven by our software and hardware optimization work.

Microsoft added 1 gigawatt of GPU compute capacity in 2026 Q1 (FY2026 Q3); Microsoft is on track to double its overall compute footprint in 2 years; management announced new data center investments across 4 continents in 2026 Q1 (FY2026 Q3)  

All up, we added another gigawatt of capacity this quarter and remain on track to double our overall footprint in just 2 years. We are moving aggressively to add capacity aligned to the demand signals we see, and we have announced new data center investments across 4 continents.

Microsoft’s AI infrastructure utilises chips from NVIDIA, AMD, and itself (Maia); Microsoft’s Maia 200 chip has 30% better tokens per dollar compared to other leading AI chips, and is now live in 2 Microsoft data centers; Microsoft’s Cobalt server CPUs are deployed in half of the company’s data center regions; as Microsoft’s customers scale their AI workloads, they are increasingly using other Microsoft cloud services and are choosing Cobalt to run these services; management is expanding Cobalt’s supply significantly to meet demand

We also continue to modernize our fleet with our first-party innovation alongside the latest from NVIDIA and AMD. Across our fleet, millions of servers are powered by our custom networking security and virtualization silicon, including Azure Boost as well as our first-party CPUs and accelerators. Our Maia 200 AI accelerator, which offers over 30% improved tokens per dollar compared to the latest silicon in our fleet, is now live in our Iowa and Arizona data centers. Our Cobalt server CPU is deployed in nearly half of our DC regions running workloads at scale for customers like Databricks, Siemens and Snowflake. As our largest customers scale their AI deployments, they’re increasingly leveraging other services across our platform and choosing to run those workloads on Cobalt. And we are expanding Cobalt supply significantly to meet this demand.

Microsoft’s management thinks Microsoft offers the broadest selection of models among the cloud hyperscalers; over 10,000 customers have used more than 1 model on Foundry; the number of customers who used Anthropic and OpenAI models doubled sequentially in 2026 Q1, or FY2026 Q3 (was 1,500 in 2025 Q4, or FY2026 Q2); Bayer is using multiple models in Foundry to build its in-house agent platform; over 300 Microsoft customers are on track to process 1 trillion tokens each on Foundry in 2026, up 30% sequentially 

We offer the broadest selection of models of any hyperscaler, so customers can choose the right model for the right workload across OpenAI, Anthropic, open source and more. Over 10,000 customers have used more than one model on Foundry. 5,000 have used open source models, and the number who have used Anthropic and OpenAI models increased 2x quarter-over-quarter…

…Bayer is using multiple models in Foundry to create its own in-house agent platform with more than 20,000 active monthly users. All up, over 300 customers are on track to process over 1 trillion tokens on Foundry this year, accelerating 30% quarter-over-quarter.

Microsoft’s management is building a unified IQ layer for organisational intelligence; the IQ layer initiative is driving acceleration in Microsoft’s data businesses, with Cosmos DB revenue up 50% year-on-year in 2026 Q1 (FY2026 Q3), Fabric customers growing 60% year-on-year to 35,000, and Fabric OneLake data up 4x year-on-year; 15,000 customers now use both Fabric and Foundry, up 60% year-on-year; Fabric provides agents with operational, analytical, and unstructured data; Microsoft’s Copilot Studio is helping enterprises build agents; nearly 90% of the Fortune 500 have active agents built with Copilot Studio’s low-code and no-code tools; Copilot’s credit consumptive offer is up 2x sequentially in 2026 Q1 (FY2026 Q3); Agent 365 is a control plane for managing agents’ governance, identity, and security; tens of thousands of companies are already using Agent 365 to manage tens of millions of agents

Across Fabric, Foundry, Microsoft 365 and our Security Graph, we are building a unified IQ layer for organizational intelligence. Thousands of enterprises already are accessing context across these IQ layers. And as AI usage grows, so does the context layer, creating a flywheel that continuously improves the grounding, relevance and effectiveness of every agent they use and build, making our IQ layers an unmatched context engine for organizational intelligence. More broadly, our database business accelerated quarter-over-quarter. Cosmos DB alone saw 50% year-over-year revenue growth driven by AI app workloads. We now have 35,000 paid Fabric customers, up 60% year-over-year. And all up, the amount of data in Fabric OneLake data lake increased nearly 4x year-over-year. Over 15,000 customers now use both Foundry and Fabric, up 60% year-over-year as enterprises connect agents to real-time operational, analytical and unstructured data that Fabric brings together…

…We are also helping knowledge workers build agents with tools like Copilot Studio. Nearly 90% of the Fortune 500 now have active agents built with our low-code/no-code tools. And we are seeing fast growth of our Copilot credit consumptive offer, up nearly 2x quarter-over-quarter as customers increasingly extend Copilot with custom agents tailored to their workflows…

…With Agent 365, we offer a control plane that extends a company’s existing governance, identity, security and management frameworks to agents. Tens of thousands of companies are already managing tens of millions of agents in Agent 365, and we expect this momentum to grow significantly as agents will increasingly need tools for identity, governance, security and more.

Microsoft’s management is turning its family of Copilots from synchronous assistance software to asynchronous digital workers; Microsoft 365 Copilot seat adds grew 250% year-on-year in 2026 Q1 (FY2026 Q3), the fastest growth since launch; there are now over 20 million Microsoft 365 Copilot paid seats; the number of companies with over 50,000 Microsoft 365 Copilot seats grew 4x year-on-year in 2026 Q1 (FY2026 Q3); WorkIQ grounds Copilot’s responses with an organisation’s full context; the data residing in WorkIQ now spans 17 exabytes, up 35% year-on-year; users can now access multiple models together in Microsoft 365 Copilot to generate the best responses; monthly active usage of Microsoft’s 1st-party agents in Microsoft 365 Copilot is up 6x year-to-date; Copilot queries per user were up nearly 20% sequentially in 2026 Q1 (FY2026 Q3); weekly engagement of Microsoft 365 Copilot is now on par with Outlook

We are evolving our family of Copilots from synchronous assistance to async coworkers that can execute long-running tasks across key domains. In knowledge work, it was another record quarter for Microsoft 365 Copilot seat adds, which increased 250% year-over-year, representing our fastest growth since launch. Quarter-over-quarter, we continue to see acceleration and now have over 20 million Microsoft 365 Copilot paid seats. The number of customers with over 50,000 seats quadrupled year-over-year and Accenture now has over 740,000 seats, our largest Copilot win to date. And Bayer, Johnson & Johnson, Mercedes and Roche all committed to 90,000 or more seats…

…Work IQ grounds Copilot responses in the full context of an organization, including people, roles, documents and communications, all within the company’s security boundary. The system of work behind Work IQ alone now spans more than 17 exabytes of data, growing 35% year-over-year. The liquidity and freshness of that data matters, with billions of e-mails, documents, chats, hundreds of millions of Teams meetings, and millions of SharePoint sites added each day. And that context is getting even richer: as Copilot adoption grows, Copilot and agent conversations, and the artifacts they create, feed back into Work IQ, making it even more context-rich…

…In Microsoft 365 Copilot, you now have access in chat to multiple models by default with intelligent auto routing. In agents, with Critique and Council, you can use multiple models together to generate optimal responses. As of last week, Agent Mode is now the default experience across Copilot in Word, Excel and PowerPoint. And with Cowork, you now have a new way to delegate and complete work using Copilot.

All this innovation is driving record usage intensity across Copilot. We have seen a surge in usage of our first-party agents with monthly active usage up 6x year-to-date. Copilot queries per user were up nearly 20% quarter-over-quarter. To put this momentum in perspective, weekly engagement is now at the same level as Outlook, as more and more users make Copilot a habit.

Microsoft’s management is observing a shift in pricing in business software from seat-based models to seat-plus-consumption models because of AI; nearly 60% of Microsoft’s service customers are already buying usage-based credits; HSBC is using pre-built agents to reduce issue resolution time for customer inquiries by 30%; LinkedIn Talent Solutions’ agentic products now have an annualised revenue run rate of more than $450 million; management thinks the pricing model for business software could yet evolve further to include business outcomes into the equation

When it comes to biz apps, we are seeing a new pattern emerge as customers shift from traditional seat model to seats plus consumption. The customer service category is at the forefront of this transformation as nearly 60% of our service customers are already purchasing usage-based credits. For example, HSBC uses prebuilt agents with Dynamics 365 to manage customer inquiries across products, markets, regulatory requirements, reducing issue resolution time by over 30%. And our agentic products in LinkedIn Talent Solutions, which help hirers automate time-consuming tasks like sourcing, screening and drafting messages have already surpassed a $450 million annualized revenue run rate…

…From a customer perspective, they’re going to evaluate it by evals. Where are they seeing the value of tokens, as simple as that. So where they see the outcome, the eval and the token, whether it’s improving revenue, improving efficiency, and that’s what will refine. Like when we talk about IT budgets, IT budgets are going to have to be reshaped by a combination of business outcomes, making their way into IT budgets and maybe reallocation from other line items on the income statement like OpEx.

GitHub is growing rapidly, driven by agentic coding; nearly 140,000 organisations are using GitHub Copilot; GitHub Copilot enterprise subscribers nearly tripled year-on-year in 2026 Q1 (FY2026 Q3); most users in GitHub Copilot use multiple models; usage of GitHub Copilot CLI (command line interface) nearly doubled month-on-month; management has shifted GitHub Copilot to a usage-based pricing model

GitHub itself is seeing unprecedented growth driven by the proliferation of agentic coding, and we are hard at work to scale and meet this demand. We see this even with GitHub Copilot. Nearly 140,000 organizations now use GitHub Copilot and enterprise subscribers have nearly tripled year-over-year. The majority of users leverage multiple models. We’re also seeing rapid adoption of GitHub Copilot CLI with usage nearly doubling month-over-month. And earlier this week, we announced our move to a usage-based pricing model for GitHub Copilot as we align pricing to actual usage and cost.

1/3 of Microsoft’s cloud and AI-related capex in 2026 Q1 (FY2026 Q3) is for long-lived assets that will support monetisation over the next 15 years and more, while the other 2/3 is for CPUs and GPUs; Azure is still capacity-constrained, and management wants to balance Azure demand for compute with 1st party demand for compute; Azure’s capacity constraint is expected to last through at least 2026

Capital expenditures were $31.9 billion, down sequentially due to the normal variability from cloud infrastructure buildouts and the timing of delivery of finance leases. And this quarter, roughly 2/3 of our CapEx was for short-lived assets, primarily GPUs and CPUs. The remaining spend was for long-lived assets that will support monetization over the next 15 years and beyond. This quarter, total finance leases were $4.7 billion and were primarily for large data center sites. And cash paid for PP&E was $30.9 billion, roughly in line with capital expenditures as the impact from finance leases was partially offset by differences between the receipt of goods and payment…

…In Azure and other Cloud Services, revenue grew 40% and 39% in constant currency against a prior year that included accelerating growth. Results were ahead of expectations as we delivered capacity earlier in the quarter, enabling increased consumption across both AI and non-AI services. Strong customer demand across workloads, customer segments and geographic regions continues to exceed available capacity…

…Broad and growing customer demand continues to exceed supply, and we continue to balance the incoming supply we can allocate here against our other high ROI priorities, first-party applications, R&D and end-of-life server replacement…

…Even with these additional investments and continued efforts to bring GPU, CPU and storage capacity online faster, we expect to remain constrained at least through 2026.

Azure grew revenue by 40% in 2026 Q1 (FY2026 Q3) (was 39% in 2025 Q4); Azure’s revenue growth was better than expected because capacity was delivered earlier in the quarter; Azure continues to be constrained by capacity and the constraint is expected to last through at least 2026; management wants to balance Azure demand for compute with 1st party demand for compute; as Microsoft’s customers scale their AI workloads, they are increasingly using other Microsoft cloud services; Azure’s margin for its AI business remains better than the non-AI business when it was at a similar age

In Azure and other Cloud Services, revenue grew 40% and 39% in constant currency against a prior year that included accelerating growth. Results were ahead of expectations as we delivered capacity earlier in the quarter, enabling increased consumption across both AI and non-AI services. Strong customer demand across workloads, customer segments and geographic regions continues to exceed available capacity…

…Broad and growing customer demand continues to exceed supply, and we continue to balance the incoming supply we can allocate here against our other high ROI priorities, first-party applications, R&D and end-of-life server replacement. As a reminder, year-over-year Azure growth rates can vary quarter-to-quarter based on capacity, timing and contract mix…

…Even with these additional investments and continued efforts to bring GPU, CPU and storage capacity online faster, we expect to remain constrained at least through 2026…

…. As our largest customers scale their AI deployments, they’re increasingly leveraging other services across our platform and choosing to run those workloads on Cobalt…

…We’ve been talking about sort of where this AI business of ours has been in the cycle compared to even the cycle we saw with the cloud, which now seems very long ago. And how margins were actually better and they remained better in our AI business versus where we saw in the cloud transition, looking back.

Microsoft’s management has gained more confidence over the past 1-2 years that the economics of AI’s addressable market are in areas where the company has structurally strong positions

One of the things that we have learned even in the last, whatever, 2 years or so in AI and also build more conviction and confidence on is where is the TAM and the category economics of the TAM. And so this, I mean, it’s fascinating that here we are in 2026 and the most exciting things are plug-ins in Word or Excel or CLIs in coding or — and so when you see that, that means we have a structural position in knowledge work, coding, security, which are the big TAMs.

Microsoft’s management continues to feel good about partnering with OpenAI after the recent change to the 2 companies’ agreement; Microsoft has full IP rights to OpenAI’s frontier models all the way to 2032; OpenAI remains a large customer of Microsoft

We feel good about our partnership with OpenAI. I’m always very, very focused on any partnership and ensuring that there’s a win-win construct at all times. I mean that’s how you can remain with partners. In this case, it starts with, quite frankly, IP, Amy referenced this. We have a frontier model, royalty-free with all the IP rights that we will have access to all the way to ’32, and we fully plan to exploit it…

…They’re a large customer of ours, not just on the AI accelerator side, but also on all the other compute side, and so we want to serve them well.

Netflix (NASDAQ: NFLX)

Netflix has been using generative AI to improve content recommendations for members; management is also leveraging generative AI to provide better tools for filmmakers; Netflix acquired InterPositive, a company providing AI-powered filmmaking tools, in March 2026; management thinks Netflix has significant and unique data for applying AI; management thinks even with AI tools, only great artists can make great art; Netflix’s content creation partners have been leveraging AI tools for many purposes, and these tools also help improve on-set safety; InterPositive contains proprietary technology created specifically for filmmakers and for filmmaking, so it’s different compared to other generative AI video apps; management is already seeing momentum around adopting InterPositive’s tools among Netflix’s content creation partners; management has been working on content recommendation and personalisation for many years, but they think generative AI provides plenty of opportunity for Netflix to continue improving in those areas; management thinks AI can be applied in Netflix’s advertising suite to make it easier to create new formats, customise ads, and improve contextual relevance 

We’ve been using machine learning and AI for many years, and as the technology advances with GenAI, we continue to find new opportunities to deliver an even more seamless experience for members and expand possibilities for storytellers. This includes using GenAI to improve recommendations for members through deeper content understanding so we can recommend the right title at the right moment, test conversational discovery experiences, and improve the breadth and quality of our promotional assets. Leveraging GenAI, we are enabling our creative partners with more and better tools to help them tell their stories, with the potential to make our single largest area of spend—content—even more impactful. To accelerate this opportunity, in March we announced our acquisition of InterPositive, the filmmaking technology company founded by Ben Affleck that develops AI‑powered tools built by and for filmmakers…

…Given our technology DNA, we have significant and unique data assets here. We have tremendous scale. So we see that as all great opportunities to leverage new technical capabilities across every aspect of the business. So I think AI is going to deliver benefits for our members, for creators and for our employees…

…It takes a great artist to make great art and AI won’t change that. But AI will give those artists better tools to bring those visions to life in ways that we’re just scratching the surface on. So today, our talent leverages these tools for things like set references, pre-visualization, visual effects, sequence prep, shot planning. All of these things, by the way, also improve on-set safety, which is something that’s not talked about enough…

…With our acquisition of InterPositive, we think it accelerates our GenAI capabilities because it’s a proprietary technology that was created specifically for filmmakers and specifically for filmmaking and that’s different than other GenAI video applications. So while our ownership of InterPositive is very new, we have generated a bunch of interest with our creators who spent time with the tools, and we’re seeing real momentum build around adoption…

…We’ve been in personalization and recommendation for 2 decades, but we still see tremendous room and opportunity to make it even better by leveraging some of these newer technologies. We see that recommendation systems based on these new model architectures, not only improve the current personalization, but it also allows us to iterate and improve more quickly to improve that velocity. Things like adding support for different content types going forward, that’s much more quick, much more efficient…

…We really see an opportunity to leverage AI within our Netflix ad suite. It makes it easier to design new creative formats and custom ads that improve contextual relevance. And the technology stack just allows us to roll them out more quickly and more effectively, and allows partners to leverage those things in an easier manner.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s capital expenditure is always in anticipation of growth in future years; management expects capex for 2026 to be near the high end of its previous guidance of US$52 billion to US$56 billion (growth at the high end would be ~37% from 2025’s capex of US$41 billion); management now expects TSMC to grow revenue by more than 30% in USD terms in 2026 (previous guidance was for growth to be nearly 30%); TSMC’s capex in the last 3 years was ~US$100 billion, and the next 3 years is expected to be much higher, although management does not expect a sudden surge in capital intensity; management thinks the AI accelerators business will have a CAGR for 2024-2029 towards the high end of the previously released growth forecast of mid-to-high-50% CAGR

At TSMC, a higher level of capital expenditures is always correlated with higher growth opportunities in the following years…

…We now expect our 2026 capital budget to be towards the high end of our range of between USD 52 billion and USD 56 billion, as we continue to invest heavily to support our customers’ growth…

…We maintain strong confidence for our full year 2026 revenue to now grow by above 30% in U.S. dollar terms…

…In the past 3 years, our total CapEx was $101 billion. This year, as we said, it is towards the high end, which is $56 billion, already over 50% of the past 3 years in total. So we have a strong conviction in the AI megatrend. So we expect the CapEx in the next 3 years will be significantly higher than the past few years…

…Now therefore, we do not expect a sudden surge in capital intensity in the next several years…

…But again, let me say that it is toward the higher 50s of the CAGR that we observe.

TSMC has been sourcing helium (an element whose supply has been affected by the conflict in the Middle East) from different regions, and it has safety stock in hand; TSMC has been working with Taiwan’s government to secure power, and Taiwan has sufficient LNG supply through at least May; management does not expect any near-term impact to TSMC’s operations from the Middle East conflict in terms of materials and power supply

About the materials and energy supply update given the recent situation in the Middle East. TSMC operates a well-established enterprise risk management system to identify and assess all relevant risks and proactively implement risk mitigation strategies. In terms of material supply, TSMC’s strategy is to continuously develop multi-source supply solutions to build a well-diversified global supplier base and to improve the local supply chain. For specialty chemicals and gases, including helium and hydrogen, we source from multiple suppliers in different regions and we have prepared safety stock inventory on hand. We are also working closely with our suppliers to further strengthen the resiliency and sustainability of our supply chain. Thus, we do not expect any near-term impact on our operations for material supply.

In terms of energy, TSMC worked closely with Taipower and the Taiwan government to ensure a stable and sufficient energy supply. With the recent situation in the Middle East, the Taiwan government has announced it has secured sufficient LNG supply through at least May. The government has also said it is actively working on securing further LNG supply, diversifying sourcing to other regions and other power backup plans. Therefore, we do not expect any near-term disruption or impact to our operations.

TSMC’s management sees very robust AI-related demand, as the shift from generative AI and queries (chatbots) to agentic AI is leading to a step-up in token consumption; management is seeing very strong signals and positive outlooks from TSMC’s customers’ customers, who are the cloud service providers; management’s conviction in the AI megatrend remains high

AI-related demand continues to be extremely robust. The shift from generative AI and the query mode to agentic AI and command and action mode is leading to another step-up in the amount of tokens being consumed. This is driving the need for more and more computation, which supports the robust demand for leading edge silicon. Our customers and customers of customers, who are mainly the cloud service providers, continue to provide us with a very strong signal and positive outlook. Thus, our conviction in the multiyear AI megatrend remains high, and we believe the demand for semiconductors will continue to be very fundamental.
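For a sense of why the shift to agentic AI drives a step-up in token consumption, a back-of-envelope sketch helps: a chatbot query is one prompt and one answer, while an agentic workflow loops through many model calls, each re-reading instructions, tool outputs and accumulated history. All the numbers below are my own illustrative assumptions, not figures from TSMC or its customers:

```python
# Back-of-envelope comparison of tokens consumed by a single chatbot query
# versus an agentic workflow that loops through many model calls.
# All numbers are illustrative assumptions, not company-provided figures.

chat_prompt_tokens = 500        # one question plus a little context
chat_output_tokens = 400        # one answer

agent_steps = 20                # plan, tool calls, reflections, final answer
agent_context_tokens = 4_000    # instructions, tool results, history re-read per step
agent_output_tokens = 300       # tokens generated per step

chat_total = chat_prompt_tokens + chat_output_tokens
agent_total = agent_steps * (agent_context_tokens + agent_output_tokens)

print(f"Chat query:       {chat_total:,} tokens")
print(f"Agentic workflow: {agent_total:,} tokens")
print(f"Step-up:          ~{agent_total / chat_total:.0f}x")
```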

TSMC’s management intends to ramp up new technology nodes in Taiwan because of the need for tight integration between production and R&D; TSMC’s N2 node entered high-volume manufacturing in 2025 Q4 in Taiwan with good yield; N2’s ramp is supported by strong demand from both smartphone and HPC AI applications; management believes that N2, N2P, and A16 will lead to the N2 family becoming another large and long-lasting node for TSMC; management has decided to add capacity for N3 even though TSMC has historically not added capacity to a node once it has reached its target capacity, because of the strong demand for N3 in AI applications; management is seeing robust multiyear demand for N3 nodes from end markets such as smartphone, HPC AI, and more; TSMC is adding a new N3 fab to its giga fab cluster in Tainan, with volume production expected in 2027 H1; TSMC is continuing to convert N5 tools to support N3 capacity in Taiwan; management is focusing on flexible capacity support among the N7, N5, and N3 nodes; the upcoming A14 node has a 10% to 15% speed improvement at the same power, or a 25% to 30% power improvement at the same speed, and a nearly 20% chip density gain; the A14 node is on track and progressing well; management is seeing a high level of customer interest and engagement for A14; volume production of A14 is expected for 2028

Our practice is to prioritize Taiwan to support the fast ramp of our new node, due to the need for tight integration with R&D operations. Today, our new node, N2, has already entered high-volume manufacturing in the fourth quarter of 2025 with good yield. N2 is ramping successfully in multiple phases at both our Hsinchu and Kaohsiung sites, supported by strong demand from both smartphone and HPC AI applications. With our strategy of continuous enhancement such as N2P and A16, we expect our N2 family to be another large and long-lasting node for TSMC.

Historically, we do not add additional capacity to a node once it has reached its targeted capacity. However, as a foundry, our first responsibility is to provide our customers with the most advanced technologies and necessary capacity to unleash their innovations. Based on our assessment, to meet the strong demand in AI applications, we are stepping up our CapEx investment to increase our N3 capacity. Thus, we are now executing a global capacity plan to support the robust multiyear pipeline of demand for 3-nanometer technologies, which are used by smartphone, HPC AI (including HBM base die), automotive and IoT customers.

In Taiwan, we are adding a new 3-nanometer fab to our giga fab cluster in Tainan. Volume production is scheduled for the first half of 2027…

…In addition to all the new fabs, we continue to convert 5-nanometer tools to support 3-nanometer capacity in Taiwan…

…We are also focusing on capacity optimization across nodes, which includes flexible capacity support among the N7, N5 and N3 nodes…

…Featuring our second-generation transistor structure, A14 delivers another full-node stride from N2, with performance and power benefits to address the need for high-performance and energy-efficient computing. Compared with N2, A14 will provide a 10% to 15% speed improvement at the same power, or a 25% to 30% power improvement at the same speed, and close to 20% chip density gain. Our A14 technology development is on track and progressing well. We are observing a high level of customer interest and engagement from both smartphone and HPC applications. Volume production is scheduled for 2028. Our A14 technology and its derivatives will further extend our technology leadership position and enable TSMC to capture the growth opportunities well into the future.

TSMC’s 2nd Arizona fab will utilise N3 technologies; the N3 nodes in the 2nd Arizona fab will begin volume production in 2027 H2; management has gained a lot of experience in Arizona, and expects to improve the cost structure of the Arizona fabs

In Arizona, our second fab will also utilize 3-nanometer technologies. Construction is already complete and volume production will begin in the second half of 2027…

…We have already gained a lot of experience in Arizona, and so now we have much more confidence than last year that we can make good progress, moving aggressively forward, and we expect we can improve the cost structure, of course.

TSMC’s management now plans to utilise N3 technology in the company’s 2nd fab in Japan; volume production is scheduled for 2028

In Japan, we now plan to utilize 3-nanometer technology in our second fab and volume production is scheduled in 2028.

TSMC’s management is open to including CPUs in its HPC (high-performance computing) AI calculation, but they will not do it right now, because TSMC is not able to tell where the CPUs it manufactures go

[Question] TSMC’s definition of AI revenue includes GPU, AI accelerator, HBM base die and maybe a few others, but it specifically excludes data center CPUs; I think you have made the definition very clear for a couple of years now. But there’s more and more conversation about CPUs now becoming part of the AI infrastructure, especially for agentic workflows. Any chance for TSMC to maybe provide us revised numbers for AI revenue and maybe the AI revenue growth rate projection going to 2029, 2030, and maybe hopefully give us some sense of what the historical AI revenue numbers would have been if some of the data center CPU numbers, especially for agentic AI workloads, are included there.

[Answer] Certainly, CPUs become more and more important in today’s AI data center. But actually, let me share with you (this is a good question, by the way) that we are not able to identify which CPU goes where, right? Whether it’s a PC, a desktop or an AI data center. So today, we still do not include CPUs in our AI HPC calculation. Someday later, we might consider it.

TSMC is working with NVIDIA for its next-generation LPU (language processing unit); the LPU comes with NVIDIA’s recent acqui-hire deal with Groq; Groq’s LPUs have historically been manufactured by Samsung

[Question] NVIDIA, of course, recently added more CPU content overall, but I think most people are focusing on the brand-new LPU they recently added. We understand and appreciate that TSMC is in a very strong position and will definitely participate in that upside in CPU. But the LPU business, the acquired business, for historical reasons is still at your competitor, Samsung Foundry. And I think investors are looking at that and thinking that maybe Samsung Foundry has finally made its first inroads into AI. So any thoughts from TSMC’s side: how should we think about whether and how TSMC will win back that LPU business or any future business coming from your customers?

[Answer] We are working with our customers for their next-generation LPU anyway. And we are very confident in our technology position, and we will work hard to capture every piece of business possible.

Tesla (NASDAQ: TSLA)

Tesla’s management is going to increase the company’s capital expenditure significantly, partly for AI-related investments; the increase in capital expenditure will last for a few years; management expects Tesla’s capex to be over $25 billion in 2026, and thus cause the company to have negative free cash flow for the year

We’re going to be substantially increasing our investments in the future so you should expect to see significant — a very significant increase in capital expenditures, but I think well justified for a substantially increased future revenue stream…

…We’re investing in and improving our core technologies, battery powertrain, AI software, AI training, chip design, manufacturing — laying the groundwork for significantly increased manufacturing and production. We are also strengthening our supply chain across the board, batteries, energy, AI, silicon, everything, and laying the groundwork, like I said, for what we expect to be a significant increase in vehicle production in the future and, of course, a very significant increase — well, actually releasing Optimus…

…We are in a very big capital investment phase, which is going to start now and would last a couple of years. So based on that, our current expectation for 2025 — 2026 is over $25 billion of CapEx. And just to remind you, we are paying for 6 factories which were going to go into operation. Some have already started, some would go into operation later part of this year. We’re further increasing our investment in AI-related initiatives, including the AI infrastructure to support Robotaxi and the launch of Optimus. We’ve already started placing orders for the research semiconductor fab in Austin and for solar manufacturing equipment. While this may seem a lot and will have the impact of negative free cash flow for the rest of the year, we believe this is the right strategy to position the company for the next era.

Tesla’s management thinks Optimus can be useful outside of Tesla sometime in 2027; management continues to think Optimus will be the biggest ever product made; Tesla is preparing its Fremont factory for production of Optimus later this year; the production S-curve of Optimus will be very slow at the start, before ramping significantly in 2027; Tesla is building a 2nd Optimus factory, with production scheduled for mid-2027; v3 of Optimus (Optimus 3) is almost ready to be demonstrated, but management is hesitant because they have found competitors trying to copy Optimus’s design (in the 2025 Q4 call, management said Optimus 3 would be ready in a few months); management thinks Optimus can start production in July/August 2026, but it will take tremendous work to get there; management does not know what the production rate for Optimus will be in 2026; the production rate for Optimus will be limited by the slowest part in the entire Optimus supply chain; management wants to place a lot of intelligence locally in Optimus in the event that the robot loses wireless data; management thinks Optimus would need an orchestrator-AI and a voice AI, both of which can be Grok (a foundation model from one of Elon Musk’s companies, xAI)

But increasing our internal production for testing and then probably being able to have Optimus be useful outside of Tesla sometime next year. As you’ve heard me say a few times, I think, Optimus will be our biggest product — not just Tesla’s biggest product ever, but probably the biggest product ever. And I remain convinced of that conclusion…

…We’re preparing Fremont for start of production later this year with Optimus. Again, totally new supply chain, totally new technology. So therefore, the production S-curve is always very slow in the beginning, but it will ramp up to significant numbers next year. And we’re constructing a second Optimus factory in — at our Giga Texas location. And that will probably start production around summer next year.

The V3 Optimus design is almost ready to demonstrate. I think we want to just make sure it’s like polished. Like it works functionally, but there’s some aesthetic elements that need to be finalized. And I think probably middle of this year, we should be able to show it off. We’re also a little hesitant to show V3 off because we find our competitors do a frame-by-frame analysis whenever we release something and copy everything they possibly can. So I think there’s some value to not showing new technology until it’s close to production…

…We want to push the Optimus 3 unveil maybe closer to production. Start of production is — we’re assuming is somewhere around the late July, August time frame…

…The last S, X production will be in early May. But you have to look at the entire upstream portion of the production line. So you have to start with cells, battery packs, motor production, all the parts production. And so we’ve been dismantling the S, X production line from the more basic-level parts first; as you get to larger subassemblies, you start dismantling the line from the small parts first, not from the final assembly first. So the final assembly line will be dismantled next month, after the last of the S, X vehicles is done. You can’t dismantle some gigantic production line like overnight. It takes at least a few months to do so. And then you’ve got to install a new production line, and you’ve got to provide all of the wiring and communication, test out the machines of the new production line for Optimus. So that also takes several months. So frankly, if we’re able to go from stopping production on one line, dismantling that entire line, reinstalling a whole new line and turning that on in a matter of 4 months, that is an insanely fast speed. I don’t think any other company on earth has ever done that before…

…I don’t know what the production rate of Optimus will be this year. It is impossible to predict these things…

…when you have a brand-new product on an entirely new production line and you have 10,000 unique items, all of which have to go right to ramp production, it will move as fast as the least lucky, slowest, dumbest part of the entire 10,000. And Optimus is a completely new product with a completely new production line. So it’s just literally impossible to predict, except that I think it will be quite slow at first as we iron out the 10,000-plus unique items that have to be solved for Optimus to reach volume production…

…We think we can put a lot of intelligence locally in the robot, and it certainly needs to be enough intelligence that if the robot gets disconnected, like if it’s a bad cellular signal or there isn’t WiFi, Optimus can’t just get stuck. It needs to have enough local intelligence that it can still do useful things even if it loses connection, kind of like a car…

…You can think of like Optimus needs kind of a manager to tell it what to do, broadly speaking, like if — otherwise it’s going to keep doing the same thing it did before. So I think you need kind of an orchestration AI, which Grok would be good for orchestration. And then for Optimus’ voice, having a low-latency intelligent voice AI, Grok is actually very good for that. So if you want to talk to Optimus and have kind of a Grok-level conversation, you kind of need to connect to a Grok-level AI for that.
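Purely to illustrate the architecture being sketched in these remarks (a remote orchestration AI that hands the robot its next task, plus enough on-board intelligence to keep working when connectivity drops), here is a toy control loop. The class names, tasks and fallback behaviour are my own assumptions, not Tesla’s design:

```python
# Toy sketch of a robot control loop with a remote orchestrator and a local
# fallback policy when connectivity is lost. Names and behaviour are illustrative,
# not Tesla's actual Optimus software.

import random

class RemoteOrchestrator:
    """Stand-in for a cloud-hosted orchestration AI (e.g. a Grok-level model)."""
    def next_task(self) -> str:
        return random.choice(["sort parts bin", "carry tote to station 4"])

class LocalPolicy:
    """Stand-in for the on-board model that keeps the robot useful offline."""
    def next_task(self) -> str:
        return "continue current task / tidy workspace"

def connected() -> bool:
    return random.random() > 0.3  # simulate intermittent Wi-Fi or cellular coverage

def control_loop(steps: int = 5) -> None:
    orchestrator = RemoteOrchestrator()
    local = LocalPolicy()
    for _ in range(steps):
        if connected():
            task = orchestrator.next_task()  # defer to the remote "manager"
        else:
            task = local.next_task()         # don't get stuck when disconnected
        print("executing:", task)

if __name__ == "__main__":
    control_loop()
```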

All Tesla cars are autonomy-ready; supervised full self-driving is getting really good; v14.3 (version 14.3) of FSD was a major architectural update; management has a pipeline of improvements for FSD that they think will lead to unsupervised full self-driving being available globally; v15 of FSD is coming by end-2026 or early-2027; v15 of FSD will be a complete software architecture overhaul; v15 of FSD will run on Tesla’s AI4 chip; management thinks v15 of FSD will increase the safety level of FSD to way above human level; FSD now has 1.3 million paid customers globally (1.1 million in 2025 Q4); most of the growth in FSD customers in 2026 Q1 came from subscriptions, as management has removed the upfront-purchase option in some markets during the quarter; FSD recently received approvals in the Netherlands; management is looking for EU-wide approval for FSD in 2026 Q2; FSD has received some approvals in China, although broader approval has yet to arrive; management hopes FSD can be fully approved in China by 2026 Q3; management has changed Tesla’s sales strategy to emphasise FSD as the product; management hopes to have unsupervised FSD in a dozen states by end-2026; management thinks unsupervised FSD revenue will not be material in 2026 but will be material in 2027; management thinks unsupervised FSD will reach customer cars by 2026 Q4, but the release will be gradual; the FSD software deployed in the Netherlands has the exact same architecture and training procedure as the US version, but with more Europe data; management believes that the way Tesla solves full autonomy in the US can be applied to all parts of the world, if Tesla can add data from local regions; the Tesla customer fleet is approaching a cumulative 10 billion miles driven on FSD, expected within the next few weeks; management thinks v14.3 of FSD is the last piece of the puzzle to enable unsupervised FSD; most Tesla drivers with Hardware 4 are already using FSD; FSD’s churn rate has improved

It’s always, I think, worth noting that a Tesla car is incredibly — incredible value for money, and they’re all autonomy-ready, depending on what part of the world you’re in. The supervised full self-driving is getting extremely good…

…For full self-driving and Robotaxi, version 14.3 was a major architectural update. And we have a whole pipeline of major improvements to full self-driving that, we believe, will lead to unsupervised full self-driving being available anywhere in the world that it is legal to do so. And then there’s a version 15, hopefully later this — hopefully by the end of this year, but certainly by early next year. And that will be a complete overhaul of the software architecture, and will run on AI4. That’s — and at that point, we’re really just increasing the safety level of FSD above human safety level, even more. Meaning, I think, even within version 14, we’re significantly safer than human, but v15 will take that to another level…

…On the FSD adoption front, we continue to see improvement, reaching nearly 1.3 million paid customers globally. The bulk of the growth came from subscriptions, while upfront purchases only increased 7% as we removed the purchase option in some markets in Q1.

We recently received approvals for FSD in the Netherlands. This sets us up well for an EU-wide approval later in Q2, and we’re just gated by how the regulators go about it. Additionally, we’ve also received approvals in China. The broader approval is still not there, but we’re working with the regulators in the country, and we’re hoping that we can get approval by Q3…

…We have evolved our vehicle sales strategy, where we now emphasize FSD as a product and vehicle as only the delivery mechanism…

…We certainly hope to be — have unsupervised FSD/Robotaxi operating in, I don’t know, a dozen or so states by the end of this year…

…I think probably unsupervised FSD or Robotaxi revenue would not be super material this year. But I do think it will be material — it will be material probably in a significant way next year…

…[Question] When do you expect FSD unsupervised to reach customer cars?

[Answer] I’m just guessing here, but probably in the fourth quarter. It’s difficult to release this to everyone everywhere all at once because we do want to make sure that there are no unique situations in a city, like a particularly complex intersection. Actually, they tend to be places where people get into accidents a lot because, like I said, perhaps there’s an unsafe intersection or bad road markings or a lot of weather challenges. So I think we would release unsupervised gradually to the customer fleet as we feel like a particular geography is confirmed to be safe…

…From a technology standpoint, what we deployed in the Netherlands and Europe is the exact same architecture and training procedure and so on, except it had more Europe data. And I suspect that same thing will be true for unsupervised FSD as well. Whatever we use to solve it in the U.S. will work in other places and the rest of the world, too, provided we are able to add the data from the local regions…

…We are simultaneously solving the long tail of safety by monitoring the metrics across the entire Tesla customer vehicle fleet, which is close to driving 10 billion miles on FSD in the next few weeks…

…I think 14.3 is the last piece of the puzzle for unsupervised FSD. Now the question is like degrees of safety. Like how — safety and convenience, I suppose…

…[Question] You have 180,000 new users, paying users this quarter, and I compare that to your overall installed base. It might be 15%, but then if I shrink that to the U.S. or to North America where most of them are, it’s probably more like 30%, 35%. And I compare that to what you sold, about 100,000 cars in North America in the quarter. So you’re winning twice as many FSD users as you’re selling cars. And then if I add to that picture the fact that, I guess, it’s mostly Hardware 4 owners who subscribe to FSD, it sounds like most drivers in North America who have Hardware 4 would already be using FSD. Is that the right way to think about it and the kind of success FSD is meeting today?

[Answer] You’re thinking about it the right way…

…We are actually seeing churn of subscribers also coming down, which again is a reflection of the product getting better.

Tesla has started production of Cybercab, an autonomous vehicle for the company’s Robotaxi fleet; the production of Cybercab will be a stretched-out S curve, ramping up only towards end-2026; the Robotaxi service has been expanded to Dallas and Houston; the expansion of the Robotaxi service is limited by management’s desire for really high safety levels; Robotaxi has, to-date, not had a single accident or injury; management hopes to have unsupervised Robotaxi in a dozen states by end-2026; management thinks Robotaxi revenue will not be material in 2026 but will be material in 2027; Robotaxi is currently running on FSD v14.3; Cybercab is a 2-person vehicle; management thinks most of Tesla’s future vehicle production will be Cybercab; Tesla’s vehicles in the Robotaxi fleet sometimes get stuck because they’re programmed for maximum safety; the vehicles in the Robotaxi fleet can sometimes be stuck in infinite loops

We have just started production of Cybercab…

…Whenever you have a new product with a completely new supply chain, new everything, it’s always a stretched out S-curve. So you should expect that initial production of Cybercab and Semi will be very slow, but then ramping up and going kind of exponential towards the end of the year and certainly next year…

…We’ve expanded Robotaxi to Dallas and Houston using the same software as in the Bay Area. And the limiting factor for expansion is really rigorous validation, making sure things are completely safe. We don’t want to have a single accident or injury with the expansion of Robotaxi. And we have, to the credit of the team, not had a single one to date…

…We certainly hope to be — have unsupervised FSD/Robotaxi operating in, I don’t know, a dozen or so states by the end of this year…

…I think probably unsupervised FSD or Robotaxi revenue would not be super material this year. But I do think it will be material — it will be material probably in a significant way next year…

…So far, we have 0 incidents, and that’s what the NHTSA filing also shows…

…The versions of Robotaxi that are running in Austin, Dallas, Houston, et cetera, are essentially 14.3 variants, and they’re obviously safe; that’s why we’re able to launch in those cities…

…Cybercab is a compact vehicle. It’s actually — I mean, it’s very roomy, but it’s a 2-person vehicle. And we do think probably most of our production long term will be Cybercab because 90% of miles driven are with 1 or 2 people…

…A lot of what limits wider deployment of Robotaxi is actually not safety issues, but convenience issues or the car basically gets paranoid and gets stuck. Like sometimes it gets — because it’s programmed for maximum safety, so the problem is that then it sometimes just gets scared to do things. So like sometimes it gets scared to cross railroads, for example, or it’ll get stuck at a light or where there’s — the light never changes from red or, I mean, there was one kind of amusing situation where a whole bunch of Robotaxis got stuck in the left turn lane in Austin because, I kid you not, a Waymo had crashed into a bus. And so they could not turn left because the Waymo had crashed into the bus. And so you have this like long line of like, I don’t know, a dozen or more Tesla Robotaxis that were waiting for the bus to move, but the bus was never going to move because the Waymo crashed into the bus…

…We’ve also had literal infinite loops where the car might want to make a turn into a road, but there’s construction, and then it goes around the block, tries to turn into the road with construction, goes around the block, tries to turn into the road, and so you have to stop the infinite looping, the literal infinite looping.

Tesla has taped out its AI5 chip; management thinks the AI5 chip will be the best AI chip for inference at the edge, and will be the best value-for-money AI chip; Tesla is already designing the AI6 chip and is working on Dojo 3; management expects AI5 to go into Optimus and Tesla data centers, because AI4 is currently sufficient to achieve autonomy that is much safer than human drivers, so AI5 is not needed in the vehicle fleet; management thinks it will make sense at some point in the future to put AI5 into Tesla vehicles; management is planning to increase the memory and compute capacity of AI4, but the progress partly depends on Samsung (the fab for the chip)

Congratulations to — again to the Tesla AI chip team for taping out AI5. That’s going to be a great chip. I think probably the best AI inference chip for edge compute that exists. And certainly, I think, unequivocally the best value for money. The team did a great job. And we already have a lot of momentum for designing AI6, and we’ve begun to discuss ideas for Dojo 3…

…I do expect that AI5 will go into Optimus and into the data center because it’s looking like we’ll be able to achieve unsupervised self-driving with AI4 that is far greater than human safety levels. So — which means it’s not — certainly not immediately needed in the car. At some point, I think it will make sense for us to switch to AI5 in the car, but that’s — but there’s not a pressing issue to do so. So — but at some point, the AI4 hardware is going to get like so old that it’s like, okay, the only reason they’re keeping the factory open is for AI4.

We are planning an AI4 upgrade to use newer-generation RAM. So it will go from 16 gigabytes to, I think, 32 gigabytes per SoC, for a total of 64 gigabytes, and probably a 10% increase in compute, in terms of trillions of operations per second, and in memory bandwidth. So that’s AI4.1 or AI4+. It probably goes into production middle of next year, I think; it depends. Samsung is doing the modifications for us, so it sort of depends on when they’re able to finish those modifications and bring them to production.

Tesla’s management now thinks that Tesla vehicles with Hardware 3 will not be able to run unsupervised FSD; Hardware 3 has much lower memory bandwidth than Hardware 4, and memory bandwidth is needed for the unsupervised FSD software to run; management is offering a trade-in for Tesla Hardware 3 vehicles to upgrade to Hardware 4; management is also considering setting up small factories to upgrade Hardware 3 on existing vehicles to Hardware 4

Unfortunately, Hardware 3 — I wish it were otherwise, but Hardware 3 simply does not have the capability to achieve unsupervised FSD. We did think at one point, it would have that, but relative to Hardware 4, it has only 1/8 of the memory bandwidth of Hardware 4. And memory bandwidth is one of the key elements needed for unsupervised FSD. And it’s just generally a thing that’s needed for AI. If you’re doing autoregressive transformer memory bandwidth, this is the choke point. So for customers that have bought FSD, what we’re offering is essentially a trade in — like a discounted trade-in for cars that have AI4 hardware. And then we’ll also be offering the ability to upgrade the car, to replace the computer, and you also need to replace the cameras, unfortunately, to go to Hardware 4.

So to do this efficiently, we’re going to have to set up like kind of micro factories or small factories in major metropolitan areas in order to do it efficiently. It’s — because if it’s done just at the service center, it is extremely slow to do so and inefficient. So we basically need like many production lines to make the change. And I do think, over time, it’s going to make sense for us to convert all Hardware 3 cars to Hardware 4 because that’s what enables them to enter the Robotaxi fleet and have unsupervised FSD.
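
Musk’s point above that memory bandwidth is the choke point for autoregressive transformers can be made concrete with a simple back-of-envelope calculation: for a dense model, every generated token requires streaming roughly all of the model’s weights from memory, so decode speed is capped by bandwidth divided by model size. Here’s a minimal sketch; the chip and model numbers are illustrative assumptions of mine, not Tesla’s actual figures.

```python
# Back-of-envelope: why autoregressive inference tends to be memory-bandwidth-bound.
# All numbers below are illustrative assumptions, not Tesla hardware specs.
# GB here means 10^9 bytes.

def max_tokens_per_second(params_billions: float,
                          bytes_per_param: float,
                          memory_bandwidth_gb_s: float) -> float:
    """Upper bound on decode speed for a dense model: every generated token
    requires streaming all of the weights from memory at least once."""
    model_bytes_gb = params_billions * bytes_per_param  # model size in GB
    return memory_bandwidth_gb_s / model_bytes_gb

# Hypothetical edge chip: 5B-parameter model in 8-bit weights, 200 GB/s of bandwidth.
print(max_tokens_per_second(5, 1.0, 200))   # ~40 tokens/s ceiling
# Doubling raw compute (TOPS) alone does not raise this ceiling; doubling bandwidth does.
print(max_tokens_per_second(5, 1.0, 400))   # ~80 tokens/s ceiling
```

The sketch is why a generational jump in memory bandwidth (as from Hardware 3 to Hardware 4) matters more for this workload than a jump in raw compute.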

Tesla’s research fab for the TeraFab project will begin construction this year at the company’s Giga Texas campus; management is still working out details on TeraFab, which is a joint-venture between Tesla and other Elon Musk-related companies (xAI and SpaceX); the research fab will cost Tesla around $3 billion to build and is meant for trying out new ideas; SpaceX will be in charge of the initial phase of the scaled-up TeraFab; Intel will be partnering with Tesla on some of the TeraFab’s core manufacturing technologies; TeraFab will utilise Intel’s 14A process, which is leading-edge but currently not fully mature; the TeraFab will house lithography mask creation, logic, memory, and packaging all under one roof, whereas the broader fab industry splits these activities across separate facilities and companies; management wants TeraFab to house all the different activities because they think it’s the fastest way to conduct R&D, but they are also aware it’s a long shot; management sees TeraFab not as a way to pressure 3rd-party fabs on pricing, but as the only way to secure a sufficient quantity of AI chips as Tesla’s production scales; the TeraFab is also a great way for management to test out the radical ideas they have for improving AI chips

We’ve also finalized plans for the chip fab — the research chip fab on the Giga Texas campus, and we’ll start construction of that this year…

…We’re still working out the details of the Terafab deployment. In the near term, Tesla will be building the research fab on our Giga Texas campus. This is something we expect to be probably a $3 billion-ish initiative and capable of maybe a few thousand wafers per month, but it’s really intended to try out ideas, the research fab, both in terms of maybe — we have some ideas for improving the fundamental technology of how chips are made and some of the — there’s some new physics we’d like to test out. But we also want to test out the ability to see if something is working in production. So you need kind of like a few thousand wafer starts a month to make sure that a production process is sound. And then SpaceX is going to take care of like the initial phase of the scaled up Terafab. And that’s what we’ve figured out thus far…

…Intel is excited to partner with us on some of the core manufacturing technologies. So we plan to use Intel’s 14A process, which is state-of-the-art and, in fact, not yet totally complete. So — but given that by the time Terafab scales up, 14A will be probably fairly mature or ready for prime time. 14A seems like the right move…

…I think this will be unique in the world, or at least I’m not aware of any — a place where you have the lithography mask creation, the — and then logic, memory and packaging under one roof in one building. That’s about the fastest I could possibly imagine doing recursive research and development and being able to try out some pretty radical ideas, some of which have — it’s kind of long-shot stuff, but if some of these long shots pan out would be radical improvements in the way chips work…

…Terafab is not some sort of mechanism to generate leverage over our chip suppliers. It’s just literally we don’t see a path to having enough — a sufficient quantity of AI chips down the road as we scale production to high levels. Just the rate at which the industry is growing in logic, but even more so in memory, it just doesn’t — we just anticipate hitting the wall if we don’t make chips ourselves…

…I think that we do have some ideas for how to make maybe radically better AI chips. And these are kind of research ideas there — which means like long shot, but if long shot pays off, it’s maybe a giant improvement. And it’s just easier to do that if we have our own research fab and are developing our own production technologies. So — and if you look sort of long term at, say, having AI satellites, making chips for those. There’s just no way in hell the existing industry can keep up with that. It’s impossible.

Visa (NASDAQ: V)

Visa’s management believes agentic commerce will expand Visa’s market opportunity in 4 ways, namely, (1) accelerating the digitisation of commerce, (2) creation of significantly more transactions by agents, especially in a new category of commerce characterised by micro transactions, (3) accelerating the digitisation of B2B payments, with virtual cards and tokens becoming a preferred way to pay and be paid, and (4) accelerating overall GDP growth by 80-150 basis points

We believe AI and agentic commerce will expand our addressable market in 4 important ways. 

First, like eCommerce and mobile commerce before it, agentic commerce will accelerate the digitization of commerce around the world. And just like the acceleration from eCommerce and mobile commerce, Visa will benefit.

Second, agents will create significantly more transactions. Agents will intelligently split purchases across multiple transactions, optimizing price, timing and value to the buyer. And importantly, in some use cases, we expect agents will pay for their own data and resource consumption transaction by transaction and event by event, which creates an entirely new category of commerce with micro transactions.

Third, we will see accelerated digitization of B2B payments, where there is still enormous friction that AI agents can help remove. They will be able to automate payment initiation directly from invoices and contracts and manage approvals autonomously. In this context, virtual cards and tokenization will become a preferred way to pay and be paid.

And lastly, just like the advent of eCommerce and mobile commerce, agentic commerce will increase economic growth generally. Third parties estimate we are looking at a boost of 80 to 150 basis points of incremental GDP growth from AI and when GDP grows, spending grows and digital payments transactions grow.

Visa’s management believes the company is well positioned to win in agentic commerce for 3 reasons, namely, (1) the massive scale of Visa’s network, which means plenty of proprietary data to work with, (2) the tight security of Visa’s network, and (3) the high level of trust in it; Visa is a proven leader in tokenisation, and management believes tokens will become an essential element in agentic transactions; management thinks people will want their agents to pay with cards, just as they prefer to use cards for physical and online payments; management recently launched Intelligent Commerce Connect, a network protocol and token vault agnostic on-ramp for agentic commerce; management is seeing early growth in agentic commerce transactions performed with Visa agentic tokens; management thinks the CLI (command line interface), which is effectively a chat box, is becoming a commerce platform, and cards will continue to have strong value in CLI-driven commerce transactions of all sizes; management recently launched Visa CLI as a proof-of-concept for developers to use their Visa credentials to make payments; early feedback for Visa CLI is very positive; management thinks agents will soon realise that no payment method other than Visa cards offers ease of use, broad acceptance, privacy, easy liquidity management, KYC, user security protection, and rewards; management thinks the limiting factor for agentic commerce is currently trust, which also means users will fall back on payment methods they already trust

Visa is extraordinarily well positioned to win in agentic for 3 important reasons. Our network, security and trust. Our network has enormous scale, more than 175 million seller locations, 5 billion credentials in 200 countries and territories with nearly 14,500 financial institution clients who have opted in to using this network. Payment security is only going to become more difficult and more valued. With our scale comes over 300 billion transactions annually, equating to an average of about 900 million transactions per day, and all of the data that comes with it. Visa has proven it knows how to manage transaction risk, identity risk and fraud, all enabled by this transaction data. And trust. Visa has well-established trust grounded in our standards and brand. We’ve set the standards that enable trusted payments in the digital and emerging agentic ecosystem.

And a big part of our network, security and trust are Visa tokens. Visa is a proven leader in tokenization, which is foundational in eCommerce and is set to become an essential element of trusted transactions in an agentic world.

People overwhelmingly choose to pay with cards face-to-face and online, and they will prefer their agents to pay with cards. And merchants want this, too. We recently launched Intelligent Commerce Connect, which acts as a network protocol and token vault agnostic on-ramp to agentic commerce for agent builders, merchants and enablers. Now while it’s early, we are seeing growth in agentic shopping and the emergence of early agentic commerce, real transactions with Visa agentic tokens.

And AI continues to evolve. With the AI landscape, we are seeing that Claude code and other agentic coding assistants will allow anyone to become a developer. It’s that easy to work in simple command-style tools like the command line interface, or CLI. These agentic coding assistants are a great example of how we see AI and agentic commerce increasing economic growth as they enable anyone to bring their new business ideas to life. We see a world where we will all design, build and launch digital products and experiences ourselves, engage with digital platforms and buy digital services using the CLI, or a slick consumer-friendly version of one as our interface. The CLI itself is becoming a commerce platform, and we believe that the preference and value of cards will be equally strong for all sizes of transactions, including micro transactions. A key to making this happen is enabling safe, simple and easy payments that are widely accepted by all API endpoints. We recently launched Visa CLI as a proof of concept, which shows how easy it is for a developer, soon all of us, to use their Visa credential to pay for digital services like an image, a website builder or more via the CLI. The early feedback we have been receiving from developers is very positive. And as we move forward, we plan to enable CLI commerce at scale, which means scaling the availability of command line tools and card acceptance by promulgating standards, products, rules and pricing…

…In all of these use cases, Visa cards are providing significant value. They’re easy to use, broadly accepted, integrated into the transaction flow, offer privacy, unlike most stablecoins, offer a way to manage liquidity in aggregate rather than funding millions of real-time micro transactions, offer an issuer KYC, user security protections if something goes wrong, and in many cases, cards offer rewards and benefits. We see no other payment method on earth that delivers all of these features. Buyers know this, sellers know this and soon so will agents. We expect more transactions, more value-added services and therefore, more revenue in the years ahead from agentic…

…I think the limiting factor for agentic commerce is trust. I think when we all think about ourselves as buyers and we all think about ourselves having agents go out and transact on our behalf, we are going to fall back on payment methods that we, as users, trust…

…When you think about yourself as a user, when you think about kind of who you’re going to trust your agent to make payments on your behalf, whether those are macro transactions, average transactions or micro transactions, we feel really good about our ability to win those transactions for our users using all of those capabilities.

AI is making Visa’s value-added services better; early results show Visa’s new Large Transaction Model can power up to a 5x increase in fraud value capture, and it is starting to act as a foundational model for a variety of AI-powered fraud and risk services at the company; management has been integrating AI features across Visa’s VAS solutions; management thinks AI helps improve the differentiation of Visa’s VAS business even more; there are a variety of AI-driven products within the VAS portfolio that have helped the business perform well

Across Visa, AI is making what we do even better, especially for our value-added services. Our new Visa Large Transaction Model is beginning to act as the foundational model for a variety of AI-powered fraud and risk services at Visa. Early results have shown that it can power up to a 5x increase in fraud value capture. Our team has been integrating new AI-enabled features across our suite of VAS solutions, including the recent release of 6 dispute resolution capabilities. In fact, across all of our services, client adoption has been the fastest among AI embedded services such as Smarter Stand-In Processing and Visa Provisioning Intelligence…

…Our value-added services are highly differentiated and even more so in an AI world…

…We’ve been shipping new, especially AI-driven products in the issuing solutions space. We outperformed in the quarter in our AI-driven stand-in processing platform. We outperformed in our Visa supplier payment services platform. Those are two of the service — issuing solution platforms. In the acceptance side of the business, our Visa account updater platform outperformed. That’s one that allows merchants to automatically update stored credentials when you might have had fraud on your account and it was reissued or something like that. Look at our Risk and Security Solutions area, we saw outsized performance in VCAS, our Visa Consumer Authentication Service, or also in our VAA and VRM platforms, Visa Advanced Authorization and Visa Risk Manager. These are all products that we’ve been deploying in market, largely AI-driven products, and they’ve been driving broad-based outperformance across the value-added services portfolio.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, Apple, ASML, Intuitive Surgical, Mastercard, Meta Platforms, Microsoft, Netflix, TSMC, and Visa. Holdings are subject to change at any time.

The View On Consumer Spending From The Largest Payments Companies (2026 Q1)

Mastercard and Visa can feel the pulse of consumer spending – what are they seeing now?

Mastercard (NYSE: MA) and Visa (NYSE: V) are two of the largest payments companies in the world. As a result, they have a great view on consumer spending that’s taking place. With both companies reporting their earnings results for the first quarter of 2026 earlier this week, the bottom line is that consumer spending remains strong in the USA and other parts of the world, although there’s some near-term uncertainty because of the current conflict in the Middle East. Here’s what they are seeing.

*What’s shown in italics between the two horizontal lines below are quotes from Mastercard and Visa’s management teams that I picked up from their earnings conference calls.


From Mastercard

1. Mastercard’s management sees consumer and business spending, and the labour market, remaining healthy, although the economic backdrop is uncertain, driven by geopolitical tensions in the Middle East that have affected cross-border travel and global energy supply

Looking at the macro picture, the economic foundation remains generally supportive, with healthy underlying consumer and business spending. However, the backdrop remains uncertain, driven by geopolitical tensions, which has put some pressure on cross-border travel. Overall, labor markets continue to be balanced and wages are still outpacing inflation in most major markets…

…Despite elevated geopolitical risks, the macro economy has remained largely supportive, with healthy, underlying consumer spending and the fundamentals of our business remain strong. With that said, we are operating in a period of heightened uncertainty magnified by the ongoing conflict in the Middle East. Since the outbreak of the conflict at the end of February, we have seen restrictions on travel and a reduction in the world’s energy supply. And as I noted earlier, we are seeing impacts from that in our cross-border travel metrics.

2. Worldwide GDV (gross dollar volume) was up 7% year-on-year on a constant-currency basis; cross-border volume was up 13% globally in constant-currency, driven by both travel and non-travel cross-border spending (cross-border volume growth was 14% in 2025 Q4); cross-border travel in 2026 Q1 started to see some impact in March from the conflict in the Middle East; switched transactions were up 9% year-on-year; card growth was 5% in 2026 Q1, with Mastercard ending the quarter with 3.7 billion cards in circulation (there were 3.7 billion cards in 2025 Q4, and year-on-year growth was 6% then); on a currency-neutral basis, domestic assessments were up 6%, cross-border assessments were up 18%, and transaction processing assessments were up 15%

I’ll speak to the growth rates of our key volume drivers for the first quarter on a local currency basis. Worldwide gross dollar volume, or GDV, increased by 7% year over year. In the US, GDV increased by 4%, with credit growth of 8% and debit growth of 1%. Excluding the impacts from the migration of the Capital One debit portfolio, our US debit GDV growth would have been 7%…

…Outside of the US, volume increased 9% with credit growth of 9% and debit growth of 8%. Overall, cross-border volume increased 13% globally for the quarter, reflecting continued growth in both travel and non-travel related cross-border spending. As one would expect starting in March, we began to see some impact on cross-border travel from the conflict in the Middle East.

…Switched transactions grew 9% year-over-year in Q1. Excluding the impacts from the migration of the Capital One debit portfolio, our switched transaction growth would have been 10%…

…Card growth was 5%. Globally, there are 3.7 billion Mastercard and Maestro branded cards issued…

…All growth rates are described on a currency neutral basis, unless otherwise noted. Looking quickly at each key metric, domestic assessments were up 6% while worldwide GDV grew 7%. The difference is primarily driven by mix, partially offset by pricing. Cross-border assessments increased 18%, while cross-border volumes increased 13%. The five point difference is driven primarily by pricing in international markets. Transaction processing assessments were up 15%, while switch transactions grew 9%. The six ppt difference is primarily due to favorable mix and pricing, slightly offset by lower revenue from FX volatility.
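
A quick aside on how to read the assessment commentary above: assessment revenue growth is roughly the growth of the underlying driver plus a pricing/mix residual, and because growth rates compound, the exact residual is slightly smaller than the simple percentage-point spread management quotes. Here’s a minimal sketch; the inputs are just the headline growth rates from the quote, not Mastercard’s internal yields.

```python
# Decomposing revenue growth into driver growth and a pricing/mix residual.
# Growth rates compound, so the exact residual differs slightly from simple subtraction.

def pricing_mix_effect(revenue_growth: float, volume_growth: float) -> float:
    """Residual growth in revenue per unit of volume (pricing + mix), exact form."""
    return (1 + revenue_growth) / (1 + volume_growth) - 1

# Cross-border: assessments +18% on volume +13% -> ~4.4% residual (management's "five point" spread)
print(round(pricing_mix_effect(0.18, 0.13), 4))
# Transaction processing: +15% on switched transactions +9% -> ~5.5% residual (the "six ppt" spread)
print(round(pricing_mix_effect(0.15, 0.09), 4))
```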

3. In 2026 Q1, Mastercard’s operating metrics had good year-on-year growth and were stable sequentially; in April 2026 so far, Mastercard’s operating metrics continue to be strong with worldwide switched volume growth of 8% (5% in the USA, and 10% outside of the USA), switched transactions growth of 9%, and cross-border volume growth of 9%; cross-border travel volume declined sequentially in April 2026 from 2026 Q1 because of an acceleration in the impact of the Middle East conflict; in all, management continues to see healthy consumer and business spending

Let me comment on the operating metric trends for Q1 and the first 4 weeks of April. As we look across Q1 and April, growth rates of our operating metrics were impacted by timing of holidays, namely Ramadan and Easter. March would have seen the benefits from the timing, while February and April saw a negative impact. Looking at the Q1 operating metrics on a sequential basis, switched metrics were generally in line with Q4 and underlying spend remains stable. Of note, U.S. switched volume was flat sequentially as the strength in consumer and business spend offset the impact from the migration of Capital One’s debit portfolio in the quarter. Excluding Capital One, on a like-for-like basis, U.S. switched volume growth was over 1 ppt higher in Q1 as compared to Q4. Now on to switched transactions; excluding the migration of the Capital One debit growth — sorry, excluding the migration of Capital One debit, growth was generally in line with Q4.

Moving to our cross-border metrics. Our overall cross-border volume remains healthy with growth at 13% in the first quarter. Cross-border card-not-present ex-travel grew at 18% and remained strong. And the sequential decline in cross-border travel was due primarily to the conflict in the Middle East and portfolio shifts.

Now looking specifically at cross-border travel for the first 4 weeks of April, the sequential decline from Q1 is due to an acceleration of the impact of the conflict, the portfolio shifts and the negative impact from the timing I just mentioned. None of these factors relate to any fundamental change, and underlying consumer and business spend remains healthy.

From Visa

1. US payments volume growth was good at 8%, with e-commerce growing faster than physical spend, and it reflected resilience in consumer spending; there was good growth in both US credit and debit volumes; growth across consumer spend bands improved from 2025 Q4 (FY2026 Q1) with the highest spend band continuing to grow the fastest; both discretionary and non-discretionary spend remained strong; management did not see a deterioration in spend in the lower bands

U.S. payments volume grew 8% year-over-year, up almost 1.5 points from Q1, reflecting resilience in consumer spending. E-commerce spend outpaced face-to-face spend. Both U.S. credit and debit demonstrated broad-based spend improvement, and we believe both were helped in part by higher tax refunds. Debit grew 7%, up almost 1 point from Q1 and credit grew 10%, up more than 2 points from Q1, with strong travel spend in both consumer and commercial.

Growth across consumer spend band saw incremental improvement from Q1 with the highest spend band continuing to grow the fastest. Across our volume, both discretionary and nondiscretionary spend remains strong. We do not see signs of the lower spend consumer weakening in our volumes.

2. Visa’s cross-border volume growth remained strong in 2026 Q1 (FY2026 Q2) at 11%, and was the same as in 2025 Q4 (FY2026 Q1)

Q2 total cross-border volume was up 11% year-over-year, consistent with Q1. Cross-border eCommerce volume was up 13%, 1 point above Q1. While crypto continued to be a slight drag, the improvement was primarily driven by U.S. inbound volume. Travel-related cross-border volume was up 10%, generally consistent with Q1, led by continued strength in commercial and improved U.S. inbound volume that generally offset the impact in the Middle East that was most pronounced in March.

3. Payments volume on Visa’s network continues to grow in April 2026, with US payments volume up 9%, cross-border volume up over 9%, e-commerce volume up 14%, and processed transactions up 8%; management is seeing near-term uncertainty in cross-border travel spend in the CEMEA (Central Europe, Middle East, and Africa) region because of the Middle East conflict

Now let’s look at drivers through April 21 with volume growth in constant dollars. U.S. payments volume was up 9%, with credit up 10%, and debit up 8% year-over-year. For constant dollar cross-border volume, excluding transactions within Europe, total volume grew 9% year-over-year with eCommerce up 14% and travel up 5%. The step down in travel from March was driven by both the impact from the Middle East conflict and Ramadan timing. When you normalize for Ramadan timing, the total April cross-border volume growth was in line with February levels. Processed transactions grew 8% year-over-year…

…The Middle East conflict has introduced some near-term uncertainty, in particular to cross-border travel spend in the CEMEA region.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have a vested interest in Mastercard and Visa. Holdings are subject to change at any time.

Some Signs of AI Froth

Companies are seeing their stock prices surge many-fold just by adding “AI” to their name.

The late French writer Jean-Baptiste Alphonse Karr coined the phrase “Plus ça change, plus c’est la même chose”, which translates to “the more things change, the more they stay the same.” This aptly describes the financial markets.

During the heady days of the Dotcom Bubble in the late 1990s, companies saw their stock prices surge simply by changing their name to include a reference to the internet. In a 2002 academic finance paper, A Rose.com by Any Other Name, Michael Cooper, Orlin Dimitrov, and Raghavendra Rau wrote (emphasis mine):

“We document a striking positive stock price reaction to the announcement of corporate name changes to Internet-related dotcom names. This “dotcom” effect produces cumulative abnormal returns on the order of 74 percent for the 10 days surrounding the announcement day.”

There have been recent rhymes in the stock market, but of the AI (artificial intelligence) variety.

On 15 April 2026, Allbirds announced a financing agreement along with changes in its business direction (laughably, from consumer footwear to providing compute for AI) and name (to NewBird AI). In response, its stock price surged 582% to US$17 on the day of the changes. NewBird AI’s stock price has since retreated to US$8, but it is still significantly higher than the pre-name-change price of less than US$3.

Later that same day, Myseum added “AI” to its name to highlight “the Company’s core technology platform that will integrate proprietary privacy-first artificial intelligence (AI) into its secure messaging and social media platforms.” The company’s stock price jumped by 129% the next day to close at US$3.30; at the day’s peak of US$5.77, Myseum AI’s stock price was up by 300%. The stock price is currently hovering near US$3.

The acclaimed investor Howard Marks has a great investing quote: “We may never know where we’re going, but we’d better have a good idea where we are.” And where we are right now, from what I see, is a bubbly place in AI-land.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

What The USA’s Largest Bank Thinks About The State Of The Country’s Economy In Q1 2026

Insights from JPMorgan Chase’s management on the health of American consumers and businesses in the first quarter of 2026.

JPMorgan Chase (NYSE: JPM) is currently the largest bank in the USA by total assets. Because of this status, it is naturally able to feel the pulse of the country’s economy. The bank’s latest earnings conference call – for the first quarter of 2026 – was held earlier this week and contained useful insights on the state of American consumers and businesses. The bottom line is this: the US economy remains resilient, but the risks are growing in complexity.

What’s shown between the two horizontal lines below are quotes from JPMorgan’s management team that I picked up from the call.


1. The US economy remained resilient in 2026 Q1; consumers and businesses continue to spend; there are multiple tailwinds supporting the economy’s resilience, but the risks to the economy are growing in complexity; consumer spending growth in 2026 Q1 is faster compared to a year ago; energy is just 3% of the typical consumer’s expenditure, so they are not significantly affected by higher energy prices; the strength of the American consumer is the result of a strong labour market, so if the labour market were to weaken for any reason, the American consumer will also weaken

The U.S. economy remained resilient in the quarter, with consumers still earning and spending and businesses still healthy. Several tailwinds are supporting this resiliency, including increased fiscal stimulus, the benefits of deregulation, AI-driven capital investment and the Fed’s asset purchases. At the same time, there is an increasingly complex set of risks— such as geopolitical tensions and wars, energy price volatility, trade uncertainty, large global fiscal deficits and elevated asset prices. While we cannot predict how these risks and uncertainties will ultimately play out, they are significant and they reinforce why we prepare the Firm for a wide range of environments…

…Notwithstanding the recent volatility in market and gas prices based on our data, consumers and small businesses remain resilient with consumer spend growth continuing above last year’s pace…

…[Question] How resilient is consumer spend and credit if energy prices remain high? And are there any signs of cracks that you’re seeing at all?

[Answer] There really is not anything new or interesting to say this quarter. We’ve looked at it through every angle. Early roll rates, delinquency rates, cash buffer, spend, discretionary spend, non-discretionary spend, it all looks consistent with prior trends and fundamentally, healthy. So let me add maybe just a little bit of nuance in the context of energy prices and what’s going on this quarter. So I think gas or energy cost is something like 3% of the typical consumer’s expenditure, at least in our portfolio. So it’s not nothing, but it’s not overwhelming. We’ve looked to see if there’s kind of evidence in there of people trading, decreasing other discretionary spending to adjust for higher gas prices, but it’s just kind of not enough yet to be visible.

I would caution, though, I think it remains fundamentally the case that the biggest single reason that the consumer credit performance is healthy is that the labor market is strong. And if you get bad outcomes in the Middle East, much higher energy prices or other problems that sort of do eventually track what has been, I think, from many people’s perspective, a surprisingly resilient American economy and a very resilient U.S. consumer, and that winds up having knock-on effects on the labor market, then you will see that come through, clearly. But right now, in the end, the story remains the same, which is resilient consumer that’s doing fine despite higher gas prices.

2. Net charge-offs for the whole bank (effectively bad loans that JPMorgan can’t recover) were flat at US$2.3 billion compared to a year ago

Credit costs of $2.5 billion with net charge-offs of $2.3 billion and a net reserve build of $191 million.

3. Management thinks there has been no recent changes in real-world systemic risk

It’s important to understand that under the current rule, the surcharges for almost all of the G-SIB banks are scheduled to increase meaningfully over the next 2 years, simply as a result of recent growth in the system despite, in our view, no change in real-world systemic risk.

4. Mortgage loan originations had strong growth in 2026 Q1, driven by refinancing of mortgages 

In Home Lending, originations of $13.7 billion increased 46% year-on-year predominantly driven by refi performance.

5. JPMorgan’s investment banking fees were up 28% in 2026 Q1 from a year ago because of strong performance in mergers & acquisitions (M&A) and equity underwriting; management sees a strong pipeline for capital markets activities, barring significant deterioration in the ongoing Middle Eastern conflict; the sentiment of companies for capital markets activities has been surprisingly resilient

IB fees were up 28% year-on-year, driven by strong performance across M&A and equity underwriting, partially offset by lower debt underwriting. Looking ahead, client engagement and pipelines remain healthy, but of course, developments in the Middle East could have an impact on deal execution and timing…

…On the question of overall sentiment on the pipeline, I would describe it as resilient, maybe surprisingly resilient, given everything that’s going on. But I also think the time lines in the Middle East are kind of quite short. There are deadlines or negotiations. I think it’s reasonable for people to kind of proceed with their plans in the hope or maybe expectation that we get relatively quick resolutions. But if things start getting derailed, I would be surprised if you don’t see some impact on sentiment and on deal decision-making. But for right now, it seems quite resilient.

6. Management continues to expect credit card net charge-offs for 2026 to be 3.4% (was around 3.3% in 2025); management expects JPMorgan’s credit card loans to grow 6% in 2026

The adjusted expense outlook continues to be about $105 billion and the Card net charge-off rate continues to be approximately 3.4%…

…What we said about Card loan growth expectations at Company Update, which is that we said we expected 6% or maybe a little bit more, and that hasn’t really changed. That’s still kind of our core expectation. 

7. Management thinks that there will not be systemic issues for banks even if the private credit industry experiences a default cycle, because the private credit industry is still relatively small compared to the overall loan markets that banks participate in; the credit quality within private credit portfolios has not gotten much worse

[Question] do you think that if we do have a default cycle in private credit, that it will be systemic?

[Answer] Private credit leverage lending is like $1.7 trillion, high-yield bonds are something like $1.7 trillion, bank syndicated leveraged loans are like $1.7 trillion, investment-grade debt is $13 trillion, mortgage debt is like $13 trillion, and there’s a lot of other stuff out there. And I pointed out that I think there’s been some weakening in underwriting and not just by private credit elsewhere. And there will be a credit cycle one day. And I think when there’s a credit cycle, losses will be worse than people expect relative to the scenario. I don’t think it’s systemic. It almost can’t be systemic at that size relative to anything else. But when recessions happen and values go down and people refi at higher rates, there will be stress and strain in the system. Are people prepared for that? I can’t speak for other banks, but these are — most of these things are on top of — you have to have very large losses in private credit before at least it looks like banks are going to get hit or something like that. So it doesn’t mean you won’t feel some stress and strain, and that you might have to do something about it, but I’m not particularly worried about it…

…We always had what we call marking rights to look at the underlying collateral, and that’s just a right that protects you and gives you certain rights, things like that. Obviously, if you ever see credit getting worse, and it’s gotten not terribly worse, the actual credit which a lot of these private equity — private credit guys are pointing out, the actual credit hasn’t gotten that much worse. There are pockets where it has. And credit spreads themselves haven’t gotten much worse in general, but there are pockets where it has. So we’ll be watching it closely. We think we’re okay on all of that.

8. Management thinks corporate and consumer debt are not too high, whereas government debt is high

Corporations in general, the debt is not too high. Consumers, in general, the debt is not too high. Most of the excess debt is in government debt at this point. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

What The CEO of The USA’s Largest Bank Thinks About The World Today

Jamie Dimon’s latest excellent letter.

JPMorgan Chase (NYSE: JPM) is currently the largest bank in the USA by total assets. Its CEO, Jamie Dimon, is known for writing lengthy annual shareholder letters in which he pontificates about the state of the world and the banking industry. 

His latest letter was released earlier this week. I read it in earnest, taking extensive notes that I thought would be useful to share. So here they are! (The italicised passages between the two horizontal lines below are direct quotes from Dimon’s letter.)


1. Asset prices look fully valued

And we also continue to buy back enough stock so as not to increase total excess capital, though we have a number of options on how to deploy our capital and are clear-eyed that many asset prices, including bank stocks, are fully valued.

2. Current banking regulations have had some good effects, but have also made the banking system weaker by creating risks, including the creation of conditions that led to the Silicon Valley Bank crisis in 2023, and the risk of moral hazard in bank runs

A properly regulated banking system helps reduce risk to the financial system, protect customers, and maximize productive use of capital and lending. The Dodd-Frank Wall Street Reform and Consumer Protection Act and some of the rules that followed that legislation accomplished some good things. At the same time, they also created a fragmented, slow-moving system with expensive, overlapping and excessive rules and regulations — some of which made the financial system weaker and reduced productive lending…

…Here are some of the negative consequences partially due to poor bank regulations.

  • Because capital requirements on banks are much higher than the market gives to private entities, insurance companies or even foreign banks, huge arbitrage is created. This is often a sign of potential risk.
  • Regulators wrongly incorporated an accounting concept called “held to maturity” (HTM) into the capital rules, thereby giving Treasury and mortgage securities better capital treatment because the holder has promised not to sell them. This had many negative consequences — it allowed banks to not recognize mark-to-market losses on those securities in their regulatory capital, and in some cases, it falsely increased returns on those securities (because the amount of regulatory capital needed to be held against them was significantly smaller). This inadvertently encouraged banks to take on more interest rate risk, which was the ultimate trigger for the failure of Silicon Valley Bank (SVB) and First Republic Bank (FRB).
  • The Fed’s Comprehensive Capital Analysis and Review (CCAR) stress test, as currently constructed, produces results that are far worse, in our strongly held opinion, than what our actual results would be under those severely adverse conditions. The process is flawed, including reliance on inaccurate models and assumptions and the fact that it tests only one type of crisis, so other scenarios are overlooked (e.g., rapid rises in interest rates, as in the case of SVB and FRB). Testing should use accurate numbers and assumptions — then the results are what they are — rather than being driven by predetermined “what-ifs.” More transparency and sound methodology would lead to continuous improvement, not gaming the system. Essentially, we do not use CCAR to manage risk — we look at far more scenarios and need to be prepared for all of them. We also look at these risks every week, not just once a year…

…One of the huge risks for a bank has always been a “run on the bank,” which occurs when people think that their uninsured deposits are at risk. The FDIC only covers insured deposits, and the run risk is driven by uninsured deposits, particularly nonoperating uninsured deposits. In recent bank failures, regulators have had to invoke the systemic risk exception (SRE) to protect uninsured deposits at the point of failure. That is a problem — no one should want this as an emergency mechanism. It creates moral hazard, and the process to invoke the SRE is chaotic and involves multiple agencies, including approval by the Treasury Secretary in consultation with the President. Bank runs can happen quickly, and relying on that type of action to avoid contagion is simply not a good idea.
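
A quick illustration of the HTM and interest-rate-risk point in the bullets above: the mark-to-market hit on a bond portfolio from a rate move is, to a first order, the portfolio’s duration times the size of the move. The sketch below uses hypothetical numbers of my own (portfolio size, duration, rate move), not SVB’s or any bank’s actual book.

```python
# First-order (duration-only) estimate of a bond portfolio's mark-to-market change.
# Convexity and actual portfolio composition are ignored; figures are illustrative.

def approx_price_change(portfolio_value: float, duration_years: float, rate_move: float) -> float:
    """Duration approximation: price change ≈ -duration × rate move × value."""
    return -duration_years * rate_move * portfolio_value

# A hypothetical $100bn securities portfolio with ~6-year duration, rates up 200 bps:
print(approx_price_change(100e9, 6.0, 0.02))  # ≈ -$12bn unrealized loss
# Per Dimon's point above, HTM treatment kept losses like this out of regulatory capital
# until the securities were actually sold, which masked the interest rate risk being taken.
```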

3. Real banking risks are always about credit, liquidity, interest rate, and operations

The real risks almost always end up being credit, liquidity, interest rate or operational risk.

4. Ideas to reduce actual risks in the banking industry through regulations include placing limits on the amount of HTM (held-to-maturity) assets banks can hold, requiring banks to have more liquidity to pay off uninsured deposits, and putting limits on the percentage loss on uninsured deposits in the event of a bank failure

Here are some ideas that I believe would not only significantly reduce the chances that the SRE would need to be invoked but would also make the system safer and avoid moral hazard.

  • I would limit the amount of HTM securities in a way that links to the total long-term debt that the bank must have available to absorb losses upon its failure. And while this is a judgment call, banks need to realize that when available-for-sale and HTM security losses start to exceed 50% of tangible equity, investors will get worried…
  • …Prior to failure — between the Fed window and the rather quick sale or financing of securities or other assets — banks should be in a position where they have enough liquidity to pay off more than 50% of uninsured, nonoperating deposits. Regulators floated a similar idea in 2024, and I agree with them. This plan, plus the fact that equity and long-term debt will absorb losses before uninsured deposits are at risk, would give customers far greater peace of mind.
  • We should also consider simply setting, upfront, a statutory cap on the percentage loss on uninsured deposits in the event of failure — say, at 5%. This would reduce moral hazard and create an additional buffer for the FDIC to achieve a smooth resolution without using the SRE. With this plan, a small portion of the uninsured deposits would be immediately available to cover losses and communicated to depositors in “peacetime” while the bulk of uninsured deposits would be protected in a resolution. Although some might argue that a mechanism like this might increase the risk of a bank run, I think if the percentage is well-chosen, it might actually be stabilizing by eliminating the uninsured depositor’s nightmare scenario of losing all their money. In the end, all debates about the best way to proceed revolve around how much shareholders, creditors and uninsured depositors of the failing bank should pay and how much healthy banks should pay. As I already said, it has never been the taxpayer. And perhaps capping the maximum loss on uninsured deposits upfront would put an end to ad hoc involvement by the government once and for all.

5. AI is transformational, will have tremendous positive impacts on society, and is not a speculative bubble, but AI will also create serious new risks, including job displacement; AI will also have second- and third-order effects

The importance of AI is real — and while I hesitate to use the word transformational — it is. The pace of adoption will likely be far faster than prior technological transformations, like electricity or the internet. Those took decades to roll out, but this implementation looks likely to accelerate over the next few years. Our Chief Operating Officer describes our efforts in more detail, but I want to make some key points here.

  • We will not put our heads in the sand. We will deploy AI, as we deploy all technology, to do a better job for our customers (and employees).
  • AI will affect virtually every function, application and process in the company. And in the long run, it will have a huge positive impact on productivity. I do not think it is an exaggeration to say that AI will cure some cancers, create new composites and reduce accidental deaths, among other positive outcomes. It will eventually reduce the workweek in the developed world. And people will live longer and safer.
  • We do not yet know exactly how AI will unfold. The landscape will change rapidly, with shifting assumptions about power consumption, costs, chip technologies and the speed at which data centers are deployed. There will be a wide variety of AI models — open and closed, large and small — and no single tool will dominate. Overall, the investment in AI is not a speculative bubble; rather, it will deliver significant benefits. However, at this time, we cannot predict the ultimate winners and losers in AI-related industries.
  • AI is a genuine technological shift that will impact many sectors, including physical industries and scientific research. AI is only beginning to be applied broadly in science, and its influence will continue to expand.
  • AI will also introduce serious new risks — from deepfakes and misinformation to cybersecurity vulnerabilities. These risks are real, but they are manageable if companies, regulators and governments prepare. The worst mistakes we can make are predictable: overreact at the first serious incident and regulate out important innovation or underreact and fail to learn from what went wrong. The right approach requires rigorous preparation in advance, an honest assessment when things go wrong — and they will — and discipline to fix what’s broken without destroying what works.
  • AI will definitely eliminate some jobs, while it enhances others. Our firm will have definitive plans on how we can support and redeploy our affected workforce.
  • AI will create many jobs — some we can see today in cybersecurity and AI itself, and some we can’t see. But we do know that there is a huge workforce shortage for many well-paying white- and blue-collar jobs.
  • There is a possibility that AI deployment will move faster than workforce adaptation to new job creation. In prior technological transformations, labor had time to adjust and retrain. We do believe that business and government can do many things to properly incent retraining, income assistance, reskilling, early retirement and relocation for those whose job might be adversely impacted by AI (I talk about some of these ideas in Section IV around work skills training and the Earned Income Tax Credit).

One last but important point: We have focused on some of the “known and predictable” and some of the “known unknown” events. But huge technological shifts like AI always have second- and third-order effects as well that can deeply impact society. Some of these are, for example, cars bringing about the development of suburbs and shopping malls; agriculture enabling cities; and the original internet (invented back in 1969) leading to mobile phones, apps and social media. We should be monitoring for this kind of transformation, too.

6. Small teams are required to execute with speed

It’s essential to organize in small teams for super speed.

The real competitive battles are fought at the detailed segment level: It’s not just investment banking or the investment banking healthcare sector; it’s having the right team to win in healthcare pharma or medical devices. It’s not just credit card or even affluent brands; it’s the Chase Sapphire® card. It’s not small business clients in branches; it’s restaurateurs or law firms. It’s not digital payments; it’s 24/7 digital payments with automatic currency conversions. It’s hundreds of small teams (including technology, AI, marketing, subject matter experts and others) attacking specific problems. The teams needed to tackle these challenges should be small and authorized with the decision-making ability to move and act like Navy SEALs or the Army’s Delta Force. Finally, they need to be dedicated to the task at hand. Very often when a management team wants to accomplish something new, like create a digital account opening process that cuts across virtually every area, everyone on the team says, “We’ll get it done,” meaning they will add it to the long list of tasks already on their plate. But when efforts are 1% of a lot of people’s jobs, it will never get done. You need a team 100% dedicated to the mission — and everyone else supports them.

7. The global and US economy is very different today compared to 20 years ago in terms of (1) the importance of energy, (2) the size of the financial markets, (3) the composition of the players in the financial markets, (4) the size of investment portfolios, (5) the composition of holders of US Treasuries, and (6) central bank activity

It’s helpful to recognize that the world’s economy is far larger and more diversified and far less reliant on energy as an input versus 20 years ago. Global energy consumption to the global gross domestic product (GDP) is only about 40% of what it was around 45 years ago, say in the early 1980s, and the United States, instead of being a major importer on a net basis, is now a major exporter…

…If you look at the tables below, there are a few items that are truly different now from what they were in 2010, and these may well lead to different and unexpected outcomes. To name a few: The global debt and equity markets are far bigger than before (as are global deficits). Many nonbank financial institutions and investors are dramatically bigger than they were in the past (think hedge funds, private equity funds, sovereign wealth funds, among others). Global foreign portfolio investments are far bigger than before, and a large stock of U.S. Treasuries owned by foreigners is not held by central banks (central banks are less likely to make dramatic changes in their holdings of U.S. Treasuries). In addition, global QE is far bigger than it ever was before. A change in sentiment could easily affect the global flow of investments into securities, including U.S. Treasuries. You can also see that brokerage inventories are far smaller as a percentage of investments than ever before and, as a result, market makers are less able to intermediate in extremely volatile markets.

8. The US remains the world’s best investment destination; the US must continue to be the premier military force globally, maintain its economic position, and manage its foreign economic affairs, in order to remain strong

It is also good to remember that the United States remains the world’s best investment destination, particularly when things are going badly…

…There are three critical issues that will ultimately determine the health and safety of the United States and possibly determine the future direction and strength of the free and democratic world. JPMorganChase and its employees — like all other businesses and individuals — will be deeply affected over time by how the United States succeeds in these areas:

  1. The United States must maintain the premier military force in the world.
  2. The United States must maintain its preeminent economic position in the world, which also requires reigniting the American Dream.
  3. The United States must manage its foreign economic affairs to strengthen the U.S. economy and that of our critical allies so that the first two points remain true.

9. Inflation is a risk to the US and global economy in 2026; other large risks to watch include (1) Russia’s ongoing war with Ukraine, and the US and Israel’s ongoing war with Iran, (2) high sovereign deficits and debt, (3) high asset prices and low credit spreads, (4) new trade arrangements, (5) the relationship between the US and China, (6) private credit, (7) lengthy holding periods of private equity investments, and (8) cybersecurity; losses on leveraged lending could be higher than most expect when a credit cycle happens

The skunk at the party — and it could happen in 2026 — would be inflation slowly going up, as opposed to slowly going down. This alone could cause interest rates to rise and asset prices to drop. Interest rates are like gravity to almost all asset prices. And falling asset prices at one point can change sentiment rapidly and cause a flight to cash…

…I think some of the larger risks are much like tectonic plates, always moving and periodically causing earthquakes and volcanoes when they crash into each other. Some of the larger risks we should keep our eyes on are:

  • First and foremost, geopolitics. Russia’s war in Ukraine and its ongoing sabotage in Europe and now the war in Iran and its potential effects on energy prices can cause events that are unpredictable. We all hope these wars get properly resolved. But war is the realm of uncertainty, as each side in a war determines what it wants to do (as is often said, “the enemy gets a vote”), and these conflicts involve many countries. Not only do they have a major impact on the nations at war, but they also have an impact on countries and economies across the globe that are not directly involved in war. Nations that are heavily dependent upon imported energy are already seeing the effects. And it’s not just energy, it’s commodity products that are byproducts of oil and gas, like fertilizer and helium. And given our complex global supply chains, countries are experiencing disruptions in shipbuilding, food and farming, among others. The outcome of current geopolitical events may very well be the defining factor in how the future global economic order unfolds — then again, it may not.
  • High global sovereign deficits and debt. Global deficits are significantly elevated, particularly during what has been a relatively healthy global economy and, until recently, a time of peace — the deficit globally is at an extremely high 5%, while global sovereign debt is at all-time highs. The current forecast from the Congressional Budget Office has our debt-to-GDP ratio going from 100% today to 120% in 2036. High government debt is somewhat offset by low consumer debt, which was nearly 100% of GDP in 2007 and is now below 70%. Similarly, corporate debt is at a fairly normal healthy level of 45%. High and increasing government debt will eventually have to be dealt with — the right way would be to deal with it now before it becomes a problem; the wrong way would be to let it become a crisis, which, in my opinion, is probably the likely outcome. Importantly, almost 60% of government spending is for entitlements and is not discretionary. This makes the job that much harder. A crucial note on the importance of growth: If interest rates went down 100 basis points and GDP grew at 3%, the debt-to-GDP ratio could actually start to go down instead of going up.
  • High asset prices and very low credit spreads. In and of itself, this is not a bad thing. Household net worth as a percentage of GDP is now 560%. The high during the housing peak in 2006 was 460%. But this also means that anything less than positive outcomes could have a dramatic impact on global markets. Rapidly decreasing asset prices can sometimes create a self-reinforcing loop. It’s always good to remember that prices are set by the marginal buyers and sellers — which, on the average day, is only a small fraction of asset owners. And it’s also good to remember that foreigners own almost $30 trillion of U.S. equities and bonds. While U.S. investments and the U.S. dollar are generally havens of security in a troubled world, that didn’t stop recessions and bad markets in prior times.
  • Trade 2.0. The U.S. tariffs themselves had only minor effects on inflation or growth, and were only one straw on the camel’s back. But the trade battles are clearly not over, and it should be expected that many nations are analyzing how and with whom they should create trade arrangements. This is causing a realignment of economic relations in the world. While some of this is necessary for national security and resiliency, which are paramount, it is hard to figure out what the long-term effects will be.
  • U.S. and China relations. This relationship is critical to the whole world and is also impacted by the events mentioned above. The United States and China clearly have different systems, values, goals and objectives, and while both sides are currently engaging, we have to expect that there will be some bumps in the road — maybe even some large ones. We should all hope that ongoing proper engagement continues to lead to what may be a competitive but peaceful future.
  • Private credit and credit in general. The leveraged private credit market totals $1.8 trillion. As a comparison, the U.S. high yield bond market totals $1.5 trillion, and the bank syndicated leveraged loan market totals $1.7 trillion. Taking a wider view, the total market size of investment grade bonds is $13 trillion. And the total market value of all residential mortgage securities and loans is also $13 trillion. In the great scheme of things, private credit probably does not present a systemic risk.

    I do believe that when we have a credit cycle, which will happen one day, losses on all leveraged lending in general will be higher than expected, relative to the environment. This is because credit standards have been modestly weakening pretty much across the board; i.e., more aggressive and positive assumptions about future performance (called add-backs), weaker covenants, more use of PIK (payment-in-kind; not paying interest in cash but accruing it), more aggressive private ratings (particularly in insurance companies) and more arbitrage (not always a great sign). Also, by and large, private credit does not tend to have great transparency or rigorous valuation “marks” of their loans — this increases the chance that people will sell if they think the environment will get worse — even if actual realized losses barely change. Additionally, actual losses right now are already a little higher than they should be, relative to the environment. Finally, if rates or credit spreads ever go up, the companies that borrowed will have to borrow at even higher rates, putting them under even greater stress. However this plays out, it should be expected that at some point insurance regulators will insist on more rigorous ratings or markdowns, which will likely lead to demands for more capital.

    It has always been true that not everyone providing credit is necessarily good at it. There are many players who are late to this game, and it should be expected that some credit providers will do a far worse job than others. We have not had a credit recession in a long time, and it seems that some people assume it will never happen.

    Additionally, anything that gets sold to retail investors as opposed to institutional investors requires greater transparency, higher standards and fewer potential conflicts. If anything ever goes wrong, you should assume that retail investors, even though they were told about some of the risks, will seek remedy in the courts. Also, some of these loans go into various funds run by the asset management company. Generally, each of these funds has its own objectives and its own fiduciary responsibility to make sure that the loans are suitable for that specific fund. Those who do not do this properly are likely to get into trouble.
  • Private markets. With stock markets at all-time highs in recent months, it is a little surprising that private equity firms, which own close to 13,000 companies, have not taken greater advantage of healthy markets to take their companies public. Private equity investments are now held for an average of seven years — this is virtually double what it used to be. And some are sold, not to another company or taken public, but put into a new fund called a continuation fund. We have generally had nothing but a bull market since the great financial crisis — it’s hard to imagine what will happen if and when we have an extended bear market.
  • Cyber risk. I have to mention this because it remains one of our biggest risks, and this is probably true for many other major industries and corporations. AI will almost surely make this risk worse. We invest significantly to protect ourselves and stay vigilant.
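
To make the point about growth and rates above a little more concrete, here is a minimal sketch of the standard debt-dynamics arithmetic. Only the 100% starting ratio and the roughly 120%-by-2036 endpoint echo the CBO figures cited in the letter; every other number (the interest rate, growth rate, and primary deficit) is my own illustrative assumption, and the projection ignores many real-world effects.

```python
# Minimal sketch of debt-to-GDP dynamics. Each year the debt ratio d evolves as:
#   d_next = d * (1 + r) / (1 + g) + primary_deficit
# where r is the average nominal interest rate on the debt and g is nominal GDP growth.
# All parameter values below are illustrative assumptions, not figures from the letter.

def project_debt_ratio(d0, r, g, primary_deficit, years):
    d = d0
    for _ in range(years):
        d = d * (1 + r) / (1 + g) + primary_deficit
    return d

# Base case: 100% debt-to-GDP, interest rate equal to nominal growth (4%), 2% primary deficit.
# The ratio then climbs about 2 percentage points a year, to roughly 120% after 10 years,
# broadly in the spirit of the CBO path cited in the letter.
base = project_debt_ratio(d0=1.00, r=0.040, g=0.040, primary_deficit=0.02, years=10)

# Alternative case: rates 100 basis points lower, nominal growth of about 5.5%
# (roughly 3% real growth plus inflation), and a smaller primary deficit of 1.5%
# on the assumption that faster growth lifts revenues. The ratio drifts down instead of up.
better = project_debt_ratio(d0=1.00, r=0.030, g=0.055, primary_deficit=0.015, years=10)

print(f"Year-10 debt-to-GDP, base case:              {base:.0%}")
print(f"Year-10 debt-to-GDP, lower rates plus growth: {better:.0%}")
```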

10. There are a number of things that will have a positive impact on the US economy in 2026, and they are (1) the One Big Beautiful Bill, (2) purchases of securities by the Federal Reserve, (3) less restrictive regulations, and (4) AI-related capital spending

While there are many larger risks, as discussed in the next section, that may or may not impact the economy in 2026, we do know several things that will have a positive impact on the economy in the remainder of this year. They are:

  • Increasing fiscal stimulus from the One Big Beautiful Bill. Our economists believe this will inject another $300 billion (effectively 1% of GDP) into the economy. This has to be very modestly inflationary this year.
  • Benefits from the Fed’s purchase of $40 billion of additional securities each month, which is supposed to be reduced to $20 billion–$25 billion this April. At a minimum, this supports asset prices and helps ensure there is no liquidity squeeze in the financial system.
  • Positive effects of comprehensive deregulatory policies. This was badly needed and long overdue. Change is clearly evident in bank regulations that will free up capital and liquidity, which can be lent out (and we already see this happening), and in deregulation across many other industries, from energy to home building. It is fair to say that actions taken have clearly increased confidence and animal spirits. This should add to productivity and be modestly deflationary this year.
  • Huge increase in AI-driven capital spending and construction by the five hyperscalers. In 2025, this number was $450 billion, and in 2026, it will be approximately $725 billion. While AI will clearly drive productivity, which is generally good for inflation in the long run, all of this spending is probably inflationary in the short run.

Some of the items above have mild inflationary effects, while others probably have some deflationary effects.

11. The US has come together before to overcome incredible challenges

We have met big challenges before. At one point in 1940, only one nation, the United Kingdom, stood against the Nazi war machine, which had already conquered most of Western Europe. The United States was unprepared for what was going to happen but rose to the challenge. You may find it uplifting to read the book Freedom’s Forge, which shows how the United States came together to build the arsenal of freedom and to keep the world safe for democracy.

12. The US has become too dependent on unreliable sources for its national security needs

The United States has also allowed itself to become too dependent on unreliable sources for items that are essential to our national security, such as critical minerals, semiconductors and advanced manufacturing output, among others. We have maintained insufficient productive capabilities to be ready to quickly increase production if necessary. And our military needs to be able to rapidly develop new and often cheaper weapons, like drones.

13. The US could have grown even faster over the past 20 years than what it actually did

Over the last 20 years or so, U.S. GDP growth has averaged about 2% annually — I believe we could have easily achieved at least 3% growth. The reason we were able to grow 2% is that America’s businesses and entrepreneurial spirit allowed us to overcome a lot of the roadblocks mentioned later in this section. That 1% difference would have had an enormous impact, providing Americans with an extra $20,000 GDP per person annually, giving us resources to take care of nearly all our problems and jump-starting deficit reduction. Growth is part of the solution to almost all of our problems…
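
As a quick sanity check of that "$20,000 per person" figure, here is the compounding arithmetic. The roughly $85,000 of GDP per capita used as the starting point is my own assumption, not a figure from the letter; under it, the extra 1% of growth over 20 years works out to a bit under $20,000 per person, in the same ballpark as the letter's rounder number.

```python
# Rough sanity check of the extra-growth claim (assumed figures, not from the letter).
# If GDP per capita had compounded at 3% instead of 2% for 20 years, how much higher
# would it be today, per person?

years = 20
gdp_per_capita_today = 85_000  # assumed approximate US GDP per person, in dollars

# Ratio of the 3%-growth path to the 2%-growth path after 20 years (about 1.22).
ratio = (1.03 / 1.02) ** years
extra_per_person = gdp_per_capita_today * (ratio - 1)

print(f"GDP per person would be about {ratio - 1:.0%} higher, "
      f"or roughly ${extra_per_person:,.0f} more per person per year.")
```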

…I am going to mention a few damaging policies, not in detail because I’ve written about them in the past, but if they aren’t corrected, real progress may be impossible.

  • Fraud, waste and abuse…
  • …Inefficiencies within the federal government (and within state and local governments, too)…
  • …Mortgage and regulatory policies and local housing requirements….
  • …Red and “blue” tape, permitting reforms and a little litigation reform… 
  • …Policy uncertainty…
  • …Unreliable R&D policies…
  • …Failure to recognize that capital formation drives growth.

14. Supportive policies for capital formation in Sweden and Australia have led to great results

In Sweden, an investment savings account is available that simplifies the investing process with favorable tax treatment. Account holders can deposit and withdraw funds at any time, and there is no capital gains tax — just an annual tax of 1% on the balance. This has dramatically increased investment by retail investors into the Swedish stock market. It may surprise some of our readers that Sweden’s policies have created a growing and innovative stock market and that Sweden has more unicorns and billionaires per person than America does. Another example is Australia, which has a wonderful retirement policy based on superannuation, a savings account funded by both employer and employee contributions.

15. The private sector should be the one allocating capital, not the government

Industrial policy mechanisms, when used, should be as targeted and as simple as possible. They come in many guises: grants, cheap loans, equity investing, purchase agreements and others. The cleanest of these is tax credits in various forms. Whatever the policy, two rules should not be violated: (1) there should be no social engineering — this is not a jobs program (the Jones Act meant to preserve jobs in the Merchant Marine has basically destroyed our Merchant Marine and merchant ship building business) and (2) for the most part, the market should allocate capital, not the government. Industrial policy can easily devolve into a buffet where corporate America gorges at the expense of the taxpayer. While there are certain circumstances that require the government to allocate capital (think infrastructure and national security), generally the government is simply not good at allocating capital in a free market. America does best not with central planning but with consistent and clear policies that are conducive to growth.

16. Europe is currently on a very bad path of decline and fragmentation; Europe’s defense industrial base is not in good shape

I believe we are staring one in the face: the slow but constant decline and fragmentation of Europe. Europe is entering a decisive decade, and it is unable to act. The EU was an extraordinary accomplishment — nations coming together and using political and peaceful means to settle differences. And this after a millennium of terrible wars. It worked, but it only went halfway. Europe never finished the economic union (see the Draghi report), which meant that European countries constantly underperformed economically. This has led to their GDP relative to the United States going from 90% in the year 2000 to approximately 70% today. This fragmentation remains a structural drag on competitiveness. As former European Central Bank President Mario Draghi has noted, internal EU market barriers function like “hard tariffs” of approximately 45% for manufacturing and 110% for services. Those barriers reflect not a failure of ambition but rather a failure of integration. This has led to a lack of scale for their major businesses and a lack of mobility for both capital and people.

EU nations also created whole new layers of bureaucracy that reduced innovation, growth and investment among other things. This will continue unless European leaders dramatically change course. If they don’t, they will eventually be unable to afford their social safety nets, restrengthen their nations’ militaries and grow their economies. The EU is currently home to world-class companies, deep pools of savings and a talented workforce. But without new EU direction, their major global companies will weaken, faced with very strong American and Chinese competition. The ultimate loser in all this will be Europe and all its citizens — and it will hurt the United States as well.

Europe and America are each other’s largest trade partners at $2 trillion a year…

…Yet Europe’s defense industrial base is still not fit for purpose. This is as much an economic and industrial challenge as a military one. The continent needs enduring production capacity, coordinated procurement and dual-use manufacturing that serves both commercial and defense sectors.

17. Strong leadership by the US is still required for global prosperity

Strong American leadership is required – there is no real alternative.

Some political leaders have said that there is a “rupture” between America and the Western world — that the red lines have been crossed and there is no return to the prior system. I completely disagree. There is no practical replacement to the prior system. It has not ruptured, but it needs reform. The middle-sized nations do not have real alternatives in terms of building a unified military or a unified economy that can compete effectively with the United States and China. If these middle nations did, the result would look a lot like what Europe is today: dysfunctional. The only practical alternative is to fix the current situation.

The United States and Europe have an extraordinary number of commonalities, including values deeply held. For more than 75 years since the end of World War II, the United States and Europe have worked together to resolve most major global economic or military challenges and in fighting terrorism and nuclear proliferation. We need this cooperation for the next 75 years.

I do not want to contemplate the opposite. Without American leadership, there would be a huge vacuum. If not us, who? We are the only country that has the capability to do it. Fragmented relationships with and among our extensive allies could lead to an “every nation for themselves” mentality. America would become more isolated, the U.S. dollar would no longer be the world’s reserve currency and autocratic nations would rejoice. Need I say more?


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

What Warren Buffett Thinks About The Stock Market And Global Economy Now

The Oracle of Omaha was interviewed by CNBC recently.

Warren Buffett stepped down as CEO of Berkshire Hathaway at the end of 2025. Alongside this change in his professional life, he will also no longer hold court at Berkshire’s AGM (annual general meeting), which has traditionally been held in early May each year. This means there’s less opportunity for the public to hear his thoughts about the financial markets and the world writ large.

So I was excited when he appeared for an hour-long interview session hosted by CNBC earlier this week. Here are my takeaways from the interview (passages in italics below are from the official transcript of the event):

1. The US stock market has not fallen much so far

QUICK: Well, let’s talk about that. The market has come down substantially.

BUFFETT: Not substantially.

QUICK: Well, you’ve got both the Dow and the Nasdaq in correction territory. It’s the worst performance on a quarterly basis for stocks in about four years. Do things look cheaper to you?

BUFFETT: No. Three times since I’ve taken over Berkshire, it’s gone down more than 50%. I mean, if you look at the markets, of the worst, probably was the 2007, ’08 period, although it was that one Monday, when you had 21% in a day. I mean, this is nothing. I mean—

QUICK: This is nothing to make you get excited and think there’s huge valuation—

BUFFETT: Well if they’re 5 or 6% cheaper, that doesn’t, we aren’t in it to make 5 or 6%…

2. Buffett’s happy to deploy capital for long-term investments if there’s a big decline, but he does not know what the market will do

QUICK: Are you waiting for the next big drop in the market to deploy that cash, and if so, when do you see that coming?

BUFFETT: Yeah, if there is a big decline, we will deploy, I mean, but we won’t, we will deploy it because stocks are attractive or businesses are attractive to us, and we are not planning to sell them next week or next month, so we want to be right on them. And we’ve had our American Express stock 30 years without having a — close to 40 years, 35 years. And on the other hand, there’s things I change my mind on fairly quickly, but, but the goal is own the owned businesses, and when we buy Occidental Chemical, we expect all of that 50 years from now. You know, the world can change in some way, but that we do not, we do not buy that with a thought of resale…

…BUFFETT: No, no, I don’t have any ability to predict what stocks will do next week or next month and I will buy them if they’re cheap. I’ll buy a whole lot of them if they’re cheap and I think I really understand the business, and Apple is still our largest single investment…

…BUFFETT: I mean, the idea that people think they know what the market’s going to do is just crazy. I mean, the idea that they would shout out to the world, you know, that something they really knew, I mean, that’s like saying if they had gold — found gold in their backyard, they’d come on television and say, here’s where the gold is in my backyard, you know? I mean, they’re selling something.

3. Railroads are more likely to be around 50-100 years from now than smartphones, but Apple is the company that earns the higher return on capital

BUFFETT: Yeah, well, if I didn’t like it, I could sell it. Yeah, I can,  I think it’s a remark — it’s better than any business we own outright. Now, we own a railroad that’s worth more money than our Apple position, for example, they’re both looked at the same way. I mean, they’re both, they’re both businesses. I expect the, I think it’s more predictable in a certain sense, that the railroad will be around 50 or 100 years from now, but it doesn’t earn the rate remotely on capital than Apple does. I mean, Apple is a business that you’ve got one, probably and your kids have got them, and—

QUICK: Not one, we’ve got like 20 of them.

BUFFETT: Yeah, devices. Actually, the Bell Telephone Company was that way at one point, but they were regulated.

4. Buffett thinks US technology companies are too well-liked by consumers for them to face heavy regulation; Apple is a consumer company in Buffett’s eyes

QUICK: Well, do you worry about regulation coming for some of these big tech companies, in particular Apple?

BUFFETT: I think the consumers are in love with them too much. I don’t, I don’t think Washington will do anything that really destroys something that every one of their voters likes and they’re using themselves. I mean, it’s a remarkable product that way. Just think of something as useful as the Apple is…

…QUICK: You don’t necessarily follow tech companies and Apple, people look at as a tech company, but you always looked at as a consumer company.

BUFFETT: It’s a consumer.

QUICK: Yeah.

BUFFETT: Company. 

5. Buffett thinks the Federal Reserve’s biggest worry should be about the status of the US dollar as the world’s reserve currency

QUICK: Warren, let me ask you about the economy because the Fed is in a bit of a quandary right now, just trying to figure out which one of its mandates it’s more worried about. Is it worried about inflation potentially rising more. Is it worried about the jobs market and, you know, potential decline in economic output? What, what of those two issues would worry you most if you were at the Fed right now?

BUFFETT: Well, if I were at the Fed, the thing I’d worry about always is, you know, you’re the reserve currency of the world. I mean, so you’ve got very smart people, very sophisticated people, the American dollar looks like nothing could happen to it. I don’t feel anything could happen to it. But if it does happen to it, I would, I would, I wouldn’t want the responsibility of running the Fed.

6. Buffett would prefer the Federal Reserve to have a 0% inflation target instead of 2%; Buffett is concerned about inflation

QUICK: Did they keep rates low for too long? I mean, I think that’s, as they didn’t worry about inflation, as they said it was going to be transitory? Because I think even Powell himself said that he might wish he’d turned it sooner.

BUFFETT: Well, I wish they had a zero inflation target.

QUICK: Right.

BUFFETT: But, I mean, once you start saying you’re going to tolerate 2 percent, that compounds pretty dramatically over time. And you’re saying to people, if you’re getting less than 2 percent on your money, you’re going backwards. And, actually, if you pay tax, you may pay tax on the 2 percent. You know, I mean, I don’t like that particular goal. But—

QUICK: So, inflation is maybe what you’d be more concerned about? I mean, that’s what Greenspan, Alan Greenspan always said.

BUFFETT: Yes. I would be, I would care about inflation.
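
To put a number on "compounds pretty dramatically": at a steady 2% inflation rate, the price level roughly doubles every 35 to 36 years (the rule of 72 gives 72 / 2 = 36). A tiny sketch of the arithmetic, using no data beyond the 2% target itself:

```python
# How a steady 2% inflation target compounds over time (simple arithmetic, no external data).
for years in (10, 20, 36, 50):
    price_level = 1.02 ** years
    print(f"After {years} years at 2% inflation, prices are about {price_level:.2f}x today's level")
```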

7. Buffett is concerned about the stability of banks, in particular the interconnectedness of the financial system; Buffett does not know enough about the private credit industry to opine on its effects on the banking system

BUFFETT: Yes. I would be, I would care about inflation. I would compare what I really would care about is the stability of the banks.

QUICK: Yes.

BUFFETT: I mean, the banking system, in some sense is very strong, in other sense, is very fragile. I mean, JPMorgan in the last couple annual reports reported doing $10 trillion of business per day. Now, that’s an unsecured policy. Now, they know what they’re doing. Believe me. I mean, there’s nobody smarter than JP– but I don’t want — I didn’t want — during the 2008 period, I didn’t want anything unsecured, you know, out there for a day. I mean, who knew? Nobody was any good. You know, I mean, it, the world is very interconnected and everybody panics. I mean, it, you know, they may say they don’t, but you can call the biggest investment banking firms and they say, well, they don’t answer the phone even if things get bad enough. And if they do answer the phone, you know, they say 10 bid, 20 offered subject.

QUICK: Yes. I mean, Joe will talk about that day that you mentioned in where the Dow was down 21 percent. I think he was, at that point, he said it himself. He was hiding under his desk for the calls that were coming in.

BUFFETT: Yes. And—

QUICK:  Because when liquidity disappears, it disappears—

BUFFETT: 21 percent and that was some day, and it just kept coming. And most of the specialist firms, which then counted for more in terms of the stability of the markets. They were broke. I mean, as I remember, they went around to their banks and said, just don’t pull the loans, you know, but they, people, they were supposed to keep making markets, but people just kept hitting the bid and can widen the spread out. You got circuit breakers now, all kinds of things. But when people are scared, they’re scared. And people, if you yell fire in a crowded theater, everybody runs. Still, it still pays to beat people to the door, you know, and I can get trampled, you know, so, I will stand back there and say everybody to stay calm, you know? But that’s because I can’t run fast. On the other hand, when people come back into the theater, they come in one at a time. They know they don’t have to get into it. But when people panic, they panic.

QUICK: But is it the banking system we should be concerned about right now, or is it the shadow banking system, the private credit at this point?

BUFFETT: Well, it’s all parts of the banking system because they all affect each other and the troubles from one can spread over to another. And, well, you saw what happened, I mean, in 2008.

QUICK: But at risk of potentially, I don’t want people to say that you are commenting on what’s happening in the private credit situation right now. What do you think of the private credit situation right now? Are there enough concerning issues there that you worry that it could cause a contagion—

BUFFETT: I don’t think I know.

8. Buffett is always prepared for a wide range of outcomes by holding significant amounts of treasury bills, but he’s not thinking that there’s something on the horizon

BUFFETT: I don’t, I do not think I know what, but, therefore, I want to be prepared for anything, and, therefore, we will always have, we’ll always have cash around and we’ll have treasury bills. We won’t have money market funds. We didn’t have them in 2008. We won’t have commercial paper in 2008. There’s just one thing that’s legal tender. And, you know, if you own treasury bills, and we have known, we don’t own treasury bonds way out. I mean, but every Monday, the treasury has to sell bills. And as long as they got to sell, you know, X billions worth of bills, I mean, they kind of a, they can print some money to do it, and they’ll do it.

QUICK:  But just to put a fine point on it, you don’t think you know what’s happening out there. You’ve had this huge cash hoard north of $350 billion. It’s just there waiting for any time. It’s not that you necessarily think that there’s something on the horizon. It’s just the longer time goes—

BUFFETT: Oh, sure. No, I always want to have—

QUICK: Yes.

BUFFETT: Yes. And I never want to buy anything just because people think the market is going up.

9. Buffett’s worried about the possession of nuclear weapons by certain countries

BUFFETT: I took that pretty philosophically. I mean, I could handle that. And now, you’ve got nine countries, including, you know, a guy in North Korea. I mean, and there will be, something will happen. And we worried enormously about it when there were two. And we had perfectly, we had really pretty sane leaders in Kennedy and Khrushchev. You know, I mean, you were not dealing with unstable people or anything like that. And. You know, the ships turned around, but people were hiding under their desks with two. I mean, just think how you feel with North Korea having it and Iran wanting to get it. I mean, it — it is — and I don’t have an answer for that. I mean, we did the right thing in 1938 even or 1939. You can go look at it. It’s all over the Internet. The most important letter ever written. And Leo Szilard could not get the message to. He was a famous nuclear physicist. Terrific one. Very funny too. And he couldn’t get the message to Roosevelt, but he knew if Einstein signed the letter, that it would get there, and he finally got Einstein to sign the letter. And that letter was a month before the Germans started rolling into Poland. And I don’t think Roosevelt understood U-235 any better than I do. I mean, you know, but he knew if Einstein signed it, he better do something. And the funny thing is, of course, he was doing it because he was worried about the Germans getting it. And it was actually used on the Japanese. But it, we, we haven’t learned to live with it. Now, we’ve been — we’ve gone 80 years since then. We’ve had a lot of close calls. I mean, we’ve had training tapes put in there that that almost got the president to do something. They’ve had them. I mean, there is no way that the planet has an expectancy of 500 years now when it was 4.5 billion when I was a kid and we had to do it. I’m not faulting anybody. My dad was in Congress. He would have voted for it. I mean, everybody rejoiced on VJ day. You know, I mean, it — it — but there was no way we could undo it…

…QUICK: Yeah. So if you were the president today or if you were advising the president today, what would you say about going after the enriched uranium in Iran?

BUFFETT: I would say that one way or another. In the next 100 years, maybe it’s 200 years, who knows? But one way or another, something will happen that cause it to be used. And we can’t take what’s out there now. And if you thought it was dangerous with the Soviets and us with Khrushchev, who was perfectly rational guy, probably Kennedy, just wait until we, wait until we’re dealing with, you know, the guy in North Korea that criticizes haircut or something, I mean, or, or I would say the most dangerous thing is actually somebody that’s got their hand on the switch who is dying themselves or is facing enormous embarrassment if he figures if I go ever—

QUICK: If you’re cornered, yeah, if you’re cornered.

BUFFETT: Yeah.

QUICK: So that’s still rises to the level of one of the most important and—

BUFFETT: It is.

QUICK: Yeah.

BUFFETT: It’s just that I don’t know the answer for it. But I do know that the — it’ll be more difficult if Iran has the bomb than if they don’t.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have a vested interest in Apple. Holdings are subject to change at any time.

Still More Of The Latest Thoughts From American Technology Companies On AI (2025 Q4)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q4 earnings season.

A few weeks ago, I published Even More Of The Latest Thoughts From American Technology Companies On AI (2025 Q4). In it, I shared commentary in earnings conference calls for the fourth quarter of 2025, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2025’s fourth quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

With that, here are the latest commentary, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s AI-first ARR (annual recurring revenue) in 2025 Q4 (FY2026 Q1) tripled year-on-year; management thinks Adobe’s AI-first business will be the company’s next $1 billion business

Our new AI-first offerings ending ARR more than tripled year-over-year, reflecting progress against this opportunity with individuals and enterprises alike…

…What we had identified as the AI first sort of book of business. That tripled, but that should be our next $1 billion business.

Adobe’s management thinks the company’s success in AI will be underpinned by its deep understanding of the creativity domains, its access to vast data, its delivery of complex workflows, and its great brand; enterprises are increasingly asking Adobe for help on their AI strategy in their customer experience orchestration; management thinks agentic AI will further enable outcome-focused enterprise workflows, and Adobe is uniquely able to meet the needs of enterprises in these areas; emerging new platforms have always been additive to Adobe’s market opportunity; management intends to integrate Adobe with leading AI platforms including Anthropic, Google, and OpenAI; management is collaborating with global system integrators (GSIs) such as Accenture and Deloitte to drive technological transformation 

Adobe’s continued success in AI will be underpinned by our deep understanding of creativity domains, the vast amount of data to which we have access, delivery of complex workflows driving business outcomes, and a great brand across individuals, small and medium businesses and enterprises…

…, Adobe has always been a trusted partner for enterprises and we’re increasingly being asked to help them drive their AI strategy across customer experience orchestration (CXO) globally. Enterprises are looking to the combination of employees and automation to deliver on the demands of content and marketing at scale. Agentic AI will further enable outcome-focused enterprise workflows as customers look beyond speed to elevate creative differentiation, brand governance, and personalized experiences across channels. Adobe’s end-to-end solutions are uniquely designed to meet these needs at scale…

…Emerging new platforms have always been additive to our market opportunity. In addition to Windows, MAC, iOS, Android, Chrome and EDGE, we intend to integrate with leading AI platforms such as Anthropic, Google, Microsoft, NVIDIA and OpenAI— providing customers with access, choice, and flexibility. We’re jointly driving enterprise transformation at scale in collaboration with global leaders such as Accenture, Cognizant, Deloitte, dentsu, EY, IBM, Infosys, Omnicom, Publicis, PWC, Stagwell, TCS and WPP.

Adobe’s management’s approach with AI is to expand access to AI in Creative Cloud and Acrobat, reach new audiences with Firefly and Express, and automate content production in Firefly Enterprise; AI usage at Adobe is growing quickly, with record generative credit consumption; Adobe’s content automation solutions are seeing record number of API (application programming interface) calls

Our approach is to expand access to AI across our existing audiences in products like Creative Cloud and Acrobat, reach new audiences with products like Firefly and Express, and help automate content production in enterprises with Firefly Enterprise…

…AI usage continues to grow quickly, as measured through record levels of generative credit consumption…

… Our content automation solutions continue to see strong enterprise adoption, as measured through record numbers of API calls.  These metrics highlight that we are executing against our strategy to empower individuals and businesses to create content in new ways in the era of AI.

Adobe’s management’s approach with AI across Business Professionals & Consumers is to deliver AI-powered applications that reinvent how users comprehend, create and share content; AI Assistant MAU doubled year-on-year in 2025 Q4 (FY2026 Q1) and Express MAU tripled; Express is now used in 99% of US Fortune 500 companies; Adobe Acrobat Studio, introduced recently, brings all of Adobe’s AI and creative capabilities into PDF tools and is off to a strong start

Our vision for Business Professionals & Consumers is to deliver AI-powered applications that reinvent how users comprehend, create and share content…

…PDF Spaces transforms collections of files and links into dynamic knowledge hubs that allow you to easily collaborate with others. Acrobat AI Assistant provides users conversational experiences that help them comprehend information faster and more accurately with an individual PDF or across documents in a PDF Space. Our Acrobat and Express integrations empower users to turn content they are consuming into generated presentations, infographics, audio summaries and more. It’s clear that these AI-based capabilities are resonating with users, as AI Assistant MAU doubled year over year and Express MAU tripled year over year. Express is now used in 99% of U.S. Fortune 500 companies.

In Q3, we introduced Adobe Acrobat Studio, a single offering that brings together all these AI and creative capabilities with the PDF tools users know and rely on. Subscription upgrades to offerings that include Acrobat Studio value are off to a strong start across routes to market, including Adobe.com and enterprise license renewals.

Adobe’s management is embedding Adobe products directly into chatbots; management launched Acrobat and Express for ChatGPT in 2025 Q4 (FY2026 Q1); management will soon launch similar integrations into Copilot, Claude, and Gemini; management recently launched a Photoshop conversational editing experience in ChatGPT; brands can now create ads for ChatGPT with Adobe’s tools

We are embedding Adobe’s capabilities directly into new conversational platforms. In Q1, we launched both Acrobat and Express for ChatGPT, significantly expanding the reach of our creativity and productivity workflows. You can expect to see similar integrations into Copilot, Claude and Gemini as those platforms support integrated application experiences…

…Photoshop launched a conversational editing experience in ChatGPT…

…Partnership in the OpenAI initiative to enable brands to create ads for ChatGPT

Adobe’s management’s approach with AI across Creators and Creative Professionals is to empower everyone to create, with Firefly, Adobe’s all-in-one creative AI studio, as the centerpiece; enterprises are increasingly turning to Firefly Enterprise to unlock content automation; Firefly users can access over 30 industry-leading models from both Adobe and leading AI labs; Firefly users can edit and assemble images, videos and audio with prompts and in an integrated way with Photoshop and Express; Firefly’s generative credit consumption was up 45% sequentially in 2025 Q4 (FY2026 Q1); Firefly’s generative credit consumption is skewing toward higher-value modalities, with video generative actions up 8x from a year ago and audio generative actions up 2x; Firefly subscription and credit pack ending ARR was up 75% sequentially in 2025 Q4 (FY2026 Q1); Adobe’s management has continued to add new AI capabilities into Creative Cloud applications, which has led to higher AI usage and in turn, a nice ramp in purchases of Firefly credit packs; Adobe’s Creators & Creative Professionals segment saw the traditional Stock business decline faster than management expected; the entire Firefly ecosystem’s ending ARR exceeded $250 million in 2025 Q4 (FY2026 Q1)

Our strategy for Creators & Creative Professionals is to empower everyone to create – from first-time creators to seasoned professionals to large enterprises seeking to scale content production. Firefly, an all-in-one creative AI studio, is the right tool for the next generation of creators and creative professionals…

…Enterprises are increasingly turning to Firefly Enterprise to unlock a new era of content automation.

Firefly is quickly becoming the go-to destination for content generation, ideation and assembly. Users can generate with over 30 industry-leading models, including Adobe, Google and OpenAI. They can collaboratively ideate with stakeholders in Adobe Firefly Boards. They can edit and assemble image, video and audio using Firefly’s prompt-based editing capabilities with integrated Photoshop and Express web journeys. Firefly momentum is strong, with generative credit consumption growing over 45% quarter over quarter. While that growth is broad-based, generations are skewing toward higher-value modalities, with video generative actions growing more than 8x year over year and audio generative actions doubling year over year, reflecting customers moving deeper into AI-assisted creation across the full creative process. As a result, Firefly subscription and credit pack ending ARR grew 75% quarter over quarter.

Creative Cloud applications continue to embed new AI capabilities, making users far more productive. Photoshop added new partner models and support for higher resolution image generation and editing. Illustrator expanded its generative design capabilities with models from OpenAI, Ideogram, and Google to support frequent vector workflows. Premiere added AI Object Mask, which quickly became one of the most used AI features in the application. As Creative Cloud users increase AI usage, we are seeing purchases of Firefly credit packs ramp nicely…

…While Q1 had many highlights, our traditional Stock business saw a steeper decline than we expected. This shift is playing out more quickly than we had planned for and our focus remains on giving customers meaningful choice between stock and generative AI as they build their creative and marketing workflows…

Firefly ending ARR, across Firefly App, Firefly credit packs, and Firefly Enterprise, exceeded $250 million

Firefly Enterprise combines Firefly Services and Firefly Foundry; Firefly Services provides APIs for automated content production workflows, including 3D digital twin workflows, image and video resizing across every social and digital channel, campaign variant generation, and more; Firefly Foundry allows enterprises to build private, deeply tuned AI models trained on their own IP (intellectual property), and gives enterprises a commercially safe model that is able to accurately generate their branded assets; Firefly Enterprise’s new customer acquisition was up 50% in 2025 Q4 (FY2026 Q1) from a year ago; Firefly Foundry recently signed new partnerships in the media & entertainment vertical

Firefly Enterprise, the combination of Firefly Services and Firefly Foundry, is empowering the world’s largest brands to scale content production to unprecedented levels. Firefly Services provide enterprise-grade APIs, giving businesses more than 30 content production capabilities which can be run in automated workflows. These include 3D digital twin workflows for showcasing physical products, image and video resizing across every social and digital channel, and campaign variant generation and assembly for personalized marketing content. Firefly Foundry enables the world’s largest marketing teams and media companies to build private, deeply tuned AI models trained on their own IP. Unlike generic AI models, Firefly Foundry gives enterprises a commercially safe model that understands and is able to accurately generate their branded assets. Together, these products are driving measurable business outcomes, by increasing production scale, accelerating velocity and reducing costs. Firefly Enterprise new customer acquisition grew 50% year over year…

…Firefly Foundry continues to build momentum in the media & entertainment vertical, with partnerships including B5 Studios, Cantina Creative, Creative Artists Agency, United Talent Agency and WME. 

Adobe’s management sees Adobe as the trusted partner for AI-powered Customer Experience Orchestration (CXO) for enterprises; management recently introduced new agents in Adobe Experience Platform (AEP); management recently expanded AEP’s Agent Orchestrator capabilities; AEP now handles 35 trillion segment evaluations and 70 billion profile activations daily; subscription revenue for AEP and native apps grew 30% year-on-year in 2025 Q4 (FY2026 Q1); traffic to retail sites from LLMs (large language models) was up 7x during the 2025 holiday season; traffic from LLMs to retail sites converts 31% higher and generates 254% more revenue per visit; Adobe has products that help brands engage consumers across their owned properties, search, social media, LLMs and agentic channels; Adobe LLM Optimizer helps enterprises improve their websites’ discoverability by LLMs; Adobe Brand Concierge helps enterprises configure and manage agentic AI experiences on their websites and mobile apps; Adobe is in the process of acquiring Semrush and management expects Semrush to help Adobe provide a comprehensive solution for enterprises to shape brand image across their own websites, LLMs, and traditional search; 650 customer trials for Adobe LLM Optimizer, Sites Optimizer, and Brand Concierge are underway; AEP AI Assistant is now used by 70% of all AEP customers

Adobe has become the trusted partner for AI-powered Customer Experience Orchestration (CXO) through our thought leadership, rapid innovation, and omnichannel capabilities, while providing the security, reliability, data governance, global scale, and partner ecosystem that enterprises require. 

Adobe’s unified CXO platform provides solutions for brand visibility, content supply chain and customer engagement. Adobe Experience Platform (AEP) is a leading platform for digital customer engagement and brings together new AI-powered apps and agents to transform how businesses build, deliver and optimize marketing campaigns and customer experiences, as well as reduce costs. In Q1, we introduced new AEP Agents along with expanded Agent Orchestrator capabilities, now available to all AEP customers, via a Try and Buy program. The scale of our platform has grown to over 35 trillion segment evaluations and more than 70 billion profile activations per day. Subscription revenue for AEP and native apps grew over 30% year over year, demonstrating continued momentum and value realization…

…According to Adobe Digital Insights, during the 2025 holiday season, traffic to retail sites from LLMs increased nearly 7x, bringing qualified referrals that convert 31% higher and generate 254% more revenue per visit. Adobe’s brand visibility solution, which includes Adobe Experience Manager, Adobe LLM Optimizer and Adobe Brand Concierge, empowers brands to engage consumers across their owned properties, search, social media, LLMs and agentic channels. Adobe LLM Optimizer enables enterprises to enhance the discoverability of their websites by LLMs and significantly increase their organic traffic. Adobe Brand Concierge is an AI-first application enabling businesses to configure and manage agentic AI experiences on their websites and mobile apps to guide consumers from exploration to purchase decisions, using immersive and conversational experiences. We expect our pending acquisition of Semrush will expand our offering to provide marketers with a comprehensive solution to shape how their brands appear across their own websites, LLMs, traditional search and the wider web…

…Strong customer demand for our agentic web offerings with over 650 customer trials underway for Adobe LLM Optimizer, Sites Optimizer, and Brand Concierge…

…Continued adoption and momentum for AEP AI Assistant with 70% of all AEP customers using the agentic capabilities;

Adobe’s management recently delivered innovation that enabled GenStudio-created content assets to flow directly into activation workflows across Adobe’s stack and some of the largest 3rd-party advertising platforms; Adobe GenStudio’s family of products saw ending ARR grow 30% year-on-year in 2025 Q4 (FY2026 Q1)

GenStudio is our comprehensive content supply chain offering, spanning content ideation, creation, production, and activation…

… In Q1, we delivered breakthrough innovations enabling GenStudio-created assets to flow directly into activation workflows across the Adobe stack and a broad ecosystem of advertising platforms including Amazon Ads, Google, LinkedIn, and Meta. Ending ARR for the Adobe GenStudio family of products grew over 30% year over year as the world’s leading brands and agencies increasingly turn to Adobe to power their content supply chain.

Adobe’s management thinks that only 2-3 really large LLMs (large language models) will succeed because people are not interested in the model but the workflows; management thinks it’s the right strategic move for Adobe to provide a choice of models because customers can then use the right models for the right use cases; management thinks it’s a win-win for Adobe and the model providers for Adobe to be providing different models because the model providers want access to customers while Adobe wants different model-capabilities

My take on the model side would be as follows, which is they’re going to be 2 or 3 really large language models that actually succeed. All of these individual models that exist, small model companies in 1 part of a media ecosystem, I just don’t see how long term they survive because people aren’t interested in just the model, they’re interested in the workflow. And so for us, offering customers with that choice was actually very strategic because we can actually then provide for all of our creative customers the right model for the right case because these all have different brands…

…As it relates to the support of all these models, I think it’s a win-win. They would like access to customers, which Adobe has, and we would like access to these different models because they have different brand attributes. And I think if you look at the larger companies like Google, we’re actually with them and with Nano banana. It’s been a great partnership because we are providing them with a lot of customers and they’re providing us with great technology.

Okta (NASDAQ: OKTA)

Okta’s management thinks the market for securing AI agents is still early; management thinks that Okta is well positioned to help companies secure their AI agents; 91% of organisations surveyed by Okta are using AI, but only 10% have a governance strategy for their use of AI; when management speaks to customers, the customers ask how Okta can help them build and manage agents securely; management thinks that the surface area for threat actors increases as AI becomes embedded in more workflows and automations; management sees AI agents as a new identity type, and securing identities is Okta’s expertise; Okta can secure the entire agentic lifecycle and gives customers the freedom to deploy agents without any ecosystem lock-in; Okta’s solutions for securing AI agents, Auth0 for AI Agents and Okta for AI Agents, treat AI agents similarly to human users; management believes that AI agents are the future of software; Okta for AI Agents became available in early access only in January 2026; Okta’s solutions can enable organisations to observe, govern, and secure the entire life cycle of an AI agent; management thinks identity is even more important in the agentic world than before; management thinks Okta for AI Agents is more unique and differentiated than Auth0 for AI Agents; Okta for AI Agents can help customers understand what different agents are doing

I mentioned that our portfolio of new products now includes our AI products, Auth0 for AI Agents and Okta for AI Agents. It is still early for this developing market, but as the leading modern identity solution for workforce and customer identity, Okta is uniquely positioned to help organizations combat the growing security threat that AI agents represent. The reality is that the AI revolution has moved faster than today’s security frameworks. According to Okta’s AI at Work report, 91% of surveyed organizations are already using AI but only 10% have a governance strategy in place.

In meetings that I have had with customers and prospects over the past six months, the vast majority of the conversations revolve around their AI initiatives and how Okta can help them build and manage agents securely. As AI becomes embedded in more workflows and automations, the growing number of exploitable entry points—from nonhuman identities to unsecured integrations—expands the attack surface for threat actors. It is clear that in order to get AI right, you have to get identity right. Okta was built to meet this challenge…

…AI agents are simply a new identity type, and protecting them is a natural extension of what we do best. Okta’s neutral and independent identity solution is uniquely positioned to secure and govern the entire agentic lifecycle and gives customers the freedom to deploy on any agent without ecosystem lock-in, all while strengthening their security posture. Our two-pronged solution with Auth0 and Okta for AI Agents treats AI agents with the same importance as humans and gives customers everything they need to secure this powerful new technology. 

We are still in the early stages, but we believe that in a few years, agents and agentic systems will not be the exception to how enterprise software is built and operated. They will be the rule. We believe that AI agents represent nothing less than the future of software…

…Okta for AI Agents, which became available in early access in January…

…With our solutions, developers, administrators and IT teams can ensure that the entire life cycle of an AI agent from initial design through active deployment is observable, governable and secure…

…Identity is at the center of — traditionally, in legacy technology, it was always at the center. And in this agentic world going forward, it’s becoming clear to everyone, it’s even a bigger deal than it was before…

…[Question] It seems like you’ve got a real competitive advantage on the Auth0 side. Could you maybe compare, and contrast initial takes for sales cycles, competitive dynamics and velocity of each? I know it’s still early stages, but is Okta for AI Agents in a more competitive market?

[Answer] I think Okta for AI Agents is more unique and more differentiated than maybe we would have expected. I think Auth0 for AI Agents is unique and differentiated as well. But I think maybe the sentiment you’re expressing is it’s different than what we’re seeing. Customers need a solution that’s pre-integrated to all these agentic systems. I mean there’s no good way for customers to even understand what all these vendors are doing in agentic. There’s no catalog of systems that says, Salesforce is doing this. ServiceNow is doing this, AgentCore is this, Google is doing this, Microsoft is doing this. And that’s what Okta for AI Agents does. And then on top of that, models connections and has policy for connections that connects users to different agents, agents to systems.

A financial services platform company is an existing Auth0 customer and it picked Auth0 for AI Agents to build AI agents; the financial services platform found Auth0 for AI Agents offered enterprise-grade identity for humans and agents, and secure access to 3rd-party MCP (model context protocol) servers

An existing Auth0 customer is building AI agents as part of their leading financial services platform. These agents will help the firm’s advisers make better and faster decisions, but to do so, the agents need access to sensitive customer information, which must be least-privileged. And they need to work with existing systems and third-party services inside the financial institution. The customer picked Auth0 for AI Agents as it met their stringent requirements for a secure, extensible platform to build and deploy agentic systems. They needed a solution that offered enterprise-grade identity for humans and agents while providing secure access to third-party MCP servers, all while acting as a single source of truth.

A global business and technology services provider is rolling out AI agents across multiple agent platforms and chose Okta for AI Agents to manage identities for its growing sprawl of agents; Okta is an independent agent-agnostic platform

Another notable deal that included Okta for AI Agents, which became available in early access in January was with a top global business and technology services provider. They chose Okta for AI Agents to help them discover, control and govern identities for their growing sprawl of agents. Rolling out AI agents across multiple agent platforms is key to their ongoing transformation and centralizing agentic identities in an independent agent-agnostic platform like Okta will strengthen their cybersecurity posture.

Okta for AI Agents and Auth0 for AI Agents contribute very little revenue at the moment because they are still very young products, but management thinks they can be a huge source of upside in the coming years; Okta for AI Agents and Auth0 for AI Agents will lead to higher growth in current RPO before it flows down to revenue

Okta for AI Agents is not even generally available yet, and Auth0 for AI Agents is — just was generally available at the beginning of the quarter. So it’s off to a huge start. Now the relative number is small compared to our $3 billion revenue run rate. But looking forward to next year, we’re very, very excited about the potential of these products…

…Because the agentic products are so new, it’s tough to pour too much into our assumptions about growth in terms of guidance. But I think those things could be a huge source of upside over and above the guidance in the years ahead…

…We’re not thinking about this as an opportunity just for FY ’27. This is an opportunity to be accretive to growth for FY ’28, ’29. And we’ll see the results, as you guys know, in current RPO first before we see it in revenue…

There is some confusion that Okta’s customers have between identity infrastructure and identity security; identity infrastructure and identity security are separate things, and Okta is the only company that does both; management sees both identity infrastructure and identity security as being really important for the agentic market; management is not seeing any big change in the competitive landscape for Okta in the agentic market for identity infrastructure and identity security

I think the biggest confusion people have is the distinction between identity infrastructure and identity security. And they hear the word identity, and they think if you’re sitting on top of identity and detecting threats and blocking threats, you’re also identity infrastructure. So that’s one of the big confusions. And when you look at the agentic market, they’re both really important. It’s the identity security, making sure the agents are monitored and checked that they can’t go out of bounds. But just the infrastructure, just the ability for the agents to connect and just for tracking and visibility, that’s an infrastructure play. And we’re the only company that really does both. It’s at the security layer and the infrastructure layer. So I think that is maybe a little bit of a confusion and something that we’re working hard to make sure everyone understands the advantage of that position as well…

…From an Okta standpoint, we’re not seeing any material change in the competitive behavior in our transactions yet. Of course, we’re keeping our eye on the landscape.

Okta’s management has been speaking to customers, and they see 2 ways to charge for agents, (1) a multiplier on what is paid for a person who uses agents, and (2) a fee based on the number of connections made by an agent that is not coupled to a person; it’s still early days for the pricing model Okta will adopt, but management sees the pricing as a nice step up for the company

We have these conversations with our 20,000 customers, we get really rapid feedback on how we can capture value, what would be most valuable for them, easy for them to consume. So it’s really a strategic advantage. We have this feedback loop, and we’ve actually structured the go-to-market team for AI agents to capture that feedback rapidly and feed it right back into the product teams. And what we’re seeing is that there’s really 2 ways that we charge for agents. One is like a multiplier on a person. So in the model where a human identity uses a number of agents to augment their work, there’s a multiplier on that agent or on that — what they pay for a person to what they pay for agents. And then also, there’s a — if the agent is not coupled to a person, there’s a — we sell it based on the number of connections the agent makes because that’s really the value. They want to secure those connections and filter on fine-grain access to all the back-end systems and the SaaS applications and the custom applications and data warehouses the agent connects to as they get more — the agent is more valuable as it has more fine-grained access to different things and it’s more secure. So there’s a multiple based on that. The pricing we’re working with these customers on is pretty early. So we’re — it’s a nice step up.
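As a rough illustration of the two pricing structures described in the quote above, here’s a small Python sketch. All the rates and counts are hypothetical numbers of my own; Okta has not disclosed actual pricing.

```python
# Hypothetical illustration of the two agent pricing structures described above.
# None of these rates are Okta's actual prices.

def human_coupled_price(per_user_price: float, agent_multiplier: float, num_users: int) -> float:
    """Model 1: agents tied to a human are priced as a multiplier on the per-user price."""
    return per_user_price * agent_multiplier * num_users

def autonomous_agent_price(per_connection_price: float, num_connections: int) -> float:
    """Model 2: agents not coupled to a person are priced on the connections they secure."""
    return per_connection_price * num_connections

# Example: 1,000 employees using agents at a 1.5x multiplier on a $6/user/month price,
# plus 40 connections from autonomous agents priced at $20/connection/month.
monthly = human_coupled_price(6.0, 1.5, 1000) + autonomous_agent_price(20.0, 40)
print(f"Hypothetical monthly bill: ${monthly:,.0f}")  # $9,800
```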

From a hypothetical point of view, Okta’s management thinks it’s really difficult and costly to vibe code a competing product to what Okta has built over the years because the vibe coder (1) needs to ensure there are no vulnerabilities and the product can scale, (2) is likely to incur significant inference costs, and (3) will suffer major costs if things go wrong; Okta’s management is hearing customers share views similar to their own when it comes to vibe coding; management is paranoid about competition from vibe coding and Okta is using LLMs and coding tools to build as fast as possible; customers are telling Okta that they do not want to use startups for securing AI agents and they do not want to be locked into just one provider for agents

[Question] When you look at what you’ve built over the years and the data that you’re sitting on, can you talk about sort of the structural advantages that you see over maybe some upstarts or some vibe coding alternatives?

[Answer] I think if you want to build what any SaaS company has done or what Okta has done, it’s years and years of hardening and making sure there’s no vulnerabilities and making sure it scales and it’s reliable. And it’s — if you — I don’t know what the inference cost to build that would be, but it would be pretty significant inference cost. And then if you flip it around, you just think about what’s the price of getting it wrong. And if getting it wrong, it’s hard to validate. It’s hard to prove you have it right. And if it’s wrong, you have a major security breach or you’re down and none of your agents or none of your people can access systems. So the cost of getting it wrong hypothetically and actually just the cost to do it theoretically, if it was even possible theoretically with an LLM or a tool would be pretty high. And that cost could change over time. We don’t know… But when you talk to customers and you hear their challenges and their opportunities, they — a lot of the same things are echoed. They want to identify key infrastructure pillars, and they want to standardize on them. And they see that as the unlock to hundreds of other decisions and hundreds of other builds versus buy decisions they have to make. And they’re putting foundational security, foundational identity in this bucket of things that they want to partner with a leader and trust it and go on top of that and figure everything else out. That’s what they’re telling me. And it kind of matches up with what I would think about hypothetically…

…We are paranoid. And we’re making sure that we are using all the latest technologies, LLMs, coding tools to make sure we have not only something that’s resilient and secure but has the best features and the best capabilities. And so we’re making sure that we build things internally as fast as anyone could build them because we — make no mistakes, the prize here that the whole industry is going after, which is this agentic future where digital labor is part of the TAM is a massive prize. And everyone is at some level; big picture is going to be going after this prize. And it’s exciting because it’s greatly expanded the TAM of what Okta could be…

…They’re reticent to trust a start-up with this critical piece of foundation because they know there’s going to be M&A, and they know there’s going to be start-ups going away. There are so many start-ups playing in this space that there’s bound to be a lot of failure, and they don’t want to build their whole foundation around something and have it be pulled out from under them. And the other factor that is in their minds is that they don’t want to be locked in. Think about — what’s happening at agentic and what’s happening in this world, these foundational models are moving incredibly fast. And its Anthropic foundational model that has the leap ahead and then it’s OpenAI and then it’s an open-source model and then it’s — and that’s going to continue for many years. And they don’t want to be locked into a certain stack and a certain set of tools. So they’re reticent to trust their foundational security with one provider, one platform. And back to the start-ups, they know that a bunch of these start-ups are going to get bought by the big players, so they’re thinking, even if I go with a start-up now, it’s going to get sold and then we’d be locked into Microsoft, and they don’t really want that.

Okta’s management thinks the proliferation of AI agents could massively expand Okta’s total addressable market (TAM); management thinks the CIAM (customer identity and access management) market is changing because of AI agents

Think about identity and what it’s been in the past. It’s roughly $20 billion TAM right now in terms of what people spend on the vendor data. We talk about an $80 billion TAM. I mean this could be bigger than — this could be the biggest part of cyber in a few years for sure. And it could be even bigger than that if you really think about the infrastructure that stitches together the entire agentic enterprise and is the plumbing that makes it run…

…The CIAM market is transitioning to be not just a platform for logging in and doing authentication and authorization, but a platform for customers building agentic interfaces to their customers and to agents coming into their systems. So Auth0 for AI Agents, that’s what it is. It’s a token vault. It helps agentic login. It helps customers hook other AI tools up to their customer login. And so I think over time, that market is evolving into something that’s hugely impactful and value delivering for our customers.
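The token vault idea above is worth unpacking. Here’s a minimal Python sketch of the general pattern, purely as an illustration of the concept and not Auth0’s actual API: the vault holds a user’s long-lived third-party tokens and hands an agent a short-lived, scoped grant instead of the raw credential.

```python
import secrets
import time

# Minimal sketch of a token-vault pattern: the vault keeps users' third-party tokens
# and issues agents short-lived, scoped grants instead of the raw token.
# This illustrates the concept only; it is not Auth0's actual API.

class TokenVault:
    def __init__(self):
        self._tokens = {}   # (user_id, provider) -> long-lived third-party token
        self._grants = {}   # grant_id -> (user_id, provider, scope, expiry)

    def store(self, user_id: str, provider: str, token: str) -> None:
        self._tokens[(user_id, provider)] = token

    def issue_grant(self, user_id: str, provider: str, scope: str, ttl: int = 300) -> str:
        if (user_id, provider) not in self._tokens:
            raise KeyError("no stored token for this user/provider")
        grant_id = secrets.token_urlsafe(16)
        self._grants[grant_id] = (user_id, provider, scope, time.time() + ttl)
        return grant_id

    def redeem(self, grant_id: str, requested_scope: str) -> str:
        user_id, provider, scope, expiry = self._grants[grant_id]
        if time.time() > expiry or requested_scope != scope:
            raise PermissionError("grant expired or scope mismatch")
        return self._tokens[(user_id, provider)]

# An agent acting for a user gets a 5-minute, read-only grant rather than the raw token.
vault = TokenVault()
vault.store("user:bob", "google-calendar", "ya29.raw-oauth-token")
grant = vault.issue_grant("user:bob", "google-calendar", scope="calendar.read")
print(vault.redeem(grant, "calendar.read"))
```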

Okta’s management is working with standards bodies in building solutions for securing AI agents, but they do not think that there will be only one set of standards that will dominate

They’re all trying to do a ton of things and make their services more agentic and more compelling and security and the ability to have them be more enterprise-ready is on their list, but we have to convince them to get it higher on their list. So it’s not like a competing standard is like a prioritization thing. But remember, we are — we want to provide this identity infrastructure and make sure that we give people this solid foundation to build upon. And that’s going to require standardization just because it’s not going to — you can’t use a standard piece of foundation if everyone is doing their own things in a different way, which is why we’re working with standards bodies in general. It’s not just Cross App Access, but it’s an important part of the equation. But I wouldn’t say like the whole war rests on one specific standards body or standards battle. I think it will be an evolutionary thing over the next several years.

Sea Ltd (NYSE: SE)

Monee’s credit business grew in 2025 because of its AI-driven improvements in risk underwriting capabilities; management is experimenting with transformer-based AI models to assess credit risks and the experiments are showing very good performance

Our credit business expansion in 2025 was made possible by improvement in our risk underwriting capabilities. This improvement tapped on our rich ecosystem data and advancement in AI. Over the year, we made good progress training our risk models to better understand and map how user behavior evolves over time. We are better able to assess individual repayment capacity alongside evolving market risk and dynamically adjust the credit limits as needed. Enhancing our models’ precision and performance enabled us to scale rapidly in 2025, while still maintaining a stable risk profile…

…We’re experimenting with the new AI — new risk model with the transformer structure as well to do a sort of a long sequence data training fit into our model to utilize many of the e-commerce data that we are not able to use in the traditional risk modeling, and it has been showing us very good performance.
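For readers curious what a transformer over long sequences of e-commerce behaviour might look like in a credit-risk setting, here’s a bare-bones PyTorch sketch. It is purely illustrative, with made-up event vocabularies and dimensions; it is not Monee’s actual model.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not Monee's actual model): a small transformer encoder over a
# user's sequence of e-commerce/behavioural events, pooled into a default-risk score.

class SequenceRiskModel(nn.Module):
    def __init__(self, num_event_types: int = 1000, d_model: int = 64, max_len: int = 512):
        super().__init__()
        self.event_emb = nn.Embedding(num_event_types, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)  # produces a default probability after sigmoid

    def forward(self, event_ids: torch.Tensor) -> torch.Tensor:
        # event_ids: (batch, seq_len) integer codes for browsing/order/repayment events
        positions = torch.arange(event_ids.size(1), device=event_ids.device)
        x = self.event_emb(event_ids) + self.pos_emb(positions)
        h = self.encoder(x)                 # (batch, seq_len, d_model)
        pooled = h.mean(dim=1)              # simple mean pooling over the event sequence
        return torch.sigmoid(self.head(pooled)).squeeze(-1)

model = SequenceRiskModel()
fake_batch = torch.randint(0, 1000, (8, 120))   # 8 users, 120 events each
print(model(fake_batch).shape)                   # torch.Size([8])
```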

Sea’s management has directed a lot of investments into AI for the Shopee business; for each AI investment in Shopee, management looks at the ROI (return on investment); Sea has used AI to improve the take rate on its advertising business; management recently rolled out multi-modal search for Shopee and the roll-out has delivered clear ROI; management is using AI to help sellers on Shopee; customers are able to talk to Shopee’s sellers with the help of AI and this helps sellers upsell and reduce manpower costs; Shopee has AI-powered tools for sellers to create pictures, videos, and descriptions of their products, and the tools have a fairly positive ROI

I think if you look at the e-commerce side, we do spend quite a lot of effort on the AI. I think you mentioned about AI investment there. For every — for the investment on the e-commerce for AI, we also look at the positive return of investment across the initiatives.

For example, if you look at one of the area we spend on AI is our search recommendation and also ad systems. The uplift on our ad take rate is a consequence of many of our AI efforts. For example, how do we actually expand the description for our products, we can understand the product better. For example, how can we expand the queries from the users, we can understand user intention better. Recently, we also rolled out a multimodal search in our platform as well. So user can search a picture plus a long description, and we are able to serve that just similar to how Gemini or ChatGPT would do. I think all those AI investment has a clear ROI.

We also spent quite a lot of effort using AI to help our sellers. For example, if you go to many of our countries, you can talk to the sellers with the help of AI already. So we built an AI chatbot for our sellers. Our sellers can customize it for their own purposes. This will help the seller to reduce their manpower and also make it not only reduce cost, but also have the better upsell for the buyers. And we also have tools for the seller to create videos and picture descriptions for their products, et cetera. All those typically come with a fairly positive return on investment for our ecosystems.
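To illustrate the multimodal search described in the quote above (an image plus a text description submitted as one query), here’s a simple Python sketch of late-fusion embedding search. The embedding functions are random-vector placeholders standing in for real image and text encoders; none of this is Shopee’s actual implementation.

```python
import numpy as np

# Illustrative sketch of multimodal product search: embed the query image and the query
# text, fuse them, and rank products by cosine similarity. The embedding functions are
# placeholders, not Shopee's actual models.

def embed_image(image_bytes: bytes) -> np.ndarray:
    # Placeholder: a real system would use a vision encoder sharing a joint embedding space.
    rng = np.random.default_rng(abs(hash(image_bytes)) % (2**32))
    return rng.normal(size=512)

def embed_text(text: str) -> np.ndarray:
    # Placeholder: a real system would use a text encoder in the same embedding space.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=512)

def search(image_bytes: bytes, text: str, product_vectors: np.ndarray, top_k: int = 5):
    query = 0.5 * embed_image(image_bytes) + 0.5 * embed_text(text)   # simple late fusion
    query /= np.linalg.norm(query)
    products = product_vectors / np.linalg.norm(product_vectors, axis=1, keepdims=True)
    scores = products @ query                                          # cosine similarity
    return np.argsort(-scores)[:top_k]

catalog = np.random.default_rng(0).normal(size=(10_000, 512))          # 10k product embeddings
print(search(b"<jpeg bytes>", "red running shoes, size 42, breathable mesh", catalog))
```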

Tencent (OTC: TCEHY)

AI is benefitting Tencent’s game content development, user engagement, and marketing efficiency; management believes that Tencent’s business has a high degree of resilience in the age of AI because of (1) network effects, (2) a connection between the digital and physical world, (3) licensing requirements, (4) unique resources, (5) low take rates, and (6) proprietary data; AI can enable faster game development, but the gaming industry is already in a state of oversupply and it will be game quality, which depends on human creativity, that will be the key success factor; management thinks games will benefit from AI as people will have more time on hand

AI contributes meaningfully to game content development, user engagement, and marketing efficiency. Video Accounts total time spent increased over 20% on upgraded recommendation algorithms and enriched content ecosystem. Our marketing services revenue growth outperforms the industry, benefiting from our upgraded ad tech model and newly introduced automatic campaign solution, AiM Plus…

…AI will affect every part of the technology industry, but some products and services are inherently more resilient than others. We believe that some of the characteristics of resilience would include network effects arising from consumer-to-consumer, consumer-to-content creator, and consumer-to-business interactions in descending order of strength. That’s number one. Number two, deep supply chain integration linking the world of bits with the world of atoms. Number three, stringent regulatory and licensing requirements. Number four, scarce or unique resources, including physical and intellectual properties. Number five, take rates that are low compared to value provided or cost of switching. And number six, private data that is closed and interactive in nature. Using these criteria, we look across our major existing businesses. Our conclusion, which is supported by usage trends, is that each one of them has got a high degree of inherent resilience.

In particular, for our communication services, including Weixin, QQ, and Tencent Meeting, people use them to connect and interact with other people, largely their families, friends, and colleagues, and business partners. We believe this need for human interaction, together with the network effects and closed nature of the data arising from these interactions, have resulted in communication services being extremely sticky in the face of competing non-AI services in the past and will continue to be resilient versus AI-based services in the future.

Moving on to our games. They are also very resilient as our multiplayer games, especially PVP games, also enjoy network effects. Similar to sports, they are team-based in nature, and players play with and against other players. Just as people prefer to participate themselves or watch the teams they support compete in sports rather than watching AI sports, game players continue to enjoy the interaction with other humans that our games provide…

…While AI will enable more games to be made faster, the game industry is already in a position of excess supply, with 200,000 new games on mobile and 18,000 new games released on Steam every year. The limiting factor is that new games need to be high quality and more innovative than the best existing games, which in turn requires human creativity on top of cutting-edge technology. Gaming is a natural beneficiary of AI proliferation, especially as people have more time at hand.

Our fintech services are also resilient as they depend on difficult to secure and retain licenses which are limited in nature and also set the boundary on how innovations can be introduced in an industry. We have also invested decades building a payment network of difficult to replicate rails into partner banks, merchants, and connecting them with more than 1 billion consumers, which brings its own network effects. Our mobile payment take rates are already among the lowest in the world, which we believe makes competing with us on price highly uneconomical.

Tencent’s management is deploying AI to strengthen the company’s core businesses; management thinks Tencent is at the forefront in China and globally in strengthening its core businesses with AI; Tencent is using generative AI in its games business to speed up content production, acquire new users, retain existing users, and improve the gameplay experience; Tencent is using generative AI in its marketing services to improve ad conversions and user experiences, allow advertisers to create more ads, and provide the AiM Plus automated advertising campaign solution; Tencent is using AI to enhance content recommendation for Video Accounts; Tencent is using AI to improve content production efficiency for digital content; Tencent is providing AI agents within its enterprise software products; Tencent is using AI in the Fintech business to improve credit scoring and fraud detection; management has integrated AI into Weixin to enhance the user experience in a wide range of areas; the improved user experiences in Weixin include AI agents that autonomously interact on behalf of users within Weixin functionalities (see Point 3 for more on using HunYuan to build AI agents in Weixin); management thinks the trend of AI agents, such as OpenClaw, being controlled through users’ existing communication apps means that Weixin and QQ will be the most efficient places for users to interact with AI agents; management thinks Tencent is already seeing very good ROI (return on investment) when applying AI to the company’s existing businesses

We believe that in each of our core businesses, we are now at the forefront of their respective industries in China and often globally in utilizing AI with positive initial results demonstrated by user engagement and revenue trends.

In games, we are deploying generative AI to accelerate in-game content production, enabling us to produce more content within our big games. We’re using generative AI to facilitate new user acquisition and existing user retention through measures such as targeted ads and personalized daily highlight reels. We’re enriching the core gameplay experience with AI features such as virtual teammates in PVP games and realistic non-player characters in PVE games. These initiatives are one reason why Tencent’s games are more and more evergreen, and our revenue growth of 22% in 2025 outperformed the 7% growth of the global games industry.

For marketing services, we scaled up our advertising foundation model to provide more relevant ads to more targeted users, boosting ad conversions for advertisers and providing better user experiences at the same time. We provide generative AI-powered ad creative solutions, enabling advertisers to create more ads which are more relevant to smaller set of users and more efficiently. We introduced our automated ad campaign solution, AiM Plus, under which advertisers can automate targeting, bidding, and placement, improving their return on marketing investments and increasing their budget allocation to us. These initiatives contributed substantially to Tencent’s marketing services revenue growth of 19% in 2025, outstripping the overall China ad industry growth of 14%.

For Video Accounts, deploying a longer sequence AI model which captures more of a user’s signals to enhance content recommendation is boosting user growth, engagement, and content distribution. Total time spent on Video Accounts increased more than 20% in 2025, and Video Accounts is now the second-largest short video service by DAU in China.

For digital contents, we utilize AI in content production, improving production workflow efficiency, and providing visually compelling special effects. AI also helps in content distribution through more intelligent content recommendations across music, videos, and literature.

We’re using AI in enterprise software to provide features such as AI agents that can take notes on and summarize concurrent meetings for users, and AI agents that generate intelligent summaries of customer service history for merchants. Our enterprise software products, WeCom and Tencent Meeting, are leaders in their categories in China in terms of usage and revenue.

For Fintech, we utilize lightweight AI models to enhance credit scoring processes and facilitate fraud detection, contributing to us sustaining better than industry non-performing loan rates…

…We have also integrated AI to enhance a range of existing user experiences within Weixin, including content consumption, information retrieval, and merchandise recommendation and customer service. We’re building AI agents which autonomously interact on behalf of users within Weixin functionalities, especially Mini Programs. The excitement around OpenClaw illustrates that people recognize AI can unlock computer use capabilities to improve their daily lives but also illustrate the risks around unleashing unsupervised AI. We want AI agents in Weixin to deliver AI productivity that’s beneficial to the general public as well as early adopters, and which will boost ecosystem activity and naturally generate revenue…

…OpenClaw is upgrading AI from thinking to doing via autonomous workflows and continuous task execution. Users control this new generation of AI tools through command line interfaces in their existing communication apps, which generally means Weixin and QQ in China, as it’s the most efficient for users to interact with digital agents in a place and format where they are already interacting with human contacts…

…We have already seen very good ROIs when we apply AI into our existing businesses, right? You know, so if you look at the breakdown of our financials, you know, if you look at the financials on a combined basis and then sort of we break it out and saying, oh, you know, these are the financials with existing businesses plus the investment into AI for supporting these businesses, right? You know, the growth is actually quite strong and if you exclude the investment in new AI products, then you know, the operating leverage is clearly there.

Tencent’s management sees substantial opportunities from combining a strong foundation model with configuration for core use cases such as chatbots, coding, multimodal, and agentic applications; management thinks Tencent is not at the forefront when developing frontier models, but the company has revamped its AI-building capabilities; version 3 of Tencent’s foundation model, HunYuan, is now in internal testing and it is a bigger improvement over version 2 than version 2 was over version 1; management thinks Tencent’s 3D text-to-image and world models are early category leaders; management believes that users of AI agents will have access to multiple foundation models, but integrating HunYuan with Weixin will enable Weixin to have unique agentic capabilities; management spent RMB 7 billion on HunYuan and Yuanbao in 2025 Q4 alone, and RMB 18 billion in 2025, and expects to more than double the investment in 2026; management is confident that the investments in HunYuan and Yuanbao will lead to monetisation; management thinks the AI race is not just one race of model-building, but there are many different races taking place, so they are not worried about Tencent being relatively late; management believes that HunYuan will eventually be a SOTA (state of the art) model in the future

At the foundation model layer, we see substantial opportunities from combining a strong foundation model with configuration for core use cases such as chatbot, coding, multimodal, and agentic applications.

Although we’re not the first mover in large language models, having already revamped our team, improved our data quality, and rebuilt our AI infrastructure for pre-training and reinforcement learning, we’re now iterating more intelligent models at a faster pace. HunYuan 3.0 is in internal testing and currently represents a bigger step in capabilities versus HunYuan 2.0 than HunYuan 2.0 was versus HunYuan 1.0.

For multimodal capabilities, our 3D text-to-image and world models are early category leaders and will increasingly benefit from leveraging our proprietary data and abundant use cases…

…AI agents are currently powered by a multiplicity of foundation models, and we expect that users at the application level will continue to have access to a range of models. However, improving the performance of HunYuan will enable us to offer new, unique to Weixin agentic capabilities. The Weixin and HunYuan teams will work increasingly closely together going forward…

…Our spending on our two biggest new AI products, HunYuan and Yuanbao, was CNY 7 billion in the Q4 of 2025 and CNY 18 billion for the full year. These figures are only for HunYuan and Yuanbao and exclude AI initiatives supporting our existing products and services, as well as exclude costs arising from providing GPUs to external customers via Tencent Cloud. We expect to more than double these investments in HunYuan, Yuanbao, and other new AI products in 2026, which we intend to fund from increasing earnings from our core businesses…

…Over time, we’re confident that monetization will follow usage for these new AI products…

…[Question] I have one question regarding the comment quite a few times that we mentioned that we are not a first mover or we are even a latecomer in AI. In the U.S., we have also observed that it’s becoming very difficult for some of the latecomers to catch up, even for those that have very high resources in terms of compute, talents, and data. How does management get comfortable and confident that we won’t be following the same path in terms of, you know, lagging behind, not able to catch up and around areas on compute modeled applications?

[Answer] If you are playing just one game, then basically it’s hard to sort of, you know, catch up on one game, right? You know, if you view AI as sort of, you know, a multiple of different games, then, you know, there are new opportunities, new frontier that’s opened all the time… All these elements can be packaged together, you know, in the new race of AI. It’s not sort of, you know, one race. It’s actually sort of, you know, a world of many, many races… I think, you know, that will, you know, increasingly manifest itself and as a result, there will be a lot of opportunities for different players to come up and innovate from behind. I’m not sort of, you know, very worried about, you know, being late, but I’d be worried about, you know, if we’re not innovating fast enough…

…Our HunYuan 3.0 is gonna be much better than HunYuan 2.0, and that’s actually just the starting point. I think, you know, over time, we’ll be able to iterate the training of our model faster and, you know, I’m very confident that, you know, if we focus on that, you know, we’ll reach SOTA at some point in time.

Tencent’s management thinks AI chatbots are not the only way to use AI to help people; management thinks AI chatbots are largely competing with internet search; management is still finding product-market fit for Tencent’s chatbot, Yuanbao; management will be deploying HunYuan 3.0 in Yuanbao in the near future and they think this will improve Yuanbao’s user experience; Tencent’s management is seeing that consumers in China are not willing to pay for AI subscriptions, unlike in the USA; management thinks Tencent’s consumer AI products, when introduced to Chinese consumers, will have to be seen as investments upfront because the company can’t charge for them at the moment, but management still thinks the AI products will generate a very attractive return over time

Some observers in Chinese tech are single-mindedly focused on AI chatbots as the only means for bringing AI to users. We believe this mindset is overly simplistic because AI can help people in a multitude of ways beyond powering an information advice app. We believe that AI chatbot applications are largely competing with search applications rather than with every other application. For Yuanbao, our own AI chatbot app, we’re focused on finding product-market fit and use cases which belong in a chatbot AI app. We’re rapidly iterating Yuanbao to enhance its user experience by providing better search integration, improved speech recognition, easier access to multimodal capabilities, and exploration around group chat, which we believe will increase usage and user retention of the app. In the coming months, as we deploy HunYuan 3.0 in Yuanbao, we believe the core user experience will step up further…

…You know, we would be seeing new investments first, right? You know, there’s not that much of a revenue, especially in the context of China. Unlike in the U.S. where you can actually get consumers to pay subscriptions and you can get companies to pay for, you know, coding agents at a very high cost. In China, those are not sort of that available. I think these will present themselves as investments upfront. Over time, we believe, you know, we’ll be able to generate revenue from these new AI products and they would generate, you know, very attractive return for us over time.

Tencent’s management has introduced productivity-enhancing AI tools for OpenClaw; management sees OpenClaw as a decentralised model for how AI works, beyond just having two major chatbots; management thinks that users of OpenClaw will want OpenClaw to work with multiple models

Speaking of OpenClaw, we have introduced a number of AI tools for enhancing productivity, including WorkBuddy, QClaw, and Tencent Cloud Lighthouse. We provide downloadable skills to easily put these tools to use from our SkillHub…

…I think OpenClaw is actually a very exciting concept, right? You know, it actually sort of presents a decentralized model or a decentralized regime for, you know, how AI works in this world…

…For some time, right, AI seems to be sort of, you know, everybody is trying to fight to become the AI, AGI hegemon or monopoly. You know, there seems to be a point in it which like people said, “Oh, if there’s one model which is AGI, then, you know, it would rule over everybody,” right? You know, the reality is it’s not, right? You know, you have multiple models becoming, you know, very strong and, you know, they specialize in different kinds of activities, right? One in chatbot, the other one in coding, and the other one in multimodal. You also have open source, which are, you know, pretty good. You have a lot of other models which sort of, you know, fast followers too. Then there was a time in which, you know, in the two C world [referring to ChatGPT and Claude], there seems to be, the chatbot being sort of, you know, the single entry point. Now with Claw, you can see, you know, it opens up a completely decentralized regime where, you know, many companies can have their own Claw, and the Claw can be using all kinds of different models…

…If you use these OpenClaws, then you know you go into them, and you have a choice. Do you want to use, you know, model A, which is, you know, very high performance and high price per token, or, you know, model Z that’s medium performance and very low price per token, or models, you know, B through Y in the middle? You know, that’s part of the appeal of OpenClaw. You know, HunYuan is, you know, one of those models that is available. You know, we believe with the capabilities of the HunYuan team now in place, that going forward, HunYuan will get better faster, and therefore consumers will naturally increasingly opt to use HunYuan. I don’t think it will be a monopoly situation.
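The model-choice trade-off described above, between high-performance, high-price models and cheaper, weaker ones, can be made concrete with a tiny routing sketch in Python. The model names, capability scores, and prices below are invented purely for illustration.

```python
# Illustrative sketch of the model-choice trade-off described above: pick the cheapest
# model whose capability score clears the task's bar. Names, scores, and prices are
# made up for illustration only.

MODELS = [
    {"name": "model-a", "capability": 0.95, "usd_per_1m_tokens": 15.00},
    {"name": "model-m", "capability": 0.85, "usd_per_1m_tokens": 3.00},
    {"name": "model-z", "capability": 0.70, "usd_per_1m_tokens": 0.30},
]

def pick_model(required_capability: float) -> dict:
    eligible = [m for m in MODELS if m["capability"] >= required_capability]
    if not eligible:
        raise ValueError("no model meets the requirement")
    return min(eligible, key=lambda m: m["usd_per_1m_tokens"])

print(pick_model(0.65)["name"])   # model-z: a cheap model is enough for a simple task
print(pick_model(0.90)["name"])   # model-a: hard tasks justify the premium model
```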

Tencent’s management thinks the company’s investments in AI will follow a path similar to Tencent Cloud’s experience; Tencent Cloud was a late entrant into cloud services in China, but management was patient and knew that Tencent Cloud had scale right from the start; Tencent Cloud focused on high-quality services starting in 2022 which pressured revenue growth for some time, but Tencent Cloud ended up achieving operating profit breakeven in 2024; Tencent Cloud faced revenue headwinds in 2025 because of GPU-supply constraints, but it still grew revenue and earnings; Tencent Cloud has seen a better pricing environment in recent months, which, along with robust AI demand, is allowing it to grow revenue faster; management has ordered a substantially higher volume of compute for Tencent Cloud in 2026, which would facilitate revenue growth; cloud services providers in China were suffering for years because the supply of infrastructure was ample, but the supply is now constrained; management will be passing Tencent Cloud’s higher supply costs to customers

I would like to present a case study on Tencent Cloud as the latest example on how we develop our services into market leaders with economic returns over time. That would follow games, payments, and long-form video. We expect it will be the same for our new AI products. Tencent Cloud was a relative late entrant in cloud services. However, we committed to a patient and long-term investment strategy, believing that it had scale from the start due to Tencent itself being the biggest single end user for a range of technology infrastructure in China, and that it could provide differentiated services arising from Tencent’s unique insights, ecosystem, and capabilities. For example, we believe that we were the first cloud service provider in China to fully recognize the stepped-up capabilities of AMD’s recent generations of CPUs, becoming AMD’s largest partner in the country, and that our cloud video streaming service is the industry leader in terms of streaming quality. 

After a period where Tencent Cloud prioritized the revenue growth somewhat misguided by other industry participants, in 2022, we aggressively restructured Tencent Cloud to focus on high-quality services rather than chasing high revenue but low-value-added activities such as reselling and customizing projects. This pivot cost us several quarters of revenue growth, but it enabled Tencent Cloud to achieve operating profit breakeven in 2024, up from significant losses in prior years. During 2025, although Tencent Cloud continued to face revenue headwinds due to limited availability of GPU for external customers as we prioritize our internal needs, it grew revenue and sharply improved earnings, achieving CNY 5 billion adjusted operating profit. In recent months, we’re seeing a better pricing environment, especially for memory and CPU, which, along with robust AI demand and overseas expansion, allowing Tencent Cloud to grow revenue at a faster rate. Moving through the year, we have ordered a substantially higher volume of compute, which should also facilitate revenue growth…

…For years the industry has suffered because the cloud services providers in China were operating at very low margins. One of the reasons they operated at very low margins was because, you know, if there was a new entrant or if the customers wanted to source infrastructure directly, they were able to telephone the supplier and, you know, order the infrastructure that they wanted from the supplier of, you know, CPU or GPU or DRAM. You know, that’s no longer the case. You know, now, the supply is booked out months, quarters, in some cases, years in advance. You know, the supplier is prioritizing the biggest, most regular customers, which are the hyperscalers such as ourselves. Therefore, you know, the smaller cloud providers no longer have certainty that they can source supply, and they need to come to the hyperscalers. You know, the hyperscalers have been operating at low margins and so, you know, when the demand picks up, then, you know, we almost sort of as an industry have no choice but to pass through higher prices. You have seen a number of price increases in China cloud in the last 24 hours as a result…

…We seek to deliver, you know, more value through, you know, enrichment. Enrichment means that, you know, at a minimum, if you have, you know, compute, you can rent it out bare metal and you get a certain low price and low margin. You know, preferably you rent it out. You subdivide it and virtualize it into tokens, and then you get a higher price and higher margin per unit of compute. Ideally, you bundle it into a platform as a service or software as a service. Then you can get, you know, the best pricing and the best margins. That’s part of the journey that we’ve been on, and that’s part of, you know, how Tencent Cloud has moved from a very substantial losses four years ago to pretty substantial profits last year.
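Management’s “enrichment” point, that the same unit of compute earns progressively more as it moves from bare-metal rental to tokenised inference to a PaaS/SaaS bundle, can be illustrated with a toy margin calculation. Every number below is hypothetical and of my own making; Tencent has not disclosed these figures.

```python
# Hypothetical numbers only, to illustrate the "enrichment" idea: the same unit of
# compute earns more revenue and margin as it moves up the stack. Not Tencent's figures.

unit_cost = 100.0   # assumed cost of one unit of compute per month

tiers = {
    "bare metal rental":   110.0,   # thin markup over cost
    "tokenised inference": 160.0,   # virtualised and sold per token
    "PaaS/SaaS bundle":    250.0,   # compute wrapped in a platform or application
}

for tier, revenue in tiers.items():
    margin = (revenue - unit_cost) / revenue
    print(f"{tier:>20}: revenue {revenue:6.0f}, gross margin {margin:5.1%}")
```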

Tencent’s management added Tencent CodeBuddy to Weixin’s developer toolkit, enabling developers to create mini-programs using natural language; management provided developers of AI native mini-programs with free compute resources

For Mini Programs, total user time spent increased over 20% year-on-year, driven by workplace productivity tools, mini-games, and novels. We added Tencent CodeBuddy to our developer toolkit, enabling developers to create mini-programs using natural language input, and we provided developers of AI native mini-programs with free compute resources.

Tencent’s management is using AI in Delta Force to improve user engagement and development efficiency

Delta Force leverages AI coding for development efficiency and deploys AI-powered companions to enhance user engagement. 

The Marketing Services segment’s revenue was up 17% year-on-year in 2025 Q4, driven by improved ad targeting, expansion of closed-loop marketing services, and tailoring of ad formats for specific advertiser use cases; management will be deepening Marketing Services’ collaboration with e-commerce platforms; management has increased the inventory for rewarded video ads and for Video Accounts; Weixin Search’s overall query volume grew rapidly in 2025 Q4 because of AI enhancements to search results, driving commercial query volume

For marketing services, revenue increased 17% year-on-year to CNY 41 billion. We experienced rapid growth from the internet services and local services categories, partially offset by slower growth from the e-commerce category due to platforms temporarily shifting budget from marketing to subsidies, and also from the financial services category due to the impact of policy changes affecting online lending during the quarter. Growth drivers included improved ad targeting, expanding our closed loop marketing services, and tailoring ad formats for specific advertiser use cases, such as ads that are playable previews of the mini games being advertised.

Entering 2026, we have deepened collaboration with e-commerce platforms, facilitating their merchants advertising within Tencent, and we’ve increased the inventory for rewarded video ads and Video Accounts, which have contributed to faster year-on-year marketing services revenue growth in the Q1 to date versus in the Q4 of last year.

At a product level, Video Accounts total time spent increased due to upgrades to the content recommendation algorithm, enabling faster growth in ad impressions while our ad load remained lower than peers. Better conversion rates contributed to more marketing spending for Mini Shops merchants. For Mini Programs, consumers engaging more with mini-games and mini-dramas attracted more marketing spend from the mini-game and mini-drama studios. Weixin Search overall query volume grew at a rapid rate due to AI enhancements to search results, driving growth in commercial query volume, while search pricing also increased.

Tencent’s management has obtained additional AI compute through leasing, through purchasing imported GPUs (likely referring to NVIDIA’s GPUs), and through purchasing domestic GPUs; the priority use-cases for Tencent’s AI compute is for HunYuan and the company’s new AI products; management currently does not want Tencent to design its own AI chips; management thinks there are many options for AI inference chips in China, and this has brought down the cost of inference chips; management wants Tencent to leverage the best training chips to build models

In terms of GPU constraints then, we’ve been quite actively provisioning more compute, and that will be coming on stream progressively and increasingly quickly through this year, especially the H2 of the year. You know, that additional compute comes from leasing capacity. It comes from us purchasing higher-end imported GPUs which are now becoming available again, and it comes from us purchasing the increasing quantity of domestically China-designed GPUs. In terms of utilizing that compute for different use cases, you know, the priority right now is, you know, HunYuan and our new AI products more generally…

…[Question] We’re seeing a growing number of your tech peers are prioritizing the development of in-house chip design capabilities. I’m just curious where in-house chip development fits into Tencent’s own AI priorities.

[Answer] I think at this point of time, it’s not the most critical thing that we’ll be focused on. So if you look at the chip, you know, there is, you know, a difference between training chip and inference chip, right? You know, and for training chip, it’s actually very, very difficult to design and you manufacture, and you actually want to have access to the most state-of-the-art, you know, training chips to the extent possible and in the most flexible way so that, you know, you can actually sort of keep training for the best model. 

And then, you know, if you’re talking about inference, right, you know, I think inference, it’s mostly for cost. I think for cost at this point in time, there’s actually a lot of different suppliers in China, which is actually very different from, let’s say, in the training space, right, where there’s essentially one player or two players who can actually command a very, very high margin, right? You know, in the inference world, people basically sort of, you know, are earning much lower margin, and there are many more solutions and, you know, options. So, I think, you know, the key for us is actually sort of leverage the best training chips to train the best model at this point in time, and there’s a lot of value in being focused.

Tencent’s management thinks it’s really difficult right now to tell which layer of the AI technology stack will be commodities

[Question] If we think about the AI stack between, you know, the models, the orchestration layer, the application layer and so on, which parts would you say are most critical for Tencent to be best in breed versus, you know, areas where we think these will be commoditized?

[Answer] I think at this point in time, it’s actually very dynamic, right? You know, you’re in a fast-moving market. I think, you know, it’s very difficult for someone to say sort of, you know, oh, you know, there will be one layer more important than the others, right? You know, I think, you know, we have the resources, we have the people, we have the team to actually invest in all these layers.

It’s currently not possible to use AI to build games completely from scratch

There is not yet the capability to create games, you know, completely from scratch using AI for a number of reasons that we can get into.

Tencent’s management is seeing AI create demand for memory chips in two ways, namely, (1) GPUs requiring high memory capacity, and (2) AI creating software that requires memory to execute

You know, when people utilize the agentic tools that we’ve been discussing, they’re using them and they create software. You know, that software, you know, then primarily, it needs to be executed. When it executes, most of it is not executing on a GPU. It’s executing on CPU, and then as it executes, it creates, you know, memory demands. It’s not just, you know, GPU, DRAM, HBM where we’re seeing demand picking up. It’s also, you know, CPU. It’s, you know, regular RAM. It’s SSD. It’s hard disk drive.

Veeva Systems (NYSE: VEEV)

Veeva’s management thinks core systems of record such as Veeva will incorporate and work seamlessly with AI and not be replaced by it; Anthropic’s recent launch of Claude for Life Sciences has Veeva as a launch partner; management thinks LLM (large language model) providers’ launches of life sciences products will not cannibalise Veeva’s products; management thinks AI is a very positive thing for Veeva because it helps Veeva create and improve its software faster; management thinks core systems of records will be used by both agents and humans; management thinks it’s still early days of AI and it will play out over 10-20 years; management thinks the LLM providers and Veeva will have a symbiotic relationship; management thinks the LLM providers will not be interested in industry-specific software

There’s a lot of hype and fear that AI will replace today’s software systems. The reality is, not all software is the same. Core systems of record like Veeva, SAP, and Workday are essential and will incorporate and work seamlessly with AI, not be replaced by it…

…[Question] Anthropic made a lot of noise when they launched Claude for Life Sciences and signed up a lot of deals and maybe lost in that was Veeva is an enabling and launch partner of Claude for Life Sciences. So Peter, how should we be thinking about the opportunity for Veeva to work with Anthropic, OpenAI, all the different kind of model providers out there, provide your domain expertise, provide the workflow expertise and kind of have a rising tide lifts all boats situation rather than obviously the current market view of it being more cannibalistic?

[Answer] I certainly don’t view it being cannibalistic for Veeva, absolutely not. I mean let me state that clearly. AI is a very positive thing…

…And these core systems are going to be used by agents as well as human users. Yes, that’s new. But these systems are essential, and they’re not going away…

…So we’re really in these early days of AI and people get a lot of hype and they think it’s going to play out over 1 or 2 months. It’s not. It’s going to play out over 10 or 20 years…

…Specifically for Veeva, AI, that’s going to help us create and improve our core systems faster than before. So that’s where it will help our software development but not at the expense of quality, predictability, regulatory compliance and the real value that customers depend on…

…Anthropic or OpenAI and others, that’s an engine, and their engine will be used for a lot of things. They will be used by the Veeva applications or by custom applications that customers develop. So yes, it’s good for those large model providers. Now they have to watch their profitability, et cetera, but they’re an engine in the new wave of cloud computing. So that’s the new AWS, et cetera. So it’s a good business there. But just as AWS itself and also Microsoft Azure, Google Cloud, et cetera, that was very good business for those hyperscalers. But I think what sometimes gets lost, that actually enabled Veeva. You couldn’t have built the industry cloud for life sciences. You couldn’t have built that long tail of applications without those cloud infrastructure providers. And it’s the same way here with these large language models. Veeva could not build the AI applications that we’re going to build without these foundational LLMs. So I don’t know if I’ll use this word correctly. I think the word is symbiotic. I think so…

…I don’t think the AI vendors are really making industry-specific software applications, right? It takes a lot of dedication and effort to do that. So I think it’s a very symbiotic relationship. Just like the cloud area, yes, Amazon didn’t make industry-specific applications either. I don’t really see — why would somebody like Anthropic do that, right? They’re going to make broad applications and applications for coding itself, et cetera. That’s what I feel would happen.

Veeva’s management thinks the agentic layer will provide far broader value than LLMs (large language models); management thinks AI agents are a substantial opportunity for Veeva; Veeva has Vault CRM Free Text Agent that captures rich, compliant call notes; Veeva has PromoMats agents that deliver approved content faster; management will be introducing regulatory and safety agents in 2026 (FY2027); management thinks building industry-specific AI is difficult and requires proprietary data, sophisticated logic, domain expertise, and more; management thinks Veeva’s agents, if built well, can provide a lot of value to customers; management thinks Veeva is in a great position to lead in industry AI for the life sciences industry; management is making great progress on Veeva’s first two AI agents for safety, and they will be launched in April 2026; management is pleased with the progress of PromoMats agents; there are early adopters who are live with PromoMats agents; management thinks their approach to data is resonating with the life sciences industry when building AI use cases; customers are excited about the PromoMats (Promotional Materials) agents because the agents really work and the customers have been burnt by failed AI experiments; management is seeing PromoMats agents delivering very clear ROI (return on investment) for customers; the two AI agents for safety that will be launched in April 2026 provide clear value for customers because they automate workflows that would require expensive labour; management thinks it’s still early to nail down the right pricing model, but Veeva will be going with a token-based pricing model; management is seeing most customers go with Veeva’s agents instead of building their own agents with Veeva AI

While the major large language models are the catalyst for this shift, the agentic layer provides far broader and more diverse value. The agentic transformation underway represents a substantial opportunity for Veeva and life sciences. With our core systems of record spanning the industry’s most critical functions and unique datasets, we can deliver industry-specific AI deeply integrated into our core applications. 

For example, Vault CRM Free Text Agent captures rich, compliant call notes for deeper customer insights. PromoMats agents help deliver approved content faster. Regulatory and safety agents coming this year can streamline health authority interactions and safety case processing. And this is just the beginning.

Building reliable industry-specific AI across a wide range of use cases for a highly regulated industry is hard. It takes time, focus, and the right skills. It integrates proprietary datasets, sophisticated logic, validated processes, and depends on specialized domain expertise and safeguards to maintain compliance and data integrity. If done well, our agents will provide significant value for customers and Veeva.  

It’s early days for industry AI, and we are in a great position to lead. We have a well-established life sciences cloud that’s expanding to connect the industry, strong momentum with Veeva AI, and much more innovation on the way…

…We are also making great progress on our first two Veeva AI Agents in safety, Case Intake and Case Narrative coming in April. Customer interest is high as the industry looks to AI to drive efficiency in safety case processing…

…I am also pleased with the progress of Veeva AI for PromoMats. A number of early adopters are now live, more projects are underway, and the success of these agents is generating a lot of interest…

…Our unique and modern approach to data is resonating with the industry, providing a harmonized data foundation that fits seamlessly with our commercial software. High quality, standardized and connected data is critical for speed and efficiency and is a required foundation for AI…

…For example, in the promotional materials management area, and they’re pretty excited like that I can have a winning AI application that really works and is really durable and is from Veeva because they’ve been — a lot of them have been burned on a lot of experiments, but it’s not easy for customers to admit failed experiments because that’s just the dynamics. You don’t like to admit that. And failed is too hard of a word. Sometimes the experiment doesn’t work out, but it’s not a failure. You got a lot of learnings. But the experiments that can actually scale, they’re rare so far, and they know Veeva’s — we won’t do things unless we can scale them…

…[Question] Can you maybe speak to early proof points that you’re seeing on AI agents that, I guess, you’re planning to roll out over the course of the year? Are there any sort of ROI or tidbits from clients that you’re hearing that you can kind of comment on ahead of these releases?

[Answer] The one that’s farthest along, and we have multiple projects underway, is the commercial content area. And that — the ROI is just very clear. It’s faster content, lower cost to create that content, and that’s what it’s all about. Lower cost to create that content, I won’t quote specific numbers, but that’s pretty clear to quantify. Faster content just means better launches. That means that drives the top line before the patent on that product expires. So I get asked by that — by customers all the time. They know in the age of really omni-channel experience for their customers, which are patients and health care providers, omni-channel experience that includes AI doctors and large language models, the speed that you can get your content out there in a compliant way is just going to be critical. So the old way of approving content is just not going to suffice anymore…

…In terms of AI, it’s pretty clear there in — there’s a lot of human processing of case intake and case narrative generation that’s done by people. That’s not necessarily that high risk, but it has to be done well. And it’s expensive to hire those people, and it’s not easy. So in safety, it’s just very clear. It’s about replacing that type of labor with automation, with AI software…

…It is, as you said, still quite early. As we’re starting this year, we’re really expecting to be using a token-based pricing model, and so that gives us a little bit of predictability around the margin profile. But that may evolve over time…

…[Question] Within Veeva AI, what is the mix of customer adoption you’re seeing right now between prepackaged agents that you’ve built and custom agents that they’re building using Veeva AI?

[Answer] The bulk of it is with our agents that we’re designing. So part of it is our — I guess, our agents are probably a little more robust than our custom tooling right now. But if you look at our agents, there’s detailed work in the agents, right? There’s detailed data curation. There’s detailed testing pipelines. There’s a lot of logic in the agents, right? When we talk about AI agents, there’s a lot of logic, specific logic written in our Java code that’s hard that needs great product management. So in general, customers would rather get that solution rather than build that themselves.

Veeva’s management is not seeing AI considerations as a major theme in the company’s customer wins in 2025 Q4 (FY2026 Q4)

[Question] I wanted to ask if Veeva is starting to see some programs funded maybe in the name of AI readiness. I would imagine for a top 20 to commit to Veeva in any of the R&D areas, RTSM, quality, safety, it would seem you’re going eyes wide open into really viewing Veeva as a future foundation for everything AI related that is to come. And so I’m wondering if there’s an AI influence that you’re starting to see that’s contributing to the strong demand here at year-end.

[Answer] I wouldn’t say that’s a broad theme. There are cases, and it varies by area. More of the theme is, hey, we need core systems that will scale, either their existing systems are aging. So we talked about a top 20 safety win. There, their existing systems, because they were doing other things over the past years and just lots of deferred maintenance and that was going to become a critical risk for the company, so they have to get that in. There are some cases where it will help our data business. They’re trying to clean up their reference data because they know AI is not going to work because, okay, garbage in, garbage out. So there’s a little bit of that, but more it’s just modernizing, getting rid of legacy and looking for increased automation. AI is — really, the goal there is automation, right? That’s the goal. But AI is not the only way you do automation. Part of it is you do automation through a system to have clean workflow. So it’s a driver, but I wouldn’t say it’s a major driver.

Veeva’s management is seeing life sciences companies group AI players into 4 buckets, namely, (1) the LLM providers, (2) the point solution providers, (3) their own in-house development teams, and (4) core application providers such as Veeva; when life science companies talk to Veeva about AI, they want Veeva to provide more AI solutions that are tightly integrated with their core systems because they trust the company; Veeva’s management thinks the company’s customers really want it to win in AI applications

They bucket into 3 — maybe 4 types of people that might be able to help them. One is the infrastructure providers, the LLM providers themselves, Anthropic, OpenAI, Microsoft in that camp, Amazon, NVIDIA, those types of things, what — how can they be leveraged there? And then they would look for point solution providers. There’s a specialized group of people in the specialized department, and they can do this proof of concept or maybe you scale it for me here. And then there’s their own employees doing custom software, and then there’s system integrators. And then you get the core application people like Veeva, like Workday, like SAP…

…When they’re generally talking to us, they want us to provide more AI solutions that are tightly integrated with their core systems because they trust Veeva, and they know we deliver quality and really know when we say something is going to work, it’s going to work, right, because our reputation is on the line versus a small start-up can just say whatever they want…

…Our customers really want us to win in AI applications. And so we have a right to win, and we just have to execute.

Veeva’s management thinks the real bottlenecks in life sciences are not the pace of drug discovery, but finding patients for clinical trials, and the pace of a patient getting the right drug for treatment; management thinks these bottlenecks are where AI can play the biggest role, and where Veeva can help; management thinks AI cannot speed up clinical trials by much

[Question] Given how mission-critical this is and maybe how much it can be tied not just to better revenue outcomes but more importantly, better patient and better health care outcomes and better societal outcomes, do you see an opportunity to not just automate and drive faster time to value and efficiency but even leveraging AI within the Veeva platform to allow for better drug development, safer drugs out of the market, basically better outcomes rather than just faster time to value?

[Answer] Drug discovery is one thing, and there’s a lot of focus on that. And yes, that will get faster, but that’s not the real bottleneck. The real bottleneck is the clinical trial, the experiment that’s done in the human. And we’re always going to have to do those experiments in the human, and the human biology runs at the same speed. So that always has to be done, and the bottleneck now is finding the patients around the world that can get in those trials. So that’s one.

But the biggest bottleneck by far is there’s a patient somewhere out there in the world. They’re diagnosed with something by a doctor. How long did it take them to get diagnosed? And when did they get the right medicine that will best treat them? That’s where 90% of the value in life sciences is lost, because of that impediment, the basics of is the patient informed. Can they get to the right doctor? Is the right doctor informed? Is the payer informed? It’s — that’s where 90% of the value is lost. And I said value is lost, but on the other side, there’s a lot of people who don’t get treated correctly or timely around the world. And that affects productivity. That affects their family…

…So this is really important for us, and AI can definitely, definitely, definitely bridge that gap. AI doctors and large language models can help bridge that gap between doctors and patients, so maybe that 90% inefficiency goes down to 50%, and that will be a tremendous boon. And yes, Veeva will definitely play a part in that by connecting our customers, the industry, to its external ecosystem. And its external ecosystems are clinical researchers, patients and doctors and regulators. And the industry is not well connected, and AI is going to provide a better method to do that…

…About AI speeding up clinical trials, I think AI can speed things up some, maybe in the start-up and in the close down, but not that much really. It’s still based on the clinical protocol of the medicine, which is based on the time it takes the human body to deal with that medicine and to prove it out, and then the patient recruitment, which I don’t think is actually an AI problem, the patient recruitment. So AI will speed it up some, but not so much in clinical trials.

Veeva’s broad product suite is an advantage for customers when they are trying to implement AI

Let’s say they’re doing something with us in safety and they start doing an AI solution with us in safety. And 2 years from now, they go with us in clinical data management, and a year later, they put in an AI solution for clinical data management. Well, that AI solution is going to work with their safety solution pretty much out of the box. And that’s a benefit they never planned for that they’re going to get. So I think customers start to see that it kind of fits together with Veeva.

Veeva’s management thinks customers are starting to realise that Veeva is the only company that can provide AI solutions that are also connected to all their other systems; management thinks customers are also starting to realise it’s not so easy to build and maintain their own AI solutions

But I think they’re starting to realize if you want to have a potential future where you have a great core safety system that has safety AI on top of it and is connected to your other systems in your company, Veeva is the only place you’re going to do that unless you’re going to build it yourself. I think most people are also starting to realize now that it’s not that easy to build and maintain these things themselves. So that’s kind of what’s leaning into our favor on the AI.

Wix (NASDAQ: WIX)

Wix’s management thinks AI and the acquisition of Base44 have dramatically expanded Wix’s market opportunity; the addition of Base44 has allowed users to build applications, content, and websites that are much more powerful and sophisticated than before

What started as a simple do-it-yourself website builder has grown into the leading online presence creation platform serving not just self creators, but also businesses of all sizes as well as professional designers and developers. In recent years, the web has undoubtedly become much more AI-first. That shift is redefining how and what people build online. AI has dramatically expanded the world of what is possible and created new dimensions that had not existed before. As a result, Wix’s market opportunity today is exponentially larger than in 2025, primarily driven by our expansion into the application space facilitated by our acquisition of Base44…

…With the addition of Base44 to our platform, users can now build tailored software applications, smart mobile applications, pro-level visual content, and, of course, websites, but so much more powerful and sophisticated than ever before. These are all things you can create on Wix.com Ltd. today, which is incredible, but the possibilities ahead are much, much bigger.

Wix Harmony is a first-of-its kind website builder that blends visual editing with vibe coding; Wix Harmony is an AI layer that spans the entire Wix experience; Wix Harmony was launched in English in January 2026 and management will expand Wix Harmony globally in other languages; management is very pleased with the early conversion and monetisation of Wix Harmony; management intends to make Wix Harmony the default Wix experience for new and existing users over time; management expects negligible AI inference costs associated with Wix Harmony in 2026; management is not seeing Wix Harmony and Base44 cannibalise each other’s customer base; management built Wix Harmony for the self-creator market; users of Wix Harmony are using it for the same purposes as the old Wix; Wix Harmony currently does not support a database, but will soon do so; early users of Wix Harmony have better conversion, faster monetization, and higher ARPU (average revenue per user)

Wix Harmony is the first-of-its-kind website creation platform that blends intuitive visual editing with the flexibility and power of Vibe coding. Wix Harmony provides a unified AI layer that spans across the full Wix.com Ltd. experience, allowing for a real AI partner to be with you every step of the way as you create, manage, and grow an online presence or business. After launching in English in January, we are now expanding Wix Harmony globally in other languages, and I am very pleased with the early performance we are seeing, particularly across conversion and monetization metrics. We believe Wix Harmony has the potential to fundamentally reshape how individuals and small businesses build and scale online, not just on Wix.com Ltd., but across the Internet as it becomes increasingly AI-driven. Over time, we plan to gradually make Harmony the default experience for new and existing users, an evolution we anticipate will drive meaningful long-term impact across conversion, engagement, retention, and monetization…

…Negligible AI inference costs associated with Wix Harmony as a result of proactive infrastructure optimization completed last year…

…[Question] Just stepping back, what types of businesses or applications are you seeing users set up with Base 44? And how much crossover is there with what you see on Wix.com Ltd.’s core platform?

[Answer] We do not see any kind of competition, and you can see that they mostly have very different usage, as you can see now. Clearly, Harmony is accelerating, Base44 is accelerating. So, obviously, we do not think they take from each other…

…Harmony is a product we built for the self creators…

…We are pretty much seeing everybody using Harmony that was using Wix before. So it is everything from personal websites to the hair salon website to large companies and enterprises, so pretty much everybody. At this stage, Harmony does not support a database, but that will be added soon…

…[Question] On Harmony, just curious what the early cohort KPIs that you are seeing there in terms of conversion, ARPU, attach rate, churn, relative to the traditional cohorts and how durable you see these KPIs across your geos?

[Answer] We see a very good performance of the new cohorts. We actually see a better conversion, faster monetization, and also higher ARPU. So we believe, we hope that this strong trend will continue. Again, I think that it is too early, but we feel very positive about the first reaction and performance of this product.

Base44 expands Wix’s reach into vibe coding; Base44’s user base is scaling rapidly, with the number of new Base44 users today nearly 2/3 of the number of new Wix users; Base44 has reached $100 million of ARR (annualised recurring revenue) just 1 year after its founding and 9 months after Wix’s acquisition (Base44’s ARR was just a few million dollars when Wix acquired it); management is starting to see Base44 being used by enterprises from different industries to build their own software solutions; Base44’s current growth is completely organic as Base44 has no sales team; management believes the potential for vibe coding still lies ahead as the technology reaches the broader online population; 1/3 of Base44’s AI inference costs today are for free users; Base44 has positive non-GAAP gross margin today; management thinks Base44 has a tROI (time return on investment) of less than one year; management thinks there is a great opportunity for partners to use Base44 in the future; Base44 is driving users who joined Wix 10-15 years ago to become paid users

The second new pillar of our strategy is Base44, our leading Vibe coding platform that expands our reach into the vast world of software creation and significantly grows our TAM…

…Base44’s user base is scaling rapidly. Today, the number of new users joining Base44 is nearly two-thirds of the number of new users joining Wix.com Ltd…

…Just one year after Maor founded the company and nine months after our acquisition, Base44 recently reached approximately $100 million of ARR, placing it among the fastest growing software platforms in history. While Base44 is already emerging as a top platform to build lightweight personal projects, we are seeing adoption from a growing community of businesses and enterprise-sized organizations too. Companies in the tech, banking, and healthcare industries, as well as government organizations and nonprofits, are using Base44 to build customized software solutions. We are seeing users develop their own CRM capabilities, product and project management tools, ERP systems, workflow automation frameworks, and financial reporting applications.

Importantly, this momentum and growth is completely organic. With no sales team at Base 44 today, self-propelled adoption by enterprise-size organizations demonstrates the strength of the platform as well as our successful marketing execution…

…I believe the real potential still lies ahead as Vibe coding permeates beyond early tech-forward adopters to the broad online population…

…Base44 finished the year with approximately $59 million of ARR, above our expectations at the time of acquisition. Excitingly, Base44 recently reached approximately $100 million in ARR, a major milestone that underscores our rapid growth and growing market leadership. Strong ARR growth was driven by product innovation that has resonated, a rapidly expanding user base, improving conversion and consistent upgrade and renewal trends…

…Approximately one third of Base44’s AI inference costs today is attributed to token consumption of free users…

….Even after incorporating AI-related costs associated with free users into cost of revenue, Base44’s non-GAAP gross margin is positive today and is expected to improve as the year progresses…

…Base44 is a very young company, a very young product. And, by the way, this is why we are also very conservative about the guidance. But right now, based on the information that we have, based on the history that we already have, we are looking at less than one year of tROI, and this is how we manage the acquisition cost…

…Base44 has a ton of interesting things for our partners that they can actually use for their customers, and it is more revenue stream for them. So we believe that although right now most of it is self creator-led, we believe that it is a great opportunity also for partners to use Base44 in the future…

…Base44 is a very young product. On the Wix cohorts, we are seeing people converting who joined us ten or fifteen years ago. That is amazing.

Wix’s partnership with OpenAI is not built on APIs in the standard way, but rather, it’s built on two AIs that are collaborating

[Question] In addition to the apps partnership with OpenAI, do you see potential opportunities in terms of how Wix.com Ltd. websites are navigated and searched by OpenAI in the future, particularly ChatGPT?

[Answer] It is not APIs in the standard way, it is essentially two intelligences that are discussing and working together to give you a website. And that is a fantastic pattern that can be grown a lot.

Wix’s management has given Wix users the ability to open their websites for LLMs to crawl and read if they want to; Wix users can even give LLMs more content than what is offered over a website

As for how OpenAI or any other LLM can read Wix sites, we support pretty much everything. We support, of course, make text. If our customers choose so, we can make the text visible and easy to crawl, and built in a way that is very easy for the LLMs to process. And we also have ways to give the LLMs more than just the content that we normally offer over the website, because LLMs like to read a lot of content, while humans tend to want to read less.
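The quote above only describes this capability at a high level. As a purely illustrative sketch, here is what serving an LLM-oriented view of a site alongside the human-facing page could look like in Python; the framework choice (Flask), the route names, and the sample content are my assumptions, and this is not Wix’s actual implementation.

```python
# Hypothetical illustration only, not Wix's code: a site that serves a concise page
# to human visitors and a richer plain-text endpoint that an LLM crawler can fetch.
from flask import Flask, render_template_string

app = Flask(__name__)

PAGE = {
    "title": "Acme Hair Salon",  # made-up example business
    "summary": "Cuts, colour and styling in downtown Springfield.",
    "extended": "Full price list, stylist bios, opening hours, "
                "cancellation policy and parking notes...",
}

@app.route("/")
def human_page():
    # Humans get the short, visually designed page.
    return render_template_string(
        "<h1>{{ title }}</h1><p>{{ summary }}</p>",
        title=PAGE["title"], summary=PAGE["summary"],
    )

@app.route("/llm-content")
def llm_page():
    # An LLM crawler can be pointed at a longer, plain-text view that carries more
    # detail than the visual page, since models can consume far more context than
    # a human visitor typically reads.
    body = "\n\n".join([PAGE["title"], PAGE["summary"], PAGE["extended"]])
    return body, 200, {"Content-Type": "text/plain; charset=utf-8"}

if __name__ == "__main__":
    app.run()
```

The only point of the sketch is the pattern implied by the quote: the content exposed to LLMs can be more complete than what is rendered for people, while the human-facing page stays unchanged.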


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet (parent of Google), Amazon, Meta Platforms, Microsoft, Okta, Salesforce, Sea, Tencent, Veeva Systems, and Wix. Holdings are subject to change at any time.

How Non-Tech Companies Are Thinking About AI (2025 Q4)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed non-technology companies in the 2025 Q4 earnings season.

It has been more than three years since artificial intelligence (or AI) leapt into the zeitgeist with the public introduction of DALL-E2 and ChatGPT. As AI technology develops, I have been tracking how companies are thinking about and using it. 

The latest earnings season for the US stock market – for the fourth quarter of 2025 – has reached its tail end and there were a number of companies that I follow or have a vested interest in, where the management teams discussed the topic of AI and how the technology could impact their industry and businesses.

I shared the latest commentary from US-listed technology companies recently here. For the older commentary from non-technology companies:

Here’s the latest:

Chipotle Mexican Grill (NYSE: CMG)

Chipotle is leveraging AI to identify lapsed customers and re-engage with them in a personalised way

We’re leveraging the AI model to really identify those lapsed users and create journeys that get them reengaged with our brand. More importantly is we’re able to parse out deals or offers for consumers based on how often they frequent our brand in the past and what we anticipate their lifetime value to be, which is really a meaningful step change in how we really drive demand in the channel and targeting lapsed and at-risk consumers.

Costco (NASDAQ: COST)

Costco’s management thinks the company’s commitment to provide the best value will make the company a beneficiary in AI-driven commerce; management is working with leading AI companies to ensure Costco products will be visible as consumers engage with AI tools

On our digital sites, we continue to roll out new personalization capabilities, which are resonating well with our members and are starting to have measurable impact on e-commerce sales growth. As consumers embrace AI in their shopping habits, we believe our commitment to providing the best value on great quality items can make Costco a beneficiary of these shifts. We’re working closely with the leading AI companies to ensure our values will be visible to existing and potential future Costco members as they engage with these tools.

JPMorgan (NYSE: JPM)

JPMorgan’s management does not seem willing to increase AI spending by much in 2026; management thinks the return on investment (ROI) from AI spending by banks will only be transient, and that it is ultimately customers who will benefit

[Question] All right. If I could just — I guess, as you know, for any analysts, it’s trust but verify, right? So if I could just try one follow-up, just what do you think about your tech spending or AI spending for 2026?

[Answer] It’s going to go up a bit. But Mike, we have — we’re building more payment systems. We’re building more AI systems. We’re building more — we’re connecting more branches, which means the higher network expenses. We’re doing all the things you want us to do. But the tech spend is always one of the harder ones to measure and evaluate. That’s been true my whole life. You could imagine we are pretty detailed at what we’re doing, why we’re doing it, are we delivering on time.

But there isn’t an area where you — if you dug into it that you wouldn’t say, yes, you want to be — you got to be the best in the world in tech. So we spend money on trading. We spend money on payments. We spend money on consumer. We spend money in asset management. We spend money in corporate. We spend money — we need to have the best techs in the world. That drives investment. It drives margin. It drives competition. A lot of it is consumer-facing, digital, personalization, travel, offers, all these things, which we think are wonderful things. And I like the fact that we have these organic opportunities.

[Question] Are you spending more on AI?

[Answer] I think — I’m looking at it and saying, it’s a good thing that I can point out that we have, in every single area, in every single part of the company, we can grow. In some areas, it’s like trench warfare, think of certain trading and investment banking. In other areas we’re kind of out front and we want to build the next generation of technology. But investment, the thing about — you’ve heard me talk about this before, a lot of businesses, you build a new plant, you capitalize it and then you expense it over 20 years. In a lot of our businesses, everything gets expensed upfront. It doesn’t mean it isn’t a good return.

[Question] And you’re spending more on AI?

[Answer] We will be spending more — we — I think that AI — we will be spending more but it is not a big driver. I do think it will be driving more efficiency down the road. But I’d also point out about that, efficiency — because other banks have to do it too, it will eventually be passed on to the customer. This isn’t like you’re going to build 3 points of margin and you get to keep it. You don’t. So you need to build some of these things just to keep up.

And here we have — we look at — and we look at all of our competitors, those competitors include all the fintechs. You have Stride, you have SoFi, you have Revolut, you have Schwab. You have everyone out there. And these are good players, and we analyze what they do and how they do it and how we would stay upfront. And we are going to stay upfront, so help us God. We’re not going to try to meet some expense target and then 10 years from now, you’d be asking us the question, how did JPMorgan get left behind?

Medpace (NASDAQ: MEDP)

Medpace’s management thinks productivity changes to the CRO (contract research organisation) industry from using AI will occur slowly; management thinks the investment into AI will at least equal the benefits seen in the first year of rolling out AI applications; management thinks the productivity enhancements will partly be paid as rent to the AI model providers, and partly passed on as benefits to clients; management thinks AI-driven productivity enhancements will be a negative for a CRO provider; the AI initiatives of Medpace fall into 2 categories, namely (1) initiatives to improve efficiency, and (2) data analytics for feasibility and site selection

[Question] Maybe just one on AI since perceptions around that have had a pretty big impact on the space over the last week. So just maybe any thoughts you can share on how big of a technological step change you think this is for the space over the next few years? And then to what extent you think that’s a longer-term net positive or negative for Medpace? And how are you all positioning? Are you a little bit more insulated just given the kind of the nature of your client base? How are you positioned for this? Are you investing around that?

[Answer] I think it’s too early to know what kind of changes. I do think that they will occur slowly. I would not anticipate really any productivity advantage, overall net advantage to AI applications in 2026. And I think that’s not because we’re not rolling out and doing a lot of things in AI. It’s that I think the investment is going to at least equal the benefits seen in this first year of kind of rolling out applications.

Where this goes in terms of how much productivity enhancement there is in the long term and what that means to us, I mean, I do think that the productivity advances are going to be — a part of it is going to be rent to the providers of the models, et cetera, but a part is going to be benefits to clients. And what that means in terms of encouraging more development, et cetera. But overall, you’d think on the surface of it, it’s negative to a service company that makes money by providing staff to perform work that is now made more efficient. But I think that the timing of this, it’s going to take years. Just what that means, what the opportunities for us are, are difficult to see. I don’t really think we have, say, barriers to prevent. I mean, we’re hoping to use AI in a lot of applications. We hope it does improve our productivity. And that means potentially in the long run, fewer staff that you’d otherwise have, and that means a little bit less revenue than you would have otherwise had, at least net revenue…

…[Question] You mentioned that 2026 is the first year you’re rolling out applications. Can you elaborate on that comment?

[Answer] They fall into 2 categories. One, just a number of different initiatives that are targeted on improving efficiency. And there’s a blurry line between what you call AI improvement and what’s really tech-enabled support; there are different things across the organization that are focused in that category. And then the other category would be assisting with data analytics for feasibility and site selection and helping the team there with some AI-enabled tech.

When Medpace talks to clients about AI, it’s about (1) what Medpace is doing with AI to help with the clinical trials, and (2) ensuring data quality and security

[Question] When you’re talking to customers or involved in RFPs, what are they kind of focused on, if anything, from an AI angle?

[Answer] I was going to say it’s a balanced conversation because we do take a very measured approach to AI. We want to balance the benefits with risk management and ensure that, a, we have quality adoption; and b, that we’re not putting any of their information at risk. And so the conversations are kind of twofold. One, what are we doing with AI to help with their studies. And at the same time, how are we being good stewards of data to make sure that we continue with high quality and confidentiality.

Starbucks (NASDAQ: SBUX)

Starbucks’ management has scaled Green Dot Assist across the company’s North American coffeehouses; Green Dot Assist is an AI-powered search tool for Starbucks’ partners (Starbucks calls its employees partners)

To better support our Green Apron partners, we fully scaled Green Dot Assist across our North American coffeehouses this past November. This new AI-powered knowledge search tool provides a real-time resource to look up beverage builds, troubleshoot operational issues and adjust deployment plans. It also provides a strong foundation to test and learn, then develop and scale thoughtful AI solutions that reduce friction for partners and help them focus on craft and connection with our customers.

Tractor Supply (NASDAQ: TSCO)

Tractor Supply’s management has expanded the company’s use of AI, including its relationship with OpenAI; Tractor Supply is using AI to improve forecasting, inventory flow, and productivity

On the technology front, we expanded our use of AI across the enterprise including expanding our relationship with OpenAI. The capabilities are improving forecasting, inventory flow and team member productivity, helping us operate more efficiently and better serve our customers.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Chipotle Mexican Grill, Costco, Medpace, Starbucks, and Tractor Supply. Holdings are subject to change at any time.

Even More Of The Latest Thoughts From American Technology Companies On AI (2025 Q4)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q4 earnings season.

Last month, I published More Of The Latest Thoughts From American Technology Companies On AI (2025 Q4). In it, I shared commentary in earnings conference calls for the fourth quarter of 2025, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2025’s fourth quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

With that, here are the latest commentary, in no particular order:

Coupang (NYSE: CPNG)

Coupang’s management is not concerned about disintermediation by AI because they believe that consumers will still want to shop where they can find the best combination of selection, service, and savings; management thinks there’s tremendous potential for AI to amplify the value Coupang brings

[Question] AI seems to be destroying a lot of things and agentic AI’s impact on e-commerce has been hotly debated in recent periods. So I was hoping you can talk about how you view platforms such as Coupang will not be somehow disintermediated by some chatbot or AI agent somewhere from somebody else.

[Answer] Ultimately, we believe customers care about selection, service and savings. And they’ll shop where they can find the best combination of all three. And as I mentioned in the call earlier, we’re a business that involves not only technology and software, but it’s not just a business made of electrons, but we’re really — we have real retail real infrastructure and people to move physical inventory. There’s tremendous potential for AI to amplify the value that we deliver across all 3 of the pillars that we strive to improve, selection, service and savings. And we believe AI will be a powerful means of us trying to — of us doing those jobs better over time, delivering the best experience at the lowest cost, and we intend to make a strong effort in the coming years to capture those opportunities.

MercadoLibre (NASDAQ: MELI)

MercadoLibre’s management introduced an AI-enhanced search experience in the marketplace business in 2025 Q4; the new search experience expands product discovery in a personalised way; the new search experience includes an AI assistant that can help users refine broad searches; management has introduced a Seller Assistant in the marketplace business to scale onboarding and support for sellers; Seller Assistant helps sellers improve listing quality, create videos of their products, and handle customer queries; management wants to embed AI across the marketplace to improve discovery, increase relevance and conversion and deepen engagement; the Seller Assistant in MercadoLibre’s marketplace already advises 20% of the company’s GMV

In Q4’25, we introduced an AI-enhanced search experience in Argentina that uses insights from individual buyer behavior to expand product discovery from a single search term – for example, a search for “ball” will show tennis balls for tennis players, and footballs for football players.  It may also show specific brands or premium / value products, depending on our knowledge of the buyer from their extensive search and transaction history. An interactive assistant can refine broad searches, such as “smartphone”, into more personalized results by guiding users through key product attributes.

On the supply side, our Seller Assistant is helping us scale onboarding and support. It accelerates sellers’ progression to higher reputation tiers, improves listing quality through targeted recommendations, creates short-format videos from a single product photo, and handles inquiries that were previously managed by customer service teams. 

These initiatives are early steps in a broader effort to embed AI across the marketplace to improve discovery, increase relevance and conversion and deepen engagement…

…I think it’s worth highlighting the fact that we have a seller assistant today running in our platform, basically 20% of our GMV is somehow advised by our assistant.

Improvements in MercadoLibre’s advertising technology are driving higher adoption and spend; AI tools in MercadoLibre’s advertising business are supporting account managers for large advertisers, and are engaging directly with long-tail sellers

In Q4’25 we launched tools such as “budget orchestrator” and one-click campaigns, which performed well during peak season. In parallel, AI tools are supporting account managers for big brands and top sellers, while engaging directly with long-tail sellers to stimulate demand. We also launched our DSP for advertisers in China, which contributed to growth in Q4’25, and should support monetization of our growing CBT business.

MercadoLibre’s management launched the Mercado Pago AI Assistant in October 2025; the Mercado Pago AI Assistant handled more than 9 million conversations in 2025 Q4 and resolved 87% of these conversations without human intervention; management plans to expand the Mercado Pago AI Assistant’s capabilities in the coming months to handle more use cases; management sees potential to use the Mercado Pago AI Assistant for cross-selling fintech products in the future

In October, we launched the Mercado Pago AI Assistant, and early results are encouraging. In Q4’25, the Assistant handled more than 9mn conversations, with nearly 90% resolved without human intervention. There are dozens of use cases, including general inquiries, making transfers and paying bills, and in the coming months, we plan to expand the Assistant’s capabilities to make it increasingly proactive…

…Our Mercado Pago AI assistant is solving 87% of interactions without the need of human support…

…So far, we have been mostly dealing with these interactions that are initiated by users and the vast majority of them are responded to by the agent without any kind of human intervention. But I would say, so far, we have not yet started using the agent for cross-sell, but it’s something that we will start doing. Given that you are in a conversation, you can, for example, tell the consumer that she has a credit offer or a credit card offer and the benefits of the credit card. We are not doing that yet, but we believe the opportunity there is significant and the system will become more proactive. And beyond cross-sell, it will also become more proactive in terms of acting like a personal banker. So helping you, I don’t know, allocate your portfolio or make recommendations on what kind of credit is better for you.

AI tools are helping MercadoLibre’s Merchant Acquiring business’s sales teams by identifying new customers and deepening relationships with existing customers; the Merchant Acquiring business has 25% FX-neutral TPV growth in Brazil in 2025 Q4; the Merchant Acquiring business has 50% FX-neutral TPV growth in Mexico in 2025 Q4; the Merchant Acquiring business’s base of active POS (point of sale) devices is nearly equal to that of all the incumbents combined

AI tools are improving the effectiveness of our sales teams by helping identify new customers and deepen relationships with existing merchants. In Brazil, this has supported higher TPV per merchant and shortened payback periods. This contributed to strong FX-neutral Acquiring TPV growth of 25% YoY in Brazil in Q4’25. In Mexico, growth is being driven by onboarding long-tail and SMB merchants, many of whom are accepting digital payments for the first time. Momentum remains strong with FX-neutral Acquiring TPV growing 50% YoY in Q4’25. As adoption continues to rise, our installed base of active POS devices is approaching that of all incumbents combined.

MercadoLibre’s management thinks MercadoLibre has the best features for agentic commerce and these features go beyond merely searching for an item; management’s focus with agentic commerce is on developing MercadoLibre’s own agentic experience inside the company’s marketplace; management believes MercadoLibre has the first-party data to create the best search, recommendation, and discovery engines for an agentic experience; management thinks the emergence of agentic commerce will mean an even faster transition from offline to online retail; management thinks MercadoLibre is well-positioned to capture advertising revenue from agentic commerce because it is the go-to place for online shopping; management thinks MercadoLibre’s advertising revenue also stands to benefit from agentic commerce activity that happens outside of MercadoLibre because the company has a unique set of data, customer knowledge, and attribution capabilities; management thinks there are many unknown aspects of agentic commerce today such as what hardware and AI models consumers will use, but there are also known aspects, such as what consumers value; management is cognisant of the risk of disintermediation of the MercadoLibre platform when it comes to agentic commerce, but they are confident that the company is coming from a position of strength

Let me start with the idea of Agentic commerce and how that will play out for us and potentially disintermediating, which is something that I’ve been asked over and over. So I think it’s still a bit early in the game, but we don’t think that solving one part of the value chain will actually change the rules of the game, meaning that we still think that the key is to provide the best end-to-end experience for our customer. So we know that searching for an item is one important task but reading reviews, making sure the package arrives on time, offering the widest selection, having the best prices, the best financing, preventing fraud, having the best customer support and so on are also key parts of the end-to-end job on — that we need to solve and that drive the decisions on where buyers will end up buying…

…Where we’re putting most of our efforts is in developing our own agentic experience inside MercadoLibre. We think and we are convinced that we have the first-party data to create the best search, best recommendation, best discovery engine on which we can personalize and lay over the agentic experience that the new technology drives…

…If you believe that there is a world of agentic commerce, that could mean that retail will move even faster from the offline to the online world. So all this to say that I do think that we are well-positioned to actually capturing ad revenues in the future because we still think that MercadoLibre will be go-to place for demand to do shopping online…

…What happens with all the agentic commerce that will occur outside of MercadoLibre because for sure, we will not have 100% market share. And we think that, that also represents an incremental opportunity for many, right? So today, we are providing with our tech stack advertising services to third parties, we do that with Google Ad Manager, with Disney. We do that with Roku with HBO Max. And the reason behind that is that we have a unique set of data, customer knowledge, attribution capabilities that we think are very hard to match…

…[Question] Essentially, how these independent agent systems could introduce new forms of disintermediation and engage clients directly, right, leading to potential changes in — the most obvious 1 we can think of and discuss a lot is the dollar pool of advertising. So I really want to hear how you view these risks and how you’re approaching them strategically.

[Answer] There are things that we know and there are things that we don’t know. So we don’t know which hardware people will use in 10 years to buy. We don’t know whether the winning model will be X Y or Z and so on. We do know that consumers do value or do look for the best end-to-end experience. We do know — and that means not only searching for products, but also getting products fast, having the wider selection, pricing, the best financing alternatives, post-purchase support and so on. We also know there’s a technology today that can dramatically improve the product discovery process. And for that reason, we are putting all of our efforts and deploying lots of engineers in building our own agents and our own shopping assistant within MercadoLibre. It’s early to know what will happen with other shopping assistant. I take your point that it might present a risk. I understand where you’re coming from. But we are confident that we are playing this 1 from a position of strength that we have the relationship with consumers. We have a brand that Latin America loved. We have information and data about past purchases that allow us to offer them a great shopping assistant. And we are betting and putting our efforts on what we can control, which is building the best assistant possible

MongoDB (NASDAQ: MDB)

AI is not yet a material driver of MongoDB’s results, but management is encouraged by the growth in customers leveraging the company’s AI capabilities; the number of customers using Vector Search doubled year-on-year in 2025 Q4 (FY2026 Q4); the number of customers using Voyage embedding models has doubled since February 2025; management is seeing customers expand their use of MongoDB as a strategic data platform for both foundational and next-generation AI workloads; management thinks AI and agentic applications require memory, state, and high-quality retrieval capabilities, and these are all native to MongoDB’s OLTP (online transaction processing) platform without the need for ETL (extract, transform, load) or bolt-on systems

While AI is not yet a material driver to our results, we are encouraged by the growth we are seeing with customers leveraging our AI capabilities. The number of customers leveraging Vector Search has nearly doubled year-over-year, and the number of customers using Voyage embedding models has also doubled since the acquisition last February. This growth is across a diverse range of customers, AI natives, digital natives and large enterprises…

…Large enterprises are increasingly standardizing on MongoDB to power a wide spectrum of portals, including both core mission-critical applications and emerging agentic AI applications. Rather than treating AI as a stand-alone initiative, many are expanding their use of us as a strategic data platform that supports both foundational workloads and their next generation of intelligent applications…

…MongoDB is increasingly recognized as the architectural foundation powering innovation for frontier model companies, leading digital natives expanding into AI, and AI native organizations scaling globally. The database layer has endured through multiple technology shifts over the past 60 years, and it is even more critical in this AI shift. AI and agentic applications require memory, state and high-quality retrieval capabilities native to our modern OLTP [online transaction processing] platform, which powers real-time applications, without ETL or bolt-on systems, through integrated search, vector search and embeddings. In this platform shift, OLTP is the high ground and MongoDB is purpose-built to win.
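As an aside, to make the “integrated retrieval without bolt-on systems” point concrete, here is a minimal sketch in Python of an agent’s memory lookup using MongoDB Atlas’s $vectorSearch aggregation stage; the connection string, database, collection, and index names are hypothetical placeholders, and this is not code from MongoDB or any customer quoted above.

```python
# Minimal sketch: an agent's memory documents and their vector retrieval living in
# one Atlas collection. All names below are hypothetical placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")
memories = client["agent_db"]["memories"]  # documents shaped like {"text": ..., "embedding": [...]}

def recall(query_embedding, k=5):
    """Return the k stored memories whose embeddings are closest to query_embedding."""
    pipeline = [
        {
            # Atlas's $vectorSearch stage runs approximate nearest-neighbour search
            # over a vector index defined on the "embedding" field.
            "$vectorSearch": {
                "index": "memory_vector_index",  # hypothetical index name
                "path": "embedding",
                "queryVector": query_embedding,
                "numCandidates": 100,
                "limit": k,
            }
        },
        # Keep only the text and similarity score for the agent's context window.
        {"$project": {"_id": 0, "text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]
    return list(memories.aggregate(pipeline))
```

The sketch simply illustrates the claim in the quote: the operational documents and the vector index sit in the same database, so there is no separate ETL pipeline feeding a standalone vector store.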

MongoDB signed a $90 million deal with a large tech company for the tech company to expand both core and AI workloads on Atlas

We signed several large deals in the quarter, including an approximately $90 million transaction with a large tech company that plans to expand both core and AI workloads on Atlas.

Axon Networks, a global leader in telecom network management, is using Enterprise Advanced to power its operator-as-a-service platform which delivers a real-time digital twin and API-first architecture; Axon Networks’ operator-as-a-service platform is AI-first; management is hopeful that the Enterprise Advanced business can accelerate in the future; management is seeing a trend of companies wanting to keep critical data on-premise because of issues related to AI for mission-critical applications

Axon Networks, a global leader in telecom network management, serving 32 telcos and over 90 million homes and enterprises selected EA as the foundation for its operator-as-a-service platform. This platform delivers a real-time digital twin and API-first architecture designed to handle massive data peaks and high-volume time series workloads. EA provides the flexibility to run across mission-critical environments including hyperscalers and bare metal, along with the enterprise-grade security and operational tooling required to support Axon’s AI-first autonomous networking platform at scale…

…We are actually investing in EA to bring it to parity to Atlas. So certainly, our expectation and hope is that we continue to grow that and can even accelerate it in the future…

…Over a large set of very important customers, that is definitely the trend that I’m hearing from our customers: number one, because of a variety of issues related also to AI, for mission-critical applications there is this trend I’m seeing where they do want to keep their critical data estates on-prem. And this is not only in financial services; we are seeing that in health care and other verticals like government. But when I was in Europe and even in Asia, I’m also seeing there that there is a preference for those industries to also use MongoDB potentially with EA and only certain workloads in the cloud

Indian vibe coding startup Emergent Labs selected Atlas over PostgreSQL for agentic coding workloads; Atlas is helping Emergent Labs power nearly 6 million applications across 190 countries

Emergent Labs, a leading AI vibe coding platform in India that just crossed a $100 million run rate, selected Atlas over PostgreSQL to power AI agents that build production-ready applications from natural language prompts. They power nearly 6 million applications built across 190 countries and handle applications that average 35,000 lines of code, with some reaching 300,000, all made possible with Atlas’ flexible document architecture and reliable scale.

AI startup Eleven Labs is using Atlas Search and Vector Search to power the long-term memory and knowledge base of their agents, and to deliver highly personalized interactions in real time, globally 

We are also fueling innovation at AI-native customer Eleven Labs, which is redefining conversational AI with its new enterprise agentic platform. Eleven Labs selected Atlas to power the critical long-term memory and knowledge base for their autonomous agents. By leveraging Atlas Search and Vector Search, they enable their agents to retain complex context and deliver highly personalized interactions in real time and at global scale, supporting the rapid expansion to $330 million of ARR and an $11 billion valuation.

Adobe recently expanded its long-term commitment with MongoDB; Adobe now uses Atlas Vector Search to power its agentic experiences, and will soon include Voyage embeddings; Adobe also uses Enterprise Advanced 

A marquee example of the platform in action is Adobe, which expanded its strategic partnership and long-term commitment with us to accelerate AI-driven innovation. MongoDB now underpins a range of Adobe’s key initiatives, including Agentic experiences powered by Atlas Vector Search and soon Voyage embeddings. Adobe leverages Atlas to manage large fleets and always on database deployments at global scale, while also continuing to partner with us for support of self-managed business-critical workloads on EA, highlighting our ability to operate seamlessly across both cloud and on-prem environments.

MongoDB’s management does not expect AI native companies to contribute much to MongoDB’s revenue in 2026 (FY2027)

In terms of AI, we remain optimistic regarding our opportunity and are seeing encouraging trends with a number of AI natives. While this subset of customers has significant potential, many of them remain early in their MongoDB journey and are not yet meaningful drivers of revenue…

MongoDB’s management has found that the key to win in an era where agents are going to be spinning up databases and not humans, is to get agents to love MongoDB as much as human-developers to love MongoDB; this view of management is validated by an AI-native customer of MongoDB that chose to build on the company’s database; management thinks that the way to get agents to love MongoDB is to ensure MongoDB has all the right integrations in place; a key focus area for management is to build MongoDB’s database in the way that would make agents love MongoDB; management has an ambitious roadmap, spanning 2026 (FY2027), to build MongoDB’s database for agents

[Question] How is your product and go-to-market strategy changing, if at all, ahead of the growing reality that agents are going to be the things that are spinning up most databases and not humans in the future.

[Answer] I have a very simple philosophy here. And the philosophy also was validated by one of the AI native companies that has completely built on MongoDB. They had many choices in many clouds and they chose MongoDB. And my initial intuition was the same as you outlined, is that MongoDB’s success over the last many years since the company was founded in 2007 was that builders or developers love MongoDB. And if that’s the premise, there was a lot of work done in the product to ensure that it’s a very natural way, flexible way while keeping the business agile as in the database agile so that it can move with the business.

We want to do exactly the same thing for agents. Agents also need to love MongoDB. That requires us to ensure that we have all the right integrations in the right places, whether it’s MCP or whether we are looking at making sure that our APIs, how you manage, how we auto-scale, how we perform during the peaks and valleys. All of that truly needs to be autonomous and driven by machines. And that requires absolute focus from the engineering team on how machines would look at this if they want to provision an additional node or if they want to manage a cluster because of resiliency across multiple clouds. So that will be the North Star for us: that our agents will love MongoDB as much as human developers love MongoDB today…

…We do have an ambitious road map, of course. Today, we are already leveraged by some of the AI-native companies, some of which I outlined this time and also last time. And we are learning a lot from them. So we have an ambitious road map in terms of truly machine-friendly APIs, making sure that we have protocol integration across the variety of protocols that machines demand, and how we auto-scale and auto-shard. All of that will be throughout this coming year.

MongoDB has high-profile AI startup Anthropic as a customer, but MongoDB does not have any customer-concentration among AI natives; management thinks AI natives choose MongoDB for performance, scale, and security; management is seeing some AI natives make their initial database decisions without considering the database’s ability to scale; reads and writes are important with AI applications, and MongoDB is able to scale both for reads and writes; MongoDB can scale reliably with any AI native’s growth

[Question] Great to hear about Anthropic as a customer at the MDB local event. I’d love to hear how you think about the opportunity for Mongo to grow within large AI natives from here. And there was also mention at the event that agentic workflows have heavier storage and memory requirements. Love to hear why you think MDB architecturally is well suited for these growing types of AI use cases.

[Answer] The entire cohort, AI natives, frontier model companies, others, many of them choose MongoDB for performance, scale, security and other things. And I would say that the good news here from my standpoint is that we are not concentrated in any one customer when it comes to AI native cohort. So that’s number one. And as they scale, we will scale with them, but we are not concentrated. Even when I look at the growth as a percent of total, we were not concentrated…

…People are initially making database decisions in these AI native companies without realizing that they will run into scale issues. Or potentially, one of the choices that people could have gone with as AI native company founders had a massive security concern over the weekend where a couple of governments blocked them from being used. So what I find is that you need a truly enterprise-class database that can scale, and when I say scale specifically, for these AI native companies, as their weekly active users or monthly active users continue to grow, like the example we had with Emergent or Eleven Labs and so on, they find that MongoDB scales better with them. Write performance as well as query performance really matters, and us being native JSON with search, vector search, and embeddings in one rather than multiple moving pieces — if I have to just simplify that, that is the strength, because it’s an integrated platform that scales both for reads and writes, so that as you scale your AI native company, they can rely on MongoDB scaling with them.

MongoDB’s management is seeing large enterprises from many different industries still wanting to pursue the modernisation of their technology stack; the enterprises that want to modernise want to shift to MongoDB, but they cannot do it all with AI tools and still require MongoDB’s help, which is why MongoDB’s management still sees a huge opportunity in modernisation

I was talking to a large financial institution in the U.K. And the Head of Transformation, she told me, Hey, CJ, I have 50% of real estate that I want to modernize. I know that some of the AI tools can get me to some level, but I really, really need your help and your team’s help to make sure that for these mission-critical applications, we take help from MongoDB to help us land once you prove this out for the first workload, a very critical workload that is moving to MongoDB. The same thing happened, Alex, with a large customer in Spain when I was there a couple of weeks ago. This individual said, “Hey, we are relying on MongoDB as we are modernizing. This is an extremely critical workload. Once you do that, we are going to open up the aperture, and I know that AI will help us modernize, but we still need your help because the destination we want is absolutely MongoDB.” So what I’m seeing is that the feedback is that the modernization and the need for modernization is still very much relevant in the high end of the enterprise, whether it’s a health care company, financial services or even government for that matter.

Number two, they know that AI tools can help you to some extent, but they definitely want to get there on a modern database to get AI ready, and they want help from MongoDB to be on MongoDB. And then the last thing I would say is that even with some of the use cases, they try it and they’re like, hey, sometimes this is too hard to assure the reliability, security and all of those things for the application we build.

So I consider this as an opportunity in early stages, and this is definitely a top-down work that we have to do as MongoDB with the CTO and Head of Transformation, but the opportunity still exists and is massive.

MongoDB’s management is seeing that Fortune 500 companies across nearly all verticals are not scaling their agentic workloads into production right now; management thinks that it’s only a matter of time before enterprises scale their agentic workloads into production

I would tell you it’s not if but when…

…I ask them that simple question, where are you on your agentic workloads? And I’m talking about Fortune 500, okay, or big retail companies, health care companies, pick one, and ask them: where are you on agentic workloads? And are they really scaling? And the answer is still not yet. Yes, they have done a few productivity types of apps internally, but nothing of scale that is customer-facing, even including with a large retailer on agentic commerce and so on. So my first thing is, one day, it is going to hit in a positive way, where you will have agents making a meaningful difference to the growth of our customers for either new product lines or existing product lines. We are not seeing that today in the large enterprises across pretty much most of the verticals that we speak to because, as you know, MongoDB is across every vertical.

Nu Holdings (NYSE: NU)

Nu Holdings’ foundational AI model, nuFormer, is now in production for credit decisioning in Brazil; management is testing nuFormer in other use cases; management wants to expand nuFormer to more lending in Brazil and to credit cards in Mexico; Nu Holdings’ credit operations see a significant lift when using nuFormer

Our foundation model, nuFormer is now in production for credit decisioning in Brazil and in testing across additional use cases…

…We will expand nuFormer to lending in Brazil and credit cards in Mexico and continue putting AI directly into customers’ hands, moving closer to our long-term vision of an AI-powered personal banker in every customer’s pockets…

…We’ve discussed a few times over the past year, the significant lift that we’re seeing when we’re using our own foundation model on credit

Nu Holdings’ management’s long-term vision for AI is to have each customer have an AI-powered personal banker that they can access through their smartphones 

We will expand nuFormer to lending in Brazil and credit cards in Mexico and continue putting AI directly into customers’ hands, moving closer to our long-term vision of an AI-powered personal banker in every customer’s pockets.

Nu Holdings’ management sees AI as both an opportunity and a risk for Nu Holdings, but sees AI as more of an opportunity; management thinks there is one common denominator across every technology transformation, which is that businesses that simply move bits from Point A to Point B get hurt the fastest; in financial services, management thinks that the movement of money from one point to another carries the higher risk of being disrupted by AI and that providing credit is the most sustainable activity; management thinks Nu Holdings is well-protected against AI disruption because of its strength in credit and the proprietary data on credit it has; management thinks that AI will significantly enhance many aspects of a bank’s business; management thinks that Nu Holdings is well positioned to take advantage of AI to grow revenue and reduce costs

[Question] Do you see a risk that Nu could be disrupted by AI? Or do you see Nu as a potential winner in this transformation?

[Answer] It is both a challenge and has potential for disruption as well as significant opportunity. Net-net, we think it’s more opportunity than challenge for us…

…I think there is one specific trend or one common denominator across every technology transformation, and this goes all the way back to even the internet era, which is that any business model that relies on simply moving bits from point A to point B, where you’re effectively a broker, tends to be hurt the quickest, because one of the things that technology does is remove a lot of the friction in those processes. So I think, to some of the commentary that has been around in the market about financial services: I think businesses in financial services that are simply moving money from one point to another point will have the higher risk of potential disruption. You need to be able to add more value than that. And I think from that angle, we have always believed that credit, specifically credit revenue, is actually the most sustainable type of revenue in financial services because of the capital intensity, the regulatory nature of it, the balance sheet aspect and the proprietariness of the data, where AI plays a role and ultimately allows you to make a better decision on that. So I think from one angle, there is potential for challenge around the business model, but I think we’re very well positioned given the way we are set up and the strength around credit that we have…

…I think every single company really might benefit from that, where every function that you do, especially as a bank, from customer service to compliance to regulatory to AML, will be significantly enhanced, or is being significantly enhanced, through AI…

…When you think about the fact that 95% of the world’s financial services profits are still concentrated in incumbent banks that still have significantly larger cost structures, it means that we’re very well positioned to take advantage of AI as a technology enabler for revenue and cost, and ultimately be one of the winners in this technology shift.

In 2025, Nu Holdings’ management deployed new AI technologies for credit underwriting to increase credit limits in Brazil, under CLIP (Credit Limit Increase Policy); the full benefits of CLIP, especially in driving net income growth for Nu Holdings, have not appeared yet, because there are a few stages for CLIP’s effects to flow through, although the early signs are very promising; management thinks CLIP will continue growing credit limits for Nu Holdings in 2026 and beyond; management wants to deploy CLIP beyond Brazil; management wants to take the predictive AI technologies behind CLIP and apply them to other areas of Nu Holdings’ business

This was a year in which we have deployed these new technologies and this approach to credit underwriting very successfully so far, in allowing our customers to increase their credit limits, especially in Brazil. And the best way for me to illustrate the magnitude of this increase, Jorge, is maybe to refer you to explanatory note #32 of our financial statements, in which we are starting to provide what I call the unused credit limits. And you can see that unused credit limits went from about $18 billion to $29 billion. So an increase of about $11 billion, which accounts for about a 60% increase in unused credit limits. It’s a big one. And I think it wouldn’t be possible for us to do so if we hadn’t been leveraging the entirety of the predictive AI credit underwriting tools that have been developed by us over the past 18 to 24 months.

Have we seen all of those benefits translated into net income? The answer is no, not yet. So usually, I think, at least I see credit limit increases playing out in three steps. First, you have to offer the additional credit limits, then the credit limit translates into purchase volume, and then you have to see whether the purchase volume will then translate into IBB. We are starting to see the first step, Jorge, which is that in the fourth quarter of 2025, our market share in purchase volume in Brazil has gone up by about 50 basis points. It was the biggest market share gain that we’ve seen in Nubank over the past 10 to 11 quarters. There are two more to come, and then we still have to see all of those purchase volumes reflecting into IBB.

Even though 2025 was, I think, a big sign of the magnitude of this ability to increase CLIP, I don’t think it will stop there. You will continue to see this unfolding in new models and new improvements throughout 2026, 2027 and onwards. And I would also say that the advent of the predictive AI technology will not stop at CLIP Brazil, right? It will be, and is being, exported to CLIP Mexico, CLIP Colombia, and then it’s going to go to acquisition in Brazil, acquisition in Mexico, then it’s going to go to fraud. It’s going to go to deposits, pricing and designs.

NVIDIA (NASDAQ: NVDA)

NVIDIA’s management is seeing continued strong demand for Blackwell; NVIDIA’s Data Center revenue again had very strong growth in 2025 Q4 (FY2026 Q4), driven primarily by chips from the Blackwell family; there are currently 9 GW (gigawatts) of Blackwell systems deployed; management expects sequential revenue growth throughout 2026, exceeding what was included in the $500 billion Blackwell and Rubin revenue opportunity management shared last year

Demand for our Blackwell architecture, extreme co-designed at data center scale, continues to strengthen as inference deployments grow in addition to training…

…Q4 data center revenue of $62 billion increased 75% year-over-year and 22% sequentially, driven primarily by sustained strength in Blackwell and the Blackwell Ultra ramp…

…Nearly a year has passed since the release of our Grace Blackwell NVL72 systems. Today, nearly 9 gigawatts of infrastructure on Blackwell are deployed and consumed by the major cloud service providers, hyperscalers, AI model makers and enterprises… 

…As we look ahead, we expect sequential revenue growth throughout calendar 2026, exceeding what was included in the $500 billion Blackwell and Rubin revenue opportunity we shared last year. We believe we have inventory and supply commitments in place to address future demand, including shipments extending into calendar 2027.

NVIDIA’s management is seeing the continued transition to accelerated computing and the infusion of AI across the hyperscalers’ workloads; management thinks the company’s hyperscaler customers are producing evidence of strong ROI; Meta Platforms’ GEM model drove a 3.5% increase in ad clicks on Facebook and a more than 1% gain in conversions on Instagram

The transition to accelerated computing and the infusion of AI across existing hyperscale workloads continue to fuel our growth…

…Strong evidence of ROI, as hyperscalers upgrade massive traditional workloads to generative AI, including search, ad generation and content recommender systems, is encouraging our largest customers to accelerate their capital spending. For example, at Meta, advancements in their GEM model drove a 3.5% increase in ad clicks on Facebook and a more than 1% gain in conversions on Instagram, translating into meaningful revenue growth.

NVIDIA’s management is seeing agentic and physical AI starting to drive the company’s business; NVIDIA’s management recently introduced Alpamayo, the world’s first open portfolio of reasoning Vision Language Action models; Alpamayo enables vehicles to think; the Mercedes-Benz CLA will be the first passenger car featuring Alpamayo; physical AI contributed $6 billion to NVIDIA’s revenue in 2025 (FY2026); management is seeing robotaxi rides grow exponentially; management thinks robotaxi vehicles will scale to millions of vehicles over the next 10 years, driving demand for orders of magnitude more compute from NVIDIA; management continues to advance robotics development; NVIDIA recently announced new partnerships to bring NVIDIA AI infrastructure Omniverse digital twins, World Models and CUDA-X libraries to millions of researchers, designers and engineers; OpenAI’s GPT-5.3-Codex agentic system was trained with, and will run inference on, NVIDIA’s systems; management thinks Anthropic’s Claude Cowork agent has ushered in the ChatGPT-moment for agentic AI; management is certain that agentic AI has reached an inflection point and the tokens generated are productive for users and profitable for the cloud service providers; in agentic AI, inference equates to revenue; all of NVIDIA’s engineers are using agentic coding tools

Agentic and physical AI applications built on increasingly smarter and multimodal models are beginning to drive our financial performance…

…At CES, we introduced Alpamayo, the world’s first open portfolio of reasoning Vision Language Action models, simulation blueprints and data sets, enabling vehicles that can think. The first passenger car featuring Alpamayo, built on NVIDIA DRIVE, will be on the road soon in the new Mercedes-Benz CLA.

Physical AI is here, having already contributed north of $6 billion in NVIDIA revenue in fiscal year 2026. Robotaxi rides are growing exponentially, with commercial fleets from Waymo, Tesla, Uber, WeRide, Zoox and many others expected to scale from thousands of vehicles in 2025 to millions over the next decade, creating a market poised to generate hundreds of billions of dollars of revenue. This expansion will demand orders of magnitude more compute, with every major OEM and service provider developing on NVIDIA’s platform.

We continue to advance robotics development with the new NVIDIA Cosmos and Isaac GR00T open models, frameworks and NVIDIA-powered robots and autonomous machines for leading companies, including Boston Dynamics, Caterpillar, Franka Robotics, LG Electronics and NEURA Robotics. To accelerate industrial physical AI adoption, we also announced new and expanded partnerships with Dassault Systemes, Siemens and Synopsys to bring NVIDIA AI infrastructure, Omniverse digital twins, World Models and CUDA-X libraries to millions of researchers, designers and engineers building the world’s industries…

…We recently celebrated OpenAI’s launch of GPT-5.3-Codex trained with and inferencing on Grace Blackwell NVLink 72 systems. GPT-5.3-Codex can take on long running tasks that involve research, tool use and complex execution…

…Anthropic’s Claude Cowork agent platform is revolutionary and has opened up the floodgates for enterprise AI adoption. Between Claude Cowork and OpenClaw, compute demand is skyrocketing and the ChatGPT moment of agentic AI has arrived…

…I am certain that at this point, with the productive use of Codex and Claude Code, the excitement around Claude Cowork, the incredible enthusiasm about OpenClaw and the enterprise versions of them, and all of the enterprise ISVs who are now working on agentic systems on top of their tools platforms, I am certain at this point that we are at the inflection point, we’ve reached the inflection point, and we’re generating profitable tokens that are productive for customers and profitable for the cloud service providers…

…It’s really important to realize that inference equals revenues now for our customers because agents are generating so many tokens, and the results are so effective. When the agents are coding, they’re off generating thousands, tens of thousands, hundreds of thousands of tokens because they’re running for minutes to hours. And so these agentic systems are spawning off different agents, working as a team. The number of tokens that are being generated has really, really gone exponential. And so we need to inference at a much higher speed. And when you’re inferencing at a much higher speed and each one of those tokens is dollarized, it directly translates into revenues. And so inference, inference performance, equals revenues for our customers…

…Coding is obviously supported by agentic systems now, and all of our coders here at NVIDIA Corporation are using these systems, either Claude Code or OpenAI Codex, enormously, and oftentimes both, and Cursor, oftentimes all three, depending on the use case. But they have agents and co-design partners, engineering partners, to help them solve problems.

NVIDIA’s data center revenues have grown 13x since the introduction of ChatGPT; management is seeing NVIDIA’s business broaden beyond chat bots, driven by a few forces, namely, (1) a fundamental platform shift from classical machine learning to generative AI from the hyperscalers, (2) skyrocketing adoption of agentic systems, and (3) the growth of sovereign AI

We have now scaled our data center business by nearly 13x since the emergence of ChatGPT in fiscal 2023…

…Our demand profile is broad, diverse and expanding beyond just chatbots. First, there is a fundamental platform shift from classical machine learning to generative AI. Strong evidence of ROI as hyperscalers upgrade massive traditional workloads to generative AI, including search, ad generation and content recommender systems is encouraging our largest customers to accelerate their capital spending…

…Frontier agentic systems have reached an inflection point. Claude Code, Claude Cowork and OpenAI Codex have achieved useful intelligence. Adoption is skyrocketing and tokens are profitable, driving extreme urgency to scale up compute. Compute directly translate to intelligence and revenue growth…

…Every country will build and operate some parts of its AI infrastructure, just like with electricity and the Internet today. In fiscal year 2026, our sovereign AI business more than tripled year-over-year to over $30 billion, driven primarily by customers based in Canada, France, the Netherlands, Singapore and the U.K. Over the long run, we expect our sovereign opportunity to grow at least in line with the AI infrastructure market as countries spend on AI proportional to their GDP.

Research firm SemiAnalysis recently declared NVIDIA the Inference King; NVIDIA’s latest generation Blackwell system, the GB300 NVL72, has 50x performance per watt and 35x lower cost per token compared to the Hopper systems for inference; optimisation of CUDA software has helped the GB200 NVL72 perform 5x better on inference compared to just 4 months ago; management sees NVIDIA as having the lowest cost per token for inference; management thinks data centers using NVIDIA systems will generate the highest revenues; NVIDIA’s next generation of chips, the Rubin family, was recently unveiled at CES; the latest Rubin family of chips consists of 6 different chips; the Rubin chips can train MOE (mixture of experts) models with a quarter of the GPUs and cut inference token costs by up to 10x; management has shipped the first Rubin samples to customers; management expects Rubin to have better resiliency and serviceability compared to Blackwell; management expects every cloud provider to deploy Rubin

SemiAnalysis declared NVIDIA the Inference King, as recent results from InferenceX reinforced our inference leadership, with GB300 NVL72 achieving up to 50x performance per watt and 35x lower cost per token compared with Hopper, and continuous optimization of CUDA software helped deliver up to 5x better performance on GB200 NVL72 just within 4 months. NVIDIA produces the lowest cost per token, and data centers running on NVIDIA generate the highest revenues…

…We unveiled the Rubin platform last month at CES, comprised of 6 new chips: the Vera CPU, Rubin GPU, NVLink 6 Switch, ConnectX-9 SuperNIC, BlueField-4 DPU and Spectrum-6 Ethernet Switch. The platform will train MOE models with 1/4th the number of GPUs and reduce inference token costs by up to 10x compared to Blackwell. We shipped our first Vera Rubin samples to customers earlier this week, and we remain on track to commence production shipments in the second half of the year. Based on its modular cable-free tray design, Rubin will deliver improved resiliency and serviceability relative to Blackwell. We expect every cloud and model builder to deploy Vera Rubin.

NVIDIA’s management will use a $20 billion R&D budget, and the company’s strong system-design capabilities, to deliver X-factor performance leaps per watt for each new generation of AI chip systems

Our pace of innovation, particularly at our scale, is unmatched, fueled by an annual R&D budget approaching $20 billion and our ability to do extreme co-design across compute and networking, across chips, systems, algorithms and software. We intend to deliver X-factor leaps in performance per watt every generation and extend our leadership position over the long term.

NVIDIA’s management is seeing even older generations of the company’s AI chips being sold out in the cloud; NVIDIA’s older generation of chips continue to work well because all of the company’s GPUs are compatible, so the ongoing optimisation of NVIDIA’s software stack also benefits the older generation of chips

With NVIDIA infrastructure in high demand, even Hopper and much of the 6-year-old Ampere-based products are sold out in the cloud…

…All of our GPUs are architecturally compatible, which means that when I’m working on optimizing models today for Blackwell, all of that work and all of that dedication to optimizing software stacks and new models also benefits Hopper and also benefits Ampere. It’s the reason why A100 continues to feel fresh and continues to stay performant years after we’ve deployed it into the world. Architecture compatibility allows us to do that. It allows us to invest enormously in software engineering and optimization, knowing that our entire installed base in the cloud, on-prem, everywhere, across generations of architectures and GPUs, will all benefit.

NVIDIA’s networking revenue had very strong sequential as well as year-on-year growth in 2025 Q4 (FY2026 Q4) (networking revenue was $8.2 billion in 2025 Q3), driven by strong demand across NVLink, Spectrum-X Ethernet, and InfiniBand on a sequential basis, and by strong demand for NVLink 72 scale-up switches on a year-on-year basis; management thinks NVLink scale-up fabric has revolutionised computing; management recently announced that NVLink will be able to integrate with custom silicon from AWS (Amazon Web Services); management is seeing strong momentum with Spectrum-X Ethernet; NVIDIA’s networking revenue exceeded $31 billion in 2025 (FY2026), up more than 10x compared to (2020) FY2021; NVIDIA is now the largest networking company in the world, and is also now, or soon, the largest Ethernet networking company in the world; management has built an Ethernet capability that is powered by AI; NVLink 72 was really hard to develop 

 Networking, a cornerstone of our data center scale infrastructure offering, was a standout this quarter, generating $11 billion in revenue, up more than 3.5x year-over-year. Demand for our scale-up and scale-out technologies reached record levels, both growing double digits sequentially, driven by strong adoption of NVLink, Spectrum-X Ethernet and InfiniBand. On a year-over-year basis, growth was driven primarily by NVLink 72 scale-up switches as Grace Blackwell systems accounted for roughly 2/3 of data center revenue in the quarter.

NVLink scale-up fabric has revolutionized computing and demonstrates the power of extreme co-design across all of the chips of the supercomputer and the full stack. In Q4, we announced that we will enable AWS with NVLink to integrate with their custom silicon.

Momentum is strong with our Spectrum-X Ethernet scale up and scale across networking as customers work to unify distributed data centers into integrated gigascale AI factories.

For the full year, our networking business exceeded $31 billion in revenue, up more than 10x compared to fiscal 2021, the year we acquired Mellanox…

…We’re also now the largest networking company in the world and if you look at Ethernet, we came into the Ethernet market about a couple of years ago into Ethernet switching. And I think that we’re probably the largest Ethernet networking company in the world today and surely will be soon…

…We created an Ethernet capability that extends Ethernet with artificial intelligence, a way of processing in the data center, and we’re incredibly good at that…

…NVLink 72 has enabled us to deliver generationally 50x more performance per watt. It’s just an incredible leap. And it’s sensible. NVLink 72 is a great invention. It was hard to do. The creation of the switching technology, disaggregating the switches, building the system racks, all of that, we did it all in plain sight, and everybody knew how hard it was for us to do. But the results are incredible.

NVIDIA’s management is seeing the company’s major customers (the hyperscalers) significantly increase their AI-related capex; the hyperscalers make up 50% of NVIDIA’s data center revenue; management thinks the hyperscalers’ revenues and cash flow will grow and this will generate more demand for NVIDIA’s systems because compute equals revenue; management is certain that agentic AI has reached an inflection point and the tokens generated are productive for users and profitable for the cloud service providers; management thinks inference tokens per watt translates directly into revenue for the CSPs (cloud services providers); management thinks all of NVIDIA’s major customers understand that without investing in compute, there can be no revenue growth

Analyst expectations for 2026 CapEx across the top 5 cloud providers and hyperscalers who collectively account for a little over 50% of our data center revenue are up nearly $120 billion since the start of the year and approaching $700 billion…

…[Question] When you look at your top cloud customers, cloud CapEx close to $700 billion this year, many investors are concerned that it would be harder for this level to grow into next year. And for several of them, their cash flow generation capability is also getting compressed. So I know you’re very confident about your road map, right, and your purchase commitments and whatnot, but how confident are you about your customers’ ability to continue to grow their CapEx?

[Answer] I am confident in their cash flow growing. And the reason for that is very simple. We have now seen the inflection of agentic AI and the usefulness of agents across the world and enterprises everywhere. You’re seeing incredible compute demand because of it. In this new world of AI, compute is revenues. Without compute, there’s no way to generate tokens. Without tokens, there’s no way to grow revenues. So in this new world of AI, compute equals revenues…

…I am certain that at this point, with the productive use of Codex and Claude Code, the excitement around Claude Cowork, the incredible enthusiasm about OpenClaw and the enterprise versions of them, and all of the enterprise ISVs who are now working on agentic systems on top of their tools platforms, I am certain at this point that we are at the inflection point, we’ve reached the inflection point, and we’re generating profitable tokens that are productive for customers and profitable for the cloud service providers…

…For the data centers, inference tokens per watt translate directly to the revenues of CSPs. And the reason for that is because everybody is power limited. No matter how many data centers you have, each data center, 100 megawatts or 1 gigawatt, has power limits. And so the architecture that has the best performance per watt wins, because each token is dollarized. Tokens per watt translates to dollars per watt, which, at a gigawatt, translates directly to revenues…

…Without investing capacity today, without investing in compute, there cannot be revenue growth. And that, I think everybody understands.
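
To make the tokens-per-watt argument concrete, here is a small, hypothetical back-of-the-envelope sketch. None of the numbers come from NVIDIA or the call; the power budget, efficiency, token price and utilisation are placeholder assumptions chosen only to show how performance per watt flows through to revenue for a power-limited data center.

```python
# Hypothetical back-of-the-envelope: revenue of a power-limited AI data center.
# All inputs are placeholder assumptions, not figures from the call.

power_budget_watts = 1e9            # a 1-gigawatt facility
tokens_per_second_per_watt = 5.0    # assumed inference efficiency of the deployed systems
price_per_million_tokens = 1.0      # assumed $ earned per 1 million tokens served
utilisation = 0.6                   # assumed average utilisation

tokens_per_second = power_budget_watts * tokens_per_second_per_watt * utilisation
seconds_per_year = 365 * 24 * 3600
annual_tokens = tokens_per_second * seconds_per_year

annual_revenue = annual_tokens / 1e6 * price_per_million_tokens
print(f"Annual token revenue: ${annual_revenue:,.0f}")

# Because power is the fixed constraint, doubling tokens-per-watt doubles
# annual_revenue for the same facility, which is the sense in which
# "performance per watt translates to dollars per watt".
```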

NVIDIA has yet to generate revenue from China and management does not know if the company’s AI chips will ever be allowed into China; management thinks China’s AI companies could disrupt the structure of the global AI industry over the long term

While small amounts of H200 products for China-based customers were approved by the U.S. government, we have yet to generate any revenue, and we do not know whether any imports will be allowed into China. Our competitors in China, bolstered by recent IPOs, are making progress and have the potential to disrupt the structure of the global AI industry over the long term. To sustain its leadership position in AI compute, America must engage every developer and be the platform of choice for every commercial business, including those in China.

There was strong growth in NVIDIA’s gaming segment in 2025 Q4 (FY2026 Q4) driven by the AI-capabilities of the company’s gaming systems; management thinks the memory supply for NVIDIA’s gaming systems is very tight

Gaming revenue of $3.7 billion increased 47% year-on-year, driven by strong Blackwell demand and improved supply. GeForce RTX is the leading platform for PC gamers, creators and developers. In Q4, we added several new technologies and advancements, including DLSS 4.5, which uses AI to bring game visuals to a new level, G-SYNC Pulsar, bringing incredibly clear graphics even in motion, and 35% faster LLM inference across leading AI PC frameworks…

…As much as we would love to have additional more supply, we do believe for a couple of quarters, it is going to be very tight. If things improve by the end of the year, there is an opportunity to think about what that is from a year-over-year growth. But it’s still too early for us to know at this time, and we’ll get back to you as soon as we can.

NVIDIA’s management expects tight supply for its advanced chip systems to persist

While we expect tightness in the supply for our advanced architectures to persist, we remain confident in our ability to capitalize on the growth opportunity ahead with our scale, expansive supply chain and the long-standing partnerships continuing to serve us well.

NVIDIA’s management is working on a partnership agreement with OpenAI and are thrilled with it

We continue to work with OpenAI toward a partnership agreement and believe we are close. We are thrilled with our ongoing partnership with OpenAI, a once-in-a-generation company that we’ve had the pleasure of partnering with since their first days.

NVIDIA’s management recently announced that Meta will be deploying Blackwells and Rubins, and NVIDIA’s networking systems, for training and inference

Meta Superintelligence Labs is scaling up at lightning speed. Last week, we announced that Meta is deploying millions of Blackwells and Rubin GPUs, NVIDIA CPUs and Spectrum-X Ethernet for training and inference.

High-profile AI startup Anthropic recently announced a partnership with NVIDIA, and will run training and inference workloads on NVIDIA’s systems

This quarter, we announced a partnership with Anthropic, and a $10 billion investment in their company. Anthropic will train and inference on Grace Blackwell and Vera Rubin systems.

NVIDIA’s management recently entered into a non-exclusive licensing agreement with Groq for low latency inference technology (as part of the agreement, Groq’s top leaders have joined NVIDIA); management intends to extend NVIDIA’s chip architecture with Groq’s technologies as an accelerator

We recently entered into a nonexclusive licensing agreement with Groq for its low latency inference technology and welcome the team of brilliant engineers to NVIDIA. As we did with Mellanox, we will extend NVIDIA’s architecture with Groq’s innovations to enable new levels of AI infrastructure performance and value…

…What we’ll do with Groq, you’ll come to see at GTC, but what we’ll do is we’ll extend our architecture with Groq as an accelerator, in very much the way that we extended NVIDIA’s architecture with Mellanox.

NVIDIA has made strategic investments across its ecosystem because management thinks the investments will help to expand and deepen NVIDIA’s reach into its ecosystem

[Question] You talked about some of the strategic investments that you’ve made into Anthropic and potentially OpenAI, CoreWeave as well but also partners, Intel, Nokia, Synopsys. You’re clearly at the center of everything. Can you talk about the role of those investments?

[Answer] We used to be largely a computing platform on GPUs, but now we’re an AI infrastructure company, and we have computing platforms on, well, every aspect of that, everything from computing to AI models to networking to our DPU; all of that has computing stacks on top of it. And as I mentioned before, whether it’s in enterprise or in manufacturing, industrial or science or robotics, each one of these ecosystems has different stacks. And we want to make sure that we continue to invest into our ecosystem. So our investments are focused very squarely, strategically, on expanding and deepening our ecosystem reach.

NVIDIA’s management thinks the adoption of dielet (chiplet) architectures should be delayed for as long as possible

Everybody should want to extend, to push out dielets as long as they can. And the reason for that is because every time you have a dielet, you have to cross an interface. Every time you cross an interface, you add latency, you add power unnecessarily. We’re not allergic to dielets. We use dielets already, but we try to use dielets only when we absolutely have no choice but to do so. And so if you look at the Grace Blackwell architecture and the Rubin architecture, we use 2 giant reticle-limited dies and we abut them, and that reduces the amount of interface crossing. The dielet [tax] shows up in the architecture effectiveness of the competitors.

The strategy of NVIDIA’s management is to deliver an entire AI infrastructure each year

Our strategy is to deliver an entire AI infrastructure every single year.

NVIDIA’s management thinks the economics for data centers in space is currently poor, but will get better over time; management thinks the heat-dissipation methods used on Earth will be different from those used in space; NVIDIA’s Hopper is already the world’s first GPU in space; management thinks one of the best use cases of GPUs in space is for imaging; management sees very interesting applications for AI in space

[Question] I’d like to ask about space data centers, which some of your customers are considering. How feasible do you think that is and what kind of horizon? And what do the economics look like today? And how do you think that could evolve over time?

[Answer] The economics are poor today, but they are going to improve over time. As you know, the way that space works is radically different than how it works down here. There’s an abundance of energy, and solar panels are large, but there’s plenty of space in space. On heat dissipation, it’s cold in space; however, there’s no airflow. And so the only way to dissipate heat is through conduction, and the radiators that you need to create are fairly large. Liquid cooling is obviously out of the question because it’s heavy and freezes. And so the methods that we use here on earth are a little different than the way we would do it in space. But there are many different computing problems that really want to be done in space. And so NVIDIA is already the world’s first GPU in space; Hopper is in space.

And one of the best use cases of GPUs in space is imaging, to be able to image at extremely high resolutions using, of course, optics and artificial intelligence, and to be able to do that computation of reprojection of different angles, to up-res and do noise reduction, to be able to image at very high resolutions, at extremely large scales and very, very fast. It’s hard to do that by sending petabytes and petabytes of imaging data back here to earth and doing that work. It’s easier just to do it out in space, and then ignore all of the data collected and processed until you see something interesting. And so artificial intelligence in space will have very good, very interesting applications.

All 1.5 million AI models on Hugging Face run on NVIDIA’s CUDA software

There’s 1.5 million AI models on Hugging Face, all of it runs on NVIDIA CUDA.

NVIDIA’s management has designed CPUs for AI data centers that are very different from the CPUs designed by other companies; NVIDIA’s CPU is the only data center CPU that supports LPDDR5, and it is designed to have very high data processing capabilities; in agentic AI, the tools used by agents often run on CPUs only; NVIDIA’s Vera CPU was designed to be an excellent CPU for the post-training process of an AI model; some use cases in AI require a lot of CPUs; at the current phase of development of AI technologies, really fast, single-threaded CPUs are required; NVIDIA’s Grace and Vera CPUs are both great at single-threaded performance, but Vera is much better

At the highest level, we made fundamentally different architecture decisions about our CPUs compared to the rest of the world’s CPUs. It’s the only data center CPU that supports LPDDR5. It is designed to be focused on very high data processing capabilities. And the reason for that is because most of the computing problems that we’re interested in are data-driven, artificial intelligence being one. And the single-threaded performance and its ratio with bandwidth is just off the charts.

And we made those architectural decisions because of the different phases of AI: before you even do training, you have to do data processing. So you have data processing, pre-training, and in post-training now, the AIs are learning how to use tools. And the usage of tools, many of those tools, run in CPU-only environments or they run in CPU-plus-GPU-accelerated environments. And Vera was designed to be an excellent CPU for post-training. And so some of the use cases in the entire pipeline of artificial intelligence include using a lot of CPUs. We love CPUs as well as GPUs. And when you accelerate the algorithms to the limit as we have, Amdahl’s Law would suggest that you need really, really fast single-threaded CPUs, and that’s the reason why we built Grace to be extraordinary, to be great at single-threaded performance, and Vera is off the charts better than that.
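
For readers who want the reference spelled out, Amdahl’s Law (stated here in its standard textbook form; the formula is not from the call) bounds the overall speedup when a fraction p of a workload is accelerated by a factor s:

$$\text{Speedup}(p, s) \;=\; \frac{1}{(1 - p) + \frac{p}{s}} \;\le\; \frac{1}{1 - p}$$

Once the GPU-accelerated fraction p is pushed close to its limit, the remaining serial, CPU-bound slice (1 - p) dominates the runtime, which is why very fast single-threaded CPU performance still matters.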

Salesforce (NYSE: CRM)

This is not the first SaaSpocalypse Salesforce’s management has been through; management thinks a SaaSquatch will be eating the SaaSpocalypse because the SaaS companies will be getting a lot better by also providing AaaS (agents-as-a-service)

This is not our first SaaSpocalypse; we have been through many SaaSpocalypses. I remember the horrible SaaSpocalypse of 2020 when not only was the software industry dying, but we were all dying. But we made it through that. And now everyone is back, doing great, so we’re so grateful to make it through that, and we’re going to make it through this one as well…

…If there is a SaaSpocalypse, I think it might be eaten by the SaaSquatch, because there are a lot of companies using a lot of SaaS, because SaaS just got a lot better with agents-as-a-service.

In Agentforce’s first 15 months, Salesforce has closed 29,000 deals, up 50% sequentially; customers in production with Agentforce are up 50% sequentially in 2025 Q4 (FY2026 Q4); Agentforce and Data 360 reached nearly $2.9 billion in ARR (annual recurring revenue) in 2025 Q4 (FY2026 Q4), up 200% year-on-year (it was $1.4 billion in 2025 Q3, up 114% year-on-year), including Informatica; more than 75% of Salesforce’s top 100 wins in 2025 Q4 (FY2026 Q4) had both Agentforce and Data 360; management thinks Agentforce has the potential to be similar in size to Salesforce’s current software business; Agentforce ARR reached $800 million in 2025 Q4 (FY2026 Q4), up 169% year-on-year (it was $540 million in 2025 Q3, up 330%); Salesforce’s most premium SKUs related to Agentforce saw new bookings nearly triple sequentially in 2025 Q4 (FY2026 Q4); more than 60% of Agentforce and Data 360 bookings in 2025 Q4 (FY2026 Q4) came from existing customers expanding their commitments; all of Salesforce’s top 10 wins in 2025 Q4 (FY2026 Q4) included Agentforce and Data 360; Informatica was in 6 of Salesforce’s top 10 wins; management thinks Agentforce brings incremental value to Salesforce’s software

We’re seeing incredible demand for Agentforce. In its first 15 months, we closed 29,000 deals, up 50% quarter-over-quarter. Customers in production have increased as well, nearly 50% in Q4…

…Our Agentforce and Data 360 ARR, including Informatica, now exceeds $2.9 billion. I heard ARR doesn’t matter anymore. But in case it does, we have $2.9 billion, up 200% year-over-year. More than 75% of our top 100 wins in Q4 included both Agentforce and Data 360…

…I can’t tell you when. Agentforce is like about an $800 million business now. So I can’t tell you exactly when Agentforce will be a $46 billion or $30 billion business. But it has the potential to go just like… 46×3 is 120 plus 18…

…Agentforce and Data 360 ARR, inclusive of Informatica Cloud ARR, reached $2.9 billion. That’s up over 200% year-over-year. This includes Informatica Cloud ARR of $1.1 billion and Agentforce ARR of approximately $800 million, which is up 169% year-over-year. New bookings for Agentforce 1 Edition and Agentforce For Apps, or as we call it, A4X, our most premium SKUs, nearly tripled quarter-over-quarter…

…In the quarter, more than 60% of Agentforce and Data 360 bookings came from existing customers expanding their commitments…

…Every single one of our top 10 wins included Agentforce, Data, sales, service, platform and analytics. Our newest addition to our portfolio, Informatica, landed in 6 of those top 10 wins, proving it is a critical component of us building the data foundation for the agentic enterprise…

…What we see is now with Agentforce with the system that you laid out, the system with the agents, et cetera, we’re just seeing incremental value to our software.

Salesforce’s management launched Agentforce for life sciences in 2025 (FY2026) and it has won many global pharma companies, including existing customers of Veeva Systems

We built an amazing new life sciences product this year, Agentforce for Life Sciences, and since we launched, so many of the global pharma companies, and I’ve met with so many of the CEOs myself, they’re leaving Veeva, the purgatory of Veeva, including AstraZeneca, Novartis, Takeda and, of course, Albert at Pfizer. They’re all saying that they are going to Salesforce Life Sciences, which is a product that has apps and agents. And this is amazing. They are the most regulated businesses in the world, and they’re choosing Salesforce.

When a Slack customer turns on Slack Bot, the bot will be able to look at the customer’s data and understand the customer’s business and provide advice and support; Salesforce is able to bring all the LLMs (large language models) from the AI labs into Data 360 to activate AI agents, and have Slack be the Salesforce layer that engages with, manages, and orchestrates AI agents; 90% of Forbes’ top 50 AI companies are using Salesforce and Slack; Slack handles 1 billion messages a day, and they are all about work; Slack Bot is able to orchestrate other agents; management thinks Slack Bot has the #1 AI ecosystem in the world with its partner marketplace having more than 350 AI apps and agents; the high-profile AI start-up Anthropic runs its business and products on Slack; management thinks every AI company runs their businesses on Slack; management sees the UI (user interface) of apps changing in the AI era because of the combination of humans and agents working together, and Slack is the best place to get work done between humans and agents with this changed-UI environment; management thinks Slack might be the most important piece of data Salesforce possess in the AI era

Customers tell me that they want to basically get to that next level. And the way to do that is by including this context, the ability for the AI, the data, to know you. No better example of that than Slack Bot: immediately, as you turn it on as a Slack customer, it looks at all your Slack. It looks at your DMs, it looks through Salesforce, it looks through Google. It looks even at Microsoft Teams, as hard as that is for some agents to go and do, but we’ve told them how to do it. And then it says, I understand your business, and I can give you help, advice, support…

…We love all of our children equally, and down below here, whether it’s Anthropic or OpenAI or Mistral or Llama, all of them, and there’s more coming, they’re amazing. World models are coming. They’re amazing. They’re all down below here, and we’re using them. And then, of course, we bring them into Data 360, and that lets you harmonize your data, integrate your data and federate, which means connect into other data sources throughout your company and grab them, other data repositories. You might be using Snowflake or Databricks, you might be using BigQuery or anything, even IBM mainframes, and you can bring it into Data 360. You activate your data and then it comes up into your apps. So if you’re using the service app, and you want to have an experience like help.salesforce.com for your company, now the service app has that agentic capability, the data is coming up, and it comes up to the next level to Agentforce, and you can build your agents, train your agents, put the guardrails in your agents, give them voice. They can talk now, they’re talking. And then, all of a sudden, you can even manage and orchestrate and collaborate from Slack. So this is our architecture…

…Agentforce has the tooling to build, to manage, to orchestrate the agents, to make them talk, to give them determinism, to give them the capabilities if they want. And then we have the engagement layer to deliver agentic enterprises, where work happens in Slack across our apps…

…Nearly 90% of Forbes top 50 AI companies, Forbes top 50 AI companies, use Salesforce and Slack…

…Slack is hosting 1 billion messages a day. And remember, every one of them is about getting work done…

…Slack Bot can access all of those messages as well as your files, your calendar, your Salesforce, your Google, your Microsoft Teams. Slack Bot goes around, pulls it all together, and then it knows your business. So then it’s able to orchestrate with other agents. It has an incredible partner marketplace, really the #1 AI ecosystem in the world, and has more than 350 AI apps and agents already. There is no other AI ecosystem like it…

…We love Anthropic. We love Dario and Daniela. I tweeted about what they did yesterday, incredible demo. Just yesterday, Dario demonstrated how he is doing something amazing with Salesforce in the enterprise. Every single one of their demos, whether it was for HR, engineering or investment banking, started and ended in Slack. Pretty awesome…

…Anthropic runs its whole global operation on Salesforce and Slack. I think actually every AI company does…

…Everybody through the past few years has been so enamored with the model, of course, it’s this brand-new thing, this intelligence layer that we never had, but also the data. But what’s really happening around us is the apps are changing, the UI is changing, as Miguel is alluding to. And that’s really what we’re seeing, because these old apps of point-and-click buttons were designed for human beings to interact with. But what happens when you have human beings and agents in the same place? Suddenly, a lot of those interactions, those UI paradigms, kind of get thrown away. You don’t need all of this complex UI anymore. And that’s what makes Slack so powerful, and I think that’s what Anthropic knows. I think that’s what we saw in their demos yesterday, right? You kind of process the work, but ultimately that work is getting done because some person or some agent is asking for it, and then you need to give it back to that person or that agent. And where do you do that? You do that in Slack. And that’s what makes Slack Bot so unbelievably powerful: you never have to leave. And of course, it’s powered by Claude. We love our partners at Anthropic, but it knows all of the context of your business, not just the context of your systems of record as we think about it, but all of the conversations happening inside of Slack, and it has access to all of that, and the knowledge that it gains from that is truly unmatched. It might be our most important piece of data that we have.

Salesforce’s management thinks companies will be deploying hundreds or thousand different types of agents, and many of them will be from Salesforce; management thinks the deployed agents will need a home, and the home is Salesforce

We already know now, our customers aren’t going to deploy just one agent. There are going to be many agents, many capabilities, the ability to automate many different types of work, and they’re going to deploy hundreds or thousands. Many are going to be from us…

…But these agents can’t work in isolation; each one of them needs a home. So that home is Salesforce. And they are calling us through the MCP server or maybe even just through one of our core platforms, and the more agents that a company deploys, ours or anyone else’s, the more essential our platform becomes.

Salesforce’s management sees the company as one of the largest consumers of tokens in the world with 19 trillion tokens consumed to-date; management has introduced a new metric, Agentic Work Unit (AWU) that measures how much work agents have performed; Salesforce has delivered 2.4 billion AWUs to-date and 771 million AWUs in 2025 Q4 (FY2026 Q4), up 57% sequentially; AWUs came about because management wanted to really look at the ratio of tokens consumed to effective work produced

We are one of the largest consumers of tokens in the world, to date now over 19 trillion tokens. So we continue to show you that because we want you to see that we’re actually doing what we say. I know that there have been some enterprise software companies who say they’re doing agents or they’re doing AI, but then they’re not showing up in the token rankings from the language model companies…

…Today, we’re introducing an additional metric, the agentic work unit, created by our very own Patrick Stokes sitting here at the table. The AWU, not to be confused with our customer AWS. An AWU represents one unit of AI work, an agentic work unit. We’re rolling this out to see how you like it, actually, here in earnings. It’s a record updated, a workflow triggered, a decision made, an MCP call. And to date, AI agents on the Salesforce platform have delivered 2.4 billion agentic work units. That is where AI isn’t just thinking or calling things, it’s getting work done, transactions. And in Q4 alone, we delivered about 771 million of them…

…When we started looking at that across our customers, we could start to see, okay, our top 10 customers are consuming this many tokens. We know how many tokens Salesforce is consuming internally. But it begs the question, well, are they doing anything? Are they working? Are they providing any value? Or is it just input and output of intelligence, right? So you can ask it a question, it can write you a poem, but that’s not really all that valuable in the enterprise world. What’s valuable is creating a document for you or updating a record, or helping us right here at this table; we all use Slack Bot to prepare our notes here, our customer stories, we’re all preparing that with Slack Bot. So what we did is we said, what if we could count those individual work units? And then what if we could look at those work units relative to the tokens? And we said, oh, there’s a relationship between the two. We can start to see a ratio of tokens being consumed and work coming out…

…The tokens are kind of a leading indicator, but the work unit we think is a much more valuable indicator in terms of where the value is actually coming from for our customers and for our own transformation into an agentic enterprise.
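
As a rough illustration of the tokens-to-work ratio being described, here is a hypothetical sketch using the two cumulative figures quoted above (19 trillion tokens and 2.4 billion AWUs); treating them as directly comparable totals is my assumption, not something stated on the call.

```python
# Hypothetical ratio of tokens consumed to agentic work units (AWUs) delivered.
# Figures are the cumulative totals quoted on the call; comparability is assumed.
tokens_consumed = 19e12        # ~19 trillion tokens to date
agentic_work_units = 2.4e9     # ~2.4 billion AWUs to date

tokens_per_awu = tokens_consumed / agentic_work_units
print(f"Roughly {tokens_per_awu:,.0f} tokens consumed per unit of agentic work")
# Tracking this ratio over time is the point of the metric: a falling tokens-per-AWU
# figure would suggest more useful work extracted per unit of model consumption.
```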

Salesforce’s service organisation did well in 2025 (FY2026) because it used its own Service Cloud with an omnichannel supervisor deployed with Agentforce; Salesforce’s sales organisation did well in 2025 Q4 (FY2026 Q4) because it deployed multiple agents; Salesforce used Agentforce to call back 50,000 customers in 2025 Q4 (FY2026 Q4) that it otherwise would not have called back

Our service is so much better this year because we’re using our new Service Cloud with our omnichannel supervisor deployed with Agentforce. Our sales, Miguel just hit record sales numbers, you can see them. We’ve never sold or had so much ACV in our history in the fourth quarter, because not only does he have 15,000 account executives, but he has all these agents who are out there doing this amazing work…

…Because, like, I believe there’s 20 million, 30 million, we don’t even know, maybe 100 million people we didn’t call back in the last 26 years. But Miguel called back 50,000 people with agents last week that we would not have gotten to. Even though he’s got all these reps, he still doesn’t have the ability to call everybody back.

Salesforce has invested a total of $330 million in Anthropic, for about 1% of the company and management wishes they had invested more; management thinks the AI models could become platforms in the future, but the reality of today is that software companies are needed to get humans and agents to work together to deliver the desired outcomes of organisations; management thinks that SaaS is needed to convert the raw intelligence of AI models into reliable, accurate enterprise work, and Salesforce is in a great position because it is the system of work and system of agency, and it is already proven in 4,000 production customers

We’re so thrilled with our relationship with Dario, and I think we just put another $100 million into the new round. We’re up to about $330 million invested in Anthropic. It is about 1% of Anthropic. And believe me, I wish we had invested a lot more, John. I don’t know why we didn’t do more…

Could those models themselves become platforms? So could OpenAI then also be a platform? Could Anthropic be a platform, can Gemini be a platform, can DeepSeek be a platform, can Mistral be a platform, can Llama be its own platform? So that in the way that we have Windows and Mac, or HTML, or different things as platforms where applications all of a sudden appear, will an application all of a sudden come in within one of those platforms and then use some of those services? Absolutely. Those could be new platforms. There will also be other new platforms. I have a platform right here as well: iOS. There are many platforms.

And our job as a software company is to help our customers to create success and to take that and help them connect with their customers in a whole new way. So we’ll deliver our products, our capabilities, our value proposition with our customer relationships, of course, we have over 150,000, I think, customers on our core, 1 million on Slack. We have 15,000 sales reps who are out there. Their job is to work with customers to help architect their future success with these ideas.

And our primary vision, though, today, because this, in the current reality, is about humans and agents working together. And these customers, like you saw today with Wyndham, with SharkNinja, even SaaStr, even Salesforce, our job is to take what’s available today and make it successful. And that isn’t where those platforms are today, as you know. And in your business, you work for an amazing company. Keith works for an amazing company. And these large banks, where we are providing a lot of automation for the sales professionals, the service professionals. There is a lot to do, not only to automate those call centers, those contact centers, the sales forces, the employees with Slack, but then to also unleash the agents in a way that is compliant, that is secure, that is available, that is scalable, that is reliable, that is able to operate hand in hand…

…SaaS is more important than ever. In the world of LLMs, I mean, we are so happy that this raw intelligence exists, but to convert raw intelligence into reliable, accurate, scalable enterprise work, you need a software infrastructure like the one that Marc described with our 4 layers: the system of context, the system of work, this is our big differentiator… We are the system of work. We have the system of agency, very sophisticated. Some companies are building it, whatever, but we have the best because we are proven in 4,000 production customers, 23,000 total customers. Nobody has that at the scale and the complexity, because our agents are connected to the data, able to trigger actions, and then we have the system of engagement, which is Slack.

Consumer products company SharkNinja used Salesforce to build a guided shopping agent in 8 weeks right before the holiday season; the shopping agent brought tremendous value to consumers; SharkNinja launched with Salesforce in 2025 Q4 (FY2026 Q4) and Salesforce agents have already participated in 0.25 million consumer engagements; the Salesforce agents have helped SharkNinja provide a better service experience for customers while lowering customer service costs

[SharkNinja executive] We set up with you and your team, a guided shopping agent in 8 weeks right before the holiday season. I was nervous about it as I went to my team and I said, we’re putting this in place in October. There’s generally kind of a cutoff in our business where after October 1, you don’t really do anything. And we launched this in 8 weeks, and it brought tremendous value to the consumer. I mean, it helped them with researching and buying and troubleshooting really all in one seamless conversation. So it was a great success for us this holiday season…

…[SharkNinja executive] Since we launched Salesforce in Q4, I mean, agents have participated in 0.25 million consumer engagements during that period of time… We put so many products out into the market and sometimes that many products creates complexity for the consumer. And so whether they’re calling about a service issue or a troubleshooting issue or where is my order issue, it’s allowed our customer service agents to focus on really the really challenging issues, and it’s freed up an enormous amount of time for them — it’s a win for the consumer because the consumer is getting their questions answered quickly, they’re not waiting. And it’s a win for us because it’s driving down cost. And it’s, in the end, just having a better service experience.

Hotel company Wyndham deployed Agentforce a year ago and now has 5,000 agent deployments across its 8,300 hotels; Agentforce is a crucial part of Wyndham’s agentic platform and Wyndham is starting to roll out the agentic experience internationally; Wyndham has used Salesforce’s products to build a single source of truth about each customer, called Wyndham Guest 360; Wyndham Guest 360 is a key enabler of Wyndham’s agentic experience; Wyndham’s management thinks agents are (1) saving significant labour costs in Wyndham’s operations and (2) driving higher revenue; before Wyndham was integrated with Salesforce, Wyndham had to spend time gathering basic information about every guest; Wyndham saw a 200 basis point increase in direct bookings from AI voice agent conversions; Wyndham’s guest satisfaction scores are up 400 basis points because of its agentic experience

[Wyndham executive] When you think about just how far we’ve come in the last year, today, we have over 5,000 deployments of Agentforce across our over 8,300 hotels. It is a huge, huge part of our agentic platform, and we are really just getting started. We’re starting to roll out to Canada and internationally.

But with Salesforce tools like MuleSoft and Data 360, we have built a single source of truth, unified all of our guest reservation information and data, all of their loyalty information and all of their CRM data so that all of our agents now are operating with the same trusted and real-time guest and hotel information, which they weren’t before. We’re calling it Wyndham Guest 360. It is a key enabler for our agent foundry. And it is delivering better guest experiences, improving those experiences and building increased loyalty engagement.

But most importantly, Marc, you’ve talked a lot about labor, and this is agentic — it is taking millions of dollars of labor costs from our small business owners in the front office out of their operation, and it is driving millions of dollars of increased revenue for these franchisees…

…Before our integration with you all, our agents had to spend time gathering basic guest information on who Marc Benioff was before he checked in tonight. And that was not easily at their fingertips or, even worse, asking Marc for his information that we should have had — and our agents now have encyclopedic knowledge. Think about it: all of your guest history, all of your booking behavior, all of your loyalty status, because we tied it all together, giving us an ability to answer any question imaginable that any guest like you might have before you check in tonight, before you stay — in moments, not minutes — and we’re booking you into your preferred room based on our knowledge, our Salesforce guest knowledge, of your past stay history. We are successfully working now, I hope, to upsell you a suite upgrade if we haven’t already, an early check-in — sounds like you’re getting in late — and a late checkout tomorrow if you’d like one. I don’t know if you’re bringing — if you have pets, but if you were, those agents would be selling you a pet package or an F&B offer. This is all being done autonomously, which small business owners and operators would not have had time to do before.

We have been working so hard. It is generating so much money. We’re seeing faster average speeds of answer, zero hold times. I’ve heard you talk a lot about why no customer should wait, and that’s why we’re doing it. We’re removing, more importantly, millions and millions of dollars, as I said, in the front office, but we’re generating millions of dollars of increased ancillary revenues for these small business owners. It’s not costing anything…

…We’re also seeing, which is really, really important, a 200 basis point increase in direct bookings from AI voice agents and AI voice agent conversion, versus having to get those bookings through expensive third-party online travel agencies. That is increasing guest satisfaction. Our guest satisfaction scores are up 400 basis points; they’ve never been higher. And this customer experience that we’ve created is more efficient. Again, humans with agents driving customer success — we’re agent first, and we’re very proud of it.

The management of SaaS community builder SaaStr thinks agents-as-a-service is good for Salesforce; SaaStr used Agentforce to close $2.7 million in contracts, with $3.5 million more in the pipeline; SaaStr’s management thinks complex agentic work was simply not possible a year ago because of hallucinations, but the situation has changed; with the help of Agentforce, SaaStr recently called back 3,000 customers that it had previously failed to reach; SaaStr’s management thinks Agentforce has the potential to be similar in size to Salesforce’s current software business

[Salesforce executive] Now here you are in agents-as-a-service as well. You have your vision there now as well. So I guess once a visionary, always a visionary — but give us your vision then. Where are we going? Because you’ve heard about the SaaSpocalypse. And you know that this isn’t our first SaaSpocalypse. We’ve had a few of them. But now where are we going over the next couple of years?

[SaaStr executive] I think this is good for Salesforce, but I think we’re underestimating how powerful these agents are. Look, for most people, AI is confusing, the media is confusing — what the hell is going on? Let me simplify this. I was just looking at our numbers on Agentforce this morning. So far — and again, we’re a small organization — we went from 15 humans to 2.5, and 20 agents, okay? That’s a lot of change. But on Agentforce alone, as a tiny organization, we closed $2.7 million. That’s not the army contract you got, but that’s a lot for us — $2.7 million with an agent, and we have $3.5 million more in the pipeline…

…[SaaStr executive] Not only was this not really possible a year ago — the problem, for all of us, was that we were using ChatGPT in the early days and it was all hallucinations. It was hard to believe this stuff would work even 18 months ago, wasn’t it? It was hard to believe, but everything got okay last summer, and then at the end of the year, it got great. And there’s reasons that Salesforce got great, but to be nerdy, even Anthropic, your customer, when they rolled out these 4-dot models, up to 4.5, for B2B stuff like we do, it wasn’t a little bit better. It was jaw-droppingly better. The hallucinations are now lower than the mistakes a human makes, and on the productivity side…

…[SaaStr executive] We did 3,000 with Agentforce. And for one — I was just looking at a couple of examples — we closed a $250,000 customer this week. But the first one with Agentforce was Freshworks. You know Freshworks. They do support and a bunch of other stuff — but they’ve changed. Girish isn’t the CEO anymore. The marketing team has turned over. We don’t know anybody. The agent found the right person and closed the deal. That’s sort of magical. That wouldn’t really have been possible without agents…

…[SaaStr executive] I think Agentforce — and I’m not being facetious — I think it will be $150 billion at the table because I think the value is about 3x the software.

Salesforce’s management thinks token prices are going to decrease over time and commoditise; management thinks Salesforce’s gross margin will not be affected even with all the agentic work Salesforce is doing

Tokens, those prices, we’re working with our various partners, those are going to start to go down over time and commoditize…

…Short term, we don’t see gross margins getting worse — fairly neutral over the long term. We’re doing everything in conjunction with our FY ’27 framework and our overall operating margin improvement to continue to get efficiencies in gross margin and operating margin.

Salesforce’s management has 3 ways of monetising AI, which are (1) upgrading seats to premium SKUs, (2) gaining access to new seats that customers previously couldn’t justify because the agentic experience now provides good ROI to customers, and (3) the sale of flex credits

We have found the formula to monetize AI. There are three distinct ways, and these are the main ones that we are using to monetize AI. Number one is our large installed base of 100 million seats, which we are upgrading to our premium SKUs that contain already-embedded AI and unlimited agentic access for employee use cases. Number one…

…The second way to monetize — this is very peculiar because now our apps are Agentforce Sales, Agentforce Service; all of them are agentic. So now the ROI that companies generate by implementing our apps has increased. So now we have access to new seats where, before, companies couldn’t afford to roll out Salesforce or any of our apps.

And the third way is for customer-facing agent use cases: agents we sell through credits, flex credits. And companies — if you look at the bookings of Agentforce in Q4, 50% were credits, flex credits, and 50% were higher SKUs.

The Trade Desk (NASDAQ: TTD)

Nearly 100% of Trade Desk’s clients are running through Kokai today (was 85% in 2025 Q3); management thinks Kokai is the most advanced AI-powered advertising-buying platform for the open internet; Kokai has enhanced every unique function in the advertising valuation process with AI; Kokai is an upgrade over Solimar in every aspect; Cheerios used retail data and Kokai to achieve 88% more conversions and a 7x better CPA (cost per acquisition); Deal Desk is a new innovation in Kokai that allows advertisers to centralise their deal creation, management, and analysis; prior to Deal Desk, advertisers have increasingly sought one-to-one deals, which has led to inefficiency in the supply chain; Deal Desk uses AI to forecast the performance of a deal; early results of Deal Desk are promising, with Deal Desk deals meaningfully outperforming legacy deals; more suppliers are signing up for Deal Desk, and 2 of the biggest SSPs (supply side platforms) in Germany recently announced an integration with Deal Desk; IKEA used Kokai to achieve a 17% reduction in CPA; Best Western used Kokai and achieved an 89% improvement in incremental reach

Almost 100% of our clients are running through Kokai today. We think Kokai is the most advanced AI-fueled buying platform ever pointed at the open internet. Kokai broke advertising into the basic elements of an advertising campaign and enabled every unique function in the valuation process to be enhanced with AI. From identity probabilities to valuing impressions to predicting performance to forecasting spend to predicting the right clearing price to detecting auction manipulation or even fraud to generating creatives to supply path optimizations or to even surfacing insights that could once easily be buried in a mountain of data. Kokai and AI enhanced and upgraded nearly every part of Solimar…

…Cheerios ran a display campaign in the U.K. recently using retail data for audience targeting on Kokai. They saw 88% more conversions and 7x better CPA…

…I want to share one more innovation built on Kokai, and that’s Deal Desk. Complexity has brought many advertisers to seek out one-to-one deals as a means of simplifying supply chains, much like they used to in a nondigital world. But in that process, some buyers have inadvertently given up buy-side decisioning power, especially in CTV. They have also given rise to inefficient supply chains or inadvertent oxygen to some bad players that a more efficient supply chain would not allow for. Deals can be a way to leverage size and get a better deal, but measuring the deal’s outcomes becomes very important. It is easier to do a bad deal than ever, especially when pursuing cheap cost. Historically, 90% of deal IDs never scaled, either because they were set up poorly, hard to troubleshoot or simply didn’t perform. Deal Desk centralizes the way buyers create, manage and analyze their deals. It uses AI to forecast how a deal is likely to perform relative to the open market and then highlights where things may go off track. Early results are encouraging. So far, deals that are set up and managed through Deal Desk are performing meaningfully better than those managed the legacy way. More suppliers are signing up for Deal Desk every week. Deal Desk is in early stages, but it is rolling out around the world. Most recently, the 2 biggest SSPs in Germany announced that they are integrating with it…

…IKEA, for example, is using Kokai to get a more intelligent perspective on how their ads perform across all channels. Thanks to Kokai’s AI-fueled omnichannel optimization, they saw cost per acquisition decrease by 17%, while also gaining valuable new insights on the effectiveness of different channel activations at different stages of the customer journey…

…Best Western saw their booking rate double when using Kokai to target live sports opportunities, thanks to an 89% improvement in incremental reach with Kokai.

Every engineer at Trade Desk is using AI coding tools; management has injected AI tools across Trade Desk, resulting in higher productivity

The most obvious of AI’s features is that it is a productivity enhancer. As one example, every engineer at TTD is using AI tools to write and/or test code. We’ve injected AI tools across the company and productivity is going up.

Trade Desk’s management thinks the company’s business will benefit more from AI than any of its competitors; Trade Desk’s scaled competitors are selling their owned and operated (O&O) inventory, whereas Trade Desk does not, and this aligns Trade Desk’s interests with ad buyers; management thinks the buying platform with the most objectivity and the most trust is the one most likely to win; Trade Desk is trying to make millions of complicated decisions every second and AI can help with this

We think our business model is more conducive and will benefit more from AI than any of our competitors. Every scaled competitor we have is first and foremost, selling their owned and operated inventory, O&O. We don’t have O&O. We have aligned our interest with buyers, and that is even more valuable in the AI-fueled ecosystem. AI makes it easier to make better decisions for advertisers and match the best ad opportunities. Valuable data like advertisers’ first-party data is way more valuable in an AI world. Retail data is more valuable in an AI world. The buying platform with the most objectivity and the most trust is the one most likely to create the most scale and win the most market share. At The Trade Desk, we have built the industry’s most advanced, trusted and objective data set, which is based on factors like these, 20 million ad opportunities every second, each with thousands of data variables and each valued objectively. 

Our clients’ valuable first-party data, which they trust us with, that we will never jeopardize. The industry’s most scaled data marketplace, including most of the world’s leading retailers, close integrations with thousands of suppliers and publishers across channels. In short, we are trying to make millions of complicated decisions every second based on massive data sets. This assignment can obviously be enhanced with AI…

…I don’t think there’s any company in our industry that is better positioned to take advantage of advances in AI.

Trade Desk’s management thinks that platforms that have scaled, unique data and that are trusted are in a great position to leverage AI, including agentic AI

There is an emerging narrative that AI will compress software value or disintermediate platforms altogether. That might be true for some SaaS businesses, especially those that deal in generic process or low-grade data. However, for platforms that have earned the trust of their clients and partners and have amassed data that is scaled, unique, refined and actionable, they are in the perfect position to leverage advances in AI to add more value…

…We are convinced that Agentic AI will ultimately accrete the most value to companies that already have deep customer trust that have scaled, refined and objective data sets and that prioritize objectivity, not by companies with limited data hoping an AI framework becomes their business model.

Trade Desk’s management recently introduced Audience Unlimited, a new data marketplace; management thinks Audience Unlimited will benefit the entire digital advertising ecosystem in the AI era; management thinks 3rd-party data and retail data have been massively underutilised since the advent of programmatic advertising because of a lack of price discovery for the data; Audience Unlimited helps advertisers use the most relevant data for a given campaign at an all-in cost; Audience Unlimited was not possible to build before the arrival of agentic AI; management has already seen very positive results with early adopters of Audience Unlimited; the rollout of Audience Unlimited will enable Trade Desk’s partners to use more agentic AI; management sees data as being more powerful when it can be used in AI instead of in a simple algorithm

Audience Unlimited is one of our biggest innovations ever. This will change the usage and value of the data marketplace for both buyers and sellers, and we think that agencies, advertisers, data providers and retailers will all benefit from this innovation, and it is essential in this new AI-fueled world. There has been massive underutilization of third-party data and retail data, in particular, since the advent of programmatic about 20 years ago. I have argued that the data marketplace is anemic for one primary reason. There is no price discovery for data. The cost has really been complicated for marketers. So generally, they don’t use it. We can see though that the value is obvious, especially leveraging AI. And using a flat cost structure, Audience Unlimited helps advertisers use a wider range of the most relevant data for any given campaign at an all-in cost, where value and impact are clearly understood. This innovation wasn’t possible before advances in AI, particularly Agentic AI in this case, which allows us to surface the right data segment at the right moment. Of course, Audience Unlimited is completely optional. Clients can use it or continue to buy third-party data a la carte. We are already seeing very positive results with early adopters, and I’m excited for more advertisers to get access as this year progresses…

…The Audience Unlimited rollout is part of a much bigger effort to reform measurement and enable our partners to use more agentic AI as well…

…In this AI-fueled world, the third-party data ecosystem that we power using things like Audience Unlimited and other things, all these innovations are meant to make it easier to bring data onto the platform and make it more powerful in an AI-fueled world. It just inherently is more powerful when you can use it in AI instead of a simple algo or a basic bid factor.

Trade Desk’s management thinks agentic AI is the best thing to happen to programmatic advertising because it makes it easier to make decisions in a very complex environment

Agentic AI, I believe, will be the best thing that ever happened to programmatic advertising. And it’s because it makes decisioning in a very complicated environment easier. And when I say easier, I don’t mean that the nature of the market is getting less complex. I mean that the power of man and machine together can reason through this really complicated decision that is in front of an advertiser, which is should I buy this ad that literally you’re deciding in milliseconds. So Agentic is just an amazing tool to use in that environment.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet (parent of Google), Amazon (parent of Amazon Web Services), Coupang, MercadoLibre, Meta Platforms, Microsoft, MongoDB, Nu Holdings, and Salesforce. Holdings are subject to change at any time.

More Of The Latest Thoughts From American Technology Companies On AI (2025 Q4)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q4 earnings season.

Earlier this month, I published The Latest Thoughts From American Technology Companies On AI (2025 Q4). In it, I shared commentary in earnings conference calls for the fourth quarter of 2025, from the leaders of US-listed technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2025’s fourth quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

With that, here are the latest commentary, in no particular order:

Adyen (OTC: ADYYF)

 Adyen’s platform now has Dynamic Identification, which enables real-time decisioning that improves conversion, reduces cost, and manages risk with greater precision; Dynamic Identification enables agentic commerce; 95% of Black Friday Cyber Monday shoppers were recognised by Dynamic Identification across online and in-store channels; Dynamic Identification was created to address the challenges AI was posing to document-based approaches to identity and risk; Dynamic Identification uses AI to draw insights from trillions of interactions across Adyen’s online and in-person flows, instead of performing static checks; Dynamic Identification powers Adyen Uplift, which makes payments decisions that balance conversion, cost, and risk; Dynamic Identification is the foundation for the Personalise module in Adyen Uplift that was developed in 2025 H2; Dynamic Identification helps merchants deal with policy abuse that includes exploitation of returns, promotions, and refunds; Dynamic Identification helped a global luxury group and a large sports and entertainment company identify highly problematic shoppers that were previously undetected; Dynamic Identification is not a product itself

Dynamic Identification adds an intelligence layer to our platform, enabling real-time decisioning that improves conversion, reduces cost, and manages risk with greater precision as our customers scale across channels…

…This new foundational layer also addresses policy abuse and enables emerging models such as agent-led commerce. Peak events validate the strength of this new layer, with ~95% of Black Friday Cyber Monday shoppers recognized across online and in-store channels…

…Advances in AI, increasingly sophisticated fraud, and the growing misuse of digital systems are exposing the limits of static, document-based approaches to identity and risk. Designed for a different era, traditional controls add friction for legitimate businesses and shoppers, while struggling to prevent abuse at scale. To address this, we have integrated a third foundational layer: Dynamic Identification. Moving beyond static checks, we designed this layer to draw on trillions of interactions across our online and in-person flows. By embedding this intelligence directly into our stack, we assess risk dynamically and adapt decisions in real time, enabling us to eliminate friction while tightening security with surgical precision…

…The most immediate impact of Dynamic Identification is visible across our optimization and risk products. It is the intelligence layer that powers Adyen Uplift, enabling decisions that balance conversion, cost, and risk across the full payment flow rather than in isolation. 

Building on this foundation, we introduced the newest module within Adyen Uplift in 2025: Personalize. It was developed and validated through pilots with a select group of enterprise customers in the second half of the year, focusing on one of the most common trade-offs merchants face as they scale across channels: how to lower payment costs without negatively impacting conversion. Lower-cost payment methods are often available, but encouraging shoppers to choose them indiscriminately can increase checkout abandonment and degrade the customer experience. Dynamic Identification allows this trade-off to be managed intelligently. By understanding who the shopper is and how they behave across both online and in-person touchpoints, we can personalize the payment experience in real time, guiding shoppers toward preferred and lower-cost options only when the data indicates they are likely to complete the transaction…

…For our customers, an underestimated share of losses comes not only from traditional payment fraud, but also from policy abuse: repeated exploitation of returns, promotions, and refunds that often appear legitimate in isolation but compound into material cost over time. Without visibility into repeat behavior, merchants are left to rely on manual reviews or broad policy restrictions, increasing friction for legitimate customers while failing to address the underlying problem.

In the second half of 2025, we applied Dynamic Identification to this challenge through targeted pilots with enterprise customers. By linking refund activity at the identity level, rather than viewing transactions in isolation, we were able to surface patterns that had previously remained hidden.

The pilots showed strong engagement, with merchants using these insights on a daily basis, rather than only for ad hoc investigations. More importantly, they reported a step change in confidence: they were able to identify abuse clearly, measure its true scale, and pinpoint its sources. This replaced fragmented, manual processes with shared, data-driven visibility. Capabilities such as identifying top refund contributors at the shopper level were consistently cited as materially reducing investigation time and operational overhead…

…One global luxury group identified individual shoppers each receiving up to €5k in refunds, in some cases up to twenty times their average basket value, revealing potential material losses that had gone unnoticed. In another case, a large sports and entertainment customer identified a shopper with roughly 70% of transactions refunded over several years, exposing a long-standing abuse pattern that had not been visible through traditional transaction-level analysis…

…Dynamic Identification is our way of applying AI to the large data set we have…

…Dynamic Identification is in itself not a product. So one of the product suites that is built upon Dynamic Identification is Uplift.

Adyen’s management sees Dynamic Identification as an enabler of agentic commerce; management thinks merchants see clear potential in agentic commerce, but merchants also want to retain ownership of the customer relationship, control over payments and data, and confidence that adopting agentic commerce will not introduce new risks; Dynamic Identification enables verification of shopper intent, adaptive authentication, and identity-informed risk decisions even without a human in the loop; Adyen is engaged with the broader ecosystem in enabling agentic commerce; agentic commerce currently has immaterial volume on Adyen; management is not including agentic commerce in Adyen’s 2026 guidance, but thinks it will be a growth driver in the long term; management sees trust as a really important component in agentic commerce, and that’s where Dynamic Identification helps; management thinks it’s really important for Adyen to work with key players in the agentic commerce ecosystem, such as OpenAI and Google, to develop protocols

Dynamic Identification is also a critical enabler of emerging models such as agentic commerce. As this evolution unfolds over time, traditional identity signals are likely to fall away. Transactions initiated by agents will require new trust frameworks, relying on infrastructure, behavioral context, and adaptive risk models rather than direct human interaction.

In H2, we focused on understanding our customers’ needs and how we can best build to meet them. We held extensive conversations with enterprise merchants across retail, luxury, travel, entertainment, and platforms to understand both their ambitions and their concerns. While merchants see clear potential in agent-led commerce, they are equally clear about what must not change: ownership of the customer relationship, control over payments and data, and confidence that new channels can be adopted without introducing new risk…

…Rather than building isolated agent experiences, we are extending our existing platform so that agent-initiated transactions become another channel within a merchant’s existing workflows, governed by the same principles of control, security, and interoperability. Dynamic Identification plays a central role here, enabling verifiable shopper intent, adaptive authentication, and identity-informed risk decisions even when a human is no longer directly in the loop…

…We deepened our engagement with the broader ecosystem by collaborating with partners including OpenAI, Google, Cloudflare, Visa, and Mastercard, and joining the Agentic AI Foundation. Together, we are contributing to the development of open standards that allow agent-led commerce to scale safely and interoperably, without locking merchants into closed systems or fragmenting the ecosystem…

At the moment, the number of transactions is still immaterial on our platform. We started with it. I think that’s very important, so we started with Agentic Commerce. It’s an additional sales channel, and the beauty of having a single platform globally is that we basically have all the building blocks to cater it and to start growing this sales channel with our customers…

Take agentic commerce as one example. It’s not gonna drive short-term revenues, right? So it’s not a big part of our 2026 revenue expectations, but if it’s a top priority for your customer, you want to be there, and you want to support them with it, and that’s where we’re well-positioned to do it, and it will help us drive growth over a longer period of time, right?…

…In this new world, we need to know who is the consumer behind the agent, and how do we know that we can trust the agent, that he’s indeed acting on behalf of the consumer? And that’s where Dynamic Identification really helps. So it helps to look at the signals that we get and compare that to the signals that we have in our system, and then come up with the right outcome or decision, whether this can be trusted or not…

…It’s also very important to shape the protocols with OpenAI, with Google, to make sure that that information does not get lost, and to make sure that our merchants do not lose the connection with the consumer behind the agent. Because that’s one of the key elements that our merchants find important, and we want to make sure that that connection is not lost.

In pilot tests, Personalise, which is powered by Dynamic Identification, helped merchants improve conversion by up to 6% while lowering transaction costs by up to 3%; mobility provider Hoppy used Personalise and achieved 2% payment cost savings while maintaining a locally relevant checkout experience as it expanded into new cities; for Hoppy, Personalise was able to dynamically prioritise the payment methods riders were most likely to use

Insights from the H2 pilots demonstrate the value of this adaptive approach. Merchants observed conversion improvements of up to 6%, alongside transaction cost reductions of up to 3%, achieved through personalized optimization rather than static, rule-based, and generic logic…

…Mobility provider Hoppy realized 2% payment cost savings while maintaining a locally relevant checkout experience as it expanded into new cities. By dynamically prioritizing the payment methods riders were most likely to use, while favoring cost-efficient options where possible, Hoppy protected margins without compromising conversion. Together, these results show how moving beyond static checkout logic enables businesses to better align shopper preferences with cost-efficient payment methods, turning checkout into a scalable driver of growth and profitability. This is the power of Dynamic Identification: translating real-time intelligence into decisions that drive tangible results.
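
The trade-off Personalise is described as managing lends itself to a simple illustration. The sketch below is hypothetical and is not Adyen’s implementation or API; it only shows the kind of rule implied by the description above: steer a shopper toward a cheaper payment method only when a model’s predicted completion probability for that method stays high, and otherwise keep the shopper’s preferred ordering.

```python
# Hypothetical sketch of the conversion-vs-cost trade-off described above.
# Names, fees, and probabilities are invented for illustration; this is not Adyen's API.
from dataclasses import dataclass

@dataclass
class PaymentMethod:
    name: str
    fee_pct: float      # merchant's cost of accepting this method
    p_complete: float   # model-predicted probability the shopper completes checkout with it

def rank_methods(methods: list[PaymentMethod], preferred: str, min_p: float = 0.90) -> list[str]:
    """Checkout ordering: cheapest-first among methods whose predicted completion
    probability clears min_p, then the remaining methods with the shopper's
    known preference kept at the front of that tail."""
    confident = sorted((m for m in methods if m.p_complete >= min_p), key=lambda m: m.fee_pct)
    rest = sorted((m for m in methods if m.p_complete < min_p), key=lambda m: m.name != preferred)
    return [m.name for m in confident + rest]

if __name__ == "__main__":
    shopper_view = [
        PaymentMethod("credit_card", fee_pct=2.9, p_complete=0.97),
        PaymentMethod("bank_transfer", fee_pct=0.5, p_complete=0.93),
        PaymentMethod("bnpl", fee_pct=4.0, p_complete=0.60),
    ]
    # Bank transfer is cheaper and still predicted to convert, so it is shown first.
    print(rank_methods(shopper_view, preferred="credit_card"))
    # -> ['bank_transfer', 'credit_card', 'bnpl']
```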

Airbnb (NASDAQ: ABNB)

Airbnb’s management chose to deploy AI for customer support as the first use case within the company; Airbnb built an AI agent trained on millions of support interactions; Airbnb’s AI agent is now resolving 1/3 of support issues, and resolution times are now much faster; Airbnb’s AI agent is live in North America, and management plans to roll it out globally; management’s vision for the customer support AI agent is for guests to be able to call and talk to the agent; management thinks that an AI agent that can converse with guests via voice will (1) lower customer support costs for Airbnb, and (2) improve the quality of customer support

The final piece that accelerates everything we do is AI. Now we’ve taken a really intentional path here. While other companies rush to build chatbots into their existing apps, we started by solving the hardest problem, customer support. We built a custom AI agent trained on millions of our support interactions. It’s already resolving 1/3 of the support issues without needing a live specialist and resolution times are significantly faster. It’s live across North America, and we’re planning to roll it out globally…

…Right now, nearly 30% of tickets in North America that are English-based are handled by an AI agent. A year from now, if we’re successful, significantly more than 30% of tickets will be handled by a customer service agent in many more languages, in all the languages where we have live agents and AI customer service will not only be chat, it will be voice. You can actually call and talk to an AI agent. We think this is going to be massive because not only does this reduce the cost base of Airbnb customer service, but the kind of quality of service is going to be a huge step change. Not only can you get responses in seconds, but the agents using AI are going to be significantly more productive.

Airbnb’s management is building an AI-native experience within the app that knows guests and hosts and will help (1) guests plan their entire trip, and (2) hosts run their businesses better; management will build the AI-native experience without spending significant sums of money on data centers; management will build the AI-native experience without building AI models; management thinks Airbnb’s investments into AI will not affect the company’s profit; management thinks AI will help personalise the user-experience for guests on Airbnb 

We’re building an AI-native experience where the app doesn’t just search for you. It knows you. It will help guests plan their entire trip, help hosts better run their businesses and help the company operate more efficiently at scale…

…We don’t operate experiences, and we’re not building data centers. What we’re doing is finding small wins and scaling them profitably…

…I think one of the great things about Airbnb is that we have a very, very cost-efficient innovation model. So unlike other companies, we’re not building models. We do not have a huge CapEx cost base. So our investment in AI will not affect the P&L. I don’t think you’ll see it in the P&L…

…AI allows us to personalize. Some people come to Airbnb and all they want to see are unique homes. And before AI, like, personalization was a little more primitive. So if they saw a hotel, it might be jarring. Now we can really personalize. So people who just want to see Airbnbs can see Airbnbs. People just want to see hotels, we can eventually personalize, they can just see hotels. If people want to see both, we can know if you’re booking last minute, 1 night, then we’re going to show you a hotel. If you’re booking a family of 5 in Italy, we’re going to show you a home. So it really goes back to personalization.

Airbnb’s management believes that LLM (large language model) chatbots cannot disintermediate Airbnb because they lack access to the unique data and functionality that Airbnb has; management believes that adding an AI layer onto the Airbnb app will create something that is impossible to replicate; management thinks LLM chatbots will be very similar to online search in being good top-of-funnel discoveries for guests and this will be positive for Airbnb; management has seen that traffic from LLM chatbots converts at a higher rate than Google traffic; management sees AI models as being available for use by anyone; management thinks specialisation will win in travel with AI because Airbnb can use any leading AI model and customise it based on Airbnb’s millions of interactions, and hook up the model to important contact points; management does not think that one model builder will end up owning everything

This approach is also our strongest defense against disintermediation. A chatbot can give you a list of homes, but it can’t give you the unique points you find in Airbnb. A chatbot doesn’t have our 200 million verified identities or our 500 million proprietary reviews, and it can’t message the host, which 90% of our guests do. It can’t provide global payment processing, customer support or insurance. By layering AI over the entire Airbnb experience, we believe we’re building something that’s impossible to replicate…

…I think these chatbot platforms are going to be very similar to search. They’re going to be really good top-of-funnel discoveries. And in fact, what we’ve seen is, I think, they’re going to be positive for Airbnb. And I’m very, very deep in this space. And what we see is that traffic that comes from chatbots converts at a higher rate than traffic that comes from Google. But the other thing to know, and this is the most important point, is that these models are not proprietary. The models in ChatGPT, the models in Gemini, the models in Claude and the models like Kiwi are available to every single company. And so pretty soon, every company becomes an AI platform if they make the shift. We will be able to build everything everyone else will have if we use their models. And we believe specialization will win in travel because if somebody wants to find an Airbnb or have a trip, we can take their model, the same model they use, we can post-train it and tune it based on our millions of interactions. We can connect it to our customer support agents. We can connect it to our hosts. And that’s fundamentally what we think…

…I don’t think that one company is going to own everything. I think we’re going to be able to work together. And these companies will be very helpful top-of-funnel traffic generators for Airbnb just like Google was.

Airbnb’s management wants to nail down AI search for Airbnb first and then apply the AI search form factor to sponsored listings; Airbnb is currently conducting small-scale tests on AI search; management can’t pin down a concrete timeline for building AI search; management thinks AI search is a difficult problem to solve for e-commerce because it is multi-modal; management thinks a chat interface for AI search for e-commerce (and travel) is not ideal, and Airbnb needs to innovate on the user interface

One of the things that’s been really clear with the — after the launch of ChatGPT was that traditional search was going to become essentially conversational AI search. And that what we wanted to do is really design AI search, really see how that works. And then if we are going to do sponsored listings, we design that ad unit in that form factor. So we’re focused, first and foremost, on the most perishable opportunity, which is AI search. Actually, funny enough, we are doing tests as we speak. So AI search is live to a very small percent of traffic right now. We’re doing a lot of experimentation. The way we do things with AI is much more rapid iteration, not big launches. And over time, we’re going to be experimenting with making AI search more conversational, integrating it into more of the trip. And eventually, we will be looking at sponsor listings as a result of that. But we want to first nail AI search…

…AI search will eventually — I can’t put a timeline on it because AI is obviously highly unpredictable. But we want to be — we would love to be the first company in e-commerce that really nails AI search, conversational search. I think it’s really hard not just in travel, but in all e-commerce. One of the reasons that chatbots are really hard for commerce is because they’re very visual. They’re photo-forward. You need to be able to compare. You need to be able to open different tabs. So a text-forward chatbot interface is not ideal. So we have to actually innovate on the user interface.

Airbnb’s management thinks AI will significantly improve productivity for all Airbnb employees; more than 80% of Airbnb engineers are currently using AI tools

It’s going to make our engineers and everyone at Airbnb significantly more efficient. More than 80% of engineers are now using AI tools. That soon will be 100%.

Arista Networks (NYSE: ANET)

Arista Networks has exceeded its goal of earning $1.5 billion in AI center networking revenue in 2025; management has raised its AI center revenue goal for 2026 and now expects Arista Networks’ AI center revenue in 2026 to be double that of 2025’s; management’s target for AI center revenue in 2026 includes both front-end and back-end networking

As expected, we have exceeded our strategic goals of $800 million in campus and branch expansion as well as $1.5 billion in AI center networking…

…With our increased visibility, we are now doubling from 2025 to 2026 to $3.25 billion in AI networking revenue…

…We maintain our 2026 campus revenue goal of $1.25 billion and raise our AI centers goal from $2.75 billion to $3.25 billion…

…3 years ago, we had no AI. We were staring at InfiniBand being deployed everywhere in the back end. And we pretty much characterized our AI as only back end, just to be pure about it, right? 3 years later, I’m actually telling you we might do north of $3 billion this year and growing, right? That number definitely includes the front end as it’s tied to the back-end GPU clusters, and it’s an all Ethernet, all AI system for agentic AI applications.

Arista Networks’ products can interoperate with NVIDIA, but management sees Arista Networks emerging as the gold standard network for running training and inference models that process tokens at teraflops speed; Arista Networks is co-designing AI rack systems with 1.6T (1.6 terabits per second) switching coming in 2026

We interoperate with NVIDIA, the recognized worldwide market leader in GPUs, but also realize our responsibility to broaden the open AI ecosystem, including leading companies such as AMD, Anthropic, ARM, Broadcom, OpenAI, Pure Storage and VAST Data, to name a few, that create the modern AI stack of the 21st century. Arista is clearly emerging as the gold standard terabit network to run these intense training and inference models processing tokens at teraflops…

…We are codesigning several AI rack systems with 1.6T switching emerging this year.

Arista Networks’ management recently launched its flagship 7800 R4 spine product for routing use cases that include AI spines

In Q4 2025, Arista launched our flagship 7800 R4 spine for many routing use cases, including DCI, AI spines with that massive 460 terabits of capacity to meet the demanding needs of multiservice routing, AI workloads and switching use cases.

In 2025, Arista Networks participated in Ethernet-based industry standards for AI scale-up and scale-out networking; Arista Networks’ networking portfolio is successfully deployed in scale-up, scale-out, and scale-across AI networks; management thinks AI networking architectures need to handle both training and inference frontier models to ease congestion; the key metric when handling training is job completion time, while the key metric when handling inference is time to first token; management sees Arista Networks’ portfolio as having the features to handle the fidelity of AI and cloud workloads; management’s strategy for AI networking is based on Autonomous Virtual Assist, which helps instrument customers’ networks for enhanced security, observability and agentic AI operations

In 2025, we became a founding member of the Ethernet-based standards for scale-up with ESUN, as well as completing the Ultra Ethernet Consortium 1.0 Specification for scale-out AI networking. These AI centers seamlessly connect the back-end AI accelerators to the front end of compute, storage, WAN and classic cloud networking. Our AI accelerated networking portfolio, consisting of 3 families of EtherLink spine-leaf fabric, is successfully deployed in scale-up, scale-out and scale-across networks.

Network architectures must handle both training and inference frontier models to mitigate congestion. For training, the key metric is obviously job completion time, the amount of time taken between admitting a training job to an AI accelerator cluster and the end of a training run. For inference, the key metric is slightly different. It’s the time to first token, basically the amount of latency it takes for a user submitting a query to receive their first response. Arista has clearly developed a full AI suite of features to uniquely handle the fidelity of AI and cloud workloads in terms of diversity, duration, size of traffic flow and all the patterns associated with it.

Our AI-for-networking strategy, based on AVA, Autonomous Virtual Assist, curates the data for higher-level functions. Together with our publish-subscribe state foundation in EOS and NetDL, or Network Data Lake, we instrument our customers’ networks to deliver proactive, predictive and prescriptive features for enhanced security, observability and agentic AI operations. Coupled with the Arista validated designs for network simulation, digital twin and validation functionality, Arista platforms are perfectly optimized and suited for Network as a Service.
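
As an aside for readers newer to these terms, the two metrics named above reduce to simple time differences. The snippet below is purely illustrative, using hypothetical timestamps rather than any Arista tooling:

```python
# Illustrative only: the two metrics discussed above, computed from hypothetical timestamps.
# Times are in seconds since an arbitrary epoch; this is not Arista tooling.

def job_completion_time(admitted_at: float, finished_at: float) -> float:
    """Training metric: time from admitting a job to the cluster until the training run ends."""
    return finished_at - admitted_at

def time_to_first_token(submitted_at: float, first_token_at: float) -> float:
    """Inference metric: latency from query submission to the first token of the response."""
    return first_token_at - submitted_at

if __name__ == "__main__":
    # A hypothetical 90-minute training run.
    print(f"JCT: {job_completion_time(0.0, 5_400.0):.0f} s")
    # A hypothetical query whose response starts streaming after 350 ms.
    print(f"TTFT: {time_to_first_token(10.000, 10.350) * 1_000:.0f} ms")
```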

Arista Networks’ purchase commitments at the end of 2025 Q4 was $6.8 billion, up 42% sequentially; the sequential increase in purchase commitments was for chips related to new products and AI deployments, and was affected by the supply constraint on DDR4 memory chips; pricing for memory chips have gone up significantly for Arista Networks; management sees memory chips as the new gold in the AI sector

Our purchase commitments at the end of the quarter were $6.8 billion, up from $4.8 billion at the end of Q3. As mentioned in prior quarters, this expected activity mostly represents purchases for chips related to new products and AI deployments. We will continue to have some variability in future quarters due to the combination of demand for our new products, component pricing such as the supply constraint on DDR4 memory and the lead times from our key suppliers…

…Our peers in the industry have been facing this probably longer than we have because I think the server industry probably saw it first, because they’re more memory intensive. Add to that that we’re expecting increases from the silicon fabrication where all the chips are made — as you know, essentially by one company, Taiwan Semiconductor. So Arista has taken a very thoughtful approach, having been aware of this since 2025, and frankly we absorbed a lot of the costs in 2025 that we were incurring. However, in 2026, the situation has worsened significantly. We’re having to smile and take it just about at any price we can get, and the prices are horrendous. They’re an order of magnitude exponentially higher. So clearly, with the situation worsening and also expected to last multiple years, we are experiencing shortages in memory. Thankfully, as you can see reflected in our purchase commitments, we are planning for this. And I know that memory is now the new gold for the AI and automotive sector.

The demand for Arista Networks’ networking products in AI data centers comes only after the data centers are built and after the GPUs and other AI chips are purchased; management sees demand for Arista Networks’ products as being very good, but the exact timing for shipments is harder to pin down

That’s an important thing to understand, that we don’t track the CapEx. The first thing that happens in the CapEx is they got to build the data centers and get the power and get all of the GPUs and accelerators and the network comes — lags a little. So demand is going to be very good, but whether the shipments exactly fall into ’26 or ’27, Todd, you can clarify when they really fall in, but there’s a lot of variables there.

Arista Networks was initially working with only a small handful of model builders and AI chip designers, but the company is now working with many more such entities; NVIDIA had essentially 100% market share just a year ago, but Arista Networks’ management now sees AMD AI chips as having about 20%-25% market share; Arista Networks is the preferred provider for AI data centers that utilise AMD AI chips

If you look at us initially, we were largely working with 1 or 2 model builders and 1 or 2 accelerators, NVIDIA and AMD, and OpenAI was the primarily dominant one. But today, we see that there’s really multiple layers in a cake where you’ve got the GPU accelerators…

…Arista needs to deal with multiple domains and model builders and appropriately whether it is Gemini or xAI or Anthropic Claude or OpenAI and many more coming. These models and the multiprotocol algorithm or nature of these models is something we have to make sure we build a network correctly for. So that’s one…

…A year ago, it was pretty much 99% NVIDIA, right? Today, when we look at our deployments, we see about 20%, maybe a little more, 20% to 25% where AMD is becoming the preferred accelerator of choice. And in those scenarios, Arista is clearly preferred because they’re building best-of-breed building blocks for the NIC, for the network, for the I/O and they want open standards as opposed to full-on vertical stack from one vendor.

Arista Networks’ management thinks AI model builders will be working with multiple cloud providers, and Arista Networks will be working with all the cloud providers

I think the biggest issue is not only the model builders, but that they’re no longer in silos in one data center, and you’re going to see them across multiple colos and multiple locations and multiple partnerships with our cloud titan customers that they’ve historically not worked with. So I think you’ll see more copilot versions of it, if you will, with a number of our cloud titans. So we expect to work with them as AI specialty providers, but we also expect to work with our cloud titans in bringing the cloud and AI together.

Arista Networks’ management is careful about going into business with some AI neoclouds (the ones that converted from oil money or crypto money into AI) because their businesses and financial health are questionable

There are a set of neoclouds that we watch more carefully because some of them are oil money converted into AI or crypto money converted into AI. And over there we are going to be much more careful, because some of those neoclouds are looking at Arista as the preferred partner, but we would also be looking at the health of the customer, or they may just be a one-time customer. We don’t know the exact nature of their business and those will be smaller.

Arista Networks’ management does not believe that AI is eating software; management believes that AI enables better software to be built

I don’t think, Ken, any of us believe that AI is eating software, but AI is definitely enabling better software.

Arista Networks’ management thinks that the rise of agentic AI will increase demand for all kinds of XPUs

The rise of agentic AI will only increase, not just the GPU, but all gradations of XPU that can be used in the back end and front end.

Arista Networks’ 4 major AI customers are all deploying AI with Ethernet; 3 of the 4 customers have deployed 100,000 GPUs each, and they are growing; the remaining customer is migrating from Infiniband and is still below 100,000 GPUs

We are in all 4 customers deploying AI with Ethernet. So that’s the good news. 3 of them have already deployed a cumulative of 100,000 GPUs and are now growing from there. And clearly migrating now into beyond pilots and production to other centers, power being the biggest constraint. Our fourth customer is migrating from InfiniBand, so it’s still below 100,000 GPUs at this time, but I fully expect them to get there this year, and then we shall see how they get beyond that. 

Arista Networks has extended the ability to stream the state of a network into AI clusters

The EOS architecture is based on state orientation. This is the idea that we capture the state of the network and then stream that state out from the system database on the switches into whatever, the CloudVision or whatever system can then receive it. And we’re extending that capability for AI with a combination of in-network data sources related to flow control, RDMA counters, buffering and congestion counters, and also host-level information, including what’s going on in the RDMA stack on the host, what’s going on with collectives, latencies, any flow control problems or buffering problems in the host NIC. Then we pull those — that information all together in CloudVision and give the operator a unified view of what’s happening in the network and what’s happening in the host.
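
As a rough illustration of the unified view described above, the sketch below joins switch-side counters (flow control, buffering) with host-side counters (RDMA retransmits, collective latency) keyed by flow. The data structures and field names are invented for illustration; this is not EOS, NetDL, or CloudVision code.

```python
# Hypothetical illustration of merging switch-level and host-level telemetry into
# one per-flow view, in the spirit of the streamed-state design described above.
# Field names and records are invented; this is not EOS/NetDL/CloudVision code.
from collections import defaultdict

switch_stream = [
    {"flow": "job42/rank0->rank7", "pfc_pause_frames": 120, "buffer_high_watermark_kb": 880},
    {"flow": "job42/rank1->rank6", "pfc_pause_frames": 0, "buffer_high_watermark_kb": 96},
]

host_stream = [
    {"flow": "job42/rank0->rank7", "rdma_retransmits": 14, "collective_latency_us": 4200},
    {"flow": "job42/rank1->rank6", "rdma_retransmits": 0, "collective_latency_us": 310},
]

def unified_flow_view(switch_records, host_records):
    """Merge records from both telemetry sources so an operator sees network-side and
    host-side health for the same flow in one place."""
    view = defaultdict(dict)
    for rec in switch_records + host_records:
        view[rec["flow"]].update({k: v for k, v in rec.items() if k != "flow"})
    return dict(view)

if __name__ == "__main__":
    for flow, stats in unified_flow_view(switch_stream, host_stream).items():
        congested = stats["pfc_pause_frames"] > 0 or stats["rdma_retransmits"] > 0
        print(f"{flow}: {'CONGESTED' if congested else 'healthy'} {stats}")
```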

Cloudflare (NYSE: NET)

A leading AI company expanded its relationship with Cloudflare, and Cloudflare is now the AI company’s only long-term infrastructure provider with 100% traffic allocation; Cloudflare’s management is seeing a trend of AI companies choosing Cloudflare as their infrastructure platform

A leading AI company expanded their relationship with Cloudflare, signing a 2-year $85 million pool of funds contract for our full platform, selecting Cloudflare as their single long-term infrastructure provider with 100% traffic allocation. Following a rigorous RFP, they selected Cloudflare over major hyperscalers not just for our unified stack and rapid innovation, but also for our strategic neutrality. This win underscores a growing trend, the most sophisticated AI companies are choosing Cloudflare as their mission-critical, independent platform to connect, protect and build the future of the AI-driven Internet.

A leading AI company expanded its relationship with Cloudflare; this AI company chose Cloudflare in a build versus buy scenario; Cloudflare enables the AI company to manage global traffic with 99.999% availability 

Another leading AI company expanded their relationship with Cloudflare, signing a 1-year $5.4 million contract for our Workers developer platform and application services. What’s most compelling about this win is that it was a classic build versus buy scenario against the hyperscalers. In an industry where being first matters, our ready-to-deploy developer platform provided the agility and speed to market they couldn’t find elsewhere. With Cloudflare, this customer is now able to manage heavy global traffic with 99.999% availability. This deal is a testament to our shift from being just a vendor to instead being a strategic co-innovation partner for the world’s most sophisticated AI companies.

A Fortune 100 company that is also a leader in AI expanded its relationship with Cloudflare; the Fortune 100 company requires zero downtime and chose Cloudflare not because of price, but because of performance

A Fortune 100 technology company expanded their relationship with Cloudflare, signing a 3-year $5.8 million contract, representing a notable upsell from their initial engagement with us in mid-2025. As a leader in AI, this customer operates under a strict mandate for global resiliency requiring a multi-vendor architecture to ensure zero downtime for their application performance. We beat out the competition not on price but rather on performance and engineering innovation.

A European Global 2000 technology company expanded its relationship with Cloudflare, and is in discussions with Cloudflare about AI Crawl Control 

A European Global 2000 technology company expanded their relationship with Cloudflare, signing a 3-year $5.8 million pool of funds contract to provide seamless access to our entire platform. We signed our first deal with this customer back in February. After quickly realizing the power of Cloudflare’s platform, they came back to us looking to move from a small variable commitment to a deep strategic partnership. Unlike their legacy incumbents, our combination of best-of-breed security and our Workers developer platform enables sophisticated automation to manage their global infrastructure and greater flexibility to innovate at scale. It’s early days with this customer, and we’re already in discussions regarding AI Crawl Control.

A US media company signed a contract with Cloudflare for AI Crawl Control; the media company was facing a massive increase in AI scraping and chose Cloudflare to gain visibility into which AI models are consuming their data; with the visibility on the AI models, the media company can better monetise its content  

A U.S. media company signed a 3-year $3.1 million contract for AI Crawl Control, along with application services and Workers. This customer was facing a massive increase in AI scraping, which was crushing their network and driving up infrastructure costs. They chose Cloudflare to gain visibility into which AI models are consuming their data, allowing them to protect and eventually monetize their unique content. By leveraging Cloudflare Workers to replace years of complex technical debt from an incumbent, they were able to migrate massive Internet properties into production in just 2 weeks. This deal proves that as AI accelerates, Cloudflare is the partner of choice for companies looking to protect their IP while improving performance, reducing operational costs and enhancing their security postures.

Cloudflare’s management is seeing the shift to AI and agents driving more demand for the company’s services; management thinks AI agents (1) look at significantly more sites when making decisions, (2) allow for a much greater degree of software customisation, and (3) never need to rest, unlike humans; management thinks AI agents are changing the economics of software from a seat-based model to one where the importance lies with providing the compute, connectivity, and guardrails for agents; management thinks Cloudflare is able to capture value on both sides of agentic interactions; most vibe coding platforms are either built on Cloudflare Workers or have it as their preferred deployment target; human developers are using Cloudflare Workers and its AI gateway to manage inference with caching, rate limiting and observability; usage of AI is driving adoption of Cloudflare’s Zero Trust platform; management is seeing agentic workloads generate an order of magnitude more outbound requests to the web than traditional user-driven apps; management sees Cloudflare, which has more than 20% of the web sitting behind its network, as the global control plane for the agentic internet; management thinks the agentic internet is creating new growth opportunities for Cloudflare; a Fortune 500 pharmaceutical company is using Cloudflare to build AI tools; a technology company is using Cloudflare Containers to allow its customers to deploy AI tools in a secure isolated environment; a leading financial services company used Cloudflare to launch an MCP (model context protocol) server for AI agents to interact directly with its payment services; management thinks companies choose Cloudflare for deploying AI because it offers (1) a complete tool kit, (2) a modern architecture that fits agentic work, and (3) cost-efficient scalability; management sees AI as a pure tailwind for Cloudflare’s business

Second, we are seeing the shift to AI and agents drive more demand for Cloudflare services. What we’re witnessing is a fundamental replatforming of the Internet. AI is driving a paradigm shift in how software is both created and consumed, and that is turning out to be the biggest tailwind for Cloudflare’s network and Workers developer platform. If you look at the last 30-plus years of the Internet and software ecosystem, they were built for human consumption, people in seats and clicks. Now the agentic Internet is emerging, and we can already see its trends. If humans looked at 5 sites when they were making a decision, agents might look at 5,000. If humans had to fall back on generalized software and interfaces, agents allow for infinite customizability of every software application for every need. If humans follow a common circadian rhythm to work, agents never need to sleep. Agents in other words, are the ultimate infrastructure multiplier. In turn, they are reshaping the very economics of software. The industry is transitioning from a business model defined by seat licenses to one where the winners are those providing the compute, connectivity and rails and guardrails for these new digital workers at scale. Cloudflare was built for this moment. We are uniquely architected to capture value on both sides of the agentic interactions. That means we win when AI applications are built on Cloudflare Workers, but we also win just from the increased usage of all of our products and agentic Internet drives…

…When the cost of generating code drops to near 0, the volume of new applications explode. It’s not a coincidence that most so-called vibe coding platforms are either built on Cloudflare Workers or have us as their preferred deployment target. We exited 2025 with more than 4.5 million human developers active on our platform. It’s a lot more if we count their agents. Developers are using Workers to run autonomous logic across our global network, containers for sandboxes and AI gateway to manage inference with caching, rate limiting and observability. AI usage is even driving adoption of our Zero Trust platform to ensure that data is compartmentalized and access granted in limited and controlled ways…
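The paragraph above mentions developers fronting inference with caching, rate limiting and observability. As an illustration of that pattern, here is a minimal sketch of an inference gateway written as a Cloudflare Worker; the upstream URL and the in-memory rate limiter are my own assumptions (a real deployment would use durable storage), and this is not Cloudflare's AI Gateway product itself.

```typescript
// A sketch of an inference gateway on Cloudflare Workers: cache identical prompts
// and apply a naive per-client rate limit before calling an upstream model.
// UPSTREAM_INFERENCE_URL and the in-memory counter are illustrative assumptions;
// a production setup would keep limits in durable storage (KV / Durable Objects).
const UPSTREAM_INFERENCE_URL = "https://inference.example.com/v1/generate"; // hypothetical
const MAX_REQUESTS_PER_MINUTE = 60;

const requestCounts = new Map<string, { windowStart: number; count: number }>();

// Cache API keys must be GET requests, so derive a synthetic key from the prompt hash.
async function promptCacheKey(prompt: string): Promise<Request> {
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(prompt));
  const hex = [...new Uint8Array(digest)].map(b => b.toString(16).padStart(2, "0")).join("");
  return new Request(`https://prompt-cache.local/${hex}`, { method: "GET" });
}

export default {
  async fetch(request: Request): Promise<Response> {
    const clientId = request.headers.get("cf-connecting-ip") ?? "unknown";

    // Naive fixed-window rate limit, per isolate, for illustration only.
    const now = Date.now();
    const entry = requestCounts.get(clientId) ?? { windowStart: now, count: 0 };
    if (now - entry.windowStart > 60_000) { entry.windowStart = now; entry.count = 0; }
    entry.count += 1;
    requestCounts.set(clientId, entry);
    if (entry.count > MAX_REQUESTS_PER_MINUTE) {
      return new Response("rate limited", { status: 429 });
    }

    const { prompt } = (await request.json()) as { prompt: string };
    const cacheKey = await promptCacheKey(prompt);
    const cached = await caches.default.match(cacheKey);
    if (cached) return cached; // identical prompt answered recently

    const upstream = await fetch(UPSTREAM_INFERENCE_URL, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ prompt }),
    });
    const response = new Response(upstream.body, upstream);
    response.headers.set("cache-control", "public, max-age=300");
    await caches.default.put(cacheKey, response.clone());
    return response;
  },
};
```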

…We’re seeing agentic workloads generate an order of magnitude more outbound request to the web than traditional user-driven applications. Over the month of January alone, the number of weekly requests generated by AI agents more than doubled across the Cloudflare network. This is driving increased demand for our whole platform. This is where Cloudflare’s scale becomes our moat. With more than 20% of the web already sitting behind Cloudflare’s network, we are effectively the global control plane for the agentic Internet. That’s creating a number of new growth opportunities, both with our traditional business as well as what we’ve begun calling Act 4, helping invent the future business model of the Internet. If AI agents are the new users of the Internet, Cloudflare is the platform they run on and the network they pass through. This creates a virtuous flywheel, more agents drive more code execution on our Workers development platform, which in turn drives more demand for Cloudflare’s performance, security and networking services…

There’s a Fortune 500 pharmaceutical company that literally built a vibe coding platform on Cloudflare where their internal developers are using Workers AI and Durable Objects to build AI-assisted tools…

…Another publicly traded technology company is migrating their plug-in sandbox infrastructure to Cloudflare Containers for secure isolated execution of code at scale, which let their customers then prompt deployments directly to their system, but do it in a way which is secure because one of the things that’s really scary sometimes about deploying AI tools, especially to customer-facing applications is there can be a lot of damage that they do if one of these agents goes rogue or something goes wrong, the way that we’ve architected sandboxes allows them to — and containers allows them to do this secure isolated code deployment. And again, it all comes as part of the toolkit of Cloudflare Workers, which is allowing them to go really quickly…

…A leading financial services company has partnered with us to launch an official MCP server designed to allow AI agents like Claude, Cursor or OpenAI to interact directly with the company’s payment services. The whole thing is built on Cloudflare Workers. And this allows merchants to manage commerce tasks, such as creating invoices, checking transactions, processing and payments using natural language command and using things that are running on Cloudflare…
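The MCP server described above is not public source, so the following is only a hypothetical sketch of the general shape of such a service: a Worker that exposes a couple of "tools" (create an invoice, look up a transaction) an agent could call over JSON. The tool names, routes, and the paymentsApi stub are invented for illustration; this is not the MCP wire protocol or any company's actual payments API.

```typescript
// Hypothetical agent-facing "tools" service on Cloudflare Workers, in the spirit
// of the MCP server described above. Tool names and the paymentsApi stub are
// invented for illustration only.
interface ToolCall { tool: "create_invoice" | "get_transaction"; args: Record<string, unknown>; }

const paymentsApi = {
  async createInvoice(customer: string, amountCents: number) {
    return { invoiceId: "inv_123", customer, amountCents, status: "open" }; // stubbed
  },
  async getTransaction(id: string) {
    return { id, amountCents: 4200, status: "settled" }; // stubbed
  },
};

export default {
  async fetch(request: Request): Promise<Response> {
    if (request.method !== "POST") return new Response("POST a tool call", { status: 405 });
    const call = (await request.json()) as ToolCall;

    switch (call.tool) {
      case "create_invoice": {
        const { customer, amountCents } = call.args as { customer: string; amountCents: number };
        return Response.json(await paymentsApi.createInvoice(customer, amountCents));
      }
      case "get_transaction": {
        const { id } = call.args as { id: string };
        return Response.json(await paymentsApi.getTransaction(id));
      }
      default:
        return new Response("unknown tool", { status: 400 });
    }
  },
};
```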

…I think what they like about us is, first, you get a complete toolkit. Second, that toolkit has been architected in a modern way to build exactly what you need for agents and AI applications. And then third, you get it in a way that can scale up infinitely if it becomes wildly popular and can scale down instantly to zero. So you don’t blow the budget if somebody is not actually using the system. That’s very different than the hyperscalers, which in order to be able to get access to a GPU at a hyperscaler, anything close to a competitive price, you also have to commit leasing that server for an entire year, which, again, if the project that you’re leasing it for doesn’t go well, that’s out of your budget…

…I know that AI is putting pressure on some companies that are out there. It’s not putting pressure on Cloudflare. We are seeing it as nothing but a tailwind for us, both for our developer tools and kind of the Act 4 stuff that we’re working on, but actually for even our legacy products like application services and Zero Trust as well.

Cloudflare’s management thinks the hyperscalers have no incentive to figure out how to run AI workloads more efficiently, unlike Cloudflare; management thinks Cloudflare can get up to 10x the amount of work done off the same GPU compared to a hyperscaler; because of Cloudflare’s efficiency, its capex has not increased significantly to handle AI workloads; management thinks Cloudflare’s infrastructure offers much higher levels of flexibility to users when it comes to scaling up or down AI compute consumption when compared to the hyperscalers; management thinks Cloudflare is increasingly shifting AI compute-spend away from the hyperscalers

Cloudflare is in the business of getting work done. And so what we are constantly doing is having research teams inside of Cloudflare figure out how you can run AI workloads significantly more efficiently. The hyperscalers actually have no incentive to do that. They don’t want AI workloads to be more efficient because that just means you have to lease fewer machines from them. Whereas we — because we only charge you for the actual work that’s getting done, that means that we’re just getting oftentimes as much as 10x the amount of work off of the same GPU that you might get with a hyperscaler. That advantage is part of how we’re able to just bring much more out of the CapEx that we spend than others are. Our CapEx has ticked up a little bit, and I think that that’s in response to the fact that we’ve seen an increase in terms of Workers, but it’s nowhere close to what we’re seeing from the hyperscalers…

…And then third, you get it in a way that can scale up infinitely if it becomes wildly popular and can scale down instantly to zero. So you don’t blow the budget if somebody is not actually using the system. That’s very different than the hyperscalers, which in order to be able to get access to a GPU at a hyperscaler, anything close to a competitive price, you also have to commit leasing that server for an entire year, which, again, if the project that you’re leasing it for doesn’t go well, that’s out of your budget…

… I think that the work that we’re doing to really embed with customers is driving success there. And again, we’re still not to a point where we’re going to be doing a $100 million deal a quarter, but we will get to that point. And I think we’ve seen an enormous total addressable market for the Cloudflare Workers platform. And I think that will shift more and more spend away from what people are using the hyperscalers for.

Cloudflare’s management thinks that the predominant business model of the internet in the AI era will shift away from advertising and subscriptions; Cloudflare’s recent acquisition, Human Native, will have an important role in helping the company come up with the next business model for the internet; Cloudflare is able to rewrite internet content that flows through its infrastructure, so it will be able to rewrite internet content in the best way for AI agents to consume; management thinks Cloudflare’s business is incredibly durable because it is able to automatically bring along the part of the internet that sits behind the company into whatever comes next in the AI era; management thinks 2026 will be the year where the future business model of the internet, based on Crawl Control, will emerge

In Human Native’s case, they’re really helping us think through what the next business model of the Internet is going to look like. It’s going to move, I think, away from advertisement. It’s going to move away from subscriptions. It’s going to move to something else. And Human Native, who came out of Google, are just extraordinary in thinking about what that future business model looks like. I think that you’re going to see extraordinary things from them and they fit right in at Cloudflare and we’re excited to have them…

…But then because our application services sit in front of people and one of the things that people don’t understand is, there’s a lot different than what people think of sort of just traditional CDNs or other things like that, is that we’re actually able to rewrite the content that flows through us as it flows through. So if it turns out that agents are better at speaking, I don’t know, Latin than they are speaking English, we can literally rewrite the content that’s behind Cloudflare in Latin rather than being in English. Now that’s not going to be what agents are good at, but they are going to be better probably at speaking code than they are going to be maybe speaking on other things that we might invent. So I think that what we’re able to do and part of the reason we think that our legacy business is going to be incredibly durable is that it’s going to be able to automatically bring along all of the rest of the Internet that already sits behind us into whatever comes next. And I think we’re going to figure that out…

So I think 2026 will be the time that we start really talking about what this future business model looks like and how that is going to impact us financially.
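The content-rewriting capability described above (serving a different representation of the same site depending on who, or what, is asking) can be illustrated with a short sketch. This is my own toy, assuming a Cloudflare Worker in front of an origin site; the isAgent heuristic, the selectors, and the response header are assumptions, not Cloudflare's actual implementation.

```typescript
// A sketch of in-path content rewriting on Cloudflare Workers: human visitors get
// the origin page unchanged, while requests that look like AI agents get a
// stripped-down variant with layout chrome removed. The isAgent heuristic, the
// selectors, and the marker header are illustrative assumptions.
function isAgent(request: Request): boolean {
  const ua = (request.headers.get("user-agent") ?? "").toLowerCase();
  return ["gptbot", "claudebot", "perplexitybot", "bot"].some(s => ua.includes(s));
}

export default {
  async fetch(request: Request): Promise<Response> {
    const origin = await fetch(request); // pass through to the origin site
    if (!isAgent(request)) return origin;

    // Rewrite the HTML as it streams through, dropping elements agents don't need.
    const drop = { element(el: { remove(): void }) { el.remove(); } };
    const rewritten = new HTMLRewriter()
      .on("nav", drop)
      .on("footer", drop)
      .on("aside", drop)
      .transform(origin);

    // Copy into a fresh Response so headers are mutable, then mark the variant.
    const response = new Response(rewritten.body, rewritten);
    response.headers.set("x-content-variant", "agent-optimized"); // hypothetical marker
    return response;
  },
};
```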

Cloudflare’s management thinks that agentic commerce could put a lot of pressure on small businesses, and management is figuring out how to bring all these small businesses along in a way that is incredibly intuitive and easy for them to adopt; management does not have the solutions yet, but they’re confident they can figure it out

One of the things I’m thinking a lot about is what happens to small businesses in an agentic commerce world. There’s a lot of ways where agents could be very consolidating and actually put a lot of pressure on small businesses. And so I think us in combination with great companies that we’re working with, like a Shopify or a Visa or PayPal or Mastercard, we’ve got to figure out how do we make sure that we bring all of these small businesses along, give them the right tools. And that’s exactly the sort of thing that we’re thinking about as we think about Act 4 and it’s not going to require you to have to go in and rebuild things. We want to make it one click simple where as soon as we figure out this is what really works, you push a button and whatever you had as your old shopping marketplace just comes along with it and gets to support whatever agents are going to be providing in the future. I don’t know exactly what all those things are going to look like, but we’ve got an incredible team

AI companies are looking to Cloudflare’s traditional products to help them differentiate between human and non-human users of their services; non-AI companies are also looking to Cloudflare’s traditional products to help them differentiate between human and non-human users of their services because the non-human users were generating an order of magnitude more volume than the human users

The first place that we saw just demand was actually from a lot of the AI companies, where the AI companies would say to us, we can’t continue to operate our systems unless we can have the security and ability to deal with the load, which Cloudflare provides by default. Every time you run a query against an AI company, it’s pretty expensive to deal with those queries. And so being able to sort out who’s a human and who’s not a human, which is something we’re the best in the world at, is really important for the AI companies, and that’s driven actually just a lot of those initial relationships that are there.

What really took off in Q4, though, was where we saw other companies, media companies, e-commerce companies, companies that were just doing more traditional things online, seeing such an enormous uptick in how agents were interacting with their systems. I mean if any of you have used a tool like a ChatGPT or a Grok or a Claude, and you just watch how many different things it is looking at for every query that you send out, that’s just an order of magnitude increase in the volume of queries that are coming to the Internet. And so the people who are providing what is that Internet that they’re querying against, they need ways to do that in a way which is efficient and able to continue to scale. And Cloudflare is — and again, those application services functions that we have, the kind of Act 1 products that we have, are really critical of being able to deliver that.

Cloudflare’s newer but still-legacy Zero Trust products are helping users to secure AI agents

If you look at something like the new agents that people are running on their own machines often, the amazing thing is that people are waking up very quickly. We’re sort of speedrunning all of the security challenges that are out there, where all of a sudden you say, I’ve just given my agent access to everything in my life, what could go wrong? People are very quickly figuring out a lot could go wrong and so you got to put controls in place. And that’s exactly where our Act 2 or Zero Trust products come into play, where we’ve actually seen a real uptick even in a self-service business of the Zero Trust products.

Content publishers have been overwhelmingly positive towards Cloudflare’s Crawl Control product; Cloudflare’s management has been positively surprised by the reaction from research teams in the finance industry towards Crawl Control; AI companies may not necessarily like Crawl Control, but Cloudflare’s management thinks the AI companies understand why Crawl Control needs to exist; large technology companies have tried to establish content marketplaces, but Cloudflare’s management thinks that content publishers have higher trust in Cloudflare as a neutral 3rd party; management thinks 2026 will be the year where the future business model of the internet, based on Crawl Control, will emerge

[Question] Just double-clicking into Act 4, particularly in light of the wins, like the media company signing that $3.1 million contract for AI Crawl Control. So as you’re engaging with publishers, can you share early feedback around adoption towards this opt out controls to block scraping, but also the evolution of a structured marketplace model here.

[Answer] We’ve been sort of that neutral honest broker between the 2 sides that can come together and say, okay, like in order for this to all work, the Internet needs to have a business model, like people who create content deserve to get paid. And one of the things that actually surprised me to some extent, which might be relevant to a lot of you listening in, is we’ve actually been getting calls not just from like the Associated Press and BBC and New York Times, but we’ve been getting calls increasingly from banks where their research teams are saying, we’re actually seeing fewer people subscribe to and read our research because people are just turning to the AI companies, and they’re slurping all the data down and taking that intellectual property. Again, I think journalists deserve to get paid, but so do research analysts…

…The reaction from the content creator side has been just overwhelmingly positive. And we come back to something pretty simple, which is just if you create content, it should be up to you who gets access to it and who doesn’t, and we can provide the tools to do that. On the AI company side, they also — again, nobody wants to pay for something that they were getting for free. But I think that they understand that we’re a fair broker. And when we walk them through what happens if we don’t create some healthy ecosystem here, they say, we get it. We just want to make sure that everyone is treated fairly…

…Microsoft and Amazon have announced content marketplaces. And they may be successful, but what we’re hearing from both the AI companies and from the content creators is that because Cloudflare is that trusted neutral third-party that we can be that honest broker between them that they would rather us be the one that figure out what that future business model looks like as opposed to one of the hyperscalers, which is out there creating their own foundational model themselves and might have a very different incentives. So I think 2026 will be the time that we start really talking about what this future business model looks like and how that is going to impact us financially.

Datadog (NASDAQ: DDOG)

Datadog’s management sees a positive demand environment, driven by cloud migration; management is seeing strong growth from both non-AI native companies and AI-native companies; in particular, the AI-native companies have very high growth and are going into production

We continue to see broad-based positive trends in the demand environment. With the ongoing momentum of cloud migration, we experienced strength across our business, across our product lines and across our diverse customer base. We saw a continued acceleration of our revenue growth. This acceleration was driven in large part by the inflection of our broad-based business outside of the AI-native group of customers we discussed in the past. And we also continue to see very high growth within this AI-native customer group as they go into production and grow in users, tokens and new products.

Datadog’s management sees the company’s AI initiatives as being split into 2 buckets; one bucket is AI for Datadog, where management is building AI products to make Datadog better for customers; in AI for Datadog, management made Bits AI SRE (site reliability engineering) Agent, which does root cause analysis, generally available in December 2025, and it had 2,000 trial and paying customers in January 2026; Datadog has other AI products, such as Bits AI Dev agent, Bits AI Security Agent, and the Datadog MCP (Model Context Protocol) server; Datadog MCP server saw an 11-fold increase in tool calls in 2025 Q4 compared to 2025 Q3; the other bucket is Datadog for AI, where management is building capabilities for end-to-end observability across the entire AI stack; management is seeing an acceleration in growth for the LLM (large language models) Observability product; LLM Observability has 1,000 customers and the number of LLM spans customers are sending to Datadog is up 10x over 6 months; management will soon release AI Agent Console to monitor AI agents; management is working on GPU monitoring; management is seeing Datadog’s overall customer base increase their usage of GPUs; management is improving the ability of Datadog’s products to secure the AI stack against attacks; management continues to see customer interest grow for next-gen AI observability; 5,500 customers are sending AI data to one or more of Datadog’s AI integrations (was 5,000 in 2025 Q3); management recently launched Feature Flags, which could be the foundation for automatically validating applications written by AI agents; management thinks that observability products for LLMs are currently undifferentiated but will become differentiated in the future; management thinks observability tools for LLMs should be the same as for the rest of an organisation’s systems because LLMs do not work in isolation

We are executing relentlessly on our very ambitious AI road map, and I will split our AI efforts into 2 buckets: AI for Datadog and Datadog for AI.

So first, let’s look at AI for Datadog. These are AI products and capabilities that make the Datadog platform better and more useful for customers. We launched Bits AI SRE Agent for general availability in December to accelerate root cause analysis and incident response. Over 2,000 trial and paying customers have run investigations in the past month, which indicates significant interest and shows great outcomes with Bits AI SRE. And we’re well on our way with Bits AI Dev agent, which detects code level issues, generates fixes in production context and can even help release and monitor a fix. And Bits AI Security Agent, which autonomously triages SIEM signals, conducts investigations and delivers recommendations. The Datadog MCP server is being used by thousands of customers in preview. Our MCP server responds to the AI agent and user prompts and uses real-time production data and rich Datadog context to drive troubleshooting, root cause analysis and automation. And we’re seeing explosive growth in MCP usage, with the number of tool calls growing 11-fold in Q4 compared to Q3.

Second, let’s talk about Datadog for AI. This includes capabilities that deliver end-to-end observability and security across the AI stack. We are seeing an acceleration in growth for LLM Observability. Over 1,000 customers are using the product and the number of spans sent has increased 10x over the last 6 months. In 2025, we broadened the product to better support application development and iteration, adding capabilities such as LLM Experiments and LLM Playground, LLM Prompt Analysis and custom LLM-as-a-judge. And we will soon release our AI Agent Console to monitor usage and adoption of AI agents and coding assistants. We are working with design partners on GPU monitoring, and we are seeing GPU usage increase in our customer base overall. And we are building into our products the ability to secure the AI stack against prompt injection attacks, model hijacking and data poisoning among many other risks…

…We continue to see increased interest among our customers in next-gen AI. Today, about 5,500 customers use one or more Datadog AI integrations to send us data about their machine learning, AI and LLM usage…

…In software delivery, in January, we launched Feature Flags. They combine with our real-time observability to enable canary rollouts, so teams can deploy new code with confidence. And we expect them to gain importance in the future as they serve as a foundation for automating the validation and release of applications in an AI agentic development world…

…We mentioned our LLM Observability product. There are a few other products in the market for that. I think it’s still very early for that part of the market, and that market is still relatively undifferentiated in terms of the kinds of products they are, but we expect that to shake out more into the future. We think, in the end, there’s no reason to have observability for your LLM that is different from the rest of your system in great part because your LLMs don’t work in isolation. The way they implement their smarts is by using tools, the tools on your applications and your existing applications or new applications you build for that purpose. And so you need everything to be integrated in production, and we think we stand on a very strong footing there.
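To make the idea of LLM observability concrete, here is a small, generic sketch of wrapping a model call in a span that records latency, token counts and errors, which is the kind of signal such a product ingests. The Span type and emitSpan sink are invented for illustration; they are not Datadog's SDK or its LLM Observability API.

```typescript
// Generic sketch of instrumenting an LLM call with a span. The Span type and
// emitSpan sink are invented for illustration only.
interface Span {
  name: string;
  startMs: number;
  durationMs: number;
  attributes: Record<string, string | number | boolean>;
  error?: string;
}

function emitSpan(span: Span): void {
  // In a real system this would go to an observability backend; here we just log it.
  console.log(JSON.stringify(span));
}

async function tracedLlmCall(
  model: string,
  prompt: string,
  callModel: (prompt: string) => Promise<{ text: string; tokensIn: number; tokensOut: number }>,
): Promise<string> {
  const startMs = Date.now();
  try {
    const result = await callModel(prompt);
    emitSpan({
      name: "llm.generate",
      startMs,
      durationMs: Date.now() - startMs,
      attributes: {
        "llm.model": model,
        "llm.tokens.input": result.tokensIn,
        "llm.tokens.output": result.tokensOut,
        "llm.prompt.length": prompt.length,
      },
    });
    return result.text;
  } catch (err) {
    emitSpan({
      name: "llm.generate",
      startMs,
      durationMs: Date.now() - startMs,
      attributes: { "llm.model": model },
      error: String(err),
    });
    throw err;
  }
}
```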

Example of an 8-figure land deal with a high-profile AI foundation model builder (most likely Anthropic); the model builder’s observability stack was fragmented; the model builder will consolidate more than 5 observability tools into Datadog; the model builder wants to focus on building its own products; this model builder is the 2nd high-profile model builder that Datadog has as a customer (with the other being OpenAI); every customer of Datadog is also using some in-house or open-source observability tools and the same goes for the AI companies; management is seeing AI model builders having the same reasons as non-AI companies for adopting Datadog, namely that Datadog is able to prove its value very quickly

We landed an 8-figure annualized deal and our biggest new logo deal to date with one of the largest AI foundational model companies. This customer has a fragmented observability stack and cumbersome monitoring workflows leading to poor productivity. This is a consolidation of more than 5 open source, commercial, hyperscaler and in-house observability tools into the unified Datadog platform that has returned meaningful time to developers and has enabled a more cohesive approach to observability. This customer is experiencing very rapid growth. Datadog allows them to focus on product development and supporting their users, which is critical to their business success…

…[Question] It’s now the second one after the other very big model provider. So clearly, that whole debate in the market between, oh, you can do that on the cheap somewhere is not kind of quite valid. Could you speak to that, please?

[Answer] Every customer we land has had some homegrown tools. They have some open source. They might still run some open source, like that’s typically what we see everywhere. The — it’s cheaper to do it yourself is usually not the case. So your engineers typically are very well compensated and a big part of the spend in this company. Their velocity is what gates just about anything else in the business. And so usually, when we come in, when customers start engaging with us, we can very quickly show value that way. So it’s not any different from what we see with any other customer. And also within the AI cohort, it’s not original at all like — the AI cohort in general is a who’s who of the companies that are growing very fast and that are shaping the world in AI and they’re all adopting our product for the same reasons, sometimes at different volumes because those companies have different scales, but the logic is the same.

Datadog’s management continues to believe that digital transformation, cloud migration, and AI adoption are long-term growth drivers of Datadog’s business; management thinks that agentic coding is beneficial for Datadog because it leads to more coding volume to observe, and the need for observability in areas where it was not necessary before; Datadog’s management thinks it’s very hard to tell what level of model-inferencing will happen because of the gargantuan amount of capex from the hyperscalers, but they think it’s likely to lead to more complexity in the technology ecosystem, which will benefit Datadog’s business

There is no change to our overall view that digital transformation and cloud migration are long-term secular growth drivers for our business. So we continue to extend our platform to solve our customers’ problems from end to end across their software development, production, data stack, user experience and security needs. Meanwhile, we’re moving fast in AI, by integrating AI into the Datadog platform to improve customer value and outcome and by building products to observe, secure and act across our customers’ AI stack…

…[Question] In the context of a lot of advancements when it comes to agentic frameworks, agentic deployments, the stuff that we’ve seen from Anthropic and new frontier models from OpenAI, just in terms of like what this means for observability as a category, defensibility of it in terms of can customers use these tools to build homegrown solutions for observability?

[Answer] There’s a few different ways to look at it. One is there’s going to be many more applications than there were before. Like people are building much more and they are building much faster. We covered that in previous calls, but we think that the — this is nothing, but an acceleration of the increase of productivity for developers in general, so you can build a lot faster. As a result, you create a lot more complexity because you build more than you can understand at any point in time. And you move a lot of the value from the act of writing the code, which now you actually don’t do yourself anymore to validating, testing, making sure it works in production, making sure it’s safe, making sure it interacts well with the rest of the world, with end users, make sure it does what it’s supposed to do for the business, which is what we do with observability. So we see a lot more volume there, and we see that as what we do basically where observability can help. The other part that’s interesting is that we — a lot happens — a lot more happens within these agents and these applications. And a lot of what we do as humans now starts to look like observability. Basically, we’re here to understand — we’re trying to understand what the machine does. We’re trying to make sure it’s aligned with us. We’re trying to make sure the output is what we expected when we started, and that we didn’t break anything. And so we think it’s going to bring observability more widely in domains that it didn’t necessarily cover before…

…[Question] I’m wondering if you’ve collected enough signal from the last couple of years of CapEx, that trend to estimate how much of that is training related and when it might convert to inferencing where Datadog might be required? In other words, are you looking at this wave of CapEx and able to say it’s going to create a predictable ramp in your LLM observability revenue?

[Answer] I think it’s too reductive to peg that on LLM observability. I think it points to way more applications, way more intelligence, way more of everything into the future. Now it’s kind of hard to directly map the CapEx from those companies into what part of the infrastructure is actually going to be used to deliver value 2 or 3 or 4 years from now. So I think we’ll have to see what the conversion rate is on that. But look, it definitely points to very, very, very large increases in the complexity of the systems, the number of systems and the reach of the systems in the economy. And so we think it’s going to be — like it’s going to be of great help to our business, let’s put it this way.

Datadog experienced adoption growth in AI native customers in 2025 Q4 that significantly outpaced non-AI customers; Datadog now has more than 650 AI native companies (was 500 in 2025 Q3), of which 19 are spending more than $1 million (was 15 in 2025 Q3); 14 of the top 20 AI-native companies globally are Datadog customers; management chose not to share the percentage of revenue coming from AI native customers in 2025 Q4 (was 12% in 2025 Q3); the AI native companies are not dilutive for Datadog’s gross margin; the large AI native customers get the same kind of volume discount as the large non-AI customers

We are seeing continued strong adoption amongst AI-native customers with growth that significantly outpaces the rest of the business. We see more AI-native customers using Datadog with about 650 customers in this group. And we are seeing these customers grow with us, including 19 customers spending $1 million or more annually with Datadog. Among our AI customers are the largest companies in this space, as today 14 of the top 20 AI-native companies are Datadog customers…

…[Question] Can you give us the percent of revenue of the AI cohort this quarter?

[Answer] We didn’t — have not put it in there…

…[Question] On margin, are the large AI-native customers significantly dilutive to gross margin?

[Answer] On a weighted average, they’re not. As we’ve always said, for larger customers, it isn’t about the AI-natives or non-AI-natives, it has to do with the size of the customer. We have a highly differentiated — diversified customer base. So I would say we’re essentially expecting a similar type of discount structure in terms of size of customer as we have going forward. And there are consistent ongoing investments in our gross margin, including data centers and development of the platform. So I think it’s more or less what we’ve seen over the past couple of years, not really affected by AI or non-AI native.

Datadog’s management’s basis for guidance is to have conservative assumptions on usage growth trends observed in recent months; in setting guidance, management made the conservative assumption that Datadog’s core business is growing faster than the business from its large AI customer (OpenAI)

Our guidance philosophy overall remains unchanged. As a reminder, we based our guidance on trends observed in recent months and apply conservatism on these growth trends…

…We noted that with the guidance being 18% to 20% and the non-AI or heavily diversified business being 20% plus, that would imply that the growth rate of that core business assumed in the guidance is higher than the growth rate of the large customer. It doesn’t mean the large customer is growing any which way. It’s just that in our consumption model, we essentially don’t control that. And so we took a very conservative assumption there.

Datadog’s management thinks that as agentic developers proliferate, there will be a lot more automation in observability workflows, but there will still be a need for UIs (user interfaces) for human developers to interact; to prepare for the rise in automation in observability workflows, management is exposing a lot of Datadog’s functionality directly to agents; management thinks it’s likely that Datadog’s MCP (Model Context Protocol) server will be part of how agents interact with Datadog’s products

[Question] In a world where there’s a greater mix between human SREs and agentic SREs, is there any sort of evolution that we need to think about in terms of whether it’s UI or how workflows work in observability and how maybe Datadog sort of tries to align with that evolution that’s likely to come in the next couple of years?

[Answer] There’s going to be an evolution, that’s certain. There’s going to be a lot more automation. We see it today, like we see the — all the signs we see point to everything moving faster, more data and more interactions, more systems, more releases, more breakage, more resolutions of those breakages, more bugs, more vulnerabilities, everything. So we see an acceleration there. At the end of the day, the humans will still have some form of UI to interact with all that. And a lot of the interaction will be automated by agent. So we’re building the products to satisfy both conditions. So we have a lot of UIs, and we are able to present the humans with UIs that represent how the world works, what their options are, give them familiar ways to go through problems and to model the world. And we also are exposing a lot of our functionality to agents directly. We mentioned on the call, we have an MCP server that is currently in preview and that is really seeing explosive growth of usage from our customers. And so it’s a very likely future that part of our functionality is delivered to agents through MCP servers or the likes. Part of our functionality is directly implemented by our own agents, and part of our functionality is delivered to humans with UIs.

Datadog’s management thinks that LLMs (large language models) are getting better all the time; management sees 2 parts to Datadog’s defensibility against LLMs; the 1st part is Datadog understands how all the data fits together; the 2nd part is Datadog has the foundation to provide proactive, real-time anomaly detection and solutions as Datadog is embedded in an organisation’s data plane; management thinks that the world of observability is shifting towards one where it’s important for observability providers to provide proactive, real-time anomaly detection and solutions; management is developing Datadog’s ability to provide proactive, real-time anomaly detection and solutions; the data planes in a typical organisation Datadog works with are real time and many orders of magnitude larger in volume than what an LLM typically sees; management is not seeing any change in the intensity of competition for Datadog’s business from LLMs; management thinks it’s only rational for all AI native customers to use Datadog’s products

We definitely see that LLMs are getting better and better, and we’ll bet on them getting significantly better every few months as we’ve seen over the past couple of years. And as a result, they are very, very good at looking at broad sets of data. So if you feed a lot of data to an LLM and ask for an analysis, you’re very likely to get something that is very good and that is going to get even better.

So when you think of what we have that is fundamentally our moat here, there’s 2 parts. One is how we are able to assemble that context, so we can feed it into those intelligence engines. And that’s how we aggregate all the data we get, we parse out the benefits. We understand how everything fits together and we can feed that into the LLM. That’s in part what we do, for example, today, we expose these kinds of functionality behind our MCP server. And so customers can recombine that in different ways using different intelligence tools.

But the other part that we think where the world is going for observability is that right now, we are — the SDLC [software development life cycle] is accelerating a lot, but it’s still somewhat slow. And so it’s okay to have incidents and run post-hoc analysis on those incidents and maybe use some outside tooling for them. Where the world is going is you’re going to have many more changes, many more things. You cannot actually afford to have incidents to look at for everything that’s happening in your system. So you need to be proactive. You’ll need to run analysis in stream as all the data flows through, you’ll need to run detection and resolution before you actually have outages materialize. And for that, you’ll need to be embedded into the data plane, which is what we run. And you also need to be able to run specialized models that can act on that data as opposed to just taking everything and summarizing everything after the fact 10, 15 minutes later. And that’s what we’re uniquely positioned to do.

We are building that. We’re not quite there yet, but we think that a few years from now, that’s what the world is going to run, and that’s what makes us significantly different in terms of how we can apply anomaly detection, intelligence and preemptive resolution into our systems…

…The data planes we’re talking about are very real time, and they are many orders of magnitude larger in terms of data flows, data volumes than what you typically feed into an LLM. So it’s a bit of a different problem to solve…

…[Question] I wanted to ask you about competition and how the LLM rise is impacting share shifts. Just talk about that and how Datadog will be impacted?

[Answer] There hasn’t been any particular change in competition in that we see the same kind of folks and the positions are relatively similar. And we are pulling away. We’re taking share from anybody who has scale. And I know there’s been noise. There were a couple of M&A deals that came up, and we got some questions about that. The companies in there were not particularly winning companies, nothing that we saw in deals, nothing that had a large market impact. And so we don’t see that as changing the competitive dynamics for us in the near future…

…At the end of the day, it should be irrational for customers — for all customers in the AI cohort not to use our product…

…I think as you look at being in-stream looking at 3, 4, 5 orders of magnitude, more data, looking at the data in real time, and passing judgment in real time on what’s normal, what’s anomalous and what might be going wrong doing that hundreds, thousands, millions of times per second. I think that’s what is going to be our advantage and where it’s going to be much harder for others to compete, especially general purpose AI platforms.
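The "in-stream" idea described in these answers (passing judgment on data as it flows through, rather than analysing an incident after the fact) can be illustrated with a toy rolling z-score detector. The window size and threshold below are arbitrary choices of mine; this is only a sketch of the concept, not Datadog's detection engine.

```typescript
// Toy illustration of in-stream anomaly detection: keep rolling statistics per
// metric and flag values that deviate strongly from the recent window as they
// arrive. Window size and the z-score threshold are arbitrary choices.
class StreamingAnomalyDetector {
  private window: number[] = [];
  constructor(private windowSize = 300, private zThreshold = 4) {}

  observe(value: number): { anomalous: boolean; zScore: number } {
    const n = this.window.length;
    let zScore = 0;
    if (n >= 30) {
      const mean = this.window.reduce((a, b) => a + b, 0) / n;
      const variance = this.window.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
      const std = Math.sqrt(variance);
      zScore = std > 0 ? (value - mean) / std : 0;
    }
    this.window.push(value);
    if (this.window.length > this.windowSize) this.window.shift();
    return { anomalous: Math.abs(zScore) > this.zThreshold, zScore };
  }
}

// Example: feed per-request latencies and flag the spike the moment it arrives.
const detector = new StreamingAnomalyDetector();
const samples = [...Array(100).fill(0).map(() => 12 + Math.random() * 2), 250];
for (const latencyMs of samples) {
  const { anomalous, zScore } = detector.observe(latencyMs);
  if (anomalous) console.log(`latency spike detected (z=${zScore.toFixed(1)})`);
}
```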

Datadog’s management thinks the best way to justify the existence of Datadog in an environment where observability bills are going up because of AI usage is to prove the cost-savings to customers

[Question] Tell us a little bit about how some of those conversations evolve when the customer sees that in order to do observability for more AI usage, perhaps that Datadog bill is going up.

[Answer] There’s only 2 reasons people buy your product: to make more money or to save money. So whatever you do, when customers use a new product, they need to see a cost savings somewhere or they need to see that they’re going to get to customers they wouldn’t get to otherwise. So we have to prove that. We always prove that. Any time a customer buys a product, that’s what is happening behind the scenes. The — in general, when customers add to our platform as opposed to bringing another vendor in or another product in, they also spend less by doing it on our platform.

Datadog’s management is seeing great productivity gains when employing AI internally

In terms of AI, to date, we are using it in our internal operations. So far, it’s — with the first signs of what we’re seeing is productivity and adoption…

…We’re getting a lot — we see great productivity gains with AI there, but at this point of detail, it helps us build more faster and get to solve more problems for our customers. And — but we’re very busy adopting AI across the organization.

Paycom Software (NYSE: PAYC)

IWant allows anyone to become an expert in the system without training; Forrester found that organizations with more than 500 employees that use IWant experienced an ROI of over 400%; with IWant, managers save up to 600 hours, executives up to 60 hours, HR teams up to 240 hours, and employees collectively 3,600 hours, on an annual basis; the leaders of organisations using IWant get immediate value out of the product without any training; IWant usage is up 80% in January 2026 from 2025 Q4; IWant’s functionality is continuously being improved

Our most advanced AI solution, IWant, is designed to accelerate the speed to value by allowing anyone to become an expert in the system without any training. Forrester’s recent analysis of a composite organization with more than 500 employees found that organizations using IWant experienced an ROI of over 400%, driven by productivity gains at every level. Managers save as many as 600 hours per year, executives up to 60 hours, HR teams up to 240 hours and employees across the organization collectively reclaim 3,600 hours annually.

Leaders describe IWant as a catalyst for deeper insight and one CEO remarked, I get immediate value. Without any training or knowledge of Paycom, I can go in and immediately understand more about my business…

…IWant usage is up 80% in January alone, and that’s from the fourth quarter…

…We continue to build out the IWant system. We continue to add more and more functionality to it. It continues to get stronger and stronger.

Paycom’s management thinks that AI is not a threat to Paycom; management thinks AI will give Paycom the opportunity to enter adjacent industries that it was not able to in the past

I think there’s a little misjudgment about the AI thesis materializing as a threat weapon that will be used against us. I mean AI is our friend at Paycom. And I’ve worked very hard to ensure that the misunderstanding of AI’s impact on us isn’t on our end.

And I just believe as you look into the future, we have opportunities now that we didn’t have in the past, right? Like the speed of development has increased, the pace of the user buyer being able to digest it might lag a little bit, but we can develop a lot more today than what we’ve been able to in the past. We’re in this age of software development and in some instances, replacement of specific software. Paycom can get into every adjacent industry now within weeks or months. And I’ll remind everybody that I was the first Bob [ coater ] back in 1998. So there are several easy-to-displace industries that don’t just sit ancillary to our industry, but they’re dependent upon our industry of where the data starts. And so now that we can develop anything very quickly and use all these technologies to replace other industries in a matter of weeks or months, we’re excited about how that — what that looks like for our future as well.

Paycom’s management is currently not seeing any impact on overall employment from AI, but is not dismissing impacts in the future; management thinks that Paycom still has ample growth opportunities even if AI does lead to lower overall employment

[Question] The AI impact to overall employment. How do you see that impacting Paycom business?

[Answer] I’d say we’re not seeing it. I’m not going to dismiss potential impacts for us to the future. I would say that we are not overexposed to any one industry, any one client, client size. And again, we only have 5% of the market. And so you could do some calculations and we’re the most automated product in the industry and the best product for the best value that someone is going to achieve throughout the industry. And so when you look at that, I think that you could do some adjustments in employment, which again, we have not seen. But I mean, even if you did, I still think our opportunity is intact for us.

Shopify (NASDAQ: SHOP)

Shopify has been building for AI shopping for some time; orders coming to Shopify stores from AI search have increased 15x since January 2025, albeit from a small base; management thinks AI shopping helps surface smaller merchants to the right buyers who might otherwise never have discovered the merchants; management thinks AI shopping benefits consumers because they gain access to a personal shopper; management thinks AI shopping will increase e-commerce penetration faster than it would have otherwise; management thinks it’s important that AI shopping is at least as good as shopping at a merchant’s digital storefront; Shopify has introduced Shopify Agentic Storefronts, which lets all major AI platforms access billions of products from Shopify merchants accurately and in an up-to-date way; AI platforms are plugged into the best commerce source of truth with Shopify, and this translates to better experiences for consumers; through the Agentic plan, brands not already using Shopify will soon be able to sell through the same AI platforms as Shopify merchants; Shopify built Universal Commerce Protocol (UCP) with Google as the common rails to support agentic commerce; UCP is payments agnostic and keeps merchants’ essential checkout logic intact; UCP is the only protocol that covers the full commerce journey end-to-end; leading retailers are already using UCP; agentic commerce does not bypass Shopify’s checkout; management has no opinion on which LLM platform will be the dominant one for agentic commerce and they just want to allow merchants to sell through agentic commerce; management sees merchants’ economics remaining the same between agentic commerce and selling directly from their stores

We’ve been building for this new era of AI shopping for a long time, and it’s now here. In fact, since January 2025, orders coming to Shopify stores from AI search are up 15x. Now that’s on a small base, but that’s still a really big jump in 12 months. For our merchants, it matters because it powers the long tail of commerce, surfacing smaller merchants to the right buyers who might otherwise have never discovered them. This is merit-based discovery at scale. For buyers, it matters because it’s like having a personal shopper in your pocket, someone who really understands them, their taste, their preference, their size…

…For Shopify, it matters because we believe it can bend the curve of e-commerce penetration by stripping out friction, pulling late adopters in and moving more everyday purchases online…

…It is critical that shopping in an AI conversation is at least as good as shopping at the merchant’s online store…

…Shopify Agentic Storefronts syndicates billions of products through our catalog to all major AI platforms, Google AI Mode and Gemini, ChatGPT, Microsoft Copilot, one click and our merchants get instant access to millions of potential buyers who are actively looking for their products. We’ve already seen huge brands like Vuori, Glossier, Steve Madden and SPANX sign up and start selling. Plus through the catalog, our partners get the most accurate up-to-date data for billions of products for millions of the best brands on the planet. And this is really important because when they tap into our catalog, they’re not just ingesting another feed, they’re plugging into the best commerce source of truth. And that source of truth means cleaner matching and fresher data, which translates directly into faster and more trustworthy experiences.

The new Agentic plan means that any brand not already using Shopify will soon be able to sell through the same AI platforms as our merchants as well as on the Shop app. Why? Because frankly, when commerce flows freely across agents, everybody wins…

…We built the Universal Commerce Protocol or UCP. UCP is infrastructure. It’s not a product. It’s the common rails Agentic commerce runs on. Shopify co-developed this with Google because we know commerce better than anyone. It’s an open standard for any agent to connect with any brand on the Internet. UCP is built to flex to the many ways commerce happens. It’s payment agnostic by design. It keeps the merchants essential checkout logic intact without forcing them to rebuild their customizations over and over again to fit our system. UCP is the only protocol that covers the full commerce journey end-to-end from search to cart, then checkout to post order, and it’s already being used by the world’s leading retailers…

…LLMs do not bypass Shopify’s Checkout. Checkout is really 2 parts. Think of it this way. You have a front end that’s the user interface that buyers interact with and the back end that processing everything server to server. So if you think about a Shopify store today, Shopify runs both the front end and the back end. And under UCP, Shopify still powers the overall experience, but the merchant gets to keep their own checkout system on the back end. Now with something like ChatGPT, for example, OpenAI will run the front end, which is sort of the screens and the forms that the buyer uses. But Shopify still runs the back end. And so things like order processing and payments through Shopify Payments, that all runs through Shopify’s infrastructure…

…We want to make sure that whatever surface, whatever permutation is the one that actually becomes the mainstay in agentic commerce, it reflects exactly the experience that the merchants want, similar to what they have in the online store as well. And so the economics for Shopify merchants are the same as if the transaction happened in the online store as well. There should be no difference there.
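The front-end / back-end split described above (the AI surface owns the buyer-facing step, while the merchant's commerce back end keeps processing orders and payments server to server) can be sketched roughly as follows. Every type, endpoint, and field name here is my own invention for illustration; this is not the published UCP specification.

```typescript
// Hypothetical sketch of the front-end / back-end split in an agentic checkout:
// the AI surface gathers what the buyer agreed to in the conversation, then hands
// the order to the merchant's commerce back end, which still owns checkout logic.
interface CartLine { productId: string; quantity: number; }

interface CheckoutRequest {
  cart: CartLine[];
  shippingAddress: { name: string; line1: string; city: string; country: string };
  paymentToken: string; // tokenized by the payment provider, never raw card data
}

interface CheckoutResult { orderId: string; status: "confirmed" | "declined"; totalCents: number; }

// Runs on the agent/front-end side: call the commerce back end server-to-server.
async function placeOrderFromAgent(req: CheckoutRequest): Promise<CheckoutResult> {
  const response = await fetch("https://commerce-backend.example.com/ucp/checkout", { // hypothetical endpoint
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!response.ok) throw new Error(`checkout failed: ${response.status}`);
  return (await response.json()) as CheckoutResult;
}
```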

Shopify’s on-platform AI assistant, Sidekick, proactively helps merchants prioritise and execute tasks; Sidekick’s usefulness is enhanced because Shopify powers a merchant’s store, checkout data, and apps; in the 3 weeks since Sidekick’s latest edition was released, it has generated almost 4,000 custom apps, created over 29,000 automations, built almost 355,000 task lists, and edited over 1.2 million photos; Sidekick Pulse is a new feature in Sidekick that surfaces tailored advice for merchants; Sidekick Pulse recently recommended that a Shopify jewelry merchant bundle 4 products because Sidekick Pulse knew the 4 products were best sellers and that bundles tend to convert better

Our on-platform AI assistant, Sidekick, has come a long way in a year. Sidekick is effectively a co-founder for our merchants. It uses everything it knows about your business, and it proactively tells you which task to prioritize. And it will even help you execute those tasks. Because Shopify powers the store, checkout data and apps, Sidekick can see the entire picture and do the work in one place…

…In just the 3 weeks after our latest edition dropped, Sidekick generated almost 4,000 custom apps, created over 29,000 automations with Shopify Flow, built almost 355,000 task lists and edited over 1.2 million photos. So it’s clear that Sidekick is doing real heavy lifting for our merchants…

…Sidekick Pulse is our new feature that proactively helps merchants grow their business. It works in the background to surface tailored advice that’s grounded in each merchant’s business, powered by over 2 decades of data…

…Last week, Sidekick Pulse made a recommendation to one of our jewelry brands. It suggested bundling 4 separate products and selling them together as a stack. Why? Because it knew that those 4 products were already best sellers, and it also knew that bundles tend to convert better and drive up cart value. Personalized data analysis paired with intelligence gained from hundreds of millions of other transactions. This is where our AI assistant really becomes the AI co-founder. It’s bespoke, it’s intuitive.

Shopify’s new app SimGym simulates real buyer behavior to provide feedback on store changes before they are shipped

Our new app SimGym simulates real buyer behavior to give you feedback on changes to your store before you even ship them.

0.5 million merchants have used AI within Shopify’s online store editor to create 6.5 million custom elements; Shopify’s online store editor allows anyone to design without code

Within our online store editor, more than 0.5 million merchants have used AI to create 6.5 million custom elements. Now anyone can design without code. This is really Shopify at its best. Massive complexity transformed into a tool for anyone with imagination, no technical skills required.

Shopify’s management believes AI advances will make Shopify even more essential for merchants

As AI advances, Shopify becomes even more essential. AI transforms interfaces and accelerates the pace of change, but it doesn’t alter the underlying architecture of commerce. Commerce will always require speed, reliability and trust at a global scale. When I say scale, consider the billions of transactions that we facilitate. But it’s not just about the volume. It’s the comprehensive commerce experience we support. When an AI agent surfaces a product in any interface, merchants still need a reliable, secure and compliant path to purchase and post purchase. They still need our ecosystem of buyers, developers and partners. We help merchants be everything everywhere all at once. Representing over 14% of U.S. e-commerce today and rapidly growing percentages in many geographies across the globe, we have an unparalleled view of commerce. Simply, we are the experts at commerce. AI will be a force multiplier. It will help us achieve our goals of democratizing entrepreneurship, inspiring more merchants, driving more transactions and creating more commerce channels.

Shopify was able to accelerate product development in 2025 without growing the size of the team because of the use of AI

Throughout 2025, we achieved operating leverage in each of R&D, sales and marketing and G&A, largely due to disciplined headcount management. By leveraging AI, automation and our proprietary project management and talent management systems, we’ve been able to accelerate our product development capabilities without growing the size of the team.

Shopify’s management sees Agentic Plan as an on-ramp for non-Shopify merchants to enter the Shopify ecosystem, similar to how Commerce Components works

The Agentic plan opens our infrastructure to all brands. And I think this idea that we’re bringing Agentic Commerce to every brand, whether or not they’re on Shopify, has certainly already been an incredible way for us to start conversations with brands who might not be ready to migrate or have not anticipated a full forklift migration just yet, but who don’t want to miss out on the incredible opportunity that Agentic Commerce might be. And so this is in a similar vein to how we created Commerce Components a couple of years ago, where non-Shopify merchants can use things like Shop Pay or simply use Shopify Checkout as a component. That allowed us to start conversations with brands that we weren’t otherwise talking to. In some cases, some of those brands who came to us initially just for Shop Pay are now entirely on Shopify. So certainly, we think this could be an incredible on-ramp, just like the Commerce Components play was.

The Catalog is important for Shopify’s agentic commerce ambitions because it is a source of truth for agents, and agents do not have to rely on scraping information from the internet

Tobi said something incredibly important recently about Catalog. He said that everyone else has to scrape the Internet, whereas we actually have the source of it. The fact that we have structured billions of products so agents can surface the most relevant items in seconds, and the fact that products are going to be surfaced based on relevance, means this sort of merit-based discovery is going to happen. I think that every retailer and every merchant on the planet is thinking about how they can get in front of as many buyers and consumers as possible on Agentic. If they continue down that path and do the math, more and more they realize that Shopify is the company that is front and center.

Shopify’s management appears to see UCP (Universal Commerce Protocol) as being the significantly more important rails for agentic commerce compared to OpenAI’s ACP (Agentic Commerce Protocol)

[Question] Can you help us understand UCP versus ACP, the other standard that OpenAI and Stripe are putting forward? Are these overlapping standards? Do they compete? Are they complementary in any way?

[Answer] Yes. Look, the goal is simple with UCP. It’s one common language for agents and retailers. The idea is that merchants can keep their brand and attribution, buyers get incredibly trustworthy experiences, and Agentic Commerce can scale. UCP is specifically geared towards being a protocol that covers the full commerce journey end-to-end, from search to cart to checkout, and it includes post order. It keeps the merchant’s essential checkout logic intact.

It doesn’t force them to rebuild customizations over and over again. It’s payment agnostic by design. It’s built to flex in many ways. I mentioned a couple of examples in my prepared remarks. Think about ButcherBox or AG1, for example: that subscription logic is really complex because sometimes you want to skip a month, sometimes you want to double up, and if you’re on vacation, you want to do a hold. Or think of some of the larger furniture companies on Shopify that do this incredible white glove delivery, where you can set the exact time and date for your couch being delivered.

These things need to be ported over into the Agentic world, and UCP does that. So in our view, UCP covers the full commerce journey end-to-end. And we think — we have 20 years of doing this. Commerce is very complex. It is easy to get it wrong. And I think that it’s more than just a transaction. It’s an entire experience and UCP covers all of that. And we’re really proud of what we did with our friends at Google. It was an incredible experience to work on it with them, but it works, and we think we’re already seeing incredible adoption from some of the largest retailers on the planet.

Shopify’s management is not seeing a competitive threat develop in terms of companies choosing to replace or bypass Shopify’s solutions with vibe-coded tools

[Question] About the feedback from merchants having discussions at the Board level about moving to Shop. Specifically, AI, the feedback that you’re getting from companies in terms of the AI road map, is that — I imagine it’s influencing decisions. Are you also seeing merchants evaluate custom solutions in light of what they can do with AI tools?

[Answer] I think a lot of the largest retailers, certainly the ones I’m meeting with, I mentioned brands like General Motors or L’Oreal or SuitSupply or Amer Sports, who runs Wilson and Salomon. What we hear from them is they’re looking — if they’re not on Shopify already, usually, they come to us with a particular problem. In some cases, it’s — we want to make sure we don’t miss out on Agentic. In other cases, they’re coming to us because they want to replace their homegrown system that they built many years ago for e-commerce. They don’t want to have 400 engineers anymore. They want to effectively come to Shopify because they want to go back to what they do best, which is they want to build furniture. They want to be a cosmetics company. They don’t necessarily want to have this massive engineering team… I think the days of let’s just build everything ourselves in-house is long gone. And I think that gives Shopify an incredible opportunity.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adyen, Datadog, Mastercard, Paycom Software, Shopify, and Visa. Holdings are subject to change at any time.