More Of The Latest Thoughts From American Technology Companies On AI (2025 Q4)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q4 earnings season.

Earlier this month, I published The Latest Thoughts From American Technology Companies On AI (2025 Q4). In it, I shared commentary from the fourth-quarter 2025 earnings conference calls of the leaders of US-listed technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industries and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2025’s fourth quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

With that, here is the latest commentary, in no particular order:

Adyen (OTC: ADYYF)

 Adyen’s platform now has Dynamic Identification, which enables real-time decisioning that improves conversion, reduces cost, and manages risk with greater precision; Dynamic Identification enables agentic commerce; 95% of Black Friday Cyber Monday shoppers were recognised by Dynamic Identification across online and in-store channels; Dynamic Identification was created to address the challenges AI was posing to document-based approaches to identity and risk; Dynamic Identification uses AI to draw insights from trillions of interactions across Adyen’s online and in-person flows, instead of performing static checks; Dynamic Identification powers Adyen Uplift, which makes payments decisions that balance conversion, cost, and risk; Dynamic Identification is the foundation for the Personalise module in Adyen Uplift that was developed in 2025 H2; Dynamic Identification helps merchants deal with policy abuse that includes exploitation of returns, promotions, and refunds; Dynamic Identification helped a global luxury group and a large sports and entertainment company identify highly problematic shoppers that were previously undetected; Dynamic Identification is not a product itself

Dynamic Identification adds an intelligence layer to our platform, enabling real-time decisioning that improves conversion, reduces cost, and manages risk with greater precision as our customers scale across channels…

…This new foundational layer also addresses policy abuse and enables emerging models such as agent-led commerce. Peak events validate the strength of this new layer, with ~95% of Black Friday Cyber Monday shoppers recognized across online and in-store channels…

…Advances in AI, increasingly sophisticated fraud, and the growing misuse of digital systems are exposing the limits of static, document-based approaches to identity and risk. Designed for a different era, traditional controls add friction for legitimate businesses and shoppers, while struggling to prevent abuse at scale. To address this, we have integrated a third foundational layer: Dynamic Identification. Moving beyond static checks, we designed this layer to draw on trillions of interactions across our online and in-person flows. By embedding this intelligence directly into our stack, we assess risk dynamically and adapt decisions in real time, enabling us to eliminate friction while tightening security with surgical precision…

…The most immediate impact of Dynamic Identification is visible across our optimization and risk products. It is the intelligence layer that powers Adyen Uplift, enabling decisions that balance conversion, cost, and risk across the full payment flow rather than in isolation. 

Building on this foundation, we introduced the newest module within Adyen Uplift in 2025: Personalize. It was developed and validated through pilots with a select group of enterprise customers in the second half of the year, focusing on one of the most common trade-offs merchants face as they scale across channels: how to lower payment costs without negatively impacting conversion. Lower-cost payment methods are often available, but encouraging shoppers to choose them indiscriminately can increase checkout abandonment and degrade the customer experience. Dynamic Identification allows this trade-off to be managed intelligently. By understanding who the shopper is and how they behave across both online and in-person touchpoints, we can personalize the payment experience in real time, guiding shoppers toward preferred and lower-cost options only when the data indicates they are likely to complete the transaction…

…For our customers, an underestimated share of losses comes not only from traditional payment fraud, but also from policy abuse: repeated exploitation of returns, promotions, and refunds that often appear legitimate in isolation but compound into material cost over time. Without visibility into repeat behavior, merchants are left to rely on manual reviews or broad policy restrictions, increasing friction for legitimate customers while failing to address the underlying problem.

In the second half of 2025, we applied Dynamic Identification to this challenge through targeted pilots with enterprise customers. By linking refund activity at the identity level, rather than viewing transactions in isolation, we were able to surface patterns that had previously remained hidden.

The pilots showed strong engagement, with merchants using these insights on a daily basis, rather than only for ad hoc investigations. More importantly, they reported a step change in confidence: they were able to identify abuse clearly, measure its true scale, and pinpoint its sources. This replaced fragmented, manual processes with shared, data-driven visibility. Capabilities such as identifying top refund contributors at the shopper level were consistently cited as materially reducing investigation time and operational overhead…

…One global luxury group identified individual shoppers each receiving up to €5k in refunds, in some cases up to twenty times their average basket value, revealing potential material losses that had gone unnoticed. In another case, a large sports and entertainment customer identified a shopper with roughly 70% of transactions refunded over several years, exposing a long-standing abuse pattern that had not been visible through traditional transaction-level analysis…

…Dynamic Identification is our way of applying AI to the large data set we have…

…Dynamic Identification is in itself not a product. So one of the product suites that is built upon Dynamic Identification is Uplift.

Adyen’s management sees Dynamic Identification as an enabler of agentic commerce; management thinks merchants see clear potential in agentic commerce, but merchants also want to retain ownership of the customer relationship, control over payments and data, and confidence that new channels will not introduce new risks; Dynamic Identification enables verification of shopper intent, adaptive authentication, and identity-informed risk decisions even without a human in the loop; Adyen is engaged with the broader ecosystem in enabling agentic commerce; agentic commerce currently has immaterial volume on Adyen; management is not including agentic commerce in Adyen’s 2026 guidance, but thinks it will be a growth driver in the long term; management sees trust as a really important component in agentic commerce, and that’s where Dynamic Identification helps; management thinks it’s really important for Adyen to work with key players in the agentic commerce ecosystem, such as OpenAI and Google, to develop protocols

Dynamic Identification is also a critical enabler of emerging models such as agentic commerce. As this evolution unfolds over time, traditional identity signals are likely to fall away. Transactions initiated by agents will require new trust frameworks, relying on infrastructure, behavioral context, and adaptive risk models rather than direct human interaction.

In H2, we focused on understanding our customers’ needs and how we can best build to meet them. We held extensive conversations with enterprise merchants across retail, luxury, travel, entertainment, and platforms to understand both their ambitions and their concerns. While merchants see clear potential in agent-led commerce, they are equally clear about what must not change: ownership of the customer relationship, control over payments and data, and confidence that new channels can be adopted without introducing new risk…

…Rather than building isolated agent experiences, we are extending our existing platform so that agent-initiated transactions become another channel within a merchant’s existing workflows, governed by the same principles of control, security, and interoperability. Dynamic Identification plays a central role here, enabling verifiable shopper intent, adaptive authentication, and identity-informed risk decisions even when a human is no longer directly in the loop…

…We deepened our engagement with the broader ecosystem by collaborating with partners including OpenAI, Google, Cloudflare, Visa, and Mastercard, and joining the Agentic AI Foundation. Together, we are contributing to the development of open standards that allow agent-led commerce to scale safely and interoperably, without locking merchants into closed systems or fragmenting the ecosystem…

At the moment, the number of transactions is still immaterial on our platform. We started with it. I think that’s very important, so we started with Agentic Commerce. It’s an additional sales channel, and the beauty of having a single platform globally is that we basically have all the building blocks to cater it and to start growing this sales channel with our customers…

Take agentic commerce as one example. It’s not gonna drive short-term revenues, right? So it’s not a big part of our 2026 revenue expectations, but if it’s a top priority for your customer, you want to be there, and you want to support them with it, and that’s where we’re well-positioned to do it, and it will help us drive growth over a longer period of time, right?…

…In this new world, we need to know who is the consumer behind the agent, and how do we know that we can trust the agent, that he’s indeed acting on behalf of the consumer? And that’s where Dynamic Identification really helps. So it helps to look at the signals that we get and compare that to the signals that we have in our system, and then come up with the right outcome or decision, whether this can be trusted or not…

…It’s also very important to shape the protocols with OpenAI, with Google, to make sure that that information does not get lost, and making sure that also our merchants do not lose the connection with the consumer behind the agent. Because that’s one of the key elements that our merchants find important, and we want to make sure that that connection is not lost.

In pilot tests, Personalise, which is powered by Dynamic Identification, helped merchants improve conversion by up to 6% while lowering transaction costs by up to 3%; mobility provider Hoppy used Personalise and achieved 2% payment cost savings while maintaining a locally relevant checkout experience as it expanded into new cities; Personalise was able to dynamically prioritise the payment methods riders were most likely to use for Hoppy

Insights from the H2 pilots demonstrate the value of this adaptive approach. Merchants observed conversion improvements of up to 6%, alongside transaction cost reductions of up to 3%, achieved through personalized optimization rather than static, rule-based, and generic logic…

…Mobility provider Hoppy realized 2% payment cost savings while maintaining a locally relevant checkout experience as it expanded into new cities. By dynamically prioritizing the payment methods riders were most likely to use, while favoring cost-efficient options where possible, Hoppy protected margins without compromising conversion. Together, these results show how moving beyond static checkout logic enables businesses to better align shopper preferences with cost-efficient payment methods, turning checkout into a scalable driver of growth and profitability. This is the power of Dynamic Identification: translating real-time intelligence into decisions that drive tangible results.

Airbnb (NASDAQ: ABNB)

Airbnb’s management chose to deploy AI for customer support as the first use case within the company; Airbnb built an AI agent trained on millions of support interactions; Airbnb’s AI agent is now resolving 1/3 of support issues, and resolution times are now much faster; Airbnb’s AI agent is live across North America, and management plans to roll it out globally; management’s vision for the customer support AI agent is for guests to be able to call and talk to the agent; management thinks that an AI agent that can converse with guests via voice will (1) lower customer support costs for Airbnb, and (2) improve the quality of customer support

The final piece that accelerates everything we do is AI. Now we’ve taken a really intentional path here. While other companies rush to build chatbots into their existing apps, we started by solving the hardest problem, customer support. We built a custom AI agent trained on millions of our support interactions. It’s already resolving 1/3 of the support issues without needing a live specialist and resolution times are significantly faster. It’s live across North America, and we’re planning to roll it out globally…

…Right now, nearly 30% of tickets in North America that are English-based are handled by an AI agent. A year from now, if we’re successful, significantly more than 30% of tickets will be handled by a customer service agent in many more languages, in all the languages where we have live agents and AI customer service will not only be chat, it will be voice. You can actually call and talk to an AI agent. We think this is going to be massive because not only does this reduce the cost base of Airbnb customer service, but the kind of quality of service is going to be a huge step change. Not only can you get responses in seconds, but the agents using AI are going to be significantly more productive.

Airbnb’s management is building an AI-native experience within the app that knows guests and hosts and will help (1) guests plan their entire trip, and (2) hosts run their businesses better; management will build the AI-native experience without spending significant sums of money on data centers; management will build the AI-native experience without building AI models; management thinks Airbnb’s investments into AI will not affect the company’s profit; management thinks AI will help personalise the user-experience for guests on Airbnb 

We’re building an AI-native experience where the app doesn’t just search for you. It knows you. It will help guests plan their entire trip, help hosts better run their businesses and help the company operate more efficiently at scale…

…We don’t operate experiences, and we’re not building data centers. What we’re doing is finding small wins and scaling them profitably…

…I think one of the great things about Airbnb is that we have a very, very cost-efficient innovation model. So unlike other companies, we’re not building models. We do not have a huge CapEx cost base. So our investment in AI will not affect the P&L. I don’t think you’ll see it in the P&L…

…AI allows us to personalize. Some people come to Airbnb and all they want to see are unique homes. And before AI, like, personalization was a little more primitive. So if they saw a hotel, it might be jarring. Now we can really personalize. So people who just want to see Airbnbs can see Airbnbs. People just want to see hotels, we can eventually personalize, they can just see hotels. If people want to see both, we can know if you’re booking last minute, 1 night, then we’re going to show you a hotel. If you’re booking a family of 5 in Italy, we’re going to show you a home. So it really goes back to personalization.

Airbnb’s management believes that LLM (large language model) chatbots cannot disintermediate Airbnb because they lack access to the unique data and functionality that Airbnb has; management believes that adding an AI layer onto the Airbnb app will create something that is impossible to replicate; management thinks LLM chatbots will be very similar to online search in being good top-of-funnel discoveries for guests and this will be positive for Airbnb; management has seen that traffic from LLM chatbots converts at a higher rate than Google traffic; management sees AI models as being available for use by anyone; management thinks specialisation will win in travel with AI because Airbnb can use any leading AI model and customise it based on Airbnb’s millions of interactions, and hook up the model to important contact points; management does not think that one model builder will end up owning everything

This approach is also our strongest defense against disintermediation. A chatbot can give you a list of homes, but it can’t give you the unique points you find in Airbnb. A chatbot doesn’t have our 200 million verified identities or our 500 million proprietary reviews, and it can’t message the host, which 90% of our guests do. It can’t provide global payment processing, customer support or insurance. By layering AI over the entire Airbnb experience, we believe we’re building something that’s impossible to replicate…

…I think these chatbot platforms are going to be very similar to search. They’re going to be really good top-of-funnel discoveries. And in fact, what we’ve seen is, I think, they’re going to be positive for Airbnb. And I’m very, very deep in this space. And what we see is that traffic that comes from chatbots converts at a higher rate than traffic that comes from Google. But the other thing to know, and this is the most important point, is that these models are not proprietary. The models in ChatGPT, the models in Gemini, the models in Claude and the models like Kiwi are available to every single company. And so pretty soon, every company becomes an AI platform if they make the shift. We will be able to build everything everyone else will have if we use their models. And we believe specialization will win in travel because if somebody wants to find an Airbnb or have a trip, we can take their model, the same model they use, we can post-train it and tune it based on our millions of interactions. We can connect it to our customer support agents. We can connect it to our hosts. And that’s fundamentally what we think…

…I don’t think that one company is going to own everything. I think we’re going to be able to work together. And these companies will be very helpful top-of-funnel traffic generators for Airbnb just like Google was.

Airbnb’s management wants to nail down AI search for Airbnb first and then applying the AI search form factor to sponsored listings; Airbnb is currently conducting small-scale tests on AI search; management can’t pin down a concrete timeline for building AI search; management thinks AI search is difficult problem to solve for e-commerce because it is multi-modal; management thinks a chat interface for AI search for e-commerce (and travel) is not ideal, and Airbnb needs to innovate on the user interface

One of the things that’s been really clear with the — after the launch of ChatGPT was that traditional search was going to become essentially conversational AI search. And that what we wanted to do is really design AI search, really see how that works. And then if we are going to do sponsored listings, we design that ad unit in that form factor. So we’re focused, first and foremost, on the most perishable opportunity, which is AI search. Actually, funny enough, we are doing tests as we speak. So AI search is live to a very small percent of traffic right now. We’re doing a lot of experimentation. The way we do things with AI is much more rapid iteration, not big launches. And over time, we’re going to be experimenting with making AI search more conversational, integrating it into more of the trip. And eventually, we will be looking at sponsor listings as a result of that. But we want to first nail AI search…

…AI search will eventually — I can’t put a time line on it because AI is obviously highly unpredictable. But we want to be — we would love to be the first company in e-commerce that really nails AI search, conversational search. I think it’s really hard not just in travel, but all e-commerce. One of the reasons that chatbots are really hard for commerce is because they’re very visual. They’re photo forward. You need to be able to compare. You need to be able to open different tabs. So a text forward chatbot interface is not the ideal. So we have to actually innovate on the user interface.

Airbnb’s management thinks AI will significantly improve productivity for all Airbnb employees; more than 80% of Airbnb engineers are currently using AI tools

It’s going to make our engineers and everyone at Airbnb significantly more efficient. More than 80% of engineers are now using AI tools. That soon will be 100%.

Arista Networks (NYSE: ANET)

Arista Networks has exceeded its goal of earning $1.5 billion in AI center networking revenue in 2025; management has raised their AI center revenue-goal for 2026 and now expects Arista Networks’ AI center revenue in 2026 to be double that of 2025’s; management’s target for AI center revenue in 2026 includes both front-end and back-end networking

As expected, we have exceeded our strategic goals of $800 million in campus and branch expansion as well as $1.5 billion in AI center networking…

…With our increased visibility, we are now doubling from 2025 to 2026 to $3.25 billion in AI networking revenue…

…We maintain our 2026 campus revenue goal of $1.25 billion and raise our AI centers goal from $2.75 billion to $3.25 billion…

…3 years ago, we had no AI. We were staring at InfiniBand being deployed everywhere in the back end. And we pretty much characterized our AI as only back end, just to be pure about it, right? 3 years later, I’m actually telling you we might do north of $3 billion this year and growing, right? That number definitely includes the front end as it’s tied to the back-end GPU clusters, and it’s an all Ethernet, all AI system for agentic AI applications.

Arista Networks’ products can interoperate with NVIDIA, but management sees Arista Networks emerging as the gold standard network for running training and inference models that process tokens at teraflops speed; Arista Networks is co-designing AI rack systems with 1.6T (1.6 terabits per second) switching coming in 2026

We interoperate with NVIDIA, the recognized worldwide market leader in GPUs, but also realize our responsibility to broaden the open AI ecosystem, including leading companies such as AMD, Anthropic, ARM, Broadcom, OpenAI, Pure Storage and VAST Data, to name a few, that create the modern AI stack of the 21st century. Arista is clearly emerging as the gold standard terabit network to run these intense training and inference models processing tokens at teraflops…

…We are codesigning several AI rack systems with 1.6T switching emerging this year.

Arista Networks’ management recently launched its flagship 7800 R4 spine product for routing use cases that include AI spines

In Q4 2025, Arista launched our flagship 7800 R4 spine for many routing use cases, including DCI, AI spines with that massive 460 terabits of capacity to meet the demanding needs of multiservice routing, AI workloads and switching use cases.

In 2025, Arista Networks participated in Ethernet-based industry standards for AI scale-up and scale-out networking; Arista Networks’ networking portfolio is successfully deployed in scale-up, scale-out, and scale-across AI networks; management thinks AI networking architectures need to handle both training and inference frontier models to ease congestion; the key metric when handling training is job completion time, while the key metric when handling inference is time taken to a first token; management sees Arista Networks’ portfolio as having the features to handle the fidelity of AI and cloud workloads; management’s strategy for AI networking is based on Autonomous Virtual Assist, which helps instrument customers’ networks for enhanced security, observability and agentic AI operations

In 2025, we are a founding member of the Ethernet-based standards for both scale-up with ESUN as well as completing the Ultra Ethernet Consortium 1.0 Specification for scale-out AI networking. These AI centers seamlessly connect the back-end AI accelerators to the front-end of compute storage, WAN and classic cloud networking. Our AI accelerated networking portfolio consisting of 3 families of EtherLink spine-leaf fabric are successfully deployed in scale-up, scale-out and scale-across networks.

Network architectures must handle both training and inference frontier models to mitigate congestion. For training, the key metric is obviously job completion time, the amount of time taken between admitting a job, training job to an AI accelerator cluster and the end of a training run. For inference, the key metric is slightly different. It’s the time taken to a first token, basically the amount of latency it takes for a user submitting a query to receive their first response. Arista has clearly developed a full AI suite of features to uniquely handle the fidelity of AI and cloud workloads in terms of diversity, duration, size of traffic flow and all the patterns associated with it.

Our AI for networking strategy based on AVA, Autonomous Virtual Assist, curates the data for higher-level functions. Together with our publish-subscribe state foundation in EOS and NetDL, or Network Data Lake, we instrument our customers’ networks to deliver proactive, predictive and prescriptive features for enhanced security, observability and agentic AI operations. Coupled with the Arista validated designs for network simulation, digital twin and validation functionality, Arista platforms are perfectly optimized and suited for Network as a Service.

Arista Networks’ purchase commitments at the end of 2025 Q4 were $6.8 billion, up 42% sequentially; the sequential increase in purchase commitments was for chips related to new products and AI deployments, and was affected by the supply constraint on DDR4 memory chips; pricing for memory chips has gone up significantly for Arista Networks; management sees memory chips as the new gold in the AI sector

Our purchase commitments at the end of the quarter were $6.8 billion, up from $4.8 billion at the end of Q3. As mentioned in prior quarters, this expected activity mostly represents purchases for chips related to new products and AI deployments. We will continue to have some variability in future quarters due to the combination of demand for our new products, component pricing such as the supply constraint on DDR4 memory and the lead times from our key suppliers…

…Our peers in the industry have been facing this probably longer than we have because I think the server industry probably saw it first because they’re more memory intensive. Add to that, that we’re expecting increases from the silicon fabrication that all the chips are made, as you know, essentially with one company, Taiwan Semiconductor. So Arista has taken a very thoughtful approach, being aware of this since 2025 and frankly absorbed a lot of the costs in 2025 that we were incurring. However, in 2026, the situation has worsened significantly. We’re having to smile and take it just about at any price we can get and the prices are horrendous. They’re an order of magnitude exponentially higher. So clearly, with the situation worsening and also expected to last multiple years, we are experiencing shortages in memory. Thankfully, as you can see reflected in our purchase commitments, we are planning for this. And I know that memory is now the new gold for the AI and automotive sector.

The demand for Arista Networks’ networking products in AI data centers comes only after the data centers are built and after the GPUs and other AI chips are purchased; management sees demand for Arista Networks’ products as being very good, but the exact timing for shipments is harder to pin down

That’s an important thing to understand, that we don’t track the CapEx. The first thing that happens in the CapEx is they got to build the data centers and get the power and get all of the GPUs and accelerators and the network comes — lags a little. So demand is going to be very good, but whether the shipments exactly fall into ’26 or ’27, Todd, you can clarify when they really fall in, but there’s a lot of variables there.

Arista Networks was initially working with only a small handful of model builders and AI chip designers, but the company is now working with many more of such entities; NVIDIA accounted for essentially all of Arista Networks’ AI deployments just a year ago, but management now sees AMD AI chips in roughly 20%-25% of deployments; Arista Networks is the preferred provider for AI data centers that utilise AMD AI chips

If you look at us initially, we were largely working with 1 or 2 model builders and 1 or 2 accelerators, NVIDIA and AMD, and OpenAI was the primarily dominant one. But today, we see that there’s really multiple layers in a cake where you’ve got the GPU accelerators…

…Arista needs to deal with multiple domains and model builders and appropriately whether it is Gemini or xAI or Anthropic Claude or OpenAI and many more coming. These models and the multiprotocol algorithm or nature of these models is something we have to make sure we build a network correctly for. So that’s one…

…A year ago, it was pretty much 99% NVIDIA, right? Today, when we look at our deployments, we see about 20%, maybe a little more, 20% to 25% where AMD is becoming the preferred accelerator of choice. And in those scenarios, Arista is clearly preferred because they’re building best-of-breed building blocks for the NIC, for the network, for the I/O and they want open standards as opposed to full-on vertical stack from one vendor.

Arista Networks’ management thinks AI model builders will be working with multiple cloud providers, and Arista Networks will be working with all the cloud providers

I think the biggest issue is not only the model builders, but they’re no more in silos in one data center, and you’re going to see them across multiple colos and multiple locations and multiple partnerships with our cloud titan customers that we’ve historically not worked with this. So I think you’ll see more copilot versions of it, if you will, with a number of our cloud titans. So we expect to work with them as AI specialty providers, but we also expect to work with our cloud titans in bringing the cloud and AI together.

Arista Networks’ management is careful about going into business with some AI neoclouds (the ones that converted from oil money or crypto money into AI) because their businesses and financial health are questionable

There are a set of neoclouds that we watch more carefully because some of them are oil money converted into AI or crypto money converted into AI. And over there we are going to be much more careful because some of those neoclouds are looking at Arista as the preferred partner, but we would also be looking at the health of the customer or they may just be a onetime. We don’t know the exact nature of their business and those will be smaller.

Arista Networks’ management does not believe that AI is eating software; management believes that AI enables better software to be built

I don’t think, Ken, any of us believe that AI is eating software, but AI is definitely enabling better software.

Arista Networks’ management thinks that the rise of agentic AI will increase demand for all kinds of XPUs

The rise of agentic AI will only increase, not just the GPU, but all gradations of XPU that can be used in the back end and front end.

Arista Networks’ 4 major AI customers are all deploying AI with Ethernet; 3 of the 4 customers have deployed 100,000 GPUs each, and they are growing; the remaining customer is migrating from Infiniband and is still below 100,000 GPUs

We are in all 4 customers deploying AI with Ethernet. So that’s the good news. 3 of them have already deployed a cumulative of 100,000 GPUs and are now growing from there. And clearly migrating now into beyond pilots and production to other centers, power being the biggest constraint. Our fourth customer is migrating from InfiniBand, so it’s still below 100,000 GPUs at this time, but I fully expect them to get there this year, and then we shall see how they get beyond that. 

Arista Networks has extended the ability to stream the state of a network into AI clusters

The EOS architecture is based on state orientation. This is the idea that we capture the state of the network and then stream that state out from the system database on the switches into whatever, the CloudVision or whatever system can then receive it. And we’re extending that capability for AI with a combination of in-network data sources related to flow control, RDMA counters, buffering and congestion counters, and also host-level information, including what’s going on in the RDMA stack on the host, what’s going on with collectives, latencies, any flow control problems or buffering problems in the host NIC. Then we pull those — that information all together in CloudVision and give the operator a unified view of what’s happening in the network and what’s happening in the host.
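The state-streaming idea described above, device-level counters published into a central system for one unified view, can be sketched minimally. Everything below is a hypothetical illustration of the pattern only: the counter names, `read_switch_counters`, and the `Collector` class are assumptions, not Arista's actual EOS or CloudVision APIs.

```python
import json
import time

# Hypothetical per-switch state snapshot: flow-control, RDMA and
# buffering counters like those described in the quote above
# (field names are illustrative, not real EOS telemetry fields).
def read_switch_counters(switch_id: str) -> dict:
    return {
        "switch": switch_id,
        "timestamp": time.time(),
        "pfc_pause_frames": 12,        # flow-control events
        "rdma_retransmits": 3,         # RDMA stack health
        "buffer_occupancy_pct": 71.5,  # congestion indicator
    }

class Collector:
    """Stands in for a central receiver (a CloudVision-like system)
    that merges streamed device state into one unified view."""
    def __init__(self):
        self.latest = {}

    def ingest(self, snapshot: dict) -> None:
        # Keep only the most recent state per switch, keyed by device.
        self.latest[snapshot["switch"]] = snapshot

    def unified_view(self) -> str:
        return json.dumps(self.latest, indent=2, sort_keys=True)

collector = Collector()
for sw in ("leaf-1", "leaf-2"):
    collector.ingest(read_switch_counters(sw))
print(collector.unified_view())
```

The operator-facing value in the quote comes from correlating these in-network counters with host-side NIC and RDMA data in one place, which is what the single `Collector` stands in for here.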

Cloudflare (NYSE: NET)

A leading AI company expanded its relationship with Cloudflare, and Cloudflare is now the AI company’s only long-term infrastructure provider with 100% traffic allocation; Cloudflare’s management is seeing a trend of AI companies choosing Cloudflare as their infrastructure platform

A leading AI company expanded their relationship with Cloudflare, signing a 2-year $85 million pool of funds contract for our full platform, selecting Cloudflare as their single long-term infrastructure provider with 100% traffic allocation. Following a rigorous RFP, they selected Cloudflare over major hyperscalers not just for our unified stack and rapid innovation, but also for our strategic neutrality. This win underscores a growing trend, the most sophisticated AI companies are choosing Cloudflare as their mission-critical, independent platform to connect, protect and build the future of the AI-driven Internet.

A leading AI company expanded its relationship with Cloudflare; this AI company chose Cloudflare in a build versus buy scenario; Cloudflare enables the AI company to manage global traffic with 99.999% availability 

Another leading AI company expanded their relationship with Cloudflare, signing a 1-year $5.4 million contract for our Workers developer platform and application services. What’s most compelling about this win is that it was a classic build versus buy scenario against the hyperscalers. In an industry where being first matters, our ready-to-deploy developer platform provided the agility and speed to market they couldn’t find elsewhere. With Cloudflare, this customer is now able to manage heavy global traffic with 99.999% availability. This deal is a testament to our shift from being just a vendor to instead being a strategic co-innovation partner for the world’s most sophisticated AI companies.

A Fortune 100 company that is also a leader in AI expanded its relationship with Cloudflare; the Fortune 100 company requires zero downtime and chose Cloudflare not because of price, but because of performance

A Fortune 100 technology company expanded their relationship with Cloudflare, signing a 3-year $5.8 million contract, representing a notable upsell from their initial engagement with us in mid-2025. As a leader in AI, this customer operates under a strict mandate for global resiliency requiring a multi-vendor architecture to ensure zero downtime for their application performance. We beat out the competition not on price but rather on performance and engineering innovation.

A European Global 2000 technology company expanded its relationship with Cloudflare, and is in discussions with Cloudflare about AI Crawl Control 

A European Global 2000 technology company expanded their relationship with Cloudflare, signing a 3-year $5.8 million pool of funds contract to provide seamless access to our entire platform. We signed our first deal with this customer back in February. After quickly realizing the power of Cloudflare’s platform, they came back to us looking to move from a small variable commitment to a deep strategic partnership. Unlike their legacy incumbents, our combination of best-of-breed security and our Workers developer platform enables sophisticated automation to manage their global infrastructure and greater flexibility to innovate at scale. It’s early days with this customer, and we’re already in discussions regarding AI Crawl Control.

A US media company signed a contract with Cloudflare for AI Crawl Control; the media company was facing a massive increase in AI scraping and chose Cloudflare to gain visibility into which AI models are consuming their data; with the visibility on the AI models, the media company can better monetise its content  

A U.S. media company signed a 3-year $3.1 million contract for AI Crawl Control, along with application services and Workers. This customer was facing a massive increase in AI scraping, which was crushing their network and driving up infrastructure costs. They chose Cloudflare to gain visibility into which AI models are consuming their data, allowing them to protect and eventually monetize their unique content. By leveraging Cloudflare Workers to replace years of complex technical debt from an incumbent, they were able to migrate massive Internet properties into production in just 2 weeks. This deal proves that as AI accelerates, Cloudflare is the partner of choice for companies looking to protect their IP while improving performance, reducing operational costs and enhancing their security postures.

Cloudflare’s management is seeing the shift to AI and agents driving more demand for the company’s services; management thinks AI agents (1) look at significantly more sites when making decisions, (2) allow for a much greater degree of software customisation, and (3) never need to rest, unlike humans; management thinks AI agents are changing the economics of software from a seat-based model, to one where the importance lies with providing the compute, connectivity, and guard rails for agents; management thinks Cloudflare is able to capture value on both sides of agentic interactions; most vibe coding platforms are either built on Cloudflare Workers or have it as their preferred deployment target; human developers are using Cloudflare’s AI Gateway to manage inference with caching, rate limiting and observability; usage of AI is driving adoption for Cloudflare’s Zero Trust platform; management is seeing agentic workloads generate an order of magnitude more outbound requests to the web than traditional user-driven apps; management sees Cloudflare, which has 20% of the web sitting behind its network, as the global control plane for the agentic internet; management thinks the agentic internet is creating new growth opportunities for Cloudflare; a Fortune 500 pharmaceutical company is using Cloudflare to build AI tools; a technology company is using Cloudflare Containers to allow its customers to deploy AI tools in a secure isolated environment; a leading financial services company used Cloudflare to launch an MCP (model context protocol) server for AI agents to interact directly with its payment services; management thinks companies like Cloudflare for deploying AI because it offers (1) a complete tool kit, (2) a modern architecture that fits agentic work, and (3) cost-efficient scalability; management sees AI as a pure tailwind for Cloudflare’s business

Second, we are seeing the shift to AI and agents drive more demand for Cloudflare services. What we’re witnessing is a fundamental replatforming of the Internet. AI is driving a paradigm shift in how software is both created and consumed, and that is turning out to be the biggest tailwind for Cloudflare’s network and Workers developer platform. If you look at the last 30-plus years of the Internet and software ecosystem, they were built for human consumption, people in seats and clicks. Now the agentic Internet is emerging, and we can already see its trends. If humans looked at 5 sites when they were making a decision, agents might look at 5,000. If humans had to fall back on generalized software and interfaces, agents allow for infinite customizability of every software application for every need. If humans follow a common circadian rhythm to work, agents never need to sleep. Agents, in other words, are the ultimate infrastructure multiplier. In turn, they are reshaping the very economics of software. The industry is transitioning from a business model defined by seat licenses to one where the winners are those providing the compute, connectivity and rails and guardrails for these new digital workers at scale. Cloudflare was built for this moment. We are uniquely architected to capture value on both sides of the agentic interactions. That means we win when AI applications are built on Cloudflare Workers, but we also win just from the increased usage of all of our products that an agentic Internet drives…

…When the cost of generating code drops to near 0, the volume of new applications explode. It’s not a coincidence that most so-called vibe coding platforms are either built on Cloudflare Workers or have us as their preferred deployment target. We exited 2025 with more than 4.5 million human developers active on our platform. It’s a lot more if we count their agents. Developers are using Workers to run autonomous logic across our global network, containers for sandboxes and AI gateway to manage inference with caching, rate limiting and observability. AI usage is even driving adoption of our Zero Trust platform to ensure that data is compartmentalized and access granted in limited and controlled ways…

…We’re seeing agentic workloads generate an order of magnitude more outbound requests to the web than traditional user-driven applications. Over the month of January alone, the number of weekly requests generated by AI agents more than doubled across the Cloudflare network. This is driving increased demand for our whole platform. This is where Cloudflare’s scale becomes our moat. With more than 20% of the web already sitting behind Cloudflare’s network, we are effectively the global control plane for the agentic Internet. That’s creating a number of new growth opportunities, both with our traditional business as well as what we’ve begun calling Act 4, helping invent the future business model of the Internet. If AI agents are the new users of the Internet, Cloudflare is the platform they run on and the network they pass through. This creates a virtuous flywheel, more agents drive more code execution on our Workers development platform, which in turn drives more demand for Cloudflare’s performance, security and networking services…

There’s a Fortune 500 pharmaceutical company that literally built a vibe coding platform on Cloudflare where their internal developers are using Workers AI and Durable Objects to build AI-assisted tools…

…Another publicly traded technology company is migrating their plug-in sandbox infrastructure to Cloudflare Containers for secure isolated execution of code at scale, which lets their customers prompt deployments directly to their system, but do it in a way which is secure, because one of the things that’s really scary sometimes about deploying AI tools, especially in customer-facing applications, is there can be a lot of damage done if one of these agents goes rogue or something goes wrong. The way that we’ve architected sandboxes and containers allows them to do this secure isolated code deployment. And again, it all comes as part of the toolkit of Cloudflare Workers, which is allowing them to go really quickly…

…A leading financial services company has partnered with us to launch an official MCP server designed to allow AI agents like Claude, Cursor or OpenAI to interact directly with the company’s payment services. The whole thing is built on Cloudflare Workers. And this allows merchants to manage commerce tasks, such as creating invoices, checking transactions and processing payments, using natural language commands and using things that are running on Cloudflare…

…I think what they like about us is, first, you get a complete toolkit. Second, that toolkit has been architected in a modern way to build exactly what you need for agents and AI applications. And then third, you get it in a way that can scale up infinitely if it becomes wildly popular and can scale down instantly to zero. So you don’t blow the budget if somebody is not actually using the system. That’s very different than the hyperscalers, which in order to be able to get access to a GPU at a hyperscaler, anything close to a competitive price, you also have to commit leasing that server for an entire year, which, again, if the project that you’re leasing it for doesn’t go well, that’s out of your budget…

…I know that AI is putting pressure on some companies that are out there. It’s not putting pressure on Cloudflare. We are seeing it as nothing but a tailwind for us, both for our developer tools and kind of the Act 4 stuff that we’re working on, but actually for even our legacy products like application services and Zero Trust as well.
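The MCP server described a few quotes above follows a simple pattern: a service exposes named tools that an AI agent can invoke with structured arguments, and the server dispatches each call to real business logic. The sketch below illustrates only that dispatch idea; real MCP is a JSON-RPC-based protocol with a fuller handshake, and the payment tools here (`create_invoice`, `check_transaction`) and their return values are hypothetical, not the actual financial company's API.

```python
from typing import Callable

# Registry mapping tool names to handlers. An MCP-style server would
# also advertise these tools (with schemas) to connecting agents.
TOOLS: dict[str, Callable[..., dict]] = {}

def tool(name: str):
    """Decorator that registers a function as a callable tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("create_invoice")
def create_invoice(customer: str, amount_cents: int) -> dict:
    # Stubbed business logic; a real server would call payment APIs.
    return {"invoice_id": "inv_001", "customer": customer,
            "amount_cents": amount_cents, "status": "open"}

@tool("check_transaction")
def check_transaction(txn_id: str) -> dict:
    return {"txn_id": txn_id, "status": "settled"}

def handle_tool_call(name: str, arguments: dict) -> dict:
    # An agent sends a tool name plus JSON arguments; we dispatch.
    if name not in TOOLS:
        return {"error": f"unknown tool: {name}"}
    return TOOLS[name](**arguments)

result = handle_tool_call("create_invoice",
                          {"customer": "acme", "amount_cents": 5000})
```

The point of the pattern is that the agent never sees the implementation, only the tool names and argument schemas, which is what lets "create an invoice for Acme" become a structured, auditable call.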

Cloudflare’s management thinks the hyperscalers have no incentive to figure out how to run AI workloads more efficiently, unlike Cloudflare; management thinks Cloudflare can get up to 10x the amount of work done off the same GPU compared to a hyperscaler; because of Cloudflare’s efficiency, its capex has not increased significantly to handle AI workloads; management thinks Cloudflare’s infrastructure offers much higher levels of flexibility to users when it comes to scaling up or down AI compute consumption when compared to the hyperscalers; management thinks Cloudflare is increasingly shifting AI compute-spend away from the hyperscalers

Cloudflare is in the business of getting work done. And so what we are constantly doing is having research teams inside of Cloudflare figure out how you can run AI workloads significantly more efficiently. The hyperscalers actually have no incentive to do that. They don’t want AI workloads to be more efficient because that just means you have to lease fewer machines from them. Whereas we — because we only charge you for the actual work that’s getting done, that means that we’re just getting oftentimes as much as 10x the amount of work off of the same GPU that you might get with a hyperscaler. That advantage is part of how we’re able to just bring much more out of the CapEx that we spend than others are. Our CapEx has ticked up a little bit, and I think that that’s in response to the fact that we’ve seen an increase in terms of Workers, but it’s nowhere close to what we’re seeing from the hyperscalers…

…And then third, you get it in a way that can scale up infinitely if it becomes wildly popular and can scale down instantly to zero. So you don’t blow the budget if somebody is not actually using the system. That’s very different than the hyperscalers, which in order to be able to get access to a GPU at a hyperscaler, anything close to a competitive price, you also have to commit leasing that server for an entire year, which, again, if the project that you’re leasing it for doesn’t go well, that’s out of your budget…

… I think that the work that we’re doing to really embed with customers is driving success there. And again, we’re still not to a point where we’re going to be doing a $100 million deal a quarter, but we will get to that point. And I think we’ve seen an enormous total addressable market for the Cloudflare Workers platform. And I think that will shift more and more spend away from what people are using the hyperscalers for.
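The "as much as 10x the amount of work off of the same GPU" claim above is, at heart, a utilisation argument: a GPU leased to a single tenant sits idle between that tenant's requests, while a serverless platform that multiplexes many tenants onto the same GPU keeps it busy. A toy calculation, with entirely assumed numbers rather than Cloudflare's actual figures:

```python
# Toy utilisation math. The busy-minute figures below are assumptions
# for illustration, not Cloudflare's (or any hyperscaler's) numbers.
dedicated_busy_min_per_hour = 5     # one tenant's bursty workload on a leased GPU
multiplexed_busy_min_per_hour = 50  # the same GPU shared across many tenants

# Work done per GPU-hour scales with busy time, so under these
# assumptions the multiplexed GPU delivers 10x the work, and therefore
# roughly one-tenth the hardware cost per unit of work.
work_multiplier = multiplexed_busy_min_per_hour / dedicated_busy_min_per_hour
print(work_multiplier)  # 10.0
```

This is also why the pay-per-work pricing model matters: the provider, not the tenant, is the one rewarded for pushing utilisation up.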

Cloudflare’s management thinks that the predominant business model of the internet in the AI era will shift away from advertising and subscriptions; Cloudflare’s recent acquisition, Human Native, will have an important role in helping the company come up with the next business model for the internet; Cloudflare is able to rewrite internet content that flows through its infrastructure, so it will be able to rewrite internet content in the best way for AI agents to consume; management thinks Cloudflare’s business is incredibly durable because it is able to automatically bring along the part of the internet that sits behind the company into whatever comes next in the AI era; management thinks 2026 will be the year where the future business model of the internet, based on Crawl Control, will emerge

In Human Native’s case, they’re really helping us think through what the next business model of the Internet is going to look like. It’s going to move, I think, away from advertisement. It’s going to move away from subscriptions. It’s going to move to something else. And Human Native, who came out of Google, are just extraordinary in thinking about what that future business model looks like. I think that you’re going to see extraordinary things from them, and they fit right in at Cloudflare, and we’re excited to have them…

…But then because our application services sit in front of people, one of the things that people don’t understand, and it’s a lot different than what people think of as just traditional CDNs or other things like that, is that we’re actually able to rewrite the content that flows through us as it flows through. So if it turns out that agents are better at speaking, I don’t know, Latin than they are speaking English, we can literally rewrite the content that’s behind Cloudflare in Latin rather than being in English. Now that’s not going to be what agents are good at, but they are going to be better probably at speaking code than they are going to be maybe speaking on other things that we might invent. So I think that what we’re able to do and part of the reason we think that our legacy business is going to be incredibly durable is that it’s going to be able to automatically bring along all of the rest of the Internet that already sits behind us into whatever comes next. And I think we’re going to figure that out…

So I think 2026 will be the time that we start really talking about what this future business model looks like and how that is going to impact us financially.

Cloudflare’s management thinks that agentic commerce could put a lot of pressure on small businesses, and management is figuring out how they can bring all these small businesses along in an incredibly intuitive and easy way for the small businesses to adopt; management does not have the solutions yet, but they’re confident they can figure it out

One of the things I’m thinking a lot about is what happens to small businesses in an agentic commerce world. There are a lot of ways where agents could be very consolidating and actually put a lot of pressure on small businesses. And so I think us in combination with great companies that we’re working with, like a Shopify or a Visa or PayPal or Mastercard, we’ve got to figure out how do we make sure that we bring all of these small businesses along, give them the right tools. And that’s exactly the sort of thing that we’re thinking about as we think about Act 4, and it’s not going to require you to have to go in and rebuild things. We want to make it one-click simple where as soon as we figure out this is what really works, you push a button and just whatever you had as your old shopping marketplace, that just comes along with it and gets to support whatever agents are going to be providing in the future. I don’t know exactly what all those things are going to look like, but we’ve got an incredible team.

AI companies are looking to Cloudflare’s traditional products to help them differentiate between human and non-human users of their services; non-AI companies are also looking to Cloudflare’s traditional products to help them differentiate between human and non-human users of their services because the non-human users were generating an order of magnitude more volume than the human users

The first place that we saw just demand was actually from a lot of the AI companies, where the AI companies would say to us, we can’t continue to operate our systems unless we can have the security and ability to deal with the load, which Cloudflare provides by default. Every time you run a query against an AI company, it’s pretty expensive to deal with those queries. And so being able to sort out who’s a human and who’s not a human, which is something we’re the best in the world at, is really important for the AI companies, and that’s driven actually just a lot of those initial relationships that are there.

What really took off in Q4, though, was where we saw other companies, media companies, e-commerce companies, companies that were just doing more traditional things online, seeing such an enormous uptick in how agents were interacting with their systems. I mean if any of you have used a tool like a ChatGPT or a Grok or a Claude, and you just watch how many different things it is looking at for every query that you send out, that’s just an order of magnitude increase in the volume of queries that are coming to the Internet. And so the people who are providing what is that Internet that they’re querying against, they need ways to do that in a way which is efficient and able to continue to scale. And Cloudflare is — and again, those application services functions that we have, the kind of Act 1 products that we have, are really critical to being able to deliver that.

Cloudflare’s newer but still-legacy Zero Trust products are helping users to secure AI agents

If you look at something like the new agents that people are running on their own machines often, the amazing thing is that people are waking up very quickly. We’re sort of speedrunning all of the security challenges that are out there, where all of a sudden you say, I’ve just given my agent access to everything in my life, what could go wrong? People are very quickly figuring out a lot could go wrong and so you got to put controls in place. And that’s exactly where our Act 2 or Zero Trust products come into play, where we’ve actually seen a real uptick even in a self-service business of the Zero Trust products.

Content publishers have been overwhelmingly positive towards Cloudflare’s Crawl Control product; Cloudflare’s management has been positively surprised by the reaction from research teams in the finance industry towards Crawl Control; AI companies may not necessarily like Crawl Control, but Cloudflare’s management thinks the AI companies understand why Crawl Control needs to exist; large technology companies have tried to establish content marketplaces, but Cloudflare’s management thinks that content publishers have higher trust in Cloudflare as a neutral 3rd party; management thinks 2026 will be the year where the future business model of the internet, based on Crawl Control, will emerge

[Question] Just double-clicking into Act 4, particularly in light of the wins, like the media company signing that $3.1 million contract for AI Crawl Control. So as you’re engaging with publishers, can you share early feedback around adoption of these opt-out controls to block scraping, and also the evolution of a structured marketplace model here?

[Answer] We’ve been sort of that neutral honest broker between the 2 sides that can come together and say, okay, like in order for this to all work, the Internet needs to have a business model, like people who create content deserve to get paid. And one of the things that actually surprised me to some extent, which might be relevant to a lot of you listening in, is we’ve actually been getting called not just from like the Associated Press and BBC and New York Times, but we’ve been getting calls increasingly from banks where their research teams are saying, we’re actually seeing fewer people subscribe to and read our research because people are just turning to the AI companies, which are slurping all the data down and taking that intellectual property. Again, I think journalists deserve to get paid, but so do research analysts…

…The reaction from the content creator side has been just overwhelmingly positive. And we come back to something pretty simple, which is just if you create content, it should be up to you who gets access to it and who doesn’t, and we can provide the tools to do that. On the AI company side, they also — again, nobody wants to pay for something that they were getting for free. But I think that they understand that we’re a fair broker. And when we walk them through what happens if we don’t create some healthy ecosystem here, they say, we get it. We just want to make sure that everyone is treated fairly…

…Microsoft and Amazon have announced content marketplaces. And they may be successful, but what we’re hearing from both the AI companies and from the content creators is that because Cloudflare is that trusted neutral third party that can be the honest broker between them, they would rather us be the one that figures out what that future business model looks like, as opposed to one of the hyperscalers, which is out there creating its own foundational model and might have very different incentives. So I think 2026 will be the time that we start really talking about what this future business model looks like and how that is going to impact us financially.

Datadog (NASDAQ: DDOG)

Datadog’s management sees a positive demand environment, driven by cloud migration; management is seeing strong growth from both non-AI native companies and AI-native companies; in particular, the AI-native companies have very high growth and are going into production

We continue to see broad-based positive trends in the demand environment. With the ongoing momentum of cloud migration, we experienced strength across our business, across our product lines and across our diverse customer base. We saw a continued acceleration of our revenue growth. This acceleration was driven in large part by the inflection of our broad-based business outside of the AI-native group of customers we discussed in the past. And we also continue to see very high growth within this AI-native customer group as they go into production and grow in users, tokens and new products.

Datadog’s management sees the company’s AI initiatives as being split into 2 buckets; one bucket is AI for Datadog, where management is building AI products to make Datadog better for customers; in AI for Datadog, management made Bits AI SRE (site reliability engineering) Agent, which does root cause analysis, generally available in December 2025 and it had 2,000 trial and paying customers in January 2026; Datadog has other AI products, such as Bits AI Dev agent, Bits AI Security Agent, and the Datadog MCP (Model Context Protocol) server; Datadog MCP server saw an 11-fold increase in tool calls in 2025 Q4 compared to 2025 Q3; the other bucket is Datadog for AI, where management is building capabilities for end-to-end observability across the entire AI stack; management is seeing an acceleration in growth for the LLM (large language model) Observability product; LLM Observability has 1,000 customers and the number of LLM spans customers are sending to Datadog is up 10x over 6 months; management will soon release AI Agent Console to monitor AI agents; management is working on GPU monitoring; management is seeing Datadog’s overall customer base increase its usage of GPUs; management is improving the ability of Datadog’s products to secure the AI stack against attacks; management continues to see customer interest grow for next-gen AI observability; 5,500 customers are sending AI data to one or more of Datadog’s AI integrations (was 5,000 in 2025 Q3); management recently launched Feature Flags, which could be the foundation for automatically validating applications written by AI agents; management thinks that observability products for LLMs are currently undifferentiated but will be differentiated in the future; management thinks observability tools for LLMs should be the same as for the rest of an organisation’s systems because LLMs do not work in isolation

We are executing relentlessly on our very ambitious AI road map, and I will split our AI efforts into 2 buckets: AI for Datadog and Datadog for AI.

So first, let’s look at AI for Datadog. These are AI products and capabilities that make the Datadog platform better and more useful for customers. We launched Bits AI SRE Agent for general availability in December to accelerate root cause analysis and incident response. Over 2,000 trial and paying customers have run investigations in the past month, which indicates significant interest and shows great outcomes with Bits AI SRE. And we’re well on our way with Bits AI Dev agent, which detects code level issues, generates fixes in production context and can even help release and monitor a fix. And Bits AI Security Agent, which autonomously triages SIEM signals, conducts investigations and delivers recommendations. The Datadog MCP server is being used by thousands of customers in preview. Our MCP server responds to AI agent and user prompts and uses real-time production data and rich Datadog context to drive troubleshooting, root cause analysis and automation. And we’re seeing explosive growth in MCP usage, with the number of tool calls growing 11-fold in Q4 compared to Q3.

Second, let’s talk about Datadog for AI. This includes capabilities that deliver end-to-end observability and security across the AI stack. We are seeing an acceleration in growth for LLM Observability. Over 1,000 customers are using the product and the number of spans sent has increased 10x over the last 6 months. In 2025, we broadened the product to better support application development and iteration, adding capabilities such as LLM Experiments, LLM Playground, LLM Prompt Analysis and custom LLM-as-a-judge. And we will soon release our AI Agent Console to monitor usage and adoption of AI agents and coding assistants. We are working with design partners on GPU monitoring, and we are seeing GPU usage increase in our customer base overall. And we are building into our products the ability to secure the AI stack against prompt injection attacks, model hijacking and data poisoning, among many other risks…

…We continue to see increased interest among our customers in next-gen AI. Today, about 5,500 customers use one or more Datadog AI integrations to send us data about their machine learning, AI and LLM usage…

…In software delivery, in January, we launched Feature Flags. They combine with our real-time observability to enable canary rollouts, so teams can deploy new code with confidence. And we expect them to gain importance in the future as they serve as a foundation for automating the validation and release of applications in an AI agentic development world…
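Percentage-based canary rollouts of the kind the Feature Flags quote describes are commonly implemented by hashing each user into a stable bucket, so a given user always sees the same variant as the rollout percentage ramps up. The sketch below is a generic illustration of that technique, not Datadog's actual Feature Flags API; the flag name and `in_rollout` function are hypothetical.

```python
import hashlib

def in_rollout(flag: str, user_id: str, percent: int) -> bool:
    """Deterministically bucket (flag, user) into 0-99 and admit the
    user if their bucket falls under the rollout percentage."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Ramp a hypothetical "new-checkout" flag from 10% to 50% of users.
# Because bucketing is deterministic, everyone admitted at 10% stays
# admitted at 50%; the cohort only ever grows as the rollout widens.
users = [f"user-{i}" for i in range(1000)]
at_10 = sum(in_rollout("new-checkout", u, 10) for u in users)
at_50 = sum(in_rollout("new-checkout", u, 50) for u in users)
```

Pairing this with real-time observability, as the quote suggests, means the rollout percentage can be frozen or rolled back the moment error rates in the canary cohort diverge from the baseline.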

…We mentioned our LLM Observability product. There are a few other products in the market for that. I think it’s still very early for that part of the market, and that market is still relatively undifferentiated in terms of the kinds of products they are, but we expect that to shake out more into the future. We think, in the end, there’s no reason to have observability for your LLMs that is different from the rest of your system, in great part because your LLMs don’t work in isolation. The way they implement their smarts is by using tools: the tools in your existing applications, or new applications you build for that purpose. And so you need everything to be integrated in production, and we think we stand on a very strong footing there.

Example of an 8-figure land deal with a high-profile AI foundation model builder (most likely Anthropic); the model builder’s observability stack was fragmented; the model builder will consolidate more than 5 observability tools into Datadog; the model builder wants to focus on building its own products; this model builder is the 2nd high-profile model builder that Datadog has as a customer (with the other being OpenAI); every customer of Datadog is also using some in-house or open-source observability tools, and the same goes for the AI companies; management is seeing AI model builders adopt Datadog for the same reasons as non-AI companies, namely that Datadog is able to prove its value very quickly

We landed an 8-figure annualized deal and our biggest new logo deal to date with one of the largest AI foundational model companies. This customer has a fragmented observability stack and cumbersome monitoring workflows leading to poor productivity. This is a consolidation of more than 5 open source, commercial, hyperscaler and in-house observability tools into the unified Datadog platform that has returned meaningful time to developers and has enabled a more cohesive approach to observability. This customer is experiencing very rapid growth. Datadog allows them to focus on product development and supporting their users, which is critical to their business success…

…[Question] It’s now the second one after the other very big model provider. So clearly, that whole debate in the market between, oh, you can do that on the cheap somewhere is not kind of quite valid. Could you speak to that, please?

[Answer] Every customer we land has had some homegrown tooling. They have some open source. They might still run some open source; that’s typically what we see everywhere. The idea that it’s cheaper to do it yourself is usually not the case. Your engineers typically are very well compensated and are a big part of the spend in these companies. Their velocity is what gates just about everything else in the business. And so usually, when customers start engaging with us, we can very quickly show value that way. So it’s not any different from what we see with any other customer. And within the AI cohort, it’s not original at all. The AI cohort in general is a who’s who of the companies that are growing very fast and that are shaping the world in AI, and they’re all adopting our product for the same reasons, sometimes at different volumes because those companies have different scales, but the logic is the same.

Datadog’s management continues to believe that digital transformation, cloud migration, and AI adoption are long-term growth drivers of Datadog’s business; management thinks that agentic coding is beneficial for Datadog because it leads to more coding volume to observe, and to the need for observability in areas where it was not necessary before; Datadog’s management thinks it’s very hard to tell what level of model inference will result from the gargantuan amount of capex from the hyperscalers, but they think it’s likely to lead to more complexity in the technology ecosystem, which will benefit Datadog’s business

There is no change to our overall view that digital transformation and cloud migration are long-term secular growth drivers for our business. So we continue to extend our platform to solve our customers’ problems from end to end across their software development, production, data stack, user experience and security needs. Meanwhile, we’re moving fast in AI, by integrating AI into the Datadog platform to improve customer value and outcome and by building products to observe, secure and act across our customers’ AI stack…

…[Question] In the context of a lot of advancements when it comes to agentic frameworks, agentic deployments, the stuff that we’ve seen from Anthropic and new frontier models from OpenAI, just in terms of like what this means for observability as a category, defensibility of it in terms of can customers use these tools to build homegrown solutions for observability?

[Answer] There’s a few different ways to look at it. One is there’s going to be many more applications than there were before. Like people are building much more and they are building much faster. We covered that in previous calls, but we think that the — this is nothing, but an acceleration of the increase of productivity for developers in general, so you can build a lot faster. As a result, you create a lot more complexity because you build more than you can understand at any point in time. And you move a lot of the value from the act of writing the code, which now you actually don’t do yourself anymore to validating, testing, making sure it works in production, making sure it’s safe, making sure it interacts well with the rest of the world, with end users, make sure it does what it’s supposed to do for the business, which is what we do with observability. So we see a lot more volume there, and we see that as what we do basically where observability can help. The other part that’s interesting is that we — a lot happens — a lot more happens within these agents and these applications. And a lot of what we do as humans now starts to look like observability. Basically, we’re here to understand — we’re trying to understand what the machine does. We’re trying to make sure it’s aligned with us. We’re trying to make sure the output is what we expected when we started, and that we didn’t break anything. And so we think it’s going to bring observability more widely in domains that it didn’t necessarily cover before…

…[Question] I’m wondering if you’ve collected enough signal from the last couple of years of CapEx, that trend to estimate how much of that is training related and when it might convert to inferencing where Datadog might be required? In other words, are you looking at this wave of CapEx and able to say it’s going to create a predictable ramp in your LLM observability revenue?

[Answer] I think it’s a bit too reductive to peg that on LLM observability. I think it points to way more applications, way more intelligence, way more of everything in the future. Now, it’s kind of hard to directly map the CapEx from those companies into what part of the infrastructure is actually going to be used to deliver value 2 or 3 or 4 years from now. So I think we’ll have to see what the conversion rate is on that. But look, it definitely points to very, very, very large increases in the complexity of the systems, the number of systems and the reach of the systems in the economy. And so we think it’s going to be of great help to our business, let’s put it this way.

Datadog experienced adoption growth in AI-native customers in 2025 Q4 that significantly outpaced non-AI customers; Datadog now has more than 650 AI-native companies (was 500 in 2025 Q3), of which 19 are spending more than $1 million (was 15 in 2025 Q3); 14 of the top 20 AI-native companies globally are Datadog customers; management chose not to share the percentage of revenue coming from AI-native customers in 2025 Q4 (was 12% in 2025 Q3); the AI-native companies are not dilutive for Datadog’s gross margin; the large AI-native customers get the same kind of volume discount as the large non-AI customers

We are seeing continued strong adoption amongst AI-native customers with growth that significantly outpaces the rest of the business. We see more AI-native customers using Datadog with about 650 customers in this group. And we are seeing these customers grow with us, including 19 customers spending $1 million or more annually with Datadog. Among our AI customers are the largest companies in this space, as today 14 of the top 20 AI-native companies are Datadog customers…

…[Question] Can you give us the percent of revenue of the AI cohort this quarter?

[Answer] We didn’t — have not put it in there…

…[Question] On margin, are the large AI-native customers significantly dilutive to gross margin?

[Answer] On a weighted average, they’re not. As we’ve always said, for larger customers, it isn’t about AI-natives or non-AI-natives; it has to do with the size of the customer. We have a highly diversified customer base. So I would say we’re essentially expecting a similar type of discount structure in terms of size of customer going forward. And there are consistent ongoing investments in our gross margin, including data centers and development of the platform. So I think it’s more or less what we’ve seen over the past couple of years, not really affected by AI-native or non-AI-native.

Datadog’s management’s basis for guidance is to have conservative assumptions on usage growth trends observed in recent months; in setting guidance, management made the conservative assumption that Datadog’s core business is growing faster than the business from its large AI customer (OpenAI)

Our guidance philosophy overall remains unchanged. As a reminder, we based our guidance on trends observed in recent months and apply conservatism on these growth trends…

…We noted that with the guidance being 18% to 20% and the non-AI or heavily diversified business being 20% plus, that would imply that the growth rate of that core business assumed in the guidance is higher than the growth rate of the large customer. It doesn’t mean the large customer is growing any which way. It’s just that in our consumption model, we essentially don’t control that. And so we took a very conservative assumption there.

Datadog’s management thinks that as agentic developers proliferate, there will be a lot more automation in observability workflows, but there will still be a need for UIs (user interfaces) for human developers to interact; to prepare for the rise in automation in observability workflows, management is exposing a lot of Datadog’s functionality directly to agents; management thinks it’s likely that Datadog’s MCP (Model Context Protocol) server will be part of how agents interact with Datadog’s products

[Question] In a world where there’s a greater mix between human SREs and agentic SREs, is there any sort of evolution that we need to think about in terms of whether it’s UI or how workflows work in observability and how maybe Datadog sort of tries to align with that evolution that’s likely to come in the next couple of years?

[Answer] There’s going to be an evolution, that’s certain. There’s going to be a lot more automation. We see it today, like we see the — all the signs we see point to everything moving faster, more data and more interactions, more systems, more releases, more breakage, more resolutions of those breakages, more bugs, more vulnerabilities, everything. So we see an acceleration there. At the end of the day, the humans will still have some form of UI to interact with all that. And a lot of the interaction will be automated by agent. So we’re building the products to satisfy both conditions. So we have a lot of UIs, and we are able to present the humans with UIs that represent how the world works, what their options are, give them familiar ways to go through problems and to model the world. And we also are exposing a lot of our functionality to agents directly. We mentioned on the call, we have an MCP server that is currently in preview and that is really seeing explosive growth of usage from our customers. And so it’s a very likely future that part of our functionality is delivered to agents through MCP servers or the likes. Part of our functionality is directly implemented by our own agents, and part of our functionality is delivered to humans with UIs.
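The idea of exposing product functionality directly to agents can be sketched as a small tool registry that dispatches JSON tool calls, in the spirit of an MCP-style server. Everything here (the tool name, the request shape, the handler) is hypothetical; this does not use the actual MCP SDK and is not Datadog's API.

```python
import json

# Hypothetical tool registry: each entry pairs an agent-readable
# description with a handler function the server runs on a tool call.
TOOLS = {}

def tool(name: str, description: str):
    """Decorator that registers a function as an agent-callable tool."""
    def register(fn):
        TOOLS[name] = {"description": description, "handler": fn}
        return fn
    return register

@tool("search_logs", "Search recent logs for a query string.")
def search_logs(query: str, limit: int = 10):
    # Placeholder: a real server would query the observability backend.
    return [f"log line matching {query!r}"] * min(limit, 3)

def handle_agent_call(request_json: str) -> str:
    """Dispatch an agent's JSON tool call to the registered handler."""
    req = json.loads(request_json)
    handler = TOOLS[req["tool"]]["handler"]
    return json.dumps({"result": handler(**req.get("arguments", {}))})
```

The point of the pattern is that the same functionality backs both surfaces: a UI calls `search_logs` on behalf of a human, while an agent calls it through the JSON dispatch path.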

Datadog’s management thinks that LLMs (large language models) are getting better all the time; management sees 2 parts to Datadog’s defensibility against LLMs; the 1st part is Datadog understands how all the data fits together; the 2nd part is Datadog has the foundation to provide proactive, real-time anomaly detection and solutions as Datadog is embedded in an organisation’s data plane; management thinks that the world of observability is shifting towards one where it’s important for observability providers to provide proactive, real-time anomaly detection and solutions; management is developing Datadog’s ability to provide proactive, real-time anomaly detection and solutions; the data planes in a typical organisation Datadog works with are real time and many orders of magnitude larger in volume than what an LLM typically sees; management is not seeing any change in the intensity of competition for Datadog’s business from LLMs; management thinks it’s only rational for all AI native customers to use Datadog’s products

We definitely see that LLMs are getting better and better, and we’ll bet on them getting significantly better every few months as we’ve seen over the past couple of years. And as a result, they are very, very good at looking at broad sets of data. So if you feed a lot of data to an LLM and ask for an analysis, you’re very likely to get something that is very good and that is going to get even better.

So when you think of what we have that is fundamentally our moat here, there are 2 parts. One is how we are able to assemble that context so we can feed it into those intelligence engines. That’s how we aggregate all the data we get and parse it out. We understand how everything fits together and we can feed that into the LLM. That’s in part what we do; for example, today we expose these kinds of functionality behind our MCP server. And so customers can recombine that in different ways using different intelligence tools.

But the other part, where we think the world is going for observability, is that right now the SDLC [software development life cycle] is accelerating a lot, but it’s still somewhat slow. And so it’s okay to have incidents and run post-hoc analysis on those incidents and maybe use some outside tooling for them. Where the world is going is you’re going to have many more changes, many more things. You cannot actually afford to have incidents to look at for everything that’s happening in your system. So you need to be proactive. You’ll need to run analysis in stream as all the data flows through; you’ll need to run detection and resolution before you actually have outages materialize. And for that, you’ll need to be embedded into the data plane, which is what we run. And you also need to be able to run specialized models that can act on that data, as opposed to just taking everything and summarizing everything after the fact, 10 or 15 minutes later. And that’s what we’re uniquely positioned to do.

We are building that. We’re not quite there yet, but we think that a few years from now, that’s what the world is going to run, and that’s what makes us significantly different in terms of how we can apply anomaly detection, intelligence and preemptive resolution into our systems…

…The data planes we’re talking about are very real time, and they are many orders of magnitude larger in terms of data flows, data volumes than what you typically feed into an LLM. So it’s a bit of a different problem to solve…

…[Question] I wanted to ask you about competition and how the LLM rise is impacting share shifts. Just talk about that and how Datadog will be impacted?

[Answer] There hasn’t been any particular change in competition in that we see the same kind of folks and the positions are relatively similar. And we are pulling away. We’re taking share from anybody who has scale. And I know there’s been noise. There were a couple of M&A deals that came up, and we got some questions about that. The companies in there were not particularly winning companies, nothing that we saw in deals, nothing that had a large market impact. And so we don’t see that as changing the competitive dynamics for us in the near future…

…At the end of the day, it should be irrational for customers — for all customers in the AI cohort not to use our product…

…I think our advantage will be being in-stream: looking at 3, 4, 5 orders of magnitude more data, looking at the data in real time, and passing judgment in real time on what’s normal, what’s anomalous and what might be going wrong, doing that hundreds, thousands, millions of times per second. That’s where it’s going to be much harder for others to compete, especially general-purpose AI platforms.
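In-stream, real-time judgment of "what's normal, what's anomalous" can be illustrated with an online detector that keeps O(1) state per metric and scores each point as it arrives. This is a textbook z-score sketch using Welford's algorithm, offered as an assumption about the general technique, not a description of Datadog's actual models.

```python
import math

class StreamingAnomalyDetector:
    """Online mean/variance (Welford's algorithm) with a z-score threshold.

    In-stream detection like this passes judgment on each data point
    as it arrives, instead of summarizing the data after the fact.
    """

    def __init__(self, threshold: float = 3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations
        self.threshold = threshold

    def observe(self, x: float) -> bool:
        """Return True if x looks anomalous, then fold it into the stats."""
        anomalous = False
        if self.n >= 2:
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(x - self.mean) / std > self.threshold:
                anomalous = True
        # Welford's update: numerically stable, O(1) memory per metric.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

d = StreamingAnomalyDetector()
flags = [d.observe(v) for v in [10, 11, 9, 10, 11, 10, 9, 100]]
# the final spike is flagged; the steady values are not
```

Because the state per metric is three numbers, the same logic can in principle be run millions of times per second across many metrics, which is the scale argument made in the quote above.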

Datadog’s management thinks the best way to justify the existence of Datadog, in an environment where observability bills are going up because of AI usage, is to prove the cost savings to customers

[Question] Tell us a little bit about how some of those conversations evolve when the customer sees that in order to do observability for more AI usage, perhaps that Datadog bill is going up.

[Answer] There are only 2 reasons people buy your product: to make more money or to save money. So whatever you do, when customers use a new product, they need to see a cost saving somewhere or they need to see that they’re going to get to customers they wouldn’t get to otherwise. So we have to prove that. We always prove that. Any time a customer buys a product, that’s what is happening behind the scenes. In general, when customers add to our platform as opposed to bringing another vendor or another product in, they also spend less by doing it on our platform.

Datadog’s management is seeing great productivity gains when employing AI internally

In terms of AI, to date, we are using it in our internal operations. So far, the first signs of what we’re seeing are productivity gains and adoption…

…We see great productivity gains with AI there; at this point, it helps us build more, faster, and solve more problems for our customers. And we’re very busy adopting AI across the organization.

Paycom Software (NYSE: PAYC)

IWant allows anyone to become an expert in the system without training; Forrester found that organisations with more than 500 employees that use IWant experienced an ROI of over 400%; with IWant, managers save up to 600 hours, executives up to 60 hours, HR teams up to 240 hours, and employees 3,600 hours, on an annual basis; the leaders of organisations using IWant get immediate value out of the product without any training; IWant usage is up 80% in January 2026 from 2025 Q4; IWant’s functionality is continuously being improved

Our most advanced AI solution, IWant, is designed to accelerate the speed to value by allowing anyone to become an expert in the system without any training. Forrester’s recent analysis of a composite organization with more than 500 employees found that organizations using IWant experienced an ROI of over 400%, driven by productivity gains at every level. Managers save as many as 600 hours per year, executives up to 60 hours, HR teams up to 240 hours and employees across the organization collectively reclaim 3,600 hours annually.

Leaders describe IWant as a catalyst for deeper insight, and one CEO remarked, “I get immediate value. Without any training or knowledge of Paycom, I can go in and immediately understand more about my business”…

…IWant usage is up 80% in January alone, and that’s compared to the fourth quarter…

…We continue to build out the IWant system. We continue to add more and more functionality to it. It continues to get stronger and stronger.

Paycom’s management thinks that AI is not a threat to Paycom; management thinks AI will give Paycom the opportunity to enter adjacent industries that it was not able to in the past

I think there’s a little misjudgment about the AI thesis materializing as a threat, a weapon that will be used against us. I mean, AI is our friend at Paycom. And I’ve worked very hard to ensure that the misunderstanding of AI’s impact on us isn’t on our end.

And I just believe as you look into the future, we have opportunities now that we didn’t have in the past, right? Like the speed of development has increased; the pace of the user buyer being able to digest it might lag a little bit, but we can develop a lot more today than we’ve been able to in the past. We’re in this age of software development and, in some instances, replacement of specific software. Paycom can get into every adjacent industry now within weeks or months. And I’ll remind everybody that I was the first Bob [ coater ] back in 1998. So there are several easy-to-displace industries that don’t just sit ancillary to our industry, but they’re dependent upon our industry, which is where the data starts. And so now that we can develop anything very quickly and use all these technologies to replace other industries in a matter of weeks or months, we’re excited about what that looks like for our future as well.

Paycom’s management is currently not seeing any impact on overall employment from AI, but is not dismissing impacts in the future; management thinks that Paycom still has ample growth opportunities even if AI does lead to lower overall employment

[Question] The AI impact to overall employment. How do you see that impacting Paycom business?

[Answer] I’d say we’re not seeing it. I’m not going to dismiss potential impacts on us in the future. I would say that we are not overexposed to any one industry, any one client, or client size. And again, we only have 5% of the market. And so you could do some calculations, and we’re the most automated product in the industry and the best product for the best value that someone is going to achieve throughout the industry. So when you look at that, I think you could see some adjustments in employment, which, again, we have not seen. But even if you did, I still think our opportunity is intact.

Shopify (NASDAQ: SHOP)

Shopify has been building for AI shopping for some time; orders coming to Shopify stores from AI search have increased 15x since January 2025, albeit from a small base; management thinks AI shopping helps surface smaller merchants to the right buyers who might otherwise never have discovered them; management thinks AI shopping benefits consumers because they gain access to a personal shopper; management thinks AI shopping will increase e-commerce penetration faster than it would have otherwise; management thinks it’s important that AI shopping is at least as good as shopping at a merchant’s digital storefront; Shopify has introduced Shopify Agentic Storefronts, which lets all major AI platforms access billions of products from Shopify merchants accurately and in an up-to-date way; AI platforms are plugged into the best commerce source of truth with Shopify, and this translates to better experiences for consumers; through the Agentic plan, brands not already using Shopify will soon be able to sell through the same AI platforms as Shopify merchants; Shopify built Universal Commerce Protocol (UCP) with Google as the common rails to support agentic commerce; UCP is payments agnostic and keeps merchants’ essential checkout logic intact; UCP is the only protocol that covers the full commerce journey end-to-end; leading retailers are already using UCP; agentic commerce does not bypass Shopify’s checkout; management has no opinion on which LLM platform will be the dominant one for agentic commerce and they just want to allow merchants to sell through agentic commerce; management sees merchants’ economics remaining the same between agentic commerce and selling directly from their stores

We’ve been building for this new era of AI shopping for a long time, and it’s now here. In fact, since January 2025, orders coming to Shopify stores from AI search are up 15x. Now that’s on a small base, but that’s still a really big jump in 12 months. For our merchants, it matters because it powers the long tail of commerce, surfacing smaller merchants to the right buyers who might otherwise have never discovered them. This is merit-based discovery at scale. For buyers, it matters because it’s like having a personal shopper in your pocket, someone who really understands them, their taste, their preference, their size…

…For Shopify, it matters because we believe it can bend the curve of e-commerce penetration by stripping out friction, pulling late adopters in and moving more everyday purchases online…

…It is critical that shopping in an AI conversation is at least as good as shopping at the merchant’s online store…

…Shopify Agentic Storefronts syndicates billions of products through our catalog to all major AI platforms, Google AI Mode and Gemini, ChatGPT, Microsoft Copilot, one click and our merchants get instant access to millions of potential buyers who are actively looking for their products. We’ve already seen huge brands like Vuori, Glossier, Steve Madden and SPANX sign up and start selling. Plus through the catalog, our partners get the most accurate up-to-date data for billions of products for millions of the best brands on the planet. And this is really important because when they tap into our catalog, they’re not just ingesting another feed, they’re plugging into the best commerce source of truth. And that source of truth means cleaner matching and fresher data, which translates directly into faster and more trustworthy experiences.

The new Agentic plan means that any brand not already using Shopify will soon be able to sell through the same AI platforms as our merchants as well as on the Shop app. Why? Because frankly, when commerce flows freely across agents, everybody wins…

…We built the Universal Commerce Protocol, or UCP. UCP is infrastructure. It’s not a product. It’s the common rails Agentic commerce runs on. Shopify co-developed this with Google because we know commerce better than anyone. It’s an open standard for any agent to connect with any brand on the Internet. UCP is built to flex to the many ways commerce happens. It’s payment agnostic by design. It keeps the merchants’ essential checkout logic intact without forcing them to rebuild their customizations over and over again to fit our system. UCP is the only protocol that covers the full commerce journey end-to-end, from search to cart, then checkout to post order, and it’s already being used by the world’s leading retailers…

…LLMs do not bypass Shopify’s Checkout. Checkout is really 2 parts. Think of it this way. You have a front end, the user interface that buyers interact with, and the back end that processes everything server to server. So if you think about a Shopify store today, Shopify runs both the front end and the back end. And under UCP, Shopify still powers the overall experience, but the merchant gets to keep their own checkout system on the back end. Now with something like ChatGPT, for example, OpenAI will run the front end, which is sort of the screens and the forms that the buyer uses. But Shopify still runs the back end. And so things like order processing and payments through Shopify Payments all run through Shopify’s infrastructure…
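The front-end/back-end split described above can be sketched as a division of responsibilities: the AI platform owns the buyer-facing surface, while the merchant's server-side checkout logic stays intact and processes the order. The class names, fields, and tax logic below are hypothetical, not UCP's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class CartItem:
    sku: str
    unit_price_cents: int
    quantity: int

@dataclass
class MerchantCheckout:
    """Server-side checkout: the merchant's own pricing and order logic."""
    tax_rate: float = 0.08
    orders: list = field(default_factory=list)

    def process(self, items: list[CartItem]) -> dict:
        # The merchant's customizations (tax, discounts, etc.) live here
        # and are untouched by whichever front end submits the cart.
        subtotal = sum(i.unit_price_cents * i.quantity for i in items)
        total = round(subtotal * (1 + self.tax_rate))
        order = {"order_id": len(self.orders) + 1, "total_cents": total}
        self.orders.append(order)
        return order

def agent_front_end(backend: MerchantCheckout, cart: list[CartItem]) -> dict:
    """The agent renders the buyer-facing screens, then hands the cart
    off server-to-server; it never reimplements checkout logic."""
    return backend.process(cart)

order = agent_front_end(MerchantCheckout(), [CartItem("TEE-1", 2500, 2)])
```

The design point is that swapping the front end (a storefront, ChatGPT, or some other agent surface) leaves `MerchantCheckout.process` unchanged, which mirrors the claim that UCP keeps merchants' checkout logic intact.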

…We want to make sure that whatever surface, whatever permutation is the one that actually becomes the mainstay in Agentic, it reflects exactly the experience that the merchants want, similar to what they have in the online store as well. And so Shopify merchants’ economics are the same as if the transaction happened in the online store. There should be no difference there.

Shopify’s on-platform AI assistant, Sidekick, proactively helps merchants prioritise and execute tasks; Sidekick’s usefulness is enhanced because Shopify powers a merchant’s store, checkout data, and apps; in the 3 weeks since Sidekick’s latest edition was released, it has generated almost 4,000 custom apps, created over 29,000 automations, built almost 355,000 task lists, and edited over 1.2 million photos; Sidekick Pulse is a new feature in Sidekick that surfaces tailored advice for merchants; Sidekick Pulse recently recommended that a Shopify jewelry merchant bundle 4 products because Sidekick Pulse knew the 4 products were best sellers and bundles tend to convert better

Our on-platform AI assistant, Sidekick, has come a long way in a year. Sidekick is effectively a co-founder for our merchants. It uses everything it knows about your business, and it proactively tells you which tasks to prioritize. And it will even help you execute those tasks. Because Shopify powers the store, checkout data and apps, Sidekick can see the entire picture and do the work in one place…

…In just 3 weeks after our latest edition drop, Sidekick generated almost 4,000 custom apps, created over 29,000 automations with Shopify Flow, built almost 355,000 task lists and edited over 1.2 million photos. So it’s clear that Sidekick is doing real heavy lifting for our merchants…

…Sidekick Pulse is our new feature that proactively helps merchants grow their business. It works in the background to surface tailored advice that’s grounded in each merchant’s business, powered by over 2 decades of data…

…Last week, Sidekick Pulse made a recommendation to one of our jewelry brands. It suggested bundling 4 separate products and selling them together as a stack. Why? Because it knew that those 4 products were already best sellers, and it also knew that bundles tend to convert better and drive up cart value. Personalized data analysis paired with intelligence gained from hundreds of millions of other transactions. This is where our AI assistant really becomes the AI co-founder. It’s bespoke, it’s intuitive.
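A recommendation like the one above can be approximated with a very simple heuristic: surface the merchant's most frequently purchased SKUs as a bundle candidate. This sketch is purely illustrative (the function and data are invented), not how Sidekick Pulse actually works, which per the quote also draws on conversion patterns across hundreds of millions of transactions.

```python
from collections import Counter

def bundle_candidate(order_lines: list[str], size: int = 4) -> list[str]:
    """Return the `size` most frequently purchased SKUs as a bundle idea."""
    counts = Counter(order_lines)
    return [sku for sku, _ in counts.most_common(size)]

# One SKU per purchased line item; best sellers surface first.
sales = ["ring", "ring", "chain", "chain", "chain", "charm", "clasp", "pendant"]
candidate = bundle_candidate(sales)
```

A production system would weight this with margin, inventory, and co-purchase data, but the core signal, "these items already sell well together, so offer them as a stack," is the same.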

Shopify’s new app SimGym simulates real buyer behavior to provide feedback on store changes before they are shipped

Our new app SimGym simulates real buyer behavior to give you feedback on changes to your store before you even ship them.

0.5 million merchants have used AI within Shopify’s online store editor to create 6.5 million custom elements; Shopify’s online store editor allows anyone to design without code

Within our online store editor, more than 0.5 million merchants have used AI to create 6.5 million custom elements. Now anyone can design without code. This is really Shopify at its best: massive complexity transformed into a tool for anyone with imagination, no technical skills required.

Shopify’s management believes AI advances will make Shopify even more essential for merchants

As AI advances, Shopify becomes even more essential. AI transforms interfaces and accelerates the pace of change, but it doesn’t alter the underlying architecture of commerce. Commerce will always require speed, reliability and trust at a global scale. When I say scale, consider the billions of transactions that we facilitate. But it’s not just about the volume. It’s the comprehensive commerce experience we support. When an AI agent surfaces a product in any interface, merchants still need a reliable, secure and compliant path to purchase and post purchase. They still need our ecosystem of buyers, developers and partners. We help merchants be everything everywhere all at once, representing over 14% of U.S. e-commerce today and rapidly growing percentages in many geographies across the globe, we have an unparalleled view of commerce. Simply, we are the experts at commerce. AI will be a force multiplier. It will help us achieve our goals of democratizing entrepreneurship, inspiring more merchants, driving more transactions and creating more commerce channels.

Shopify was able to accelerate product development in 2025 without growing the size of the team because of the use of AI

Throughout 2025, we achieved operating leverage in each of R&D, sales and marketing and G&A, largely due to disciplined headcount management. By leveraging AI, automation and our proprietary project management and talent management systems, we’ve been able to accelerate our product development capabilities without growing the size of the team.

Shopify’s management sees Agentic Plan as an on-ramp for non-Shopify merchants to enter the Shopify ecosystem, similar to how Commerce Components works

The Agentic plan opens our infrastructure to all brands. And I think this idea that we’re bringing Agentic Commerce to every brand, whether or not they’re on Shopify, we think will be — I mean, it certainly has already been an incredible way for us to start conversations with brands who might not be ready to migrate or have not anticipated a full forklift migration just yet, but they don’t want to miss out on this incredible opportunity that might be this Agentic Commerce. And so in a similar vein to how we started — we created Commerce Components a couple of years ago where non-Shopify merchants can use things like Shop Pay or they can simply use Shopify Checkout as a component. That allowed us to start conversations with brands that we weren’t otherwise talking to. In some cases, some of those brands who came to us initially just for Shop Pay are now entirely on Shopify. So certainly, we think this could be an incredible on-ramp just like the Commerce Components play was.

The Catalog is important for Shopify’s agentic commerce ambitions because it is a source of truth for agents, and agents do not have to rely on scraping information from the internet

Tobi said something incredibly important recently about Catalog. He said that everyone else has to scrape the Internet, but we actually have the source of it. The fact that we have structured billions of products so agents can surface the most relevant items in seconds, the fact that products are going to be then surfaced based on relevance and sort of this merit-based discovery is going to happen. I think that every retailer and every merchant on the planet is thinking about how they can get in front of as many buyers and consumers on Agentic. If they continue down that path and do the math, more and more, they realize that Shopify is the company that is front and center.

Shopify’s management appears to see UCP (Universal Commerce Protocol) as being the significantly more important rails for agentic commerce compared to OpenAI’s ACP (Agentic Commerce Protocol)

[Question] Can you help us understand the UCP versus ACP, the other standard that OpenAI and Stripe are putting forward. Are these overlapping standards? Do they compete? Are they complementary in any way?

[Answer] Yes. Look, the goal is simple with UCP. It’s one common language for agents and retailers. The idea is that merchants can keep the brand, the attributions, buyers get these incredibly trustworthy experiences and Agentic Commerce can scale. UCP is specifically geared towards being a protocol that covers the full commerce journey end-to-end from search to cart, then checkout. It includes post order. It keeps the merchants essential checkout logic intact.

It doesn’t force them to rebuild customizations over and over again. It’s payment agnostic by design. It’s built to flex in many ways. I mentioned a couple of examples in my prepared remarks. I mean you think about ButcherBox or you think of AG1, for example, those — that subscription logic is really complex because sometimes you want to skip a month, sometimes you want to double up. If you’re on vacation, you want to do a hold or some of the larger furniture companies on Shopify that do this incredible white glove delivery where you can set the exact time and date for your couch being delivered.

These things need to be ported over into the Agentic world, and UCP does that. So in our view, UCP covers the full commerce journey end-to-end. And we think — we have 20 years of doing this. Commerce is very complex. It is easy to get it wrong. And I think that it’s more than just a transaction. It’s an entire experience and UCP covers all of that. And we’re really proud of what we did with our friends at Google. It was an incredible experience to work on it with them, but it works, and we think we’re already seeing incredible adoption from some of the largest retailers on the planet.

Shopify’s management is not seeing a competitive threat develop in terms of companies choosing to replace or bypass Shopify’s solutions with vibe-coded tools

[Question] About the feedback from merchants having discussions at the Board level about moving to Shop. Specifically, AI, the feedback that you’re getting from companies in terms of the AI road map, is that — I imagine it’s influencing decisions. Are you also seeing merchants evaluate custom solutions in light of what they can do with AI tools?

[Answer] I think a lot of the largest retailers, certainly the ones I’m meeting with, I mentioned brands like General Motors or L’Oreal or SuitSupply or Amer Sports, who runs Wilson and Salomon. What we hear from them is they’re looking — if they’re not on Shopify already, usually, they come to us with a particular problem. In some cases, it’s — we want to make sure we don’t miss out on Agentic. In other cases, they’re coming to us because they want to replace their homegrown system that they built many years ago for e-commerce. They don’t want to have 400 engineers anymore. They want to effectively come to Shopify because they want to go back to what they do best, which is they want to build furniture. They want to be a cosmetics company. They don’t necessarily want to have this massive engineering team… I think the days of let’s just build everything ourselves in-house is long gone. And I think that gives Shopify an incredible opportunity.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adyen, Datadog, Mastercard, Paycom Software, Shopify, and Visa. Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI (2025 Q4)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q4 earnings season.

The way I see it, artificial intelligence (or AI), really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

We’re thick in the action of the latest earnings season for the US stock market – for the fourth quarter of 2025 – and I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:

With that, here are the latest commentary, in no particular order:

Alphabet (NASDAQ: GOOG)

The Gemini app now has 750 million monthly active users (was 650 million in 2025 Q3); the Gemini app is seeing significantly higher engagement per user after the launch of Gemini 3 in December 2025; Alphabet offers the most extensive model portfolio in the world and leads across text, vision, and image-to-video model leaderboards; Gemini 3 Pro is state-of-the-art in reasoning and multimodal understanding; Gemini 3 Pro has the fastest adoption of any model in Alphabet’s history; Gemini 3 Pro has consistently processed 3x the number of daily tokens that 2.5 Pro has; Gemini 3 is powering Google Antigravity, a software development platform with more than 1.5 million weekly active users since its launch 2 months ago; Alphabet’s 1st-party AI models, including Gemini, now process 10 billion tokens per minute from direct APIs (was 7 billion in 2025 Q3); Gemini 3 is now integrated into AI Mode and AI Overviews in Google Search; more than 120,000 enterprises are using Gemini today; the vast majority of the top SaaS companies in the world are using Gemini; management is confident of maintaining the innovation momentum with Alphabet’s 1st-party models; management is not seeing Gemini cannibalising Google Search; management is not in any hurry to introduce advertising to the Gemini app

Our Gemini app now has over 750 million monthly active users. We are also seeing significantly higher engagement per user, especially since the launch of Gemini 3 in December…

…We offer the most extensive model portfolio in the world and lead across text, vision and image to video LMArena leaderboards. Gemini 3 Pro drives the state-of-the-art in reasoning and multimodal understanding. It has seen the fastest adoption of any model in our history. Since launch, Gemini 3 Pro has consistently processed 3x as many daily tokens on average as 2.5 Pro.

Our latest model powers Google Antigravity, our new development platform where agents can autonomously plan and execute complex software tasks. It already has more than 1.5 million weekly active users after launching just over 2 months ago.

Our first-party models like Gemini now process over 10 billion tokens per minute via direct API used by our customers, up from 7 billion last quarter…

…We have integrated Gemini 3 directly into AI mode in search. Now search can better understand your query, dive deeper on the web and generate interactive UI experiences. And last week, we upgraded AI Overviews to Gemini 3, giving users a best-in-class AI response at the top of the search results page…

…Today, more than 120,000 enterprises use Gemini, including AI unicorns like Lovable and OpenEvidence and global enterprises like Airbus and Honeywell. 95% of the top 20 and over 80% of the top 100 SaaS companies use Gemini, including Salesforce and Shopify. Gemini is becoming the AI engine for the world’s most successful software companies…

…We are obviously improving these models across many paradigms, right, on pretraining, post-training, test-time compute and so on. And we are bringing multimodal models into the picture. We are bringing agentic capabilities, the coding area is showing a lot of progress. And obviously, integrating all of this together and offering a great customer experience for our — to our products as well as through our APIs to our cloud customers. To me, it feels like there’s a lot of headroom ahead. And as you’ve seen our trajectory over the past 2 years in terms of how we have been making progress. I think we are in a very, very relentless innovation cadence. And I think we are confident about maintaining that momentum as we go through ’26…

…People are obviously using search, experiencing AI Overviews and AI Mode as part of it and Gemini app as well. And the combination of all of that, I think, creates an expansionary moment. I think it’s expanding the type of queries people do with Google overall. And so overall, some of it all is what we see as a growth opportunity, and we haven’t seen any evidence of cannibalization there…

…In terms of the Gemini app today, we are focused on our free tier and subscriptions and seeing great growth, as Sundar discussed. But ads have always been part of scaling products to reach billions of people. And if done well, ads can be really valuable and helpful commercial information. And at the right moment, we’ll share any plans. But as we’ve said, we’re not rushing anything here.

Google Cloud saw accelerating growth in 2025 Q4; Google Cloud backlog grew 55% sequentially to $240 billion in 2025 Q4 (was $155 billion in 2025 Q3); Google Cloud was able to lower Gemini serving unit costs by 78% over 2025; Google Cloud had double the new customer velocity in 2025 Q4 compared to 2025 Q1; the number of Google Cloud deals in 2025 exceeding $1 billion surpassed the past 3 years combined; existing Google Cloud customers are outpacing their initial commitments by over 30%; nearly 75% of Google Cloud customers have used Google Cloud’s end-to-end vertically integrated AI stack; Google Cloud has 14 product lines that each exceed $1 billion in annual revenue; Google Cloud is offering a wide range of Alphabet’s 1st-party leading generative AI models to customers; nearly 350 customers each processed more than 100 billion tokens in December 2025; revenue from products built on Alphabet’s 1st-party AI models was up nearly 400% year-on-year in 2025 Q4; the integration of Gemini and Google Workspace is driving wins for Google Cloud; revenue from AI solutions built by Google Cloud’s partners increased nearly 300% year-on-year in 2025 Q4

Cloud significantly accelerated with revenues growing 48%, now on an annual run rate of over $70 billion. Backlog grew by 55% quarter-over-quarter to $240 billion, representing a wide breadth of customers driven by demand for AI products…

…Google Cloud’s backlog increased 55% sequentially and more than doubled year-over-year, reaching $240 billion at the end of the fourth quarter. The increase in backlog was driven by strong demand for our cloud products, led by our enterprise AI offerings from multiple customers….

…As we scale, we are getting dramatically more efficient. We were able to lower Gemini serving unit cost by 78% over 2025 through model optimizations, efficiency and utilization improvements…

…We are winning more new customers faster. We exited the year with double the new customer velocity compared to Q1…

…We are also signing larger customer commitments. The number of deals in 2025, over $1 billion surpassed the previous 3 years combined…

…We continue to deepen our relationships with existing customers who are outpacing their initial commitments by over 30%.

Nearly 75% of Google Cloud customers have used our vertically optimized AI from chips to models to AI platforms and enterprise AI agents, which offer superior performance, quality, security and cost efficiency. These AI customers use 1.8x as many products as those who do not, enabling us to diversify our product portfolio, deepen customer relationships and accelerate revenue growth. Our product line has multiple monetization levers spanning infrastructure, platform and high-margin AI-powered products and services with 14 product lines each exceeding $1 billion in annual revenue…

…We also offer our leading generative AI models, including Gemini, Imagen, Veo, Chirp and Lyria to cloud customers. In December alone, nearly 350 customers each processed more than 100 billion tokens. In Q4, revenue from products built on our generative AI models grew nearly 400% year-over-year, significantly accelerating from the prior quarter…

…Our integration of Gemini and Google Workspace is driving wins with global brands like Schwarz Group and public sector organizations like the U.S. Department of Transportation. We are also seeing momentum with independent software vendors. Revenue from AI solutions built by our partners increased nearly 300% year-over-year and commitments from our top 15 software partners grew more than 16x year-over-year.

Google Cloud has the widest variety of compute options, from NVIDIA’s GPUs to Alphabet’s own TPUs; Google Cloud will be among the first cloud providers to offer NVIDIA’s latest Vera Rubin GPU; Alphabet has been working on its 1st-party TPUs for 10 years; Alphabet’s TPUs are being used by leading frontier AI labs (likely referring to Anthropic) and organisations in financial services, automotive, and the public sector; management seems unwilling to sell TPUs to 3rd-party data centers

We have the industry’s widest variety of compute options. That includes GPUs from our partner, NVIDIA, who announced at CES, that we’ll be among the first to offer their latest Vera Rubin GPU platform, plus our own TPUs that we have been developing for a decade…

…We offer leading infrastructure for AI training and inference to our cloud customers with the industry’s widest variety of compute options from our own seventh-generation Ironwood TPU to the latest NVIDIA GPUs. Our 10-year track record in building our own accelerators with expertise in chips, systems, networking and software translates to leading power and performance efficiency for large-scale inference and training. Our cloud AI accelerators serve the leading Frontier AI labs, capital markets firms like Citadel Securities, enterprises like Mercedes-Benz and governments for high-performance computing applications…

…[Question] How should we think about the potential for TPUs to move outside of Google Cloud and into external data centers and develop as an incremental revenue stream?

[Answer] In terms of TPUs, I would think about it as it’s reflected in our overall part of what makes Google Cloud an attractive choice is the wide choice of accelerators we bring to bear here, and we meet customers in terms of what their needs are and the choice as well as other things we bring as part of Google Cloud, the end-to-end efficiencies in our data centers, all of that comes to bear. And that’s what you see in the strong momentum in Google Cloud. And given the overall investment we are making, we expect to be able to drive that momentum there. 

Alphabet’s management recently launched personal intelligence in AI Mode in Google Search; management recently introduced the Universal Commerce Protocol as a new open standard for agentic commerce; Google Search saw more usage in 2025 Q4 than ever before, with AI being an expansionary force; management has shipped over 250 product launches within AI Mode and AI Overviews in Google Search in 2025 Q4; Gemini 3 is now integrated into AI Mode and AI Overviews in Google Search; management has made the transition from AI Overview to AI Mode completely seamless; daily AI Mode queries per user doubled in the US since launch; AI Overviews continue to perform well; queries in AI Mode are 3x longer than traditional searches, and a significant portion of queries in AI Mode lead to a follow-up question; people are searching in new ways beyond text, with 1 in 6 AI Mode queries being in non-text format; users of AI Mode can soon use a new checkout experience to buy directly because of the Universal Commerce Protocol

In January alone, we have launched personal intelligence in AI mode in search and the Gemini app…

…And we laid the groundwork for shopping in the AI era by introducing a new open standard for agentic commerce, the Universal Commerce Protocol built alongside many retail industry leaders…

…Search saw more usage in Q4 than ever before as AI continues to drive an expansionary moment…

…We shipped over 250 product launches within AI Mode and AI Overviews just last quarter. We have integrated Gemini 3 directly into AI mode in search. Now search can better understand your query, dive deeper on the web and generate interactive UI experiences. And last week, we upgraded AI Overviews to Gemini 3, giving users a best-in-class AI response at the top of the search results page. We have also made the search experience more cohesive, ensuring the transition from an AI Overview to a conversation in AI Mode is completely seamless…

…First, once people start using these new experiences, they use them more. In the U.S., we saw daily AI Mode queries per user double since launch and AI Overviews continue to perform very well. Second, people are engaging in longer, more complex sessions. Queries in AI Mode are 3x longer than traditional searches. We are also seeing sessions become more conversational with a significant portion of queries in AI Mode now leading to a follow-up question. Third, people are searching in new ways beyond text. Nearly 1 in 6 AI Mode queries are now nontext using voice or images…

…We are building the era of agentic commerce and working with our partners to introduce the universal commerce protocol in our consumer products and across the web. We’ve received tremendous feedback from the industry. Soon, people can use a new checkout experience to buy directly in AI mode in Gemini from select merchants.

Alphabet’s management is seeing strong demand for its 1st-party enterprise AI agents; Alphabet has sold more than 8 million paid seats of Gemini Enterprise to 2,800 companies; Gemini Enterprise managed over 5 billion customer interactions in 2025 Q4, up 65% year-on-year

Leading enterprises are also driving strong demand for our enterprise AI agents. We have sold more than 8 million paid seats of Gemini Enterprise, our enterprise AI platform to more than 2,800 companies, including BNY and Virgin Voyages to streamline knowledge management and automate processes. Gemini Enterprise managed over 5 billion customer interactions in Q4, growing 65% year-over-year for customers, including Wendy’s, Kroger and Woolworths Group. 

Alphabet is Apple’s preferred cloud provider

We are collaborating with Apple as their preferred cloud provider and to develop the next generation of Apple Foundation Models based on Gemini technology.

1 million channels used Alphabet’s new AI creation tools in December 2025 each day; 20 million viewers used Youtube’s new Gemini-powered Ask tool in December 2025

On average, every day in December, over 1 million channels used our new AI creation tools to supercharge their creativity. During that same month, more than 20 million viewers used our new Ask tool powered by Gemini to learn more about the content they watched.

Waymo recently raised its largest investment round to date; Waymo surpassed 20 million fully autonomous trips in December 2025; Waymo is now providing 400,000 rides per week; Waymo recently launched its 6th market in Miami; Waymo will soon expand to the UK and Japan; Alphabet participated in Waymo’s latest investment round

This week, Waymo raised its largest investment round to date and is well positioned to continue its momentum with safety at the core. In December, we surpassed 20 million fully autonomous trips and are now providing more than 400,000 rides every week. Waymo continues to expand its service territory. Its sixth market, Miami, launched 2 weeks ago, and Waymo will soon expand its service to multiple cities across the U.S. and in the U.K. and Japan. The team has made incredible progress on important capabilities, including opening up public service to airports and freeways…

…Alphabet funded a significant portion of the $16 billion investment round that Waymo announced on Monday, which will allow the business to accelerate its global expansion.

Alphabet’s management is investing in AI to drive improvements across all areas of marketing; management thinks AI gives businesses the ability to reach more customers in more places than before; Gemini improves advertising quality, advertiser tools, and new advertising experiences; Gemini helps Alphabet evaluate advertising relevance with greater accuracy than before; Gemini helps Alphabet deliver ads on longer, more complex searches that were previously challenging to monetize; Gemini helps Alphabet improve understanding of non-English languages, thus helping businesses scale globally; Gemini helps businesses generate new advertising campaigns through a conversational experience; advertisers used Gemini to create nearly 70 million creative assets in AI Max and PMax in 2025 Q4; Aritzia used AI Max to achieve an 80% incremental uplift in conversion value in 2025 Q4; L’Oreal used AI Max in 2025 to increase revenue for DTC (direct to consumer) brands by 23%; management is in the early stages of experimenting with AI Mode monetization, with an example being Direct Offers, which allow advertisers to show exclusive offers to shoppers who buy directly in AI Mode

We’re investing in AI to drive significant improvements across all areas of marketing. We’re expanding the entire playing field that advertisers can compete on. AI gives businesses the ability to reach more customers in more places than ever before. Gemini uniquely positions us to bring the transformational benefits of AI to ads in 3 critical areas for our customers: ads quality, advertiser tools and new AI user experiences.

First, ads quality. We’ve been deploying Gemini models to improve query understanding at a rate of almost a launch per month for the last 2 years. These improvements drive better query matching, ranking and quality, making search ads even more effective. With Gemini across our ads quality stack, we evaluate relevance with greater accuracy than with previous generations of models. This has significantly improved our ability to systematically deliver more helpful high-quality ads, contributing to a meaningful reduction in irrelevant ads served. Gemini’s understanding of intent has increased our ability to deliver ads on longer, more complex searches that were previously challenging to monetize. Gemini models also have a significant impact on query understanding in non-English languages, expanding opportunities for businesses to scale globally.

Second, we’re building more agentic actions into our advertiser tools. Businesses can now leverage Gemini in conversational experiences within Ads and Analytics Advisor to identify and run recommended actions such as generating new campaigns. Advertisers use Gemini as a real-time partner to assemble creatives. In Q4 alone, they used Gemini to create nearly 70 million creative assets via text customization in AI Max and PMax. For instance, Aritzia, Canada’s premier fashion house used AI Max to find new high-value customers that traditional strategies miss, delivering an 80% incremental uplift in conversion value for Q4. L’Oreal, one of the first alpha testers, used AI Max in 2025 across 800 unique campaigns in 23 countries and 30 brands. AI Max enabled the L’Oreal Group to maximize its presence across the full consumer journey, fuel its consumer growth and increase revenue for DTC brands like NYX by 23%.

The third area is how we monetize new AI user experiences in search. We have significantly increased our focus on AI mode and are in the early stages of experimenting with AI mode monetization like testing ads below the AI response with more underway. For example, we announced Direct Offers, a new Google Ads pilot, which will allow advertisers to show exclusive offers for shoppers who are ready to buy directly in AI mode. This new type of sponsored content uses AI to match the right offer provided by the retailer to the right user.

Google Cloud had 48% revenue growth in 2025 Q4 (was 34% in 2025 Q3) driven by growth in GCP; GCP grew at a much higher rate than Google Cloud’s overall growth, driven by enterprise AI products, which have billions in quarterly revenue; the enterprise AI products included enterprise AI infrastructure (i.e. usage of TPUs and NVIDIA’s GPUs) and enterprise AI solutions; the core GCP, non-AI business was also a meaningful contributor to growth; Google Cloud operating margin was 30.1% (was 23.7% in 2025 Q3 and was 17.5% in 2024 Q4)

The Google Cloud segment delivered outstanding results in the fourth quarter as the business continued to benefit from strong demand for our enterprise AI products. Cloud revenue accelerated meaningfully and were up 48% to $17.7 billion. Revenues were driven by strong performance in GCP, which continued to grow at a rate that was much higher than cloud’s overall revenue growth rate…

…GCP’s performance was driven by accelerating growth in enterprise AI products, which are generating billions in quarterly revenues. We had strong growth in both enterprise AI infrastructure, driven by deployment of TPUs and GPUs and enterprise AI solutions, which benefited from demand for our industry-leading models, including Gemini 3. Core GCP was also a meaningful contributor to growth due to strong demand for infrastructure and other services such as cybersecurity and data analytics. We also had double-digit growth in Workspace, driven by an increase in average revenue per seats and the number of seats. Cloud operating income was $5.3 billion, more than doubling year-over-year, and operating margin increased from 17.5% in the fourth quarter of last year to 30.1%.
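
As a quick plausibility check, the figures in the quote above hang together arithmetically: $5.3 billion of operating income on $17.7 billion of revenue is roughly the 30% margin quoted, and the year-ago quarter implied by 48% growth and a 17.5% margin supports the “more than doubling” claim. A rough sketch (small differences come from rounding in the quoted numbers):

```python
# Reconstructing Google Cloud's 2025 Q4 margin from the quoted figures.
# All dollar amounts are in billions.
revenue_q4_2025 = 17.7   # quoted revenue, up 48% year-over-year
operating_income = 5.3   # quoted operating income

margin = operating_income / revenue_q4_2025
print(f"2025 Q4 operating margin: {margin:.1%}")  # ~29.9%, quoted as 30.1% (rounding)

# Implied year-ago quarter: revenue before 48% growth, at the quoted 17.5% margin
revenue_q4_2024 = revenue_q4_2025 / 1.48
income_q4_2024 = revenue_q4_2024 * 0.175
print(f"Implied 2024 Q4 operating income: ${income_q4_2024:.1f}B")
print(f"Income growth: {operating_income / income_q4_2024:.1f}x")  # consistent with 'more than doubling'
```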

In terms of Alphabet’s outlook, management notes that Google Cloud is seeing significant demand, and that the demand/supply situation is still tight; management notes that Alphabet’s AI investments have already translated into strong performance in the business; management expects capex of $175 billion to $185 billion in 2026 (nearly double from $91.4 billion in 2025, which was itself up 65% from $55.4 billion in 2024, and 2024’s capex was up 69% from 2023); the capex will be for AI compute capacity to build frontier models, as well as for compute to (1) improve user experiences and drive higher advertiser ROI in Google Services, and (2) meet Google Cloud customer demands; management expects the growth rate in depreciation expense to accelerate in 2026 Q1 and meaningfully increase for the year; Google Cloud’s supply is tight even as it has been ramping up supply, and management expects tight supply throughout 2026; when management makes capex decisions, they go through a rigorous process of assessing the return on the investment; the capex in 2026 will be split roughly 60-40, with 60% going to servers and 40% to data centers and networking equipment; just over half of Alphabet’s compute capex in 2026 is expected to go towards the cloud business

In Google Cloud, we’re seeing significant demand for our products and services, which we expect to continue to drive strong growth despite the tight supply environment we’re operating in…

…The investment we have been making in AI are already translating into strong performance across the business, as you’ve seen in our financial results. Our successful execution, coupled with strong performance reinforces our conviction to make the investments required to further capitalize on the AI opportunity. For the full year 2026, we expect CapEx to be in the range of $175 billion to $185 billion with investments ramping over the course of the year. We’re investing in AI compute capacity to support Frontier model development by Google DeepMind, ongoing efforts to improve the user experience and drive higher advertiser ROI in Google Services, significant cloud customer demand as well as strategic investments in Other Bets. Keep in mind that the availability of supply, pricing of components and timing of cash payments can cause some variability in the reported CapEx number…

…We’ve been supply constrained even as we’ve been ramping up our capacity…

…I expect the demand we are seeing across the board across our services, what we need to invest for future work for Google DeepMind as well as for cloud, I think, is exceptionally strong. And so I do expect to go through the year in a supply-constrained way…

…We have a highly rigorous framework that we use internally where we look at all the needs for investment, whether it’s from our own organization or from external customers and have an estimate of what that investment could potentially yield, obviously, not just near term but long term as well. So we take that into consideration when we make the following decision. The first one is the total investment that we make across the company. This was, for example, in 2025, the $91 billion we invested in CapEx and our estimate for CapEx investment this year. So what’s the total envelope that we want to invest to ensure that we can drive both near-term and long-term growth for the company. And then the second way we use that framework is to just allocate these funds across the organization, determine where we should make these investments. And throughout the year, as you can imagine, we always look to understand where things are moving, whether it’s external dynamics or internal dynamics, and I’ve mentioned some of the supply chain pressures we’re seeing externally. So we look at this with a highly rigorous framework to make sure that we’re making the right decision.

It was exciting to see the fact that we’re already monetizing the investments that we’ve made in AI, and you saw it in the results that we just issued this quarter. It’s already delivering results across the business. In cloud, it’s very obvious externally, but you’ve heard the comments on the success we’re seeing in search, the comments from Sundar and from Philipp and then the Frontier model development that really serves as the foundation for the organization. We then also look at just the cash flow, cash flow generation and the health of our financials and the balance sheet. That’s important as well…

…Approximately 60% of our investment in 2025, and it’s going to be fairly similar in 2026, went towards machines, so the servers. And then 40% is what you referred to as long-duration assets, which is our data centers and network and equipment…

…For 2026, just over half of our ML compute is expected to go towards the cloud business.

In agentic use cases, Alphabet’s management thinks coding is the area where progress was most felt

I’ll take the agentic part first. I definitely think ’25 was more about laying the foundation, getting the models to start being more robust in agentic use cases. And obviously, coding is an area where the progress was the most felt.

The launch of the Universal Commerce Protocol (UCP) in January 2026 has been really well received; management is integrating UCP into all of Alphabet’s AI surfaces; management thinks 2026 is the year where consumers can actually experience agentic commerce; management sees the UCP making it much easier for (1) consumers to complete transactions, and (2) merchants to showcase their offerings

I think the launch of Universal Commerce Protocol at NRF in January with a bunch of partners, founding partners, I think has been super well received. So I’m excited now that we have laid the foundation of interoperability on which agentic commerce can work. And now we are integrating those experiences into Gemini, AI Mode and so on. So I think this is a year where you will see consumers actually being able to use all of this, and I’m excited about the opportunity ahead…

…Part of what’s been good in designing the Universal Commerce Protocol is it makes it much easier for users to complete transactions. But at the same time, it allows merchants to help showcase the range of their offerings, if they want to make promotions, et cetera. So all of that is built into the protocol.

About 50% of code used within Alphabet is written by AI agents that are then reviewed by engineers; Alphabet is employing AI widely within the company

About 50% of our code is written by agents, coding agents, which are then reviewed by our own engineers. But certainly, it helps our engineers do more and move faster with the current footprint. We look at how we run the business across the organization. So using AI within the business to drive daily operations. It can be all the way from the engineering team to small teams within our back office, even within my finance team, for example, we deployed agents within our treasury organization. We’re deploying agents within how we run — how we pay and reconcile invoices, et cetera.

Alphabet’s management is seeing successful SaaS companies incorporate Gemini deeply into their products and internal processes; management thinks SaaS companies that seize the moment with AI can continue growing; management is seeing very robust token consumption growth by SaaS companies in 2025 Q4

[Question] It just seems like there’s a market belief that the software companies are kind of losing seat power, losing pricing power, and it looks like it could be a really terrible customer base. I can’t imagine that that’s actually going to happen. But could you just talk about it? You’re at the forefront of AI and the impact that that’s having on software companies.

[Answer] In terms of Gemini adoption and what this moment means for SaaS, et cetera. Look, at least from my vantage point, I definitely see we have very, very good SaaS customers who are leaders in their respective categories. And what I see the successful companies doing is they are definitely incorporating Gemini deeply in critical workflows, be it on improving their product experience and driving growth or using it to drive efficiency within their organizations. And I think it is an enabling tool, just like it has been an enabling tool for us across our products and services, be it Search, YouTube, et cetera. I think the companies who are seizing the moment, I think, have the same opportunity ahead. And at least we are excited about the partnerships we have there. And the momentum, if I look at it in terms of their token usage, et cetera, the growth has been very robust in Q4.

A major concern of Alphabet’s management at the moment is the ability to build AI compute capacity

I think specifically at this moment, maybe the top question is definitely around compute capacity, all the constraints, be it power, land, supply chain constraints, how do you ramp up to meet this extraordinary demand for this moment, get our investments right for the long term and do it all in a way that we are driving efficiencies and doing it in a world-class way.

Amazon (NASDAQ: AMZN)

AWS grew 24% year-on-year in 2025 Q4 (was 20% in 2025 Q3), and is now growing at its fastest pace in 13 quarters; AWS’s run rate has reached $142 billion (was $132 billion in 2025 Q3); AWS’s chips business, including Graviton and Trainium, is at over $10 billion in annual revenue run rate and growing triple-digits; AWS is where most companies’ data and workloads reside, which is why most companies want to run AI in AWS; AWS’s non-AI workloads are growing faster than expected; management thinks that if companies want to use AI well, their data and applications need to be hosted in the cloud, and this is driving cloud migration; AWS’s backlog was $244 billion in 2025 Q4, up 40% year-on-year (was $200 billion in 2025 Q3)

AWS growth continued to accelerate to 24%, the fastest we’ve seen in 13 quarters, up $2.6 billion quarter-over-quarter and nearly $7 billion year-over-year. AWS is now a $142 billion annualized run rate business, and our chips business, inclusive of Graviton and Trainium, is now over $10 billion in annual revenue run rate, growing triple-digit percentages year-over-year…

…We consistently see customers wanting to run their AI workloads where the rest of their applications and data are…

…If you look at the capital we’re spending and intend to spend this year, it’s predominantly in AWS. And some of it is for our core workloads, which are non-AI workloads because they’re growing at a faster rate than we anticipated…

…If you really want to use AI in an expansive way, you need your data in the cloud and you need your applications in the cloud. Those are all big tailwinds pushing people towards the cloud…

…[Question] Maybe a few parts just on AWS. Can you speak to the current state of your revenue backlog as of Q4?

[Answer] I’ll start with the first one, which is on backlog, our backlog is $244 billion. That’s up 40% year-over-year. I think it’s up 22% quarter-over-quarter.

Amazon’s management is seeing that AI applications tend to use multiple models, as different models are better on different dimensions; Amazon Bedrock, AWS’s fully-managed service for companies to leverage frontier models to build generative AI apps, makes it easy to use multiple models for inference; Amazon Bedrock now has a multi-billion dollar annualised revenue run rate, and customer spend was up 60% sequentially in 2025 Q4

Customers are realizing as they get further into AI that they need choice as different models are better on different dimensions. In fact, most sophisticated AI applications leverage multiple models, whether customers want frontier models like Anthropic’s Claude or open models like Mistral or Llama, Frontier Intelligence with lower cost and latency like Amazon Nova or video and audio models like TwelveLabs or Nova Sonic. Amazon Bedrock makes it easy to use these models to run inference securely, scalably and performantly. Bedrock is now a multibillion-dollar annualized run rate business and customer spend grew 60% quarter-over-quarter.

Amazon’s management sees that a lot of work is needed to post-train and fine tune an AI model before it can be used in an application; AWS’s SageMaker AI service, makes it easy for users to post-train and fine tune AI models

Customers sometimes think if they have a good model, they will have a good AI application. It’s not really true. It takes a lot of work to post-train and fine-tune a model for your application. Our SageMaker AI service, along with fine-tuning tools in Bedrock make this much easier for customers.

Enterprises using AI models are currently infusing their proprietary data into the models late in the process, through fine-tuning or post-training; Amazon’s management believes enterprises will want AI models to train on their proprietary data earlier in the process, through pre-training; AWS’s NovaForge service allows enterprises to mix their own proprietary data into the pre-training phase of Amazon’s 1st-party frontier Nova models; management says NovaForge is the first feature of its kind

To date, companies have tried to shape models with their own data late in the process, usually with fine-tuning or post-training. There’s a debate in the industry about this, but we believe that enterprises will want models trained on their own data at an early stage of pretraining if possible. So their models have the best possible foundation for what matters most to each enterprise on which to learn and evolve. It’s a little like teaching a child a foreign language early in their life. That becomes part of their learning foundation moving forward, and it makes it easier to pick up other languages later in their life. To solve for this need, we just launched Nova Forge, which gives customers early checkpoints on our Amazon Nova models, allows them to securely mix their own proprietary data with the model’s data in the pretraining stage and enables their own uniquely customized versions of Nova, what we call Novellas, trained with their data early in the process. This will be very useful for companies as they build their own agents on top of the model. There is nothing else out there like this today, and it’s a potential game changer for companies.

Amazon’s management is seeing customers want better price performance from AI chips; Amazon has landed over 1.4 million of its 1st-party AI chip, Trainium 2; Trainium 2 has 30%-40% better price performance than comparable GPUs (likely referring to NVIDIA’s GPUs); Trainium 2 is currently at a multibillion-dollar annualized revenue run rate; 100,000-plus companies are already using Trainium 2, and it underpins the majority of Bedrock’s usage; management recently launched Trainium 3, which is up to 40% more price performant than Trainium 2; management expects nearly all of Amazon’s supply of Trainium 3 to be committed by mid-2026; management is already seeing very strong interest in Trainium 4, which is under development; AI start-up Anthropic is training its next Claude model with Trainium 2 through AWS’s Project Rainier; Project Rainier started with 500,000 chips and is continuing to increase; Trainium 2 is currently fully subscribed; Trainium 4 is expected to launch in 2027; customers are already asking about Trainium 5; Anthropic is pleased with Project Rainier

Customers are starving for better price performance. And typically, and understandably, the dominant early leaders aren’t in a hurry to make that happen. They have other priorities. It’s why we’ve built our own custom silicon and Trainium, and it’s really taken off. We’ve landed over 1.4 million Trainium2 chips, our fastest ramping chip launch ever. Trainium2 is 30% to 40% more price performant than comparable GPUs and is a multibillion-dollar annualized revenue run rate business with 100,000-plus companies using it, as Trainium is the majority underpinning of Bedrock usage today. We recently launched Trainium3, which is up to 40% more price performant than Trainium2. We’re seeing very strong demand for Trainium3 and expect nearly all of our Trainium3 supply of chips to be committed by mid-2026. And though we’re still building Trainium4, we’re seeing very strong interest already…

…You mentioned Project Rainier. Anthropic is training their next Claude model on top of Trainium2, and that’s what Project Rainier is. So we talked about 500,000 chips there. You’ll see that continuing to increase. They’re also using a fair bit of Trainium2 for other workloads and their own APIs beyond just Project Rainier. But Trainium is a multibillion dollar annualized run rate business at this point, and it’s fully subscribed…

…There’s very substantial interest in Trainium4, which is coming in 2027. And we’re already having conversations about Trainium5…

…The Project Rainier has gone very well. I think Anthropic is quite pleased with it.

Amazon’s management thinks the primary way companies will derive value from AI will be agents; management thinks companies will use both their own agents and those built by others; management thinks it’s difficult to build agents, so they have launched Strands, which helps users build agents from any AI model; AI agents require a secure and scalable way to connect with multiple elements of a company’s tech stack, and management thinks this is a hard problem to solve; management has launched Bedrock AgentCore to help companies connect agents to the elements; customers are excited about Bedrock AgentCore; Amazon has built multiple agents for customers to use, including Kiro for coding, Amazon Quick for analytics, AWS Transform for software migration, and more; the number of developers using Kiro grew 150% sequentially in 2025 Q4; management is seeing customers get excited about fully autonomous agents and have launched such agents, such as Kiro for coding, AWS DevOps for operational problem solving, and AWS Security Agents for application security

The primary way companies will get value from AI is with agents, some their own, some from others, and there are several customer challenges that we’re well positioned to solve. It’s harder to build agents than it should be. For that, we’ve built Strands, a service enabling agents to be created from any model. Once agents are built, enterprises are apprehensive about deploying to production because these agents need to securely and scalably connect to compute, data, tools, memory, identity, policy governance, performance monitoring and other elements. This is a new and hard problem where a solution has not existed until we launched Bedrock AgentCore. Customers are quite excited about AgentCore, and it’s unlocking deployments.

Customers also want to leverage others’ useful agents, and we’ve built several, including Kiro for coding, Amazon Quick for knowledge workers to leverage their own data and analytics, AWS Transform for software migration and Amazon Connect for call center operations. We continue adding new capabilities and usage continues to grow quickly. For example, the number of developers using Kiro grew more than 150% quarter-over-quarter. 

In addition to agents that customers direct, customers are also becoming excited about agents that require less human interaction. They can be fully autonomous, run persistently for hours or days, scale out quickly and remember context. At this past AWS re:Invent, we launched Frontier Agents to do that. Kiro autonomous agents for coding tasks, AWS DevOps agents for detecting and resolving operational issues and AWS Security Agents for proactively securing applications throughout the development life cycle, and they’re already making a big difference for customers.

Rufus, Amazon’s AI shopping assistant, can now research products, track prices, and auto-buy; Rufus can now shop tens of millions of items in other online stores and make purchases for customers; more than 300 million customers used Rufus in 2025; management thinks agentic commerce will be a great experience for consumers; customers who use Rufus are 60% more likely to complete a purchase; management thinks Amazon will eventually have relationships with 3rd-party agents that have commerce capabilities, but the commerce capabilities need to be a lot better than what’s available now; management thinks that consumers will prefer a commerce agent from a retailer they are familiar with over a horizontal agent that also has commerce capabilities, and this is why management is optimistic about Rufus

Our Agentic AI shopping assistant, Rufus, has rapidly expanded. Rufus can research products, track prices and auto buy, purchasing a product in our store when it reaches your set price. It can also now shop tens of millions of items in other online stores and make purchases for customers using our Agentic Buy for Me feature. Last year, more than 300 million customers used Rufus…

…I’m very optimistic about the customer experience that will ultimately be what customers use for Agentic shopping. And I think it’s good for customers. I think it’s going to make it easier for them…

…Customers who use Rufus are about 60% more likely to complete a purchase…

…We will have relationships with third-party horizontal agents that can enable shopping as well. We have to collectively figure out a better customer experience. It’s still — these horizontal agents don’t have any of your shopping history. They get a lot of the product details wrong, they get a lot of the pricing wrong. And so we have to try to find a customer experience together that’s better and a value exchange that makes sense for both parties. But I’m very hopeful that we’ll get there over time…

…I think you’re going to have to look at as time goes on, which types of — which shopping agents are consumers going to use. And it kind of reminds me in some ways of the early days of kind of all the search engines that were referring traffic to retailers. And it’s still a relatively small portion of the overall traffic and sales. But of that fraction, you have to ask how many consumers are going to prefer using a horizontal agent where it’s kind of a middle person between the retailer and the consumer versus wanting to use a great agent from that retailer that has all its shopping history and that has all the data right there and makes it easy if you’re just spearfishing for something to shop for it right there or if you want to do discovery, you can do it there, and it’s got the best data on shopping. I think a lot of customers are ultimately going to choose to use a great shopping agent from that retailer. Because if you think about what consumers really want in retail in a retailer, they want really broad selection. They want low prices. They want really fast delivery. And then they want a retailer that they can trust and that takes care of them. And I think horizontal agents are pretty good at aggregating selection, but retailers are much better at doing all 4 of those items. And so I’m very optimistic that people will use our shopping agent.

The usage of AI has helped Amazon deliver highly relevant and useful advertisements for customers; Prime Video ads continued to grow and had meaningful contribution to Amazon’s advertising revenue growth; Prime Video had an average ad-supported audience of 315 million in 2025, up from 200 million in early-2024; management recently launched Ads Agent which helps brands to create and optimize campaigns at scale and target effectively; management recently launched Creative Agent, which creates full funnel ad campaigns for advertisers through a conversational interface, shortening campaign creation from a week to hours

Sponsored products advertising in our store continues to be our largest ads offering and the combination of trillions of shopping, browsing and streaming signals with advanced AI and machine learning led us to deliver highly relevant and useful ads for customers…

…We recently announced our Ads Agent, which lets brands use AI to create and optimize campaigns at scale, implement effective campaign targeting and quickly create actionable insights. And our Creative Agent lets advertisers research, brainstorm and generate full funnel ad campaigns from concept to completion using conversational guidance and Amazon’s retail data, transforming what was a week-long process into just hours.

Amazon’s management expects 2026 capex to be $200 billion (was $128 billion in 2025, and $83 billion in 2024); most of the capex will be for Amazon’s AI needs; management is seeing really high demand for AWS’s core and AI features; AWS is monetising compute capacity the moment it is installed; management has deep experience producing high return on invested capital (ROIC) with AWS capex, and they are confident the AI capex will also generate high ROIC; for 2026 Q1, revenue growth is expected to be 11%-15% and operating income growth is expected to be between -10% and 17%; one way AWS’s ROIC for AI capex is already showing up is in the expansion of its operating margin; the vast majority of AWS’s capex has been for compute capacity that is consumed by external customers; AWS, as well as the other cloud providers, could actually grow faster if they had more supply of AI compute

We expect to invest about $200 billion in capital expenditures across Amazon, but predominantly in AWS because we have very high demand, customers really want AWS for core and AI workloads, and we’re monetizing capacity as fast as we can install it. We have deep experience understanding demand signals in the AWS business and then turning that capacity into strong return on invested capital. We’re confident this will be the case here as well…

…Q1 net sales are expected to be between $173.5 billion and $178.5 billion. This guidance anticipates a favorable impact of approximately 180 basis points from foreign exchange rates. As a reminder, global currencies can fluctuate during the quarter. Q1 operating income is expected to be between $16.5 billion and $21.5 billion…

…On the investments we’re making, as Andy said earlier, we are putting into service with customers all capacity that we’re getting and it’s immediately useful. And we’re also seeing a long arc of additional revenue that we see from other customers and backlog and commitments that people are anxious to make with us, especially for AI services. So you can see that’s working its way into our P&L, both through CapEx and also through our operating margin in AWS. AWS is 35% operating margin through Q4, up 40 basis points year-over-year…

…The vast majority of our — the capital that we spend and the capacity that we have is consumed by external customers. We have — Amazon has always been a very large AWS customer, a very helpful AWS customer because they’re very demanding, and they use the services very expansively and stretch the limits as we launch things. So they’ve always been a very important big customer, but always a very small fraction of the total, and that’s true today in AI as well as the overall AWS business…

…So we’re growing at really an unprecedented rate yet, I think every provider would tell you, including us that we could actually grow faster if we had all the supply that we could take. And so we are being incredibly scrappy around that.

Amazon’s management thinks that inference will be the majority of AI workloads in the long run

I think some of the things that you will see over time in the AI space is you’re going to keep seeing all of the inference services; inference is going to be the majority of the long-term AI workloads. You’re going to see the inference keep getting optimized.

It appears that Amazon’s management is willing to bring free cash flow to negative to aggressively invest in AI, as they see it as an unusually large opportunity

[Question] Are there any financial guardrails or governors in place that we should think about around the spend just in terms of operating income growth or positive free cash flow?

[Answer] I think this is an extraordinarily unusual opportunity to forever change the size of AWS and Amazon as a whole. I think it also is an extraordinary opportunity for companies to change all their customer experiences and for start-ups to be able to build brand-new experiences and businesses that would have taken much longer to try to accomplish before that they can do right now. And so we see this as an unusual opportunity, and we are going to invest aggressively here to be the leaders because like we’ve been in the last number of years and like I think we will be moving forward.

Market demand for AI compute currently looks like a barbell to Amazon’s management, with AI labs on one end spending a lot on compute (along with a couple of runaway applications), and enterprises on the other end that are using AI for productivity purposes; the middle of the barbell is production AI workloads from enterprises that are under evaluation; management thinks the middle part of the barbell will be the largest and most durable aspect of market demand for AI compute, but it has yet to materialise and it’s only a matter of time

The way I would describe what we see right now in the AI space is it’s really kind of a barbelled market demand where on one end, you have the AI labs who are spending gobs and gobs of compute right now, along with what I would consider a couple of runaway applications. And then at the other side of the barbell, you’ve got a lot of enterprises who are getting value out of AI in doing productivity and cost avoidance types of workloads. These are things like customer service or business process automation or some of the fraud pieces. And then in that middle of the barbell are all the enterprise production workloads. And I would say that the enterprises are in various stages at this point of evaluating how to move those, working on moving those and then putting them into production. But I think that middle part of the barbell very well may end up being the largest and the most durable. And I would put in the middle of that barbell, too, by the way, I would put just the altogether brand-new businesses and applications that companies build that right from the get-go run in production on top of AI…

…When I look at this and what’s happening, it’s kind of unbelievable if you look at the demand of what you’re seeing already with AI, but the lion’s share of that demand is still yet to come in the middle of that barbell. And that will come over time. It will come as you have more and more companies with AI talent as more and more people get educated with the AI background, as inference continues to get less expensive, and that’s a big piece of what we’re trying to do with Trainium and our hardware strategy. And as companies start to have success in moving those workloads to — further and further success in moving those workloads to run on top of AI.

Almost every conversation Amazon’s management is having with companies regarding AWS starts with AI; management thinks that the AI movement will eventually involve many, many more companies than what’s seen today

There’s a number of AI labs, but almost every company you talk to, almost every conversation we have on the AWS side, starts with AI…

…This AI movement is not going to be a couple of companies. It’s going to be thousands of companies over time.

Amazon has over 1,000 AI applications internally that are in production or being developed, and these applications are used in all areas of Amazon’s business

Internally, we have all sorts of ways that we are using AI. We have over 1,000 AI applications that we’ve either deployed or are in the process of building, and they range from our shopping assistant Rufus that we were just talking about, to Alexa+, which is a really large-scale generative AI application, to applications in our fulfillment network that allow us to have more accurate forecasting predictions, to how we do customer service and our customer service chatbot, to how we are making it much easier for brands to create advertisements and to optimize all their campaigns across the full funnel of advertising options we have, to live sports: if you watch Thursday Night Football, you can see defensive alerts, which predict which player is going to blitz, or pocket health.

AWS added 3.9 GW of compute capacity in 2025, and that is more than any other company in the world; the 3.9 GW of compute capacity AWS added in 2025 is twice what AWS had in 2022; management expects AWS’s compute capacity to double by 2027; AWS added 1.2 GW of compute capacity in 2025 Q4

In 2025, AWS added more data center capacity than any other company in the world…

…If you look in the last 12 months, we added 3.9 gigawatts of power. Just for perspective, that’s twice what we had in 2022 when we were an $80 billion annual run rate business. We expect to double it again by the end of ’27. We added 1.2 gigawatts of power in Q4, just quarter-over-quarter.

Apple (NASDAQ: AAPL)

The consumer response to AirPods Pro 3 has been amazing; AirPods Pro 3 has a live translation feature; management has been hearing powerful stories of people using live translation to communicate seamlessly across languages

The response to AirPods Pro 3 has been amazing. Customers are raving about the rich immersive sound quality, the unmatched level of active noise cancellation and the noticeably improved comfort that makes them effortless to wear. Features like live translation are also changing the way people can communicate by helping users connect across languages in real time and making everyday conversations feel more natural and accessible…

…And as I touched on earlier, we are hearing powerful stories of people using live translation to communicate seamlessly across languages.

The majority of enabled-iPhone users were using Apple Intelligence in 2025 Q4 (FY2026 Q1); management has introduced dozens of features in Apple Intelligence since launch; Apple Intelligence now supports 15 languages; one of Apple Intelligence’s most popular features is Visual Intelligence; management thinks Apple’s products are the best platforms in the world for AI because of Apple silicon; Apple is collaborating with Google to build the next generation of Apple’s foundation models that will power Apple Intelligence; management determined that Google’s AI technology was the most capable for building Apple foundation models; even with the collaboration with Google, Apple Intelligence will continue to run on-device and in Private Cloud Compute; management sees both on-device and cloud inference as important; a growing percentage of Apple’s overall iPhone installed base is AI-capable

During the quarter, we were excited to see that the majority of users on enabled iPhones are actively leveraging the power of Apple Intelligence.

Since the launch of Apple Intelligence, we’ve introduced dozens of features, including writing tools and cleanup and made it available in 15 languages. These AI experiences are personal, private, integrated across our platforms and relevant to what our users do every day. We are bringing intelligence to more of what people already love about our products so we can make every experience even more capable and effortless. One of our most popular features is Visual Intelligence which helps users learn and do more than ever with the content on their iPhone screen, making it faster to search, take action and answer questions across their apps. And as I touched on earlier, we are hearing powerful stories of people using live translation to communicate seamlessly across languages.

And these are just some of the many powerful AI features that are enabling our users to do remarkable things with our products, which are far and away the best platforms in the world for AI. That’s in no small part because of the extraordinary power and performance of Apple silicon. 

Building on our efforts in the AI space, we are also collaborating with Google to develop the next generation of Apple foundation models. This will help power future Apple Intelligence features, including a more personalized Siri coming this year. We’re incredibly excited for what’s to come with so many new experiences to unlock…

…We basically determined that Google’s AI technology would provide the most capable foundation for AFM — I’m sorry, Apple foundation models. And we believe that we can unlock a lot of experiences and innovate in a key way due to the collaboration. We’ll continue to run on the device and run in Private Cloud Compute and maintain our industry-leading privacy standards in doing so…

…[Question] When you think about how Apple might manage AI, do you see that evolving towards more edge AI or on device services versus cloud-based AI?

[Answer] We see both being important, the on-device and the Private Cloud Compute. And so we don’t see it as an either/or; we see it as both…

…[Question] Can you speak at all to roughly what portion of your iPhone or overall active device installed base is now AI capable?

[Answer] We don’t provide that specific number, but it is a growing number, as you can imagine in our installed base.

Apple’s management will continue with Apple’s hybrid approach when it comes to capital expenditure for AI data centers (of using its own data centers as well as those of 3rd-parties)

Just speaking of CapEx, in general, as you know, we have a hybrid model for CapEx. And so I think that what happens is our CapEx can be volatile, independent of the volume and the performance of our business.

The use of Apple’s own chips in its products provides both strategic as well as direct value, and has impacted the company’s gross margin in a positive way

As far as impact on gross margin, we have been, as you know, investing in core technologies like our own silicon, our own modem. And certainly, while those do provide opportunities for cost savings and can be reflected in margins, they also importantly provide the differentiation that’s really important for our products as well and give us more control of our road map. So I think there’s a lot of strategic value to it, but also we are seeing investments in our core technologies impacting gross margin in a positive way.

ASML (NASDAQ: ASML)

ASML’s management has seen the company’s customers become more positive in their medium-term outlooks, driven by demand for AI; ASML’s customers, for both Logic and Memory (DRAM) chips, are building capacity, and this has translated into orders for ASML’s EUV systems; ASML’s Logic customers are becoming more comfortable about the long-term sustainability of AI demand; ASML’s Memory (DRAM) customers are ramping up capacity for their advanced nodes, and these nodes require more EUV layers; management sees a strong belief in ASML’s customers that AI demand is real, and these customers are adding major capacity, starting in 2026

If you listen to our customers, both what they say publicly, but also what they told us, it’s pretty clear that customers over the past couple of months have actually become more positive in their assessment of the medium-term market perspectives as they see it. I think it’s primarily on the basis of the more robust view that they have when it comes to demand for AI, which seems to be more sustainable from their vantage point. That recognition has led some of our customers to really invest in capacity and gear up their plans for medium-term capacity expansion…

…The market outlook has notably improved in the last few months. This is especially true when it comes to the build-up of the capacity for AI applications, be it data centers or other infrastructure. Now, we start to see that this build-up is also translating into a need for capacity at our advanced customers. This is true for Logic. This is true for DRAM. This starts to translate also into orders for our most advanced technology, especially EUV. So in the last few months we have seen our DRAM customers, our Logic customers, starting to accelerate their capacity planning and having these discussions with us. 

If I look at Logic first, so there we see our customers starting to be more comfortable about the sustainability of the long-term AI demand. This means that they are more willing to accelerate their capacity-planning. They are transitioning also from 4nm technology to 3nm technology, which is going to be more demanding in terms of advanced technology. Finally, of course, the ramp of 2nm is going on and I would say is accelerating in order to fulfill the future need of mobile and HPC applications.

When I look at DRAM, there also the demand is very strong for HBM, of course, but also for DDR. This most probably will lead to a very tight supply, at least in 2026 and most probably beyond that. So we see our customers ramping the 1b and 1c nodes, which are going to be critical for that demand. And on those nodes, we see them increasing basically the number of EUV layers. We have talked about that in the past. We see that happening very strongly right now…

…I think a strong belief that the AI demand is real and a preparation for that, with, in the short term, a major addition of capacity. This will start in 2026 and will last beyond that. 

ASML’s management is seeing multi-beam inspection becoming more critical as the use of 3D structures in advanced Logic and Memory chips increases; management expects ASML’s E-beam inspection system to gain more traction in 2026

On E-beam inspection, multi-beam is becoming more and more critical. 2025 was also a good year for this product, allowing us to mature the technology and demonstrate initial value with our customers. We also expect that product to have more traction in 2026…

…With the continuing increase of 3D structures in advanced Logic and Memory, we see more adoption of our multi e-beam inspection system to detect optically non-visible yield-limiting defects.  

ASML’s management continues to see strong growth for the semiconductor market, especially for advanced chips, in the long-term, driven by AI; management sees higher lithography intensity in ASML’s customers’ manufacturing processes; ASML’s management sees AI driving much faster growth in demand for advanced memory and advanced logic chips compared to non-AI memory and non-AI logic chips; AI demand is driving demand for not just more transistors per chip, but more wafers as well

One of the key points we made at our Capital Markets Day, November 2024, was that AI applications will require more advanced technology in DRAM and Logic and will drive basically some of our most advanced products. I think that this is being confirmed as we speak. The last few months have pointed basically exactly to that dynamic. We also see that the progress we continue to make on our cost of technology with EUV is driving for more litho-intensity. And that’s, again, something that has been confirmed in the last few months…

…You see the historical growth of memory and logic, which is about 6%, 7% year-on-year…

…What you see with AI is that when we look at advanced logic, when we look at advanced memory, the growth in those segments is going to be more than 20% year-on-year for the foreseeable future. And this is really what is going to drive basically more demand for lithography. Why is that? So we’ve talked in the past a lot about Moore’s Law, of course. And Moore’s Law says that every couple of years, we need to double the number of transistors per chip. And that law has been true for many, many years for PC and mobile applications. Now when you look at AI, and this started to happen in 2010, the curve is far more aggressive. When you look at the most advanced AI products today, NVIDIA products, for example, the request is not to grow 2x every 2 years, but in the last few years to grow 16x every 2 years. So you see a major acceleration basically of the need for silicon. And of course, we provide that in 2 different ways. We provide that with scaling: by making transistors smaller, we can put more transistors per chip. And this has been a good way basically to provide more transistors and follow Moore’s Law for many, many years, but that’s not enough anymore. And if you cannot put enough transistors per unit of area per chip, then the only option will be to make more wafers. And that’s a bit what we see happening with AI…

…I pick one example, and I picked it from NVIDIA because all of you are, of course, very much aware of what’s happening there. Today, on the Blackwell system, you need about 2.5 wafers to create the product. If you look at 2027, on the Rubin product, this number will go up to 10 wafers. So to provide the same product to their customers, NVIDIA will need 4x more wafers than today.
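
The arithmetic in these remarks compounds quickly, and a short sketch makes it concrete. The 2x and 16x per-2-years rates and the 2.5-wafer and 10-wafer figures all come from the quotes above; the helper function below is my own illustration, not anything from ASML:

```python
# Sketch of the growth arithmetic described above. Only the rates and wafer
# counts stated by management are used; the function is illustrative.

def compound_growth(factor_per_2_years: float, years: float) -> float:
    """Compound a per-2-year growth factor over an arbitrary span of years."""
    return factor_per_2_years ** (years / 2)

# Classic Moore's Law pace: transistors per chip double every 2 years.
moores_law_6yr = compound_growth(2, 6)    # 2^3 = 8x over 6 years

# Pace cited for the most advanced AI products: 16x every 2 years.
ai_pace_6yr = compound_growth(16, 6)      # 16^3 = 4,096x over 6 years

# Wafers per product: ~2.5 for Blackwell today vs. ~10 cited for 2027.
wafer_multiple = 10 / 2.5                 # 4x more wafers for the same product

print(moores_law_6yr, ai_pace_6yr, wafer_multiple)
```

The gap between 8x and 4,096x over the same six years is the “major acceleration” management refers to; whatever transistor-density scaling cannot deliver shows up instead as demand for more wafers.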

ASML’s management thinks AI will have a big effect on overall GDP (gross domestic product)

The effect AI can have on the overall GDP is pretty big. In fact, if you look at the U.S., even in 2025, AI was accounting for a very large part of the growth, and we expect that basically to be applied to the entire worldwide GDP.

ASML’s management sees AI demand driving demand for even ASML’s more mature DUV (deep ultra-violet) lithography systems; management continues to drive the roadmap for ASML’s DUV lithography systems

What’s interesting with AI is that it basically touches on all products. Of course, AI is going to require very advanced chips, and this is going to drive EUV, for example. So this year will be a big year for EUV; Roger will talk about that. It’s going to drive advanced inspection tools. But at the same time, AI needs a lot of data generation, a lot of sensors, and these will still be created by the use of more mature technology such as DUV. So AI will also have this effect of basically driving our entire product portfolio in the coming years…

…We continue to drive the road map both on Immersion, where we have launched our 2150, which basically gives us sub-nanometer accuracy and more than 300 wafers per hour. Productivity is important; productivity, of course, is a way to get capacity. So we continue to drive that on Immersion. I think the example of the NXT:870B, which is a KrF system, is even more spectacular, because there we have been capable of achieving more than 400 wafers per hour. And that tool today is creating a lot of interest among our customers because productivity, again, is capacity.

ASML is making good progress in its partnership with AI startup Mistral (reminder that ASML had invested in Mistral in 2025)

Back at the end of the summer, we announced our collaboration with, and also our investment in, Mistral. The rationale there was to get AI into ASML and to get the very best people, the very best competence, into ASML in order to first strengthen our core competencies (read: putting AI in our products), support the connected market, offer some of those capabilities to our customers, and also create new opportunities basically moving forward. That’s a project we are going to talk more about in ’26, in ’27. We are making great progress with Mistral, our partner.

ASML’s management thinks memory is more likely to be the bottleneck for AI today

It’s difficult to say if Logic or DRAM is the bottleneck for AI today. I will still pick mostly memory at this point in time. And the reason for that is that, when it comes to memory, the demand for high bandwidth memory, which is the AI memory, is extremely high. But the demand for DDR memory, which is for mobile and PC, is also very high. And as a result, we have seen basically the price of DRAM going up significantly in the last few weeks. Therefore, there’s a need for capacity. And our memory customers are moving very aggressively.

ASML’s management is already planning for hyper-NA EUV systems (these are systems that are even more advanced than high-NA EUV), but there’s a lot of flexibility when it comes to the timeline for introducing hyper-NA

We talk about low NA, we talk about high NA, and I think we talked about hyper NA, because we see that in the future, there may be a need for an even more advanced litho system. And we could end up in a world, I’m talking 10 years from now, where customers use basically each one of those 3 systems. Now this being said, when you look 10 years ahead, it’s very difficult to know exactly when this will happen. And in order to not have to answer that question today, what we did is develop a program, which we call the high productivity platform. Roger mentioned it as one of the key programs in EUV, and that program basically consists in defining an EUV platform that will come to the market early next decade, and that will be able to support low NA with a major productivity improvement (we look at more than 400 wafers per hour), high NA, also with major improvements, and potentially hyper NA. So we’re designing a platform, basically, that will be able to receive ultimately low NA optics, high NA optics, or hyper NA optics. This gives us basically the full flexibility over time to decide exactly when and how we should introduce hyper NA.

Mastercard (NYSE: MA)

Mastercard’s management sees agentic commerce as being in the early days; Mastercard launched Mastercard Agent Pay in 2025 and has now enabled US card issuers to participate; management will enable Mastercard’s global issuer base to work with Agent Pay by the end of 2026 Q1; management is working with the entire agentic commerce ecosystem across all regions; although agentic commerce is still early, management thinks it will come fast; management thinks agentic commerce can affect tokenisation of payments very positively

For us, Agentic Commerce represents another avenue to enable payment choice with the same trust that we always deliver. It’s early days, but we are ready. You remember, last year, we launched Mastercard Agent Pay, a framework designed to foster trust in Agentic transactions. We have now enabled our U.S. issuers to participate in Agent Pay, and we are working to enable our global issuer base by the end of the first quarter…

…We’re actively working with ecosystem participants to adopt Agentic Commerce across all regions…

…In Asia, we’re partnering with Anthem on card-based tokenized payment solutions for Agentic payments. In the U.K., we’re consulting clients such as Lloyds Banking Group, Elavon and Santander on Agentic Commerce innovations. And in the UAE, we’re piloting Agentic payments with the leading retail and entertainment group, Majid Al Futtaim. And with banks, merchants and digital players, we continue to position them for success in this new era of commerce, whether it be through consulting, security, data-driven insights or new loyalty programs, we are there…

…What an exciting space and might be one of those use cases, AI-driven use cases that meet our reality much faster than other AI use cases out there. So I think Agentic Commerce is going to come fast. So this whole idea of a consumer using an agent to drive — have a better commerce journey, I think that just resonates with people. You get better quality insights, you get better recommendations…

…You could also see that the application of services. For example, tokenization get to see a very different path than it might see without Agentic Commerce. So these are all aspects that make me very excited about this.

Meta Platforms (NASDAQ: META)

Meta rebuilt the foundations of its AI program in 2025 and will soon start shipping new AI models and products; management expects Meta to steadily push the frontier over the course of the year as it ships new AI models; management expects to use AI models developed by Meta Superintelligence Labs (MSL) to build compelling AI products; management has already used MSL’s models to build AI dubbing of videos into local languages; Meta now supports 9 different languages and hundreds of millions of people are watching AI-translated videos daily; the translated videos are driving incremental time spent on Instagram; the AI dubbing tool will support more languages throughout 2026; management is pleased with the current progress of Meta Superintelligence Labs, but it’s a long-term effort

In ’25, we rebuilt the foundations of our AI program. Over the coming months, we’re going to start shipping our new models and products. I expect our first models will be good, but more importantly, we’ll show the rapid trajectory that we’re on. And then I expect us to steadily push the frontier over the course of the year as we continue to release new models…

…We expect to use the models developed by Meta Superintelligence Labs to deliver compelling and differentiated AI products. One area we’re already seeing promise is with AI dubbing of videos into local languages. We are now supporting 9 different languages with hundreds of millions of people watching AI translated videos every day. This is already driving incremental time spent on Instagram, and we plan to launch support for more languages over the course of this year…

…We’re about 6 months into building MSL. I’m very pleased with the quality of the team. I think we have the most talent-dense research effort in the industry, and some of the early indicators look positive. But look, I think that this is a long-term effort, right? We’re not here to ship just one model or one product. We’re going to do a lot of models over time and a lot of different products.

Meta’s management’s vision with AI is to build personal super intelligence; management thinks what makes agents valuable is the context they can see, and Meta’s agents can provide a uniquely personal experience

Our vision is building personal super intelligence. We’re starting to see the promise of AI that understands our personal context, including our history, our interests, our content and our relationships. A lot of what makes agents valuable is the unique context that they can see. And we believe that Meta will be able to provide a uniquely personal experience.

Meta’s management is merging LLMs (large language models) with the AI recommendation systems of the company’s social media platforms and advertising system; management thinks the introduction of LLMs to the recommendation systems will significantly improve the performance of the already-powerful recommendation systems, and have a positive implication for commerce activity taking place on Meta’s platforms

We’re also working on merging LLMs with the recommendation systems that power Facebook, Instagram, Threads and our ad system. Our world-class recommendation systems are already driving meaningful growth across our apps and ads business, but we think that the current systems are primitive compared to what will be possible soon. Today, our systems help people stay in touch with friends, understand the world and find interesting and entertaining content. But soon, we’ll be able to understand people’s unique personal goals, and tailor feeds to show each person content that helps them improve their lives in the ways that they want. This also has implications for commerce. Our ads today help businesses find just the right very specific people who are interested in their products. New Agentic shopping tools will allow people to find just the right very specific set of products from the businesses in our catalog. We’re focused on making these experiences work across both our feeds and across business messaging, significantly increasing the capabilities of WhatsApp over time.

Meta’s management has simplified Instagram’s ranking architecture to enable more efficient model scaling and this has led to a 30% year-on-year increase in watch time on Instagram Reels in the USA in 2025 Q4; video time on Facebook grew double-digits year-on-year in 2025 Q4, and views of organic feed and video posts were up 7% as a result of optimisations in ranking; Meta is now surfacing 25% more reels published on the day compared to 2025 Q3; the prevalence of original content in the US on Instagram was up 10 percentage points in 2025 Q4, and 75% of recommendations are now from original posts; Threads saw a 20% lift in time-spent from improvements to its recommendation models; management sees a lot of opportunity for Meta to achieve additional gains in engagement on its apps through scaling the complexity and amount of training data in its models, and the introduction of LLMs to the recommendation systems; management is developing Meta’s next generation recommendation systems and the work includes building new model architectures from the ground up on top of LLMs; the improved engagement in 2025 Q4 across Meta’s platforms was from multiple optimisations

Instagram Reels had another strong quarter with watch time up more than 30% year-over-year in the U.S. Engagement is benefiting from several optimizations we made to improve the quality of recommendations including simplifying our ranking architecture to enable more efficient model scaling. This unlocks the ability for our systems to consider longer interaction histories to better identify a person’s interests.

On Facebook, video time continued to grow double digits year-over-year in the U.S., and we’re seeing strong results from our ranking and product efforts on both feed and video surfaces. The optimizations we made in Q4 drove a 7% lift in views of organic feed and video posts on Facebook, resulting in the largest quarterly revenue impact from Facebook product launches in the past two years…

…On Facebook, our systems are surfacing over 25% more reels published that day than the prior quarter. On Instagram, we grew the prevalence of original content in the U.S. by 10 percentage points in Q4 with 75% of recommendations now coming from original posts. 

Threads is also seeing strong momentum again, benefiting from recommendation improvements. The optimizations we made in Q4 drove a 20% lift in threads time spent.

We see a lot of opportunity to drive additional gains. This includes scaling the complexity and amount of training data we use in our models while continuing to make our systems more responsive to people’s real-time interest. We’re also focused on incorporating LLMs to understand content more deeply across our platform, which will enable more personalized recommendations.

Another big area of investment this year is developing the next generation of our recommendation systems. We have several big bets on this front, including building new model architectures from the ground up that will work on top of LLMs, leveraging the world knowledge and reasoning capabilities of an LLM to better infer people’s interests…

…We launched several ranking improvements in Q4 on Facebook and Instagram that drove incremental engagement. And there isn’t really one single launch that is driving most of the gains. It’s multiple optimizations to our recommendation systems that are helping us make more accurate predictions about what will be interesting to each person…

…We’re going to continue to make recommendations even more adaptive to what a person is engaging with during their session. So the recommendations we surface are more relevant to what they’re interested in at that moment.

Meta’s management thinks AI will enable a surge in new, immersive and interactive media formats, leading to more interactive feeds in the company’s social media platforms

Soon, we’ll see an explosion of new media formats that are more immersive and interactive and only possible because of advances in AI. Our feeds will become more interactive overall. Today, our apps feel like algorithms that recommend content. Soon, you’ll open our apps, and you’ll have an AI that understands you and also happens to be able to show you great content or even generate great personalized content for you…

…Video will continue to be here for a long time. It’s going to continue growing; it’s not going anywhere, just like photos and text, which in many ways continue to grow even as the market continues to grow beyond that. But I don’t think that video is the ultimate final format. I just think that we’re going to get more formats that are more interactive and immersive, and you’re going to get them in your feeds. So you can imagine this. I mean, there’s obviously a lot of details to fill in on this, but you can imagine people being able to, easily through a prompt, create a world or create a game and be able to share that with people who they care about, and you see it in your feed and you can jump right into it and you can engage in it. And there are 3D versions of that, and there are 2D versions of that, and Horizon, I think, fits very well with the kind of immersive 3D version of that.

Sales of Meta’s AI glasses tripled in 2025; the glasses are some of the fastest-growing consumer electronics products in history; management thinks most people who wear glasses today will switch to AI glasses in the future; management is directing most of Meta’s investments in the Reality Labs segment to AI glasses and wearables; management thinks Reality Labs’ losses in 2026 will be similar to 2025, and will be the peak going forward

Sales of our glasses more than tripled last year, and we think that they’re some of the fastest-growing consumer electronics in history. Billions of people wear glasses or contacts for vision correction, and I think that we’re in a moment similar to when smartphones arrived and it was clearly only a matter of time until all those flip phones became smartphones. It’s hard to imagine a world in several years where most glasses that people wear aren’t AI glasses.

For Reality Labs, we are directing most of our investment towards glasses and wearables going forward while focusing on making Horizon a massive success on mobile and making VR a profitable ecosystem over the coming years. I expect Reality Labs losses this year to be similar to last year, and this will likely be the peak as we start to gradually reduce our losses going forward while continuing to execute on our vision.

Meta’s management wants Meta to continue investing significantly in AI infrastructure, and has established the Meta Compute division to deliver the infrastructure for the company; management sees long-term investments in silicon and energy as an important part of Meta Compute’s work; management continues to build AI infrastructure for Meta that is flexible; management expects the cost per gigawatt of Meta’s AI infrastructure to decrease significantly over time; management is deploying a variety of chips in Meta’s AI infrastructure; Meta’s ads retrieval engine, Andromeda, can now run on chips from NVIDIA, AMD, and Meta (MTIA); MTIA is currently running inference workloads on Meta’s core ranking and recommendation models, but management will extend MTIA in 2026 Q1 to also cover training workloads; management expects Meta to have sufficient cash flow to fund its infrastructure investments in 2026, but they are also looking for external financing that may lead to net-debt on the balance sheet; with Meta’s current and planned compute capacity, management is exploring businesses beyond ads; management continues to have a robust ROI-driven process when planning investments for its AI models

We will continue to invest very significantly in infrastructure to train leading models and deliver personal super intelligence to billions of people and businesses around the world. I recently announced Meta Compute with the belief that being the most efficient at how we engineer, invest and partner to build our infrastructure will become a strategic advantage. Dina Powell McCormick also joined us as President and Vice Chairman, and she will lead our efforts to partner with governments, sovereigns and strategic capital partners to expand our long-term capacity, including ensuring positive economic impact in the communities that we operate in around the world. An important part of Meta Compute will be making long-term investments in silicon and energy. We will continue working with key partners while advancing our own silicon program. We’re architecting our systems so that we can be flexible in the systems that we use, and we expect the cost per gigawatt to decrease significantly over time through optimizing both our technology and supply chain…

…We’re working to meet our silicon needs by deploying a variety of chips that optimally support each of our different workloads. To that end, in Q4, we extended our Andromeda ads retrieval engine, so it can now run on NVIDIA, AMD and MTIA…

…In Q1, we will extend our MTIA program to support our core ranking and recommendation training workloads in addition to the inference workloads it currently runs…

…As we invest in infrastructure to meet our business needs, we continue to prioritize maintaining long-term flexibility so we can adapt to how the market develops. We’re doing so in several ways, including changing how we develop data center sites, establishing strategic partnerships, contracting cloud capacity and establishing new ownership structures for some of our large data center sites.

We have a strong net cash balance and expect our business will continue to generate sufficient cash to fund our infrastructure investments in 2026, which is reflected in our expectations. Nonetheless, we will continue to look for opportunities to periodically supplement our strong operating cash flow with prudent amounts of cost-efficient external financing, which may lead us to eventually maintain a positive net debt balance…

…[Question] It just seems like you’re going to have a tremendous amount of capacity. How do you think about expanding your opportunities beyond ads, things like subscriptions or licensing cloud models?

[Answer] We are focused on things beyond ads, I think the numbers make it so that for the next couple of years, ads are going to be, by far, the most important driver of growth in our business. So that’s why, as we’re working on this, we have a balance of new things that we’re trying to do, while also investing very heavily and making sure that all of the work that we’re doing in AI improves both the quality and business performance of the core apps and businesses that we run there…

…A year ago on this call, I talked about the set of investments we were making in 2025 as part of our 2025 budgeting process, across our ads performance and organic engagement initiatives. And those investments have generally paid off, and we feel really good about the process we ran in terms of using projected ROI to stack-rank investments, making sure that we had a robust measurement system, funding things that were positive ROI and then tracking how they performed over the course of the year. And we’ve just finished running our 2026 budgeting process, and we have funded a similar set of investments, which we expect will enable us to continue delivering strong revenue growth in 2026.

Meta’s management thinks AI will dramatically change how the company works in 2026; management is investing in AI tooling (i.e. agents) for employees; management is seeing AI helping single persons accomplish projects that used to require big teams; management has seen agentic coding tools help increase Meta’s output per engineer by 30% since the beginning of 2025, with even stronger gains seen in power users of AI coding tools; management thinks AI agents will have a profound positive impact on the productivity of the technology sector and the whole economy

I think that 2026 is going to be the year that AI starts to dramatically change the way that we work…

…We’re investing in AI native tooling so individuals at Meta can get more done, we’re elevating individual contributors and flattening teams. We’re starting to see projects that used to require big teams now be accomplished by a single, very talented person…

…A big focus of this is to enable the adoption and advancement of our AI coding tools where we’re seeing strong momentum. Since the beginning of 2025, we’ve seen a 30% increase in output per engineer with the majority of that growth coming from the adoption of agentic coding, which saw a big jump in Q4. We’re seeing even stronger gains with power users of AI coding tools, whose output has increased 80% year-over-year. We expect this growth to accelerate through the next half…

…There’s a big delta between the people who do it and do it well and the people who don’t. And I think that’s going to just be a very profound dynamic for, I think, across the whole sector and probably the whole economy going forward in terms of the productivity and efficiency with which we can run these companies, which I think — my hope is that we can use that to just get a lot more done than we were able to before.

Meta’s year-on-year advertising conversion growth accelerated through 2025 Q4; management expects further gains in 2026, driven by further integration of AI across all layers of the marketing and customer engagement funnel; management continues to scale the complexity and size of Meta’s models for selecting what advertising to show; in 2025 Q4, management doubled the number of GPUs used to train Meta’s ads-ranking GEM model, and adopted a new sequence learning model architecture to process longer sequences of user behaviour; Meta’s recent initiatives to improve its advertising models drove a 3.5% lift in ad clicks on Facebook, and a more than 1% gain in conversions on Instagram in 2025 Q4; management launched a new run time model across Instagram Feed stories and reels in 2025 Q4 that drove a 3% increase in advertising conversion rates; Meta has continued making progress with Lattice, its unified model architecture for advertising ranking models; management consolidated models for Facebook Stories and other services into the overall Facebook model in 2025 Q4, and this drove a 12% increase in advertising quality; management expects to consolidate more models in 2026 compared to what Meta has done in the previous 2 years; Meta’s ads retrieval engine, Andromeda, can now run on chips from NVIDIA, AMD, and Meta (MTIA); Andromeda’s compute efficiency nearly tripled in 2025 Q4; Meta does not typically use GEM for inference because it is costly; management thinks Meta’s larger advertising models can benefit from having more compute; management expects to meaningfully scale up the cluster used for training GEM in 2026; management expects to further improve the transfer of knowledge from GEM to run time models; this is the 1st time management has found a recommendation model architecture that scales with similar efficiency as LLMs, and they are hopeful this architecture can be scaled up while preserving an attractive ROI (return on investment)

We’re seeing very strong results from the ad performance investments we made throughout 2025 with year-over-year conversion growth accelerating through the fourth quarter. We expect the set of investments we’re making in 2026 will enable us to drive further gains as we continue to integrate AI across all layers of the marketing and customer engagement funnel.

The first area is our ad system where we’re continuing to scale the complexity and size of our models to better select which ads to show. In Q4, we doubled the number of GPUs we used to train our GEM model for ads ranking. We also adopted a new sequence learning model architecture, which is capable of using longer sequences of user behavior and processing much richer information about each piece of content. The GEM and sequence learning improvements together drove a 3.5% lift in ad clicks on Facebook and a more than 1% gain in conversions on Instagram in Q4. This new sequence learning architecture is significantly more efficient than our prior architectures, which should enable us to further scale up the data, complexity and compute we use in our future ranking models to deliver performance gains.

As we scale up our foundational ads models like GEM, we are also developing more advanced models to use downstream of them at run time for ads inference. In Q4, we launched a new run time model across Instagram Feed stories and reels, resulting in a 3% increase in conversion rates in Q4.

We continue to progress on our model unification efforts under Lattice as well. After seeing strong success with the consolidation of Facebook feed and video models in the first half of 2025, in Q4 we consolidated models for Facebook Stories and other services into the overall Facebook model. This, along with a series of back-end improvements, drove a 12% increase in ads quality. And in 2026, we expect to consolidate more models than we had in the prior two years as we continue to evolve our systems towards running a smaller number of highly capable models…

… To that end, in Q4, we extended our Andromeda ads retrieval engine, so it can now run on NVIDIA, AMD and MTIA. This, along with model innovations, enabled us to nearly triple Andromeda’s compute efficiency…

…We don’t typically use our larger model architectures like GEM for inference because their size and complexity would make it too cost prohibitive. So the way that we drive performance from those models is by using them to transfer knowledge to smaller lightweight models used at run time. But I would say that we think that there is room for our larger models to benefit from having more compute. And I think as we scale up the compute available to those models, and the foundational models in different areas that power the different stages of ads ranking and recommendation, we expect that we will see gains coming from that…

…In 2026, we’re expecting to meaningfully scale up GEM training to an even larger cluster, increasing the complexity of the model, expanding the data that we trained it on, and leveraging the new sequence learning architecture that we had begun deploying in Q4. And we’re also going to further improve how we transfer the learnings from our GEM foundation models to the runtime models that we’re using…

…This is the first time we have found a recommendation model architecture that can scale with similar efficiency as LLMs. And we’re hoping that this will unlock the ability for us to significantly scale up the size of our ranking models while preserving an attractive ROI.

Meta’s video generation tools have hit a combined revenue run rate of $10 billion in 2025 Q4, and the sequential growth was nearly 3x higher than the growth of Meta’s overall ads revenue; Meta’s latest incremental attribution feature was rolled out in 2025 Q4 and it drove a 24% increase in incremental conversions; the incremental attribution feature is already at a multibillion-dollar annual run rate just 7 months after launch

The combined revenue run rate of video generation tools hit $10 billion in Q4, with quarter-over-quarter growth outpacing the increase in overall ads revenue by nearly 3x. We are also seeing very good results from our incremental attribution feature, which optimizes for incremental conversions in real time. Our latest model rollout in Q4 is driving a 24% increase in incremental conversions versus our standard attribution model, and this product has already achieved a multibillion-dollar annual run rate just 7 months since launching.

Meta’s click-to-message revenue grew more than 50% year-on-year in the US in 2025 Q4; paid messaging within Whatsapp crossed a $2 billion annual run rate in 2025 Q4; management is seeing good early traction with business AI agents in Mexico and the Philippines, with over 1 million weekly conversations between people and business AIs taking place

Click to message ads revenue growth accelerated in Q4 with the U.S. up more than 50% year-over-year, driven by strong adoption of our website to message ads which direct people to a business’s website for more information before choosing to launch a chat. Paid messaging within WhatsApp continues to scale as well, crossing a $2 billion annual run rate in Q4…

…We’re seeing good early traction with our business AIs in Mexico and the Philippines, with over 1 million weekly conversations between people and business AIs now happening on our messaging platforms. This year, we will expand availability of our business AIs to more markets, while also extending their capabilities so they not only answer questions on topics like product availability, but can help people get things done right within WhatsApp.

Meta recently acquired Manus; Manus already has a significant number of businesses paying a subscription fee; management thinks the integration of Manus into Meta’s advertising and business products can have really powerful effects

I don’t think either of us mentioned the Manus acquisition in the upfront comments. I mean that is going to — is a good example of — you have a significant number of businesses that already pay a subscription to basically use their tool to accelerate their business results, and integrating that kind of thing into our ads and business managers, so that way we can just offer more integrated solutions for the many, many millions of businesses that use and rely on our platforms, is going to be really powerful, both for accelerating their results using the existing products that we have and I think adding new lines as well.

Meta’s management continues to see the company as being capacity-constrained when it comes to AI compute; management expects the capacity constraint to last for most of 2026, but there are efforts within the company to mitigate the impacts of the constraint

[Question] You’ve talked about being capacity constrained internally and not having enough compute to sort of achieve the goals you have on the platform on a product standpoint. I want to know if we can get any update on currently how you think about your own internal needs for compute against that road map?

[Answer] We do continue to be capacity constrained. Our teams have done a great job ramping up our infrastructure through the course of 2025. But demands for compute resources across the company have increased even faster than our supply. So we expect over the course of 2026 to have significantly more capacity this year as we add cloud. But we’ll likely still be constrained through much of 2026 until additional capacity from our own facilities comes online later in the year. With that said, I think we have done a good job internally mitigating the impact of compute constraints on our business. I expect that will continue to be the case in 2026. We’re continuing to focus on increasing our infrastructure efficiency in several ways, including by optimizing workloads, improving infrastructure utilization, diversifying our chip supply and just investing in efficiency improvements as part of our core technology development efforts in areas like content and ads ranking.

Meta’s management continues to think that it is critical for Meta to build its own frontier AI models

I think the question was around how important is it for us to have a general model. The way that I think about Meta is we’re like a deep technology company. Some people think about us as we build these apps and experiences, but the thing that allows us to build all these things is that we build and control the underlying technology that allows us to integrate and design the experiences that we want and not just be constrained to what others in the ecosystem are building or allow us to build. So I think that this is a really fundamental thing where my guess is that Frontier AI for many reasons, some competitive, some safety oriented are not going to always be available through an API to everyone. So I think like it’s very important, I think, to be able to have the capability to build the experiences that you want if you want to be one of the major companies in the world that helps to shape the future of these products. So that I think is — it’s going to be, I think, important from a business perspective.

Microsoft (NASDAQ: MSFT)

Microsoft’s management thinks AI is just starting to diffuse broadly into society, which would then lead to substantial growth in the company’s total addressable market (TAM)

We are in the beginning phases of AI diffusion and its broad GDP impact. Our TAM will grow substantially across every layer of the tech stack as this diffusion accelerates and spreads. In fact, even in this early innings, we have built an AI business that is larger than some of our biggest franchises that took decades to build.

When building Microsoft’s AI infrastructure, management is aware of the heterogeneous nature of different workloads, and is optimising for tokens per watt per dollar to decrease total cost of ownership (TCO); Microsoft has been able to increase throughput by 50% in OpenAI inference, which is one of Microsoft’s highest volume workloads; Microsoft recently connected its Atlanta and Wisconsin data center sites through an AI WAN (wide area network) to build a first-of-its-kind AI super factory

When it comes to our cloud and token factory, the key to long-term competitiveness is shaping our infrastructure to support new high-scale workloads. We are building this infrastructure out for the heterogeneous and distributed nature of these workloads, ensuring the right fit with the geographic and segment specific needs for all customers, including the long tail. The key metric we’re optimizing for is tokens per watt per dollar, which comes down to increasing utilization and decreasing TCO using silicon systems and software. A good example of this is the 50% increase in throughput we were able to achieve in one of our highest volume workloads, OpenAI inferencing, powering our Copilots. And another example was the unlocking of new capabilities and efficiencies for our Fairwater data centers. In this instance, we connected both the Atlanta and Wisconsin sites through an AI WAN to build a first of its kind AI super factory. Fairwater’s 2-storey design and liquid cooling allow us to run higher GPU densities and thereby improve both performance and latencies for high-scale training.

Microsoft’s AI infrastructure utilises chips from NVIDIA, AMD, and itself (Maia); management recently introduced Maia 200, which has 30% better TCO compared to other leading AI chips; management will be using Maia 200 for inferencing and synthetic data generation for its AI research team, and for production inference workloads; Microsoft has been building its own chips for a long time; Microsoft’s own AI models will all be optimised for Maia 200

At the silicon layer, we have NVIDIA and AMD and our own Maia chips, delivering the best all up fleet performance, cost and supply across multiple generations of hardware. Earlier this week, we brought online our Maia 200 accelerator. Maia 200 delivers 10-plus petaFLOPS at FP4 precision with over 30% improved TCO compared to the latest generation hardware in our fleet. We will be scaling this starting with inferencing and synthetic data gen for our Superintelligence Team as well as doing inferencing for Copilot and Foundry…

…We’ve been at this in a variety of different forms for a long, long time in terms of building our own silicon…

…We’re obviously round-tripping and working very closely with our own Superintelligence Team with all of our models, as you can imagine, whatever we build will be all optimized for Maia.

AI workloads require both AI accelerators (i.e. GPUs) and CPUs; Microsoft’s own Cobalt 200 CPU delivers 50% higher performance compared to the previous version

And given AI workloads are not just about AI accelerators, but also consume large amounts of compute, we are pleased with the progress we are making on the CPU side as well. Cobalt 200 is another big leap forward, delivering over 50% higher performance compared to our first custom build processor for cloud-native workloads.

Microsoft’s management sees AI agents as the new app platform that comes with the platform shift to AI; management thinks customers will need a model catalog, tuning services, harness for orchestration, and more, to deploy AI agents; more than 80% of the Fortune 500 have built active agents with Copilot Studio and/or Agent Builder; management thinks the proliferation of AI agents will create a new, significant growth opportunity for Microsoft; to meet the new growth opportunity, management has introduced Agent 365 for organisations to extend their existing governance, identity, security and management to agents; many of Microsoft’s technology partners are already integrating Agent 365; Agent 365 is the first product of its kind that offers an agent control plane across clouds

Like in every platform shift, all software is being rewritten. A new app platform is being born. You can think of agents as the new apps and to build, deploy and manage agents, customers will need a model catalog, tuning services, harness for orchestration, services for context engineering, AI safety, management, observability and security. It starts with having broad model choice…

…We are also addressing agent building by knowledge workers with Copilot Studio and Agent Builder. Over 80% of the Fortune 500 have active agents built using these low-code/no-code tools.

As agents proliferate, every customer will need new ways to deploy, manage and protect them. We believe this creates a major new category and significant growth opportunity for us. This quarter, we introduced Agent 365, which makes it easy for organizations to extend their existing governance, identity, security and management to agents. That means the same controls they already use across Microsoft 365 and Azure, now extend to agents they build and deploy on our cloud or any other cloud. And partners like Adobe, Databricks, Genspark, Glean, NVIDIA, SAP, ServiceNow and Workday are already integrating Agent 365. We are the first provider to offer this type of agent control plane across clouds. 

Microsoft’s management sees the company’s customers wanting to use multiple AI models; management thinks Microsoft offers the broadest selection of models among the cloud hyperscalers; Microsoft already has more than 1,500 customers using Anthropic and OpenAI’s models on Foundry; management is seeing more customers choosing geographic-specific AI models

Our customers expect to use multiple models as part of any workload that they can fine tune and optimize based on cost, latency and performance requirements. And we offer the broadest selection of models of any hyperscaler. This quarter, we added support for GPT-5.2 as well as Claude 4.5. Already over 1,500 customers have used both Anthropic and OpenAI models on Foundry. We are seeing increasing demand for region-specific models, including Mistral and Cohere as more customers look for sovereign AI choices, and we continue to invest in our first-party models, which are optimized to address the highest value customer scenarios such as productivity, coding and security. As part of Foundry, we also give customers the ability to customize and fine-tune models. 

Microsoft’s management thinks one of the most important considerations for companies when working with AI is their need to capture the tacit knowledge they possess inside of model weights as their core IP; Fabric’s annual revenue run rate was over $2 billion in 2025 Q4 (FY2026 Q2), and quarterly revenue was up 60% year-on-year; the number of customers spending more than $1 million per quarter on Foundry was up nearly 80% year-on-year in 2025 Q4 (FY2026 Q2); more than 250 customers are on track to process over 1 trillion tokens on Foundry this year; Foundry is a great on-ramp for Microsoft’s other cloud computing services, as most of Foundry’s customers are using additional Azure solutions

Increasingly, customers want to be able to capture the tacit knowledge they possess inside of model weights as their core IP. This is probably the most important sovereign consideration for firms as AI diffuses more broadly across our GDP and every firm needs to protect their enterprise value. For agents to be effective, they need to be grounded in enterprise data and knowledge, that means connecting their agents to systems of record and operational data, analytical data as well as semi-structured and unstructured productivity and communications data. And this is what we are doing with our unified IQ layer, spanning Fabric, Foundry and data powering Microsoft 365. In the world of context engineering, Foundry Knowledge and Fabric are gaining momentum. Foundry Knowledge delivers better context with automated source routing and advanced agentic retrieval while respecting user permissions. And Fabric brings together end-to-end operational real-time and analytical data.

2 years since it became broadly available, Fabric’s annual revenue run rate is now over $2 billion with over 31,000 customers, and it continues to be the fastest-growing analytics platform on the market with revenue up 60% year-over-year. And the number of customers spending $1 million plus per quarter on Foundry grew nearly 80%, driven by strong growth in every industry. And over 250 customers are on track to process over 1 trillion tokens on Foundry this year…

…Foundry remains a powerful on-ramp for the entire cloud. The vast majority of Foundry customers use additional Azure solutions like developer services, app services, databases as they scale.

Microsoft’s own consumer Copilot agent experiences span a wide variety of domains; daily users of the Copilot app are up 3x year-on-year in 2025 Q4 (FY2026 Q2); users are able to make purchases directly in the Copilot app because of the Copilot Checkout feature; Microsoft 365 Copilot, which is Microsoft’s agentic experience for enterprises, has unmatched accuracy and the quality of its responses had their highest sequential increase to date in 2025 Q4 (FY2026 Q2); Microsoft 365 Copilot’s average number of conversations per user doubled year-on-year in 2025 Q4 (FY2026 Q2); Microsoft 365 Copilot’s daily active users were up 10x year-on-year in 2025 Q4 (FY2026 Q2); management is seeing strong momentum with Researcher Agent and Agent Mode; Microsoft 365 Copilot seat additions were up over 160% year-on-year in 2025 Q4 (FY2026 Q2); there are now 15 million paid Microsoft 365 Copilot seats; the number of customers with >35,000 seats in Microsoft 365 Copilot tripled year-on-year in 2025 Q4 (FY2026 Q2); management is seeing strong growth across GitHub Copilot, with Copilot Pro Plus subs for individual developers up 77% sequentially in 2025 Q4 (FY2026 Q2), and paid Copilot subscribers up 75% year-on-year; Microsoft has the GitHub Copilot SDK and recently added a dozen new security Copilot agents; Dragon Copilot is a leader in its category and is serving 100,000 medical providers; Dragon Copilot documented 21 million patient encounters in 2025 Q4 (FY2026 Q2), up 3x year-on-year

In consumer, for example, Copilot experiences span chat, news, feed, search, creation, browsing, shopping and integrations into the operating system, and it’s gaining momentum. Daily users of our Copilot app increased nearly 3x year-over-year. And with Copilot checkout, we have partnered with PayPal, Shopify and Stripe, so customers can make purchases directly within the app.

With Microsoft 365 Copilot, we are focused on organization-wide productivity. Work IQ takes the data underneath Microsoft 365 and creates the most valuable stateful agent for every organization. It delivers powerful reasoning capabilities over people, their roles, their artifacts, their communications and their history and memory all within an organization security boundary. Microsoft 365 Copilot’s accuracy and latency powered by Work IQ is unmatched, delivering faster and more accurate work grounded results than competition, and we have seen our biggest quarter-over-quarter improvement in response quality to date. This has driven record usage intensity with average number of conversations per user doubling year-over-year. Microsoft 365 Copilot also is becoming true daily habit with daily active users increasing 10x year-over-year.

We’re also seeing strong momentum with Researcher Agent, which supports both OpenAI and Claude, as well as Agent Mode in Excel, PowerPoint and Word…

…It was a record quarter for Microsoft 365 Copilot seat adds, up over 160% year-over-year. We saw accelerating seat growth quarter-over-quarter and now have 15 million paid Microsoft 365 Copilot seats and multiples more enterprise chat users…

…The number of customers with over 35,000 seats tripled year-over-year. Fiserv, ING, NASA, University of Kentucky, University of Manchester, U.S. Department of Interior and Westpac, all purchased over 35,000 seats. Publicis alone purchased over 95,000 seats for nearly all its employees…

…Copilot Pro Plus subs for individual devs increased 77% quarter-over-quarter, and all up now, we have 4.7 million paid Copilot subscribers, up 75% year-over-year…

…GitHub Agent HQ is the organizing layer for all coding agents like Anthropic, OpenAI, Google, Cognition and xAI in the context of customers GitHub repos. With Copilot CLI and VS Code, we offer developers the full spectrum of form factors and models they need for AI-first coding workflows…

…And we’re going beyond that with GitHub Copilot SDK. Developers can now embed the same run time behind Copilot CLI, multi-model, multistep planning tools, MCP integration, Ops streaming directly into their applications. In security, we added a dozen new and updated security Copilot agents across Defender, Entra, Intune, and Purview…

…To make it easier for security teams to onboard, we are rolling out Security Copilot to all our E5 customers, and our security solutions are also becoming essential to manage organizations’ AI deployments. 24 billion Copilot interactions were audited by Purview this quarter, up 9x year-over-year…

…In health care, Dragon Copilot is the leader in its category, helping over 100,000 medical providers automate their workflows… All up, we helped document 21 million patient encounters this quarter, up 3x year-over-year.

1/3 of Microsoft’s cloud and AI-related capex in 2025 Q4 (FY2026 Q2) are for long-lived assets that will support monetisation over the next 15 years and more, while the other 2/3 are for CPUs and GPUs, driven by strong AI- and Azure-related demand; Azure is still capacity-constrained, and management wants to balance Azure demand for compute with 1st party demand for compute; the ROI for Microsoft’s capex sometimes shows up in increased revenue for Microsoft’s software business (i.e. non-Azure business) too; some of Microsoft’s AI compute is also allocated for R&D; the useful lives of Microsoft’s GPUs continue to be quite matched with the duration of their contracts; Microsoft becomes more efficient with delivery as its GPUs age, so margins actually improve over time

Capital expenditures were $37.5 billion, and this quarter, roughly 2/3 of our CapEx was on short-lived assets, primarily GPUs and CPUs. Our customer demand continues to exceed our supply. Therefore, we must balance the need to have our incoming supply better meet growing Azure demand with expanding first-party AI usage across services like M365 Copilot and GitHub Copilot, increasing allocations to R&D teams to accelerate product innovation and continued replacement of end-of-life server and networking equipment. The remaining spend was for long-lived assets that will support monetization for the next 15 years and beyond. This quarter, total finance leases were $6.7 billion, and were primarily for large data center sites. And cash paid for PP&E was $29.9 billion…

…As we spend the capital and put GPUs specifically, it applies to CPUs, the GPUs more specifically, we’re really making long-term decisions. And the first thing we’re doing is solving for the increased usage in sales and the accelerating pace of M365 Copilot as well as GitHub Copilot, our first-party apps. Then we make sure we’re investing in the long-term nature of R&D and product innovation. And much of the acceleration that I think you’ve seen from us and products over the past a bit is coming because we are allocating GPUs and capacity to many of the talented AI people we’ve been hiring over the past years. Then, where you end up is that you end up with the remainder going towards serving the Azure capacity that continues to grow in terms of demand…

…As an investor, I think when you think about our capital and you think about the GM profile of our portfolio, you should obviously think about Azure. But you should think about M365 Copilot and you should think about GitHub Copilot, you should think about Dragon Copilot, Security Copilot. All of those have a GM profile and lifetime value. I mean if you think about it, acquiring an Azure customer is super important to us, but so is acquiring an M365 or a GitHub or a Dragon Copilot, which are all by the way incremental businesses and TAMs for us. And so we don’t want to maximize just 1 business of ours, we want to be able to allocate capacity while we’re sort of supply constrained in a way that allows us to essentially build the best LTV portfolio…

…You got to think about compute is also R&D…

…When you think about average duration, I think what you’re getting to is — and we need to remember — is that average duration is a combination of a broad set of contract arrangements that we have. A lot of them around things like M365 or our BizApps portfolio, are shorter dated, right, 3-year contracts. And so they have, quite frankly, a short duration. The majority then that’s remaining are Azure contracts, which are longer duration. And you saw that this quarter when we saw the extension of that duration from around 2 years to 2.5 years. And the way to think about that is the majority of the capital that we’re spending today, and a lot of the GPUs that we’re buying are already contracted for most of their useful life…

…To state this in case it’s not obvious, is that as you go through the useful life, actually, you get more and more and more efficient at delivery. So where you’ve sold the entirety of its life, the margins actually improved with time. And so I think that may be a good reminder to people as we see that, obviously, in the CPU fleet all the time.

Commercial RPO (remaining performance obligation) is now $625 billion, up 110% from a year ago (was $392 billion in 2025 Q3); the weighted average duration of the RPO is 2.5 years; the RPO has significant customer-concentration risk with OpenAI, as OpenAI accounts for 45% of the RPO; the non-OpenAI part of the RPO was up 28% year-on-year in 2025 Q4 (FY2026 Q2); the average duration of RPOs for Azure are much longer than the average duration for M365 contracts

Commercial remaining performance obligation, which continues to be reported net of reserves increased to $625 billion, and was up 110% year-over-year with a weighted average duration of approximately 2.5 years. Roughly 25% will be recognized in revenue in the next 12 months, up 39% year-over-year. The remaining portion recognized beyond the next 12 months increased 156%. Approximately 45% of our commercial RPO balance is from OpenAI. The significant remaining balance grew 28% and reflects ongoing broad customer demand across the portfolio…

…When you think about average duration, I think what you’re getting to is — and we need to remember — is that average duration is a combination of a broad set of contract arrangements that we have. A lot of them around things like M365 or our BizApps portfolio, are shorter dated, right, 3-year contracts. And so they have, quite frankly, a short duration. The majority then that’s remaining are Azure contracts, which are longer duration. And you saw that this quarter when we saw the extension of that duration from around 2 years to 2.5 years. And the way to think about that is the majority of the capital that we’re spending today, and a lot of the GPUs that we’re buying are already contracted for most of their useful life…

Azure grew revenue by 39% in 2025 Q4 (FY2026 Q2) (was 40% in 2025 Q3); Azure’s revenue growth was slightly better than expected; Azure was capacity-constrained in 2025 Q4 (FY2026 Q2) and management wants to balance Azure demand for compute with 1st party demand for compute

In Azure and Other Cloud services, revenue grew 39% and 38% in constant currency, slightly ahead of expectations with ongoing efficiency gains across our fungible fleet, enabling us to reallocate some capacity to Azure that was monetized in the quarter. As mentioned earlier, we continue to see strong demand across workloads, customer segments and geographic regions, and demand continues to exceed available supply…

…Our customer demand continues to exceed our supply. Therefore, we must balance the need to have our incoming supply better meet growing Azure demand with expanding first-party AI usage across services like M365 Copilot and GitHub Copilot, increasing allocations to R&D teams to accelerate product innovation and continued replacement of end-of-life server and networking equipment.

Netflix (NASDAQ: NFLX)

Netflix started using new AI tools in 2025 to help advertisers create custom advertising based on Netflix’s intellectual property; Netflix started using AI models in 2025 to speed up advertising campaign planning; management continues to invest in sales and go-to-market for the advertising business

In 2025, we began testing new AI tools to help advertisers create custom ads based on Netflix’s intellectual property, and we plan to build on this progress in 2026. We also introduced automated workflows for ad concepts and used advanced AI models to streamline campaign planning, significantly speeding up these processes. 

Netflix is using AI to improve subtitle localisation; Netflix is using AI to help with merchandising

In content production and promotion, we’re using AI to improve subtitle localization, making it easier for our titles to reach more viewers around the world. Additionally, we’re implementing AI-driven tools to help with merchandising, which improves our ability to connect members with the most relevant titles for them to watch.

PayPal (NASDAQ: PYPL)

PayPal’s management’s vision with agentic commerce is to create a universally trusted catalog for AI agents to access, discover, and transact with; PayPal recently connected with early-adopter merchants for agentic commerce; PayPal recently went live with agentic purchasing through Perplexity and Microsoft Copilot; PayPal will acquire Cymbio for its Store Sync technology; management does not expect agentic commerce to move the needle for PayPal in 2026

Let me quickly share some of our latest developments in agentic commerce. Our vision is to create a universally trusted catalog that AI agents can access, discover and transact with safely and securely. Through our Store Sync offering, we are already connecting early adopters like Abercrombie & Fitch, Fabletics, PacSun and Wayfair with agentic chat platforms to allow consumers to discover, evaluate and purchase items within the chat. We went live with agentic purchasing through Perplexity ahead of Thanksgiving, and we are now also live on Microsoft Copilot…

Store Sync is enabled through a partnership with Cymbio, which we have agreed to acquire to bring this technology in-house. Agentic won’t materially impact 2026 growth. But as AI-powered shopping scales, our aim is to become the default payment option. This is only the beginning, and we are collaborating closely with the major AI platforms as we build agentic commerce capabilities together. 

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s management expects capex for 2026 to be US$52 billion to US$56 billion, up 27%-37% from 2025 (2025’s capex was US$41 billion); most of the capex for 2026 will be for advanced process technologies; TSMC’s capital expenditure is always in anticipation of growth in future years; management now thinks a long-term gross margin of 56% and higher is achievable (previously was 53% and higher); TSMC’s capex in the last 3 years was ~US$100 billion, and the next 3 years is expected to be much higher; management thinks TSMC can earn a high-20% ROE through the cycle; management expects TSMC to shoulder greater capex for its customers; management has raised the revenue growth forecast for AI accelerators for 2024-2029 to mid-to-high-50% CAGR (previous guidance was mid-40%); management now expects 25% revenue CAGR in USD-terms for 2024-2029 (previously was 20% CAGR), driven by all 4 technology platforms; the 25% revenue CAGR projection is conservative

At TSMC, a higher level of capital expenditures is always correlated to the high-growth opportunities in the following years. With our strong technology leadership and differentiation, we are well positioned to capture the multiyear structural demand from the industry megatrends of 5G, AI and HPC. In 2025, we spent USD 40.9 billion as compared to USD 29.8 billion in 2024 as we began to raise our level of capital spending in anticipation of the growth that will follow in the future years. In 2026, we expect our capital budget to be between USD 52 billion and USD 56 billion as we continue to invest to support our customers’ growth. About 70% to 80% of the 2026 capital budget will be allocated to advanced process technologies. About 10% will be spent for specialty technologies and about 10% to 20% will be spent for advanced packaging, testing, mask making and others…

…As a result, in the last 3 years, our CapEx dollars amount totaled USD 101 billion, but is expected to be significantly higher in the next 3 years…
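The growth implied by these capex figures can be checked with quick arithmetic (a sketch; holding the 2026 budget flat for three years is my illustrative assumption, not TSMC guidance):

```python
# TSMC capex figures from the call (USD billions)
capex_2024, capex_2025 = 29.8, 40.9
budget_2026_low, budget_2026_high = 52.0, 56.0

# Implied year-over-year growth of the 2026 budget vs 2025 spend
growth_low = budget_2026_low / capex_2025 - 1    # ~27%
growth_high = budget_2026_high / capex_2025 - 1  # ~37%

# The last 3 years totaled ~USD 101B; even holding the 2026 budget flat
# for 3 years (an assumption, not guidance) comfortably exceeds that
next_3yr_low = 3 * budget_2026_low   # 156.0
next_3yr_high = 3 * budget_2026_high # 168.0

print(round(growth_low * 100), round(growth_high * 100))  # 27 37
```

This is where the 27%-37% growth range in the summary comes from, and it shows why management can say the next three years of capex will be "significantly higher" than the last three.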

…We believe a long-term gross margin of 56% and higher through the cycle is achievable, and we can earn an ROE of high 20s percent through the cycle. By earning a sustainable and healthy return, even as we shoulder a greater burden of CapEx investment for our customers, we can continue to invest in technology and capacity to support their growth while delivering long-term profitable growth to our shareholders.

…We expect 2026 to be another strong growth year for TSMC and forecast our full year revenue to increase by close to 30% in U.S. dollar terms…

…We raised our forecast for the revenue growth from AI accelerators to approach a mid- to high 50% CAGR for the 5-year period from 2024 to 2029. Underpinned by our technology differentiation and broad customer base, we now expect our overall long-term revenue growth to approach 25% in U.S. dollar terms for the 5-year period starting from 2024. While we expect AI accelerators to be the largest contributor in terms of our incremental revenue growth, our overall revenue growth will be fueled by all 4 of our growth platforms, which are smartphone, HPC, IoT and automotive, in the next several years…

…I think those fundamentals position TSMC for very good future growth. Let me say that the 25% CAGR is as we projected, and we are usually conservative. You know that.
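To put the two CAGR figures in perspective, compounding them over the five-year window gives the implied revenue multiples (a sketch; 55% is my illustrative midpoint for "mid- to high 50%", not a stated figure):

```python
# CAGR figures from TSMC's guidance
overall_cagr = 0.25  # overall revenue, 2024-2029
ai_cagr = 0.55       # AI accelerators; 55% is an assumed midpoint of "mid- to high 50%"
years = 5            # 2024 through 2029

# Implied end-of-period revenue as a multiple of the 2024 base
overall_multiple = (1 + overall_cagr) ** years
ai_multiple = (1 + ai_cagr) ** years

print(round(overall_multiple, 2))  # ~3.05
print(round(ai_multiple, 2))       # ~8.95
```

In other words, the guidance implies overall revenue roughly tripling from the 2024 base by 2029, with AI accelerator revenue growing close to nine-fold under the midpoint assumption.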

TSMC’s management thinks Foundry 2.0 was up 16% in 2025, and is expected to grow 14% in 2026, supported by robust AI demand

Concluding 2025, the Foundry 2.0 industry, which we define as all logic wafer manufacturing, packaging, testing, mask making and others, increased 16% year-over-year…

…We forecast the Foundry 2.0 industry to grow 14% year-over-year in 2026, supported by robust AI-related demand.

TSMC’s management thinks recent developments in the AI market are very positive; AI accelerator revenue accounted for high-teens of total revenue for TSMC in 2025; management sees increasing AI adoption in consumers, enterprises, and sovereigns; management has received very strong demand signals from TSMC’s customers and the customers’ customers; management’s conviction in the AI megatrend remains strong; management is disciplined when planning for capacity; TSMC’s lead-time has now increased to 2-3 years; management is very nervous about AI demand, but they have talked a lot with TSMC’s customers and customers’ customers in recent months to understand AI demand, and management is satisfied with the evidence they show of AI helping their businesses; a hyperscaler active in social media (most probably Meta Platforms) achieved very positive ROI from AI; TSMC is using AI internally to improve productivity and even just 1%-2% of productivity improvement would have paid off for TSMC’s AI investments; management sees AI growing into people’s daily life and they think there’s a real long-term trend

Recent development in the AI market continue to be very positive. Revenue from AI accelerator accounted for high teens percent of our total revenue in 2025.

Looking ahead, we observe increasing AI model adoption across the consumer, enterprise and sovereign AI segments. This is driving the need for more and more computation, which supports the robust demand for leading-edge silicon. Our customers continue to provide us with a positive outlook. In addition, our customers’ customers, who are mainly the cloud service providers, are also providing strong signals and reaching out directly to request capacity to support their business. Thus, our conviction in the multiyear AI megatrend remains strong, and we believe the demand for semiconductors will continue to be very fundamental.

As a foundry, our first responsibility is to fully support our customers with the most advanced technology and necessary capacity to unleash their innovations. To address the structural increase in the long-term market demand profile, TSMC works closely with our customers and our customers’ customers to plan our capacity. This process is continuous and ongoing. In addition, as process technology complexity increases, the engagement lead time with customers is now at least 2 to 3 years in advance. Internally, as we have said before, TSMC employs a disciplined capacity planning system to assess the market demand from both top-down and bottom-up approaches. We focus on the overall addressable megatrend to determine the appropriate capacity to build. Based on our assessment, we are preparing to increase our capacity and stepping up our CapEx investment to support our customers’ future growth…

…Whether the AI demand is real or not: I’m also very nervous about it. You bet, because we have to invest about USD 52 billion to USD 56 billion for the CapEx, right? If we didn’t do it carefully, that would be a big disaster for TSMC for sure.

So of course, I spent a lot of time in the last 3, 4 months talking to my customers and my customers’ customers. I want to make sure that my customers’ demand is real. So I talked to those cloud service providers, all of them, and I’m quite satisfied with the answer. Actually, they showed me the evidence that AI really helps their business. They grow their business successfully and are healthy in their financial return. I also double-checked their financial status. They are very rich. That sounds much better than TSMC. So no doubt, I also asked specifically what the application is. For one of the hyperscalers, they told me that it helped their social media software, and so their customers continue to increase. So I believe that.

And with our own experience in AI applications, we also use it in our own fabs to improve productivity. As I mentioned one time, even a 1% or 2% productivity improvement means the investment is essentially free to TSMC…

…I believe, in my point of view, AI is real. Not only real, it’s starting to grow into our daily life. And we believe it is what we call the AI megatrend; we certainly believe that. Another question is, can the semiconductor industry be good for 3, 4, 5 years in a row? I’ll tell you the truth, I don’t know. But when I look at AI, it looks like it’s going to be endless, I mean, for many years to come.

All of TSMC’s AI customers in the US are asking for a lot of support from TSMC’s Arizona fab; TSMC’s capacity in the US is very tight, probably going into 2027, and management is working hard to narrow the gap

All my customers, my AI customers in the U.S., they ask for a lot of support from the U.S. fab. So because of that, we have to speed up our fab expansion in Arizona…

…The capacity is very tight. We have worked very hard to narrow the gap so far. Probably this year and next year, we have to work extremely hard to narrow the gap, okay? We just bought a second piece of land in Arizona. That gives you a hint of what we plan to do, because we need it. We are going to expand many fabs over there, and this giga-fab cluster can help us to improve productivity, to lower the cost and to serve our customers in the U.S. better.

TSMC’s management has seen the hyperscalers solve power constraints when building AI data centers through long-term forward planning; TSMC’s customers are telling TSMC that chips are their bottleneck when building AI data centres

[Question] We see that the AI semiconductor growth has seen very strong growth. And I believe all of your customers and customers’ customers very desperate to add more capacity support from TSMC. But I’m just wondering, how does TSMC evaluate the potential power electricity supply for data center?

[Answer] Talking about building a lot of AI data centers all over the world, I’ll use one of my customers’ customers to answer, because I asked them the same question. They told me that they planned this one 5, 6 years ago already. So as I said, those cloud service providers are smart, very smart. If I had known that, I would — anyway. So they said that they worked on the power supply 5, 6 years ago. So today, their message to me is that silicon from TSMC is the bottleneck, and they ask me not to pay attention to all the others because they have to solve the silicon bottleneck first.

TSMC’s management thinks it will be 2028 or 2029 when the company can match demand and supply for AI chips

[Question] My question is really on AI. I mean, TSMC has been supply constrained for your AI customers, I think, since 2024, and it sounds like 2026 is another year where we’re going to see challenges. Do you think the CapEx you’ve laid out for this year, USD 52 billion to USD 56 billion, could mean that we start to see supply and demand more in balance in 2027?

[Answer] If you build a new fab, it takes 2 to 3 years to build a new fab. So even as we start to spend the USD 52 billion to USD 56 billion, the contribution to this year is almost none, and to 2027, a little bit. So we actually are looking at 2028, 2029 supply.

Tesla (NASDAQ: TSLA)

Tesla’s management thinks the growth of AI and robotics will usher in an era of universal high income

With the advent or with the continued growth of AI and robotics, I think we actually are headed to a future of universal high income, not universal basic income, but universal high income. I mean there’s going to be a lot of change along the way, but that is what I see as the most likely outcome.

Tesla will be investing heavily in capex in 2026 as it builds out vehicle autonomy and increases production of Optimus at scale, along with investing in its own AI chips; Tesla’s capex for 2026 is expected to be higher than $20 billion (was slightly below $9 billion in 2025); management’s planned capex amount of $20 billion or more does not include the 1st-party semiconductor fab that they are thinking of developing; management thinks Tesla will be in an investment cycle for some time; Tesla has sufficient cash resources to fund the capex, and it also has recurring-revenue services such as robotaxi that will help grease the wheels with banks when it comes to financing

This year is going to be a huge investment year from a CapEx perspective. And at the moment, we are expecting that CapEx would be in excess of $20 billion. We’ll be paying for 6 factories namely, the refinery, LFP factories, Cybercab, Semi, a new Megafactory, the Optimus factory. On top of it, we’ll also be spending money for building our AI compute infrastructure, and we’ll continue investing in our existing factories to build more capacity. And then also the related infrastructure along with it. And we’ll also further expand our fleet of Robotaxi and Optimus…

…Just keep in mind that none of these numbers which I shared, the $20 billion, factors in anything to do with the solar fab or the semiconductor chip fab…

…I think we’re getting into this investment phase because we have big aspirations. And when you look at it, some of these aspirations are — I call them as infrastructure play, especially if you have to do a chip fab, and we have to do a solar cell manufacturing fab, those are infrastructure plays and that funding takes a little bit longer. And you would be in an investment cycle for a little bit longer…

…How are we going to fund it? Initially, obviously, we have over $44 billion of cash and investments on the books. So we’ll use our internal resources, but there are ways where we can fund it, especially when we look at the Robotaxi fleet because any time you have a consistent stream of cash flow, you can go and get money from the banks. And we have had conversations with banks about it. And that is something how we’re going to do it.

Tesla will soon be stopping the production of the Model S and X and shift their production spaces to the production of Optimus, with the long-term goal of 1 million units annually; management will unveil Optimus 3 in a few months; management expects the manufacturing ramp for Optimus to be longer than for regular products because the supply chain for Optimus has to be built entirely from scratch; Optimus 3 will be a general-purpose robot that can learn by observing human actions; management thinks Optimus 3 will have a significant positive impact on US GDP; the Optimus robots are currently not used in Tesla’s factories in a material way, and any usage is for learning purposes for the robot; management expects significant volume of Optimus production to come only towards the end of 2026; management thinks that Optimus’s form factor – it looks like a human – will make it very easy to teach Optimus how to handle human tasks; management believes Tesla’s biggest competitors in the humanoid robot market will be from China; management believes Optimus far exceeds the capability of any robot under development in China; management thinks designing a hand with the required dexterity is the hardest engineering challenge with a robot; other major engineering challenges are a real-world AI model, and scaling production; management thinks Tesla is the only company in the world that can solve all 3 engineering challenges

We expect to wind down S and X production next quarter and basically stop production of Model S and X next quarter. We’ll obviously continue to support the Model S and X programs for as long as people have the vehicles. But we’re going to take the Model S and X production space in our Fremont factory and convert that into an Optimus factory, which will — with the long-term goal of having 1 million units a year of Optimus robots in the current S, X space in Fremont…

…We’ll probably unveil Optimus 3 in a few months. And I think it’s going to be quite surprising to people. It’s an incredibly capable robot…

…There’s really nothing from the existing supply chain that exists in Optimus. Everything is designed from physics first principles. So that means the normal S-curve of the manufacturing ramp will be longer for Optimus than it is for products that have at least some portion of an existing supply chain. When everything is new, the production rate will be proportionate to the least lucky, least competent part of the entire supply chain. And if there are 10,000 things that need to go right, it only takes one to be slow to lag that. So it will be sort of a stretched-out S-curve. But I’m confident that we’ll get to 1 million units a year of Optimus 3 in Fremont…

…Optimus 3 really will be a general-purpose robot that can learn by observing human behavior, so you can like demonstrate a task or literally verbally describe a task or show it a task, even show it a video and it will be able to do that task…

…I think, long term, Optimus will have a very significant impact on the U.S. GDP, like it will actually move the needle on U.S. GDP significantly…

…We have had Optimus do some basic tasks in the factory. But as we iterate our new versions of Optimus, we deprecate the old versions. And so it’s not — I wouldn’t say it’s like — it’s not in usage in our factories in a material way. It’s more so that the robot can learn. We wouldn’t expect to have any kind of significant Optimus production volume until probably the end of this year…

…It looks like a human. People could be easily confused that it’s a human. And this helps our strategy for the AI too because you can learn from how humans do these tasks and it’s very easy to teach the robot in the same way as opposed to previous robots…

…I do think that the — by far the biggest competition for humanoid robots will be from China. China is incredibly good at scaling manufacturing, actually quite good at AI, as you can see from the open source — or not the open source, but the sort of — I guess, some of them are open actually. But basically, the models that China is distributing for free are actually quite good and they keep getting better. So China is very good at AI, very good at manufacturing and will definitely be the toughest competition for Tesla. We — to the best of our knowledge, we don’t see any significant competitors outside of China…

…We think Optimus will be much more capable than any robot that we are aware of under development in China. So we think we’ll be ahead in terms of the real-world intelligence, the electromechanical dexterity, especially the hand design, which is by far the hardest thing in the robot. And in fact, I’d tell you there’s really 3 hard things about humanoid robots. Building an incredible hand that has the same degrees of freedom and dexterity as a human hand is an incredibly difficult engineering challenge. Then there’s the real-world AI and scaling production. Those are the 3 hardest problems by far for humanoid robots. I think we’re — Tesla has — is the only company that actually has all 3 of those components.

Tesla is now able to do its first robotaxi rides in Austin, Texas without a safety monitor; management thinks the amount of fully autonomous rides from Tesla will be increased dramatically every month going forward; management thinks there’s substantial economic opportunity for Tesla in the form of existing Tesla vehicle owners (for those who own vehicles with AI4 hardware versions) adding their vehicles to an autonomous fleet; depending on regulations, management expects Tesla to have fully autonomous vehicles in dozens of cities to half of the US by the end of 2026; revenue and cost per mile metrics for the robotaxi business are still not meaningful at the moment; management thinks autonomous vehicles will significantly change the global market size for automobiles; management is using Tesla’s vast network of charging and service centers that only the company has to prepare for the demand for robotaxi and autonomous vehicles; Tesla now has over 500 vehicles in the robotaxi fleet between the Bay Area and Austin in the US

We’re able to do our first rides with no safety monitor in the car in Austin. These are paid rides. So these are just sort of randomly selected paid rides with no safety monitor. And I think maybe — as of maybe yesterday or so, we actually don’t — we don’t even have a chase car or anything like that. So these are just cars with no people in them and no one is following the car in Austin. So we obviously are being very cautious about this because we want to have no injuries or serious accidents along the way. So I think it makes sense to be very cautious, but you’ll see the amount of autonomy increased dramatically, I think, every month essentially…

…There will also be an opportunity, something we’ve talked about for a long time for existing owners of Teslas to add or subtract their cars to the fleet, kind of like how Airbnb works where you can add or subtract your house to the Airbnb inventory. And I think probably the value of the Tesla — the sort of partial — a few people adding or subtracting the cars to Tesla autonomous fleet is probably a little underweighted by a lot of people because we’ve got millions of cars with AI4 that can do this. So that — it might potentially — I think it will provide an opportunity for a lot of customers to earn more by lending their car to the fleet than their lease cost to Tesla, yes, which is kind of — it’s kind of like you get — in that scenario, you basically get paid to own a Tesla…

…We expect to have fully autonomous vehicles in probably, I don’t know, somewhere between a quarter and half of the United States by the end of the year, pending regulatory approval. A big factor would be if there’s some kind of federal preemption for autonomous vehicles. In the absence of that, you kind of have to go on a city-by-city or state-by-state basis. But nonetheless, even if it is city by city, state by state, we expect to be in, I don’t know, dozens of major cities by the end of the year…

…We’re still in the early phase of our fleet deployment and are still doing a lot of validation testing, the revenue and cost per mile metrics are not meaningful to discuss at the moment…

…[Question] Today, there are approximately 90 million cars sold globally each year. Does Tesla have a view based on its Robotaxi ambition what this number will be in 5 or 10 years?

[Answer] Obviously, autonomy and Cybercab are going to change the global market size and mix quite significantly. I think that’s quite obvious. General transportation is going to be better served by autonomy as it will be safer and cheaper…

…We’re using our vast network of charging and service centers that really only Tesla has in this space to jump start our infrastructure build-out needs to get ahead of Robotaxi and autonomous vehicle demand. And we expect that because of this network, we are the only company capable of scaling at the rate that is needed for the tsunami of autonomy that is coming…

…One other thing people forget that we’ve been deliberate on all this in the sense that we have the supporting infrastructure already being in place, whether it’s service centers, charging. Yes, we’ll have to augment as the fleet grows, depending upon the density of where the demand is and whatnot. But it’s not something like we just stumbled upon it and we’re starting to do. We’ve been at it for years. Yes, not every city is designed the same way. Same thing. Our infrastructure is also not the same in every city. But you have to give us credit that it’s been a journey.

…In terms of Robotaxi vehicles carrying paid customers, I think we’re well over 500 at this point between the Bay Area and Austin.

There are many countries where Tesla is selling vehicles where the latest version of FSD (Full Self Driving) software is not available; FSD now has nearly 1.1 million paid customers globally, and 70% of them paid upfront; in 2026 Q1, Tesla transitioned fully to a subscription model for FSD; a variant of the autonomous software used for robotaxi was recently shipped to customers of Tesla’s consumer vehicles with v14 (version 14) of FSD, and there was a lot of happy feedback from customers

We saw an increase in demand leading to record deliveries in smaller countries like Malaysia, Norway, Poland, Saudi Arabia and Taiwan, alongside continued strength in the rest of APAC and EMEA. We, therefore, ended 2025 with a bigger backlog than in recent years. Note that none of these countries have the latest version of FSD supervised available yet…

…FSD adoption continued to improve in the quarter, reaching nearly 1.1 million paid customers globally. Of these, nearly 70% were upfront purchases. It is important to note that beginning this quarter, we are transitioning fully to a subscription-based model for FSD. Therefore, net additions to this figure will primarily be via subscription model and in the short term will impact automotive margins…

…A variant of the software that’s used for the Robotaxi service was shipped to customers with V14, and customers saw a huge jump in performance, like a lot of happy feedback from customers. So — and since then, we have improved the software significantly as well.

Tesla’s Cybercab vehicles for the robotaxi fleet are designed to accommodate just 2 passengers or fewer because over 90% of vehicle miles travelled are with that number of passengers; the Cybercab model will not have a steering wheel or pedals, so it’s fully autonomous; management expects to start production of Cybercab in April 2026, with a typical S-shape curve for the production ramp; in time, management expects to be producing several times more Cybercabs per year than all of Tesla’s other vehicles combined; management thinks that only 1%-5% of miles driven in the future will be performed by humans; the Cybercab has a different design to traditional passenger vehicles and it is super optimised for minimum cost per mile and a much higher duty cycle; management expects a Cybercab vehicle to be used 50-60 hours a week compared to 10-11 hours a week for a human-driven car; management is designing larger Cybercab-style vehicles for the future

And over 90% of vehicle miles traveled are with 2 or less passengers now, which is why we designed Cybercab that way…

…The Cybercab, which is a dedicated 2-seater, dedicated Robotaxi. It’s a little confusing with the terms Robotaxi and Cybercab, sorry about the confusion. In fact, in some states, we’re not allowed to use the word cab or taxi. So it’s going to get even more strange; it’s going to be like cyber vehicle or something, cyber car. But the Cybercab, which is a specific vehicle model that we’re making, does not have a steering wheel or pedals. So there’s clearly no fallback mechanism here. This car either drives itself or it does not drive. And we expect to start production in April. As always, the production rate is an S-curve. It starts off very slowly and then grows exponentially, then you hit the linear portion, and then ultimately it asymptotes at whatever your target volume is. But we would expect over time to make far more Cybercabs than all of our other vehicles combined, given that 90% of distance traveled (traveled, exactly, since people are no longer driving) is with 1 or 2 people. I think it’s like 80% is just one. So long term, we would make several times more Cybercabs per year than all of our other vehicles combined…

…The vast majority of miles traveled will be autonomous in the future. I would say, probably less than — I’m just guessing, but probably less than 5% of miles driven will be where somebody is actually driving the car themselves in the future, maybe as low as 1%…

…The whole design of Cybercab was to optimize the fully considered cost per mile of autonomous driving. And it’s a different design problem than if you’re trying to design cars for people who will be driving versus being driven. And — so Cybercab is, like I said, super optimized for minimum cost per mile and also for a much higher duty cycle. So we would expect Cybercab to be used probably 50 or 60 hours a week instead of the 10 or 11 hours a week that a driven vehicle is used. So typically, people might drive their car for 1.5 hours a day on average, so it’s like 10 hours per week out of 168. But I think an autonomous vehicle is likely to be used probably 5x as often, which means that you need to design the vehicle for much more wear and tear per unit time and much more resilience…
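The utilization gap management describes is easy to quantify (a sketch; the 55-hour figure is my midpoint of the stated 50-60 hours per week, not a Tesla number):

```python
# Vehicle utilization figures from the call
hours_per_week = 168
human_driven_hours = 1.5 * 7  # ~1.5 hours/day of driving -> 10.5 hours/week
robotaxi_hours = 55           # assumed midpoint of the 50-60 hours/week estimate

# Share of the week each vehicle is actually in use, and the ratio between them
utilization_human = human_driven_hours / hours_per_week  # 0.0625, i.e. ~6%
utilization_robotaxi = robotaxi_hours / hours_per_week
multiple = robotaxi_hours / human_driven_hours

print(round(multiple, 1))  # ~5.2
```

This is the arithmetic behind the "probably 5x as often" claim, and it explains the design consequence management draws: roughly five times the wear and tear per unit of calendar time.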

…We will have larger vehicles in the Cybercab in the future that are designed for full autonomy. And we’ve actually shown pictures of this, and in fact, have shown prototypes. So this is not exactly a secret. In fact, we’ve given people rides in them. So we’re not keeping this — hiding this light under a bushel here.

Tesla’s management thinks getting the design for Tesla’s AI5 chip right is the most important thing for the company at the moment; management is confident that AI5 will be a very good chip; management expects AI6 to follow AI5 in under a year, and for AI6 to be a much better chip than AI5; management’s priority with Tesla’s chips is for internal usage as they believe that chip production will be the key limiting factor for Tesla’s growth in the next few years; Tesla is currently using its own AI4 chips in its data centers and is conducting training of its AI models with both NVIDIA chips and the AI4 chips; management thinks Tesla needs to build its own fab to (1) solve production constraints at the major leading-edge fabs, and (2) reduce geopolitical risk; if Tesla were to build its own fab, it will be in the USA and will include logic chips, memory chips, and advanced packaging capabilities; management thinks that some people are underestimating the geopolitical risks related to advanced fabs (likely referring to the situation involving TSMC, Taiwan, and China); management thinks memory chips will be a bigger limiter to Tesla’s growth than logic chips; there are currently no advanced memory fabs in the USA; management will be making a big announcement on Tesla’s fab in the future; management has sufficient plans to solve for Tesla’s chip supply for the next 3 years, but anything beyond that is fuzzy

I tend to spend time on whatever the most critical issue is for the company and completing the AI5 chip design and having it be a great chip is arguably the #1 most critical thing to get done, which is why I’m spending more time on that than currently anything else at Tesla…

…I do think AI5 will be a very good chip. And I feel quite confident about the design at this point. And then AI6, which will follow that, it will be — aspirationally would follow that in under a year will be yet another big leap beyond AI5…

…In terms of selling it outside of Tesla, we first need to make sure we have enough chips for all of our vehicle production and all of our Optimus production, and then we will actually use the AI5 chips in our data centers…

…When I look ahead at, let’s say, what’s the limiting factor for Tesla growth, if you go, say, 3 or 4 years out, I think it actually is chip production. Is there enough AI logic and enough AI — enough memory, enough RAM for our volume. And right now, I see that as being the thing that probably limits our growth in 3 or 4 years, which probably imply that we’re not selling chips outside of Tesla because we need them.

…We already use the AI4 chips in our data centers. So when we do training, it’s a combination of the AI4 chips and NVIDIA hardware primarily that we do training with…

…This is definitely going to be a sort of a controversial thing, but I think Tesla needs to build a Terafab. And I mentioned this at the shareholder meeting. But even when you look at the output of — the best case output of all of our key suppliers and I would say even beyond suppliers like strategic partners like Samsung, TSMC and Micron, and we say like what’s the most you could possibly make, then it’s not enough. So we — I think in order to remove the constraint, the probable constraint in 3 or 4 years, we’re going to have to build a Tesla Terafab, a very big fab that includes logic, memory and packaging domestically. And that’s actually also going to be very important to ensure that we are protected against any geopolitical risks. I think people may be underweighting some of the geopolitical risks that are going to be a major factor in a few years…

…I think if we don’t do the Tesla Terafab, we’re going to be limited by supplier output of chips. And I think maybe memory is an even bigger limiter than AI logic. So for example, we have chip supply deals with TSMC in Arizona and Samsung in Texas. But currently, there are no advanced memory fabs at scale in the United States. They are zero, literally zero. Hopefully, Micron will have something going in a few years because they’re all headquartered in Idaho, where they make a lot of potato chips, but we need to make computer chips, too…

…Quite frankly, it would be crazy not to try the Terafab. We’ll have a bigger announcement on this in the future…

…We do have a solution for logic and memory for, let’s say, the next roughly 3 years. But if you start going beyond 3 years, we look at the scaling plans and how many fabs are getting built and especially if you factor in geopolitical uncertainty, there’s always risk that maybe the best chips don’t arrive that people were expecting to arrive.

Tesla recently invested in Elon Musk’s AI startup, xAI; Tesla is collaborating with xAI on AI technology and in fact, Tesla vehicles are already utilising xAI’s model, Grok; management believes Grok will be very useful for managing Tesla’s potentially massive robotaxi fleet; management sees Grok as a model that could also be useful for managing a fleet of Optimus robots

On January 16, 2026, Tesla entered into an agreement to invest approximately $2 billion to acquire shares of Series E Preferred Stock of xAI as part of their recent publicly-disclosed financing round. Tesla’s investment was made on market terms consistent with those previously agreed to by other investors in the financing round. As set forth in Master Plan Part IV, Tesla is building products and services that bring AI into the physical world. Meanwhile, xAI is developing leading digital AI products and services, such as its large language model (Grok). In that context, and as part of Tesla’s broader strategy under Master Plan Part IV, Tesla and xAI also entered into a framework agreement in connection with the investment. Among other things, the framework agreement builds upon the existing relationship between Tesla and xAI by providing a framework for evaluating potential AI collaborations between the companies. Together, the investment and the related framework agreement are intended to enhance Tesla’s ability to develop and deploy AI products and services into the physical world at scale. This investment is subject to customary regulatory conditions with the expectation to close in Q1’2026…

…Even today, if you look at Tesla vehicles, we are using Grok in there…

…Grok will be very helpful in, say, maximizing the efficiency of the management of a large autonomous fleet. So I mean, if you’ve got an autonomous fleet that’s in the future 10 million vehicles or tens of millions of vehicles, then optimizing the efficient use of that fleet, Grok will be, I think, way better than any heuristic solution or sort of manually managed solution.

And if you say you’re managing, say, a large team of Optimus robots to build a factory or build a refinery — say hypothetical — it’s a hypothetical example, a rare earth or refinery, which we do desperately need in America, then you say, well, like what’s going to organize the Optimus robots to build that ore refinery that would — you need — kind of need an orchestra conductor. And so then Grok would be kind of the orchestra conductor for the Optimus robots to build the — hypothetically and — it might not be hypothetical in the future. I’m just saying it’s not currently in our plans.

Tesla’s management believes that Tesla’s AI model has the highest intelligence density per gigabyte, by far, in the world

I think one of the metrics one to consider for any given AI model is the intelligence per gigabyte, especially when you’re constrained on RAM, having an AI that has very high intelligence density per gigabyte. So you could say like, for a given number of gigabytes, how much functionality can you get out of it? I actually think Tesla is ahead of the rest of the world in intelligence density of AI by an order of magnitude or more. Like this is going to sound like a pretty bold statement, but I kind of know what the intelligence efficiency of the big models are like Grok and like to be honest — and a bunch of the other models. And Tesla AI is like in terms of its memory efficiency, more than an order of magnitude better.

Visa (NYSE: V)

Visa’s Intelligent Commerce solution uses Visa Tokens as the foundation for agentic payments; Visa is working with more than 100 partners in the global commerce ecosystem to enable agentic commerce and over 30 partners are already building in Visa’s sandbox; Visa recently expanded into B2B agentic payments with Ramp; Visa recently entered into an agreement with AWS for Visa Intelligent Commerce to help developers build agentic commerce solutions; Aldar is integrating with Visa Intelligent Commerce to provide recurring payment services; Visa’s Trusted Agent Protocol helps bring trust to agentic commerce; Visa recently partnered with Cloudflare and Akamai on Trusted Agent Protocol; Visa is currently building interoperability between Visa Intelligent Commerce and Google’s Universal Commerce Protocol; Visa’s agentic solutions are already live in the US and CEMEA (Central and Eastern Europe, Middle East, and Africa); Visa’s agentic solutions are currently in the pilot phase in Asia Pacific and Europe, with Latin America and the Caribbean (LAC) soon to come

One of those [ capabilities ] that is enabled with Visa Tokens is an important area of innovation, agentic commerce. Our Visa Intelligent Commerce solution utilizes tokens and their configurability as the core underlying foundation for agentic payments. We’re working to enable agentic commerce with more than 100 partners across the commerce ecosystem globally. Over 30 partners are actively building in our sandbox with multiple agents and agent enablers running live production transactions and more partners expected in the future.

Just this quarter, we expanded into B2B agentic payments with Ramp, streamlining corporate bill payments, enabling their business customers to capture cash back on card payments and optimizing working capital. We also reached an agreement with AWS to make Visa Intelligent Commerce available on AWS marketplace to support developers building agentic commerce solutions connecting secure, automated payment workflows at scale through blueprints for workflows such as travel bookings or retail purchases. In our CEMEA region, Aldar, a leading real estate developer, investor and manager, is integrating Visa Intelligent Commerce to make recurring payments such as property service charges on their Live Aldar app.

Our Visa Trusted Agent Protocol continues to help define the connectivity and data elements required to bring trust to the agentic environment. In Q1, we announced partnerships with leading Internet security players, first Cloudflare and then Akamai, who collectively serve millions of businesses globally, including 9 of the world’s top 10 retailers. In addition, we are building interoperability between key elements of Visa Intelligent Commerce and Google’s new Universal Commerce Protocol as part of our global effort to help ensure that Visa transactions are securely supported as different protocols evolve. Our agentic solutions are live in the U.S. and CEMEA and we are initiating pilot programs in Asia Pacific and Europe. LAC is soon to follow where we have already begun token enrollment for agentic commerce with issuers. We believe that we are well positioned to be the infrastructure provider and key enabler in agentic commerce so that every agent interaction is trusted and secure.

The AI-powered Visa Account Attack Intelligence solution has scored over 60 billion transactions and identified nearly 600 million suspicious transactions in the last 12 months; Visa Account Attack Intelligence has prevented more than $10 billion of fraud in LAC (Latin America and the Caribbean) in the last 6 months

Another AI-powered solution, Visa Account Attack Intelligence was announced in 2024 in the U.S. to help clients prevent enumeration attacks which are when bad actors systematically initiate e-commerce transactions to obtain valid payment credentials. The results of this solution in the U.S. have been impressive, with over 60 billion transactions scored and nearly 600 million suspicious transactions identified in the last 12 months…

…In LAC, for example, in just 6 months, we have almost 90% of clients already activated and have prevented more than $10 billion of fraud.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, Apple, ASML, Mastercard, Meta Platforms, Microsoft, Netflix, PayPal, TSMC, Tesla, and Visa. Holdings are subject to change at any time.

The View On Consumer Spending From The Largest Payments Companies (2025 Q4)

Mastercard and Visa can feel the pulse of consumer spending – what are they seeing now?

Mastercard (NYSE: MA) and Visa (NYSE: V) are two of the largest payments companies in the world. As a result, they have a great view of the consumer spending that’s taking place. With both companies reporting their earnings results for the fourth quarter of 2025 earlier this week, the bottom line is that consumer spending remains strong in the USA and other parts of the world. Here’s what they are seeing.

*What’s shown in italics between the two horizontal lines below are quotes from Mastercard and Visa’s management teams that I picked up from their earnings conference calls.


From Mastercard

1. Mastercard’s management sees consumer and business spending remaining healthy, supported by a balanced labour market, although there remains geopolitical and economic uncertainty; management remains positive about Mastercard’s growth outlook

As we enter 2026, geopolitical and macroeconomic uncertainty persists. We will continue to monitor and work to navigate just as we have successfully done in the past. But for now, we remain optimistic and confident in our execution and the fundamentals of our business…

…The fundamentals of our business remain strong. The macroeconomic environment remains supportive with balanced job markets across the globe, underpinning healthy consumer and business spending. That said, there continues to be ongoing geopolitical and economic uncertainty. We maintain a disciplined capital planning approach and have levers to pull if needed…

…We remain positive about the growth outlook and our base case for 2026 continues to reflect healthy consumer spending.

2. Worldwide GDV (gross dollar volume) was up 7% year-on-year on a constant-currency basis; cross-border volume was up 14% globally in constant currency, driven by both travel and non-travel cross-border spending (cross-border volume growth was 15% in 2025 Q3); switched transactions were up 10% year-on-year; card growth was 6% in 2025 Q4, with Mastercard ending the quarter with 3.7 billion cards in circulation (there were 3.6 billion cards in 2025 Q3, and year-on-year growth was 6% then); on a currency-neutral basis, domestic assessments were up 8%, cross-border assessments were up 17% and transaction processing assessments were up 14%

Let’s first look at some of our key volume drivers for the fourth quarter on a local currency basis. Worldwide gross dollar volume, or GDV, increased by 7% year-over-year. In the U.S., GDV increased by 4% with credit growth of 6% and debit growth of 2%. The growth of our debit portfolio was impacted by the Capital One debit migration, which continued through Q4. Outside of the U.S., volume increased 9% with credit growth of 9% and debit growth of 9%. Overall, cross-border volume increased 14% globally for the quarter, reflecting continued growth in both travel and non-travel related cross-border spending…

…Switched transactions grew 10% year-over-year in Q4… 

…Card growth was 6%. Globally, there are 3.7 billion Mastercard and Maestro-branded cards issued…

…All growth rates are described on a currency-neutral basis, unless otherwise noted. Looking quickly at each key metric. Domestic assessments were up 8%, while worldwide GDV grew 7%. The difference is primarily driven by pricing, offset by mix. Cross-border assessments increased 17%, while cross-border volumes increased 14%. The 3 ppt difference is driven primarily by pricing in international markets, partially offset by mix. Transaction processing assessments were up 14%, while switched transactions grew 10%. The 4 ppt difference is primarily due to favorable mix and pricing, partially offset by a decline in revenue from FX volatility. Towards the end of Q4 and month-to-date January, we saw FX volatility well below historical norms.
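The spreads management walks through above — growth in each revenue line minus growth in its underlying volume driver — are plain arithmetic, and can be tabulated in a few lines of Python. This is just a sketch using the figures from the quote:

```python
# Spread between Mastercard's Q4 revenue-line growth and the growth of the
# underlying volume driver, per the figures in the quote above (all in %).
lines = [
    # (revenue line, revenue growth, underlying driver, driver growth)
    ("Domestic assessments", 8, "worldwide GDV", 7),
    ("Cross-border assessments", 17, "cross-border volume", 14),
    ("Transaction processing assessments", 14, "switched transactions", 10),
]

spreads = {}
for revenue_line, revenue_growth, driver, driver_growth in lines:
    # Management attributes these gaps mainly to pricing and mix
    spreads[revenue_line] = revenue_growth - driver_growth
    print(f"{revenue_line}: +{revenue_growth}% vs {driver} +{driver_growth}% "
          f"-> {spreads[revenue_line]} ppt spread")
```

The 1, 3, and 4 percentage-point spreads this produces are the pricing-and-mix gaps management explains in the quote.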

3. In 2025 Q4, Mastercard’s operating metrics had good year-on-year growth but there were sequential declines; in January 2026 so far, Mastercard’s operating metrics continue to be strong with worldwide switched volume growth of 9% (5% in the USA, and 12% outside of the USA), switched transactions growth of 10%, and cross-border volume growth of 13%; US switched volume was flat sequentially in January 2026 as the migration of debit volume by Capital One was offset by easier comps from weather impacts a year ago; in all, management continues to see healthy consumer and business spending; consumer spending did not change in 2025 despite what surveys may say; US tariffs that were implemented in 2025 have not affected consumer spending in a noticeable way; consumer spending remains healthy across the world

Starting with Q4 and looking at the metrics on a sequential basis. U.S. switched volume growth declined primarily due to the migration of the Capital One debit portfolio. Worldwide less U.S. switched volume saw a slight deceleration, driven primarily by tougher comps, including the lapping of portfolio wins in Europe. Switched transactions were in line with Q3. Cross-border volume remained strong. Of note, we saw a sequential decline in cross-border card-not-present ex travel, primarily driven by tougher comps from the lapping of share wins in Europe and higher growth from crypto purchases a year ago.

As we look to the first 3 weeks of January, our metrics continue to remain strong, generally in line with the fourth quarter. Of note, U.S. switch volume was flat sequentially as the Capital One debit roll-off was mostly offset by easier comps due to weather impacts in the prior year. We saw a decline in cross-border travel volumes, primarily due to weather-related impacts in Europe this year. Cross-border card-not-present ex travel continued to be impacted by higher growth from crypto purchases a year ago. Overall, we continue to see healthy consumer and business spending…

When you look back over 2025 over the whole year, and we just take soft data like headlines or consumer confidence data that comes out. On one hand, consumers fill in surveys. At the same time, their spend behavior hasn’t actually changed. So that’s a pattern that just continues. We see — just taking 2025, it hasn’t changed quarter-on-quarter. We see a truly savvy and intentional consumer…

…There is question on how the consumer was affected or not by some of the tariff changes that we’ve seen last year. And that doesn’t show up in our data either. So it’s not coming through. Somewhere across the ecosystem between importers and big brands, it’s all been adjusted in a way that it hasn’t really affected consumer spending, at least we cannot tell that…

…If you zoom out and you look across the world, these patterns are different by region here and there, but the aggregate top line is that consumer spending remains healthy, is the same.

From Visa

1. US payments volume growth was good at 7%, with e-commerce growing faster than physical spend, and it reflected resilience in consumer spending; US credit and debit volume were up 7% and 6%, respectively; the slight step down in US payment volume (credit and debit both grew 8% in 2025 Q3) was partly the result of a Visa Direct customer shifting volumes to its own solution, and Capital One migrating its debit volume; growth across consumer spend bands remained relatively consistent with FY2025 Q4 with the highest spend band continuing to grow the fastest; management did not see a deterioration in spend in the lower bands; both discretionary and non-discretionary spend remained strong; consumer spending in the holiday period of 2025 grew from a year ago in both the US and other key countries globally

U.S. payment volume was up 7%, with e-commerce growing faster than face-to-face spend, reflecting resilience in consumer spending. Credit was up 7% and debit was up 6%. The slight step down in U.S. PV throughout the quarter was driven by debit primarily as a result of a Visa Direct client moving the remainder of its volume to its own solution and a number of other small factors, including the loss of some Interlink volumes to the Capital One debit migration and severe weather that affected certain spend categories.

Growth across consumer spend bands remained relatively consistent with Q4, with the highest spend band continuing to grow the fastest. We did not see a deterioration in the lower spend band and across our volume, both discretionary and nondiscretionary spend remain strong.

Honing in on the holiday season specifically, which we define as the period from November 1 to December 31. I would note a few items. In the U.S. consumer holiday spending growth was in line with last year, reflecting continued strength in retail, an improvement in fuel and some moderation in other spend categories. Focusing on retail. Holiday spending growth was slightly better than last year, driven by strong growth in e-commerce, which continues to take on a greater share of consumer retail spend. In several key countries around the globe, we saw similar trends with consumer retail holiday spending growth up from last year, led primarily by e-commerce growth.

2. Visa’s cross-border volume growth remained strong in 2025 Q4 (FY2026 Q1) at 11%, and was the same as in 2025 Q3

Q1 total cross-border volume was up 11% year-over-year, consistent with Q4. Cross-border e-commerce volume was up 12%, slightly below Q4, primarily from lower growth in cryptocurrency purchases. Travel-related cross-border volume was up 10%, consistent with Q4. We saw continued strength in commercial volumes, and we started to see improvement in U.S. inbound from Canada.

3. Payments volume on Visa’s network continues to grow in January 2026, with US payments volume up 8%, cross-border volume up 11%, e-commerce volume up 12%, and processed transactions up 9%

Now let’s look at drivers through January 21, with volume growth in constant dollars. U.S. payments volume was up 8% with credit up 9% and debit up 6% year-over-year. Our constant dollar cross-border volume, excluding transactions within Europe, total volume grew 11% year-over-year with e-commerce up 12% and travel up 10%. Processed transactions grew 9% year-over-year.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have a vested interest in Mastercard and Visa. Holdings are subject to change at any time.

What The USA’s Largest Bank Thinks About The State Of The Country’s Economy In Q4 2025

Insights from JPMorgan Chase’s management on the health of American consumers and businesses in the fourth quarter of 2025.

JPMorgan Chase (NYSE: JPM) is currently the largest bank in the USA by total assets. Because of this status, JPMorgan is naturally able to feel the pulse of the country’s economy. The bank’s latest earnings conference call – for the fourth quarter of 2025 – was held earlier this week and contained useful insights on the state of American consumers and businesses. The bottom line is this: the US economy remains resilient, but long-term risks remain.

What’s shown between the two horizontal lines below are quotes from JPMorgan’s management team that I picked up from the call.


1. The US economy remained resilient in 2025 Q4; the labour market, though soft, has not worsened; consumers continue to spend and businesses remain healthy; management thinks the good conditions could last; management thinks markets are underestimating risks; consumer sentiment is weak, but spending trends are not deteriorating; debit and credit sales volume were up 7% in 2025; management thinks the short-term macro outlook is positive, but there are longer-term risks, including fiscal deficits in the US and other countries; management sees the Federal Reserve’s ongoing purchase of T-bills as a tailwind for the economy

The U.S. economy has remained resilient. While labor markets have softened, conditions do not appear to be worsening. Meanwhile, consumers continue to spend, and businesses generally remain healthy. These conditions could persist for some time, particularly with ongoing fiscal stimulus, the benefits of deregulation and the Fed’s recent monetary policy. However, as usual, we remain vigilant, and markets seem to underappreciate the potential hazards—including from complex geopolitical conditions, the risk of sticky inflation and elevated asset prices…

…Despite weak consumer sentiment, trends in our data are largely consistent with historical norms and we are not currently seeing deterioration. Across income groups, debit and credit sales volume continued to perform well, up 7% year-on-year…

…When you’re guessing what the macro environment is going to be, if you ask me, in the short run, call it, 6 months to 9 months and even a year, it’s pretty positive. Consumers have money. There’s still jobs, even though it’s weakened a little bit. There’s a huge — there is a lot of stimulus coming from the One Big Beautiful Bill. Deregulation is a plus in general, not just for banks but — banks will be able to redeploy capital. But the backdrop is also important, but the timetables are different. Geopolitical is an enormous amount of risk. I don’t have to go through each part of it. It’s just a big amount of risk that may or may not be — determine the state of the economy. The deficits in the United States and around the world are quite large. We don’t know when that’s going to bite. It will bite eventually because you can’t just keep on borrowing money endlessly…

…The Fed, they don’t call it QE but they’re talking about doing $40 billion a month of buying T-bills. That adds $40 billion a month into bank — all things being equal, to bank reserves. And most of that initially shows up in wholesale deposits and then maybe gets redeployed. So we’ll see how that plays out too. But it does create more liquidity in the system, which I should have mentioned is another tailwind for the economy.

2. Net charge-offs for the whole bank (effectively bad loans that JPMorgan can’t recover) rose 4% year-on-year to US$2.5 billion, from US$2.4 billion a year ago

Credit costs of $4.7 billion with $2.5 billion of net charge-offs and a $2.1 billion net reserve build…

…Net reserve build of $2.1B, reflecting a $2.2B reserve established for the forward purchase commitment of the Apple credit card portfolio.

3. JPMorgan’s investment banking fees fell in 2025 Q4 from a year ago because of a tough comparison period and some deals that were pushed into 2026; management sees a strong pipeline for capital markets activities

IB fees were down 5% year-on-year, reflecting a strong prior year compare and the timing of some deals that were pushed to 2026. In terms of the outlook, we expect strong client engagement and deal activity in 2026, supported by constructive market dynamics, which is reflected in our pipeline.

4. Management is mindful of risks in non-bank financial institution (NBFI) lending; management sees NBFI lending as having structural protections for lenders, and losses in the category will generally occur only in the event of fraud or a deep recession

In light of the growth and the novel elements of some components of this activity, we are quite mindful of the risks. But given the structural protections, you would generally expect losses in this NBFI category to appear either as a result of additional instances of fraud-like problems or as a result of a particularly deep recession that erodes all the credit enhancement. In that scenario, losses associated with traditional lending to end borrowers would likely be the greater concern for the industry.

5. Management’s current assumption is 2 interest rate cuts for 2026

As usual, the outlook follows the forward curve, which currently assumes 2 rate cuts.

6. Management expects credit card net charge-offs for 2026 to be 3.4% (was around 3.3% in 2025) 

On credit, we expect the 2026 card net charge-off rate to be approximately 3.4% on favorable delinquency trends driven by the continued resilience of the consumer.

7. Management thinks that if caps on interest rates on credit cards are implemented, a lot of people will lose access to credit, especially those who need credit the most, and that will have a negative impact on the economy

For the purposes of this call, given how little we know at this point, the way I would prefer to talk about it is, just assume for the sake of argument that something in the general mode of price controls on credit card interest rates goes through, what would be the consequences of that.

And I think the first thing to say, which you obviously know very well, is that the card ecosystem is an exceptionally competitive ecosystem. It’s among the most competitive businesses that we operate in. And that’s true for all levels of borrower credit score, from high FICO to low FICO. And so in that context, when you — just basic economics, when you start with that as your starting point, the right assumption about what the response of the system is going to be to the imposition of price controls is not that you will simply compress the profit margins, which are already at their sort of competitively optimal level, and thereby pass on benefits to consumers. What’s actually simply going to happen is that the provision of the service will change dramatically.

Specifically, people will lose access to credit, like on a very, very extensive and broad basis, especially the people who need it the most, honestly. And so that’s a pretty severely negative consequence for consumers and frankly, probably also a negative consequence for the economy as a whole right now.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

How Non-Tech Companies Are Thinking About AI

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed non-technology companies in the 2025 Q3 earnings season.

It has been three years or so since artificial intelligence (or AI) leapt into the zeitgeist with the public introduction of DALL-E 2 and ChatGPT. As AI technology develops, I have been tracking how companies are thinking about and using it.

The latest earnings season for the US stock market – for the third quarter of 2025 – recently ended and there were a number of companies that I follow or have a vested interest in, where the management teams discussed the topic of AI and how the technology could impact their industry and businesses. I shared the commentary from US-listed technology companies recently here. The commentary from non-technology companies is below.

Costco (NASDAQ: COST)

Costco’s management has integrated AI into Costco’s pharmacy inventory system and this has improved in-stocks to more than 98%, leading to mid-teen growth in pharmacy scripts filled, higher margins, and lower prices for members; management is deploying AI into Costco’s gas business, which is expected to improve inventory management and drive higher sales; management sees many tangible areas in Costco’s business to implement AI; management sees 2 concurrent phases for AI implementation, namely, the member-facing phase and the business-basics phase

An early use case has involved integrating AI into our pharmacy inventory system. This system now compares prescription drug pricing across vendors and autonomously and predictively reorders inventory, improving our in-stocks to more than 98%. This change has played an important role in helping us achieve mid-teen growth in pharmacy scripts filled and has improved margins while lowering prices to our members.

We’re now in the process of deploying AI tools in our gas business, which we expect will improve inventory management and drive incremental sales by ensuring we are always delivering the best value to our members…

…On the AI front, we’re extremely excited about what the future holds for us. I mean we see many opportunities that are really business-driven and tangible — have great tangible business value for us and you look at things like our procurement system as we are a global retailer and we buy from around the world as well as supply chain, what it can do there. And just the tools that we’ve seen that this has improved our employees’ work abilities and their skill sets as well as they do their day-to-day work…

…We look at it in a 2-phase approach that concurrently, we’re going to be focusing on member-facing, how do we improve the experience for the member through AI, and then business basics, how do we continue to focus on the business basics. Our mantra is to bring goods to market at the lowest possible price. And we think AI is a great asset to that, and it really can help us become a much better merchant out there.

Tractor Supply (NASDAQ: TSCO)

Tractor Supply is deploying AI in 3 buckets, namely, (1) off-the-shelf software, (2) custom-built software, and (3) AI agents; in off-the-shelf software, Tractor Supply’s software vendors are increasingly infusing their products with AI capabilities; in custom-built software, Tractor Supply has Hey GURA, Tractor Vision, and Quorso, as examples of AI-powered custom-built software; in AI agents, Tractor Supply recently integrated with OpenAI to enable 1,500 users within Tractor Supply to access AI agents to improve operational efficiency; an example of how AI agents have helped Tractor Supply is in providing automated feedback to team members after completion of a task that previously required manual review

On AI. We’ve got a lot of exciting things going on, on that front. And I’m going to break it into 3 buckets: the first is what we call off-the-shelf enterprise software. Second, I’d call custom-built enterprise software. And then the third, I would talk about is around agents and automation.

First off, on the enterprise kind of purchased software. All of our vendors that we work closely with are now rolling in AI modules, AI analysis, AI capabilities, whether that’s in ERP systems, whether that’s in replenishment systems, marketing, et cetera. So we are fast adopters there where appropriate, obviously, with clarity of understanding of functionality and security.

The second one, in terms of custom-built, we talked about that several times in the past. Those software systems applications that we built out, we continue to scale, we continue to refine, and they’ve become more and more key parts of just how we operate every single day. So whether that’s Hey GURA, which is increasing in its use, whether that’s Tractor Vision, which calls out when customers need help in areas that our team members might not have visibility to, or whether that’s Quorso, which drives day-to-day operational tasks. So those are just 3 examples of custom-built applications that are scaled out now and continue to ramp in their impact and use by our team members.

On the third one around kind of automation and agent build-out. Over the last 6 months, we’ve done an enterprise integration with OpenAI. We now have over 1,200, I think 1,500 users that now have OpenAI enterprise accounts that’s integrated with our Snowflake Data Lake. And what that allows us to do now is to start really across the organization, building agents to automate and make things simpler and faster. An example of that would be in, say, our Fast team, where in the past, when a Fast team member would finish a planogram reset, they would take a picture, they would send it to their District Manager — District Fast Supervisor, they would review it and provide manual feedback. We’ve now built up the capability where when that picture is taken, AI assesses the picture and gives immediate feedback to the team member and our District Fast Supervisor only has to get involved with escalations. And so it just makes everybody’s job more efficient and allows us to execute faster. And kudos to the team across really all dimensions of our organization for embracing it and driving the productivity enhancements that it can provide.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Costco and Tractor Supply. Holdings are subject to change at any time.

Even More Of The Latest Thoughts From American Technology Companies On AI (2025 Q3)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q3 earnings season.

Last month, I published More Of The Latest Thoughts From American Technology Companies On AI (2025 Q3). In it, I shared commentary in earnings conference calls for the third quarter of 2025, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2025’s third quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

With that, here is the latest commentary, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management has continued to develop the 1st-party Firefly models, while also expanding partnerships with other GenAI models; the new Firefly Image 5 model is performing really well; Firefly is the only app with Adobe’s own commercially safe models and over 25 leading partner models, including those from Google and OpenAI; monetisation of the usage of Adobe’s and 3rd-party models is through Generative Credits; different models and different media types consume different quantities of Generative Credits; consumption of Generative Credits increased 3x sequentially in 2025 Q3 (FY2025 Q4); subscribers who consume more Generative Credits can move to higher-value offerings or add credits; Adobe is attracting new creators through the Firefly application; management sees Firefly as a one-stop shop for accessing industry-leading models integrated into rich creative workflow at an affordable price; Creative Cloud customers are adopting Firefly, with 2x sequential growth in first-time subscriptions of Firefly in 2025 Q3 (FY2025 Q4); management has announced the general availability of Firefly Boards, a new ideation surface that is integrated with industry-leading models from both Adobe and 3rd parties; Firefly Services can now perform automated content production including video resizing, video reframing, image composition, image harmonization, digital-twin generation and more; more than 100 new deals for Firefly Services were signed in 2025 Q3 (FY2025 Q4) by enterprises

We have continued to develop our own commercially safe Firefly Models, while dramatically expanding our ecosystem of GenAI model partnerships. The new Firefly Image 5 model is performing incredibly well with generation quality, native 4-megapixel resolution and industry-leading prompt-based editing capabilities. At Adobe MAX in October, we significantly expanded Firefly to become the only app with our own commercially safe models and over 25 leading partner models including Google, OpenAI, Black Forest Labs, Luma, Runway, Topaz Labs and ElevenLabs. These models are now integrated into our Firefly, Express and Creative Cloud applications. We also announced advanced model capabilities including custom model support for Firefly and Creative Cloud customers. 

Usage and monetization of new Adobe and third-party models is measured and charged through Generative Credits. Different models (Firefly, Gemini or Flux, for example) and different media types (video and high-resolution images, for example), consume different quantities of Generative Credits. Generative Credits are a great indicator of high-value usage and credit consumption increased 3x quarter over quarter. As subscribers consume more generative credits, they have the choice of moving to higher value Creative Cloud offerings or acquiring Firefly Credit Add-ons…

…We are attracting new creators to Adobe through the Firefly application, which can be purchased through our Firefly Standard, Pro and Premium subscription plans. Firefly has a rich set of generative AI capabilities that allow users to generate with Adobe and partner models, ideate with Firefly Boards and create and edit videos and images. Simply put, Firefly is a one-stop shop for accessing industry-leading models integrated into rich creative workflows, at an affordable price.  In addition, we’re seeing strong adoption of Firefly from Creative Cloud customers, as they embrace the growing breadth of AI models and tools, seamlessly integrated into creative workflows. We drove 2x quarter-over-quarter growth in first-time subscriptions of Firefly…

…We also announced the general availability of Firefly Boards, a new ideation surface that brings together everything creative professionals need to explore visual and design concepts with stakeholders using industry-leading models, from Adobe and our partners…

…As part of the overall content supply chain solution for marketing use cases, we continue to advance automated content production with Firefly Services that include video resizing, video reframing, image composition, image harmonization, digital-twin generation and more…

…Accelerating adoption of Firefly Services within enterprises with over 100 new deals signed in Q4.
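To make the consumption mechanics concrete, here is a minimal, hypothetical sketch of how a credit-metering scheme like the Generative Credits mechanism described above could work. The credit costs and the model/media-type pairs are invented for illustration; Adobe's actual rates are not disclosed here.

```python
# Hypothetical sketch of a Generative Credits-style metering scheme:
# each (model, media type) pair maps to a credit cost, and usage is
# summed against a subscriber's allowance. Cost values are invented.

CREDIT_COST = {
    ("firefly", "image"): 1,
    ("partner", "image"): 2,
    ("firefly", "video"): 10,  # video and high-res generations cost more
}

def charge(usage, allowance):
    """Sum credit consumption for a list of (model, media) generations
    and report how many credits remain (negative means the subscriber
    would need a higher tier or a credit add-on)."""
    spent = sum(CREDIT_COST[(model, media)] for model, media in usage)
    return allowance - spent

remaining = charge(
    [("firefly", "image"), ("partner", "image"), ("firefly", "video")],
    allowance=25,
)
# 25 - (1 + 2 + 10) = 12 credits left
```

Under a scheme like this, heavier use of partner models or video generation draws down the allowance faster, which is consistent with the commentary that subscribers who consume more credits move to higher-value offerings or buy add-ons.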

Adobe’s management has atomised Photoshop, Express and Acrobat capabilities as Model Context Protocol (MCP) endpoints; management sees LLMs (large language models) as a great top of funnel for customer acquisition for Adobe; the MCPs are important for Adobe because they allow LLMs to work with Adobe’s models and APIs, which helps Adobe reach more customers 

We also took a huge step forward in Q4 as we showcased the work we’ve been doing to atomize Photoshop, Express and Acrobat capabilities as Model Context Protocol (MCP) endpoints at Adobe MAX…

…Our focus has always been around sort of meeting customers where they are. And that used to predominantly be focused on search and the web, and now we’re seeing this incredible growth with LLMs. And so we are taking all of our technology and making sure that it can run in these LLMs. They represent, in our mind, a great top of funnel. They let us reach new users that we typically wouldn’t have reached with some of the traditional markets that we go through, and we can engage them in new ways…

…Maybe the more important elements and moments of why this is such a critical moment for us is that as LLMs start embracing these model context protocols, these MCP endpoints, it’s no longer that these LLMs are about a prompt to a model and a response. It now gives us the opportunity to have the LLMs actually work with models and APIs, and that plays to a really strong strength that we have and durable differentiator given the incredible APIs we have across creativity and productivity. So it lets us reach a lot more customers, it lets us atomize the capabilities, double down on the freemium experience that we’ve been putting in place.
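The shift management describes, from prompt-and-response to LLMs invoking tools, is the core idea of MCP: a capability is published with a machine-readable description so an LLM can discover and call it. The sketch below illustrates that pattern with a toy tool registry; the tool name, schema, and behaviour are invented for illustration and are not Adobe's actual MCP endpoints.

```python
import json

# Toy illustration of MCP-style tool endpoints: a capability (here, a
# stand-in "resize_image" tool) is described with a JSON schema so an
# LLM can discover it and invoke it with structured arguments, rather
# than only exchanging free-text prompts and responses.

TOOLS = {
    "resize_image": {
        "description": "Resize an image to the given width and height.",
        "input_schema": {
            "type": "object",
            "properties": {
                "width": {"type": "integer"},
                "height": {"type": "integer"},
            },
            "required": ["width", "height"],
        },
    }
}

def list_tools() -> str:
    """What an LLM client would receive when discovering available tools."""
    return json.dumps([{"name": n, **meta} for n, meta in TOOLS.items()])

def call_tool(name: str, arguments: dict) -> dict:
    """Dispatch a tool call the way an MCP server routes LLM requests."""
    if name == "resize_image":
        return {"status": "ok",
                "new_size": [arguments["width"], arguments["height"]]}
    raise ValueError(f"unknown tool: {name}")

result = call_tool("resize_image", {"width": 800, "height": 600})
```

The point of the pattern is the one management makes: once capabilities are exposed this way, an LLM conversation can drive real editing APIs, which is why a deep catalogue of creativity and productivity APIs becomes a distribution advantage.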

Adobe’s management is providing imaging, video, and productivity functionality in AI conversational platforms to monetize Adobe’s GenAI capabilities

In addition to delivering applications, we are providing imaging, video, and productivity functionality in ChatGPT, Copilot and other conversational platforms in order to deliver and monetize creative and PDF functionality in new surfaces. Users will be able to work conversationally while still benefiting from the power and precision of Adobe’s industry-leading features and direct-manipulation tools, making it easier than ever to go from intent to outcome, whether editing a PDF, refining an image, or generating a design.

Usage of AI features inside Acrobat and Reader is up 4x year-on-year in 2025 Q3 (FY2025 Q4); management introduced an AI Assistant into Adobe Express in 2025 Q3 (FY2025 Q4) that can generate content and perform complex editing; the AI Assistant has led to significant MAU (monthly active users) growth in Adobe Express; Adobe Acrobat Studio combines conversational, comprehension, and generative capabilities and the customer reception to Adobe Acrobat Studio has been strong

We revolutionized how users consume and comprehend documents by introducing Acrobat AI Assistant in FY24 and recently added PDF Spaces, allowing individuals and teams to create knowledge hubs to collaborate across multiple documents. Users package multiple documents – not just PDFs, but other file types and web links – into a single workspace they can share with others, enabling a collaborative conversational experience. Usage of these AI features inside Acrobat and Reader has grown more than 4x year over year, as users increasingly turn to Acrobat to help them discover insights, synthesize new ideas and share knowledge. 

Adobe Express made significant advances in Q4 with the introduction of an AI Assistant capable of generative content creation and complex editing. Express now supports generative presentations and designs, moving the industry into a post-template world. Express AI Assistant is capable of conversationally editing images, flyers, presentations, infographics and more. Innovations like these have contributed to significant Express MAU growth. 

Adobe Acrobat Studio brings together the conversational consumption and comprehension capabilities of AI Assistant and PDF Spaces with the generative creation power of Express, alongside the PDF tools people know and rely on into a unified offering. Customer reception of Acrobat Studio has been strong, with nearly 50% of Acrobat commercial ETLA’s renewed in Q4 already upgrading to this offering, reflecting user enthusiasm for unified document comprehension and content generation. 

Adobe’s management recently released Premiere Mobile, a next-generation AI video editing tool; Adobe is partnering with Google and YouTube to introduce AI-driven audio and video tools to help creators remix YouTube Shorts

The release of Premiere Mobile in Q4 marks an important milestone in next-generation AI video editing. In partnership with Google and YouTube, we are introducing AI-driven audio and video tools to streamline how creators remix YouTube Shorts, which receive 200 billion daily views.

Creative Cloud recently released a number of new AI capabilities including new models for Generative Fill, upscaling, and prompt editing in Photoshop; management has announced the general availability of Firefly Boards, a new ideation surface that is integrated with industry-leading models from both Adobe and 3rd parties; usage of AI in Creative Cloud applications continues to accelerate; Generative Credit consumption in Creative Cloud, Firefly, and Express in the Creator & Creative Professional category is up 3x sequentially in 2025 Q3 (FY2025 Q4)

Creative Cloud delivered massive new value at Adobe MAX including the release of new models for Generative Fill, upscaling and prompt editing in Photoshop, reflection removal in Lightroom, Turntable in Illustrator and smart masking in Premiere. We also announced the general availability of Firefly Boards, a new ideation surface that brings together everything creative professionals need to explore visual and design concepts with stakeholders using industry-leading models, from Adobe and our partners. Use of AI in these applications continues to accelerate, underscoring the impact AI is having on what creative professionals can produce…

…[Creator & Creative Professional] Accelerating Generative Credit consumption in Creative Cloud, Firefly and Express by individuals and enterprises, which grew approximately 3x quarter over quarter

The MAU (monthly active users) of creative users across Firefly, Express, Premiere Mobile and other freemium offerings was up 35% year-on-year in 2025 Q3 (FY2025 Q4) to over 70 million

Growing our base of creative users across Firefly, Express, Premiere Mobile and other freemium offerings. MAU for these offerings surpassed 70 million in Q4, growing over 35% year over year.

Adobe’s management sees the Adobe Experience Platform (AEP) as a customer data platform that brings together new AI-powered apps and agents to drive customer engagement and loyalty, as well as reduce costs; AEP performs over 35 trillion segment evaluations and more than 70 billion profile activations daily; management has released 6 new AI agents powered by AEP Agent Orchestrator

Adobe Experience Platform (AEP) is a leading customer data platform that serves as the foundation in enterprises for digital customer engagement and brings together new AI-powered apps and agents to drive engagement and loyalty, as well as to reduce costs. Our platform operates at scale with over 35 trillion segment evaluations and more than 70 billion profile activations per day. We released six new AI agents powered by AEP Agent Orchestrator to transform how businesses build, deliver and optimize marketing campaigns and customer experiences.

Generative AI traffic to retail sites is up 760% in the 2025 holiday season; management is seeing AI-powered traffic to retail sites from LLMs and agentic browsers rising and this requires different approaches for conversion; the Adobe Experience Manager helps solve retailers’ needs in the agentic web; management thinks Semrush, which Adobe recently announced the acquisition of, has important assets that address marketers’ growing need for sustained brand relevance in AI search

Our most recent Adobe Digital Index data, which is based on online transactions across more than 1 trillion visits to U.S. retail sites, shows that generative AI traffic is up 760% thus far in the 2025 holiday season. Our data shows that AI-powered traffic from LLMs and agentic browsers is rising and requires different approaches to conversion, underscoring the growing importance of the agentic web and our opportunity to provide insights and automation to marketers.

Brand visibility is critical to success in this new agentic web, and Adobe solves customer needs through solutions like Adobe Experience Manager, Adobe Analytics and the newly available Adobe LLM Optimizer. The pending acquisition of Semrush, which we announced a few weeks ago, brings complementary assets to help us address marketers’ growing need for sustained brand relevance in AI search. Over the past decade, Semrush’s data-driven search engine optimization and generative engine optimization solutions have earned the trust of industry leaders like Amazon, JPMorganChase and TikTok. Together, Adobe and Semrush will deliver a comprehensive solution to enable marketers to shape how their brands appear across owned channels, LLMs, traditional search and the wider web.

Adobe’s management launched Adobe Brand Concierge in 2025 Q3 (FY2025 Q4), an AI-first application for businesses to manage AI agents for agentic commerce; management is seeing significant customer interest in Adobe Brand Concierge

Adobe Brand Concierge, which was launched in Q4, is an AI-first application enabling businesses to configure and manage AI agents that guide consumers from exploration to purchase decisions, using immersive and conversational experiences. By uniting data, content and agentic AI in a single experience, Brand Concierge gives businesses ownership of the critical discovery and consideration phase. We’re pleased with the significant customer interest and the wins we had for Brand Concierge in Q4.

Adobe GenStudio’s ending ARR grew 25% year-on-year in 2025 Q3 (FY2025 Q4); management sees GenStudio as the product that takes care of every aspect of content production for enterprises

GenStudio is our comprehensive offering spanning content ideation, creation, production, and activation. At MAX, we introduced new scaled content production capabilities through Firefly Services, enhanced model customization with Adobe Firefly Foundry, and integration with a growing ecosystem of ad networks. Ending ARR for the Adobe GenStudio solution grew over 25% year over year as the world’s leading brands increasingly turn to Adobe to power their content supply chain…

…GenStudio is really the offering that we want to provide that takes care of every aspect of their content production, whether it’s the creation part of the campaign, whether it’s then creating custom models, whether it’s training it at the back end, whether it’s automating it and then certainly delivery

Adobe’s new agentic web offerings, Adobe LLM Optimizer, Sites Optimizer, and Brand Concierge, had over 50 customers in 2025 Q3 (FY2025 Q4)

Strong customer demand for our newly introduced agentic web offerings with over 50 customers in Q4 for Adobe LLM Optimizer, Sites Optimizer, and Brand Concierge

Adobe’s new AI-influenced ARR is now more than one-third of overall business, or more than $2 billion

Total new AI-influenced ARR now exceeds one-third of our overall business as we integrate AI deeply into our solutions and continue to launch new AI-first offerings which are now included as part of the AI-influenced metric.

Adobe’s management has announced Adobe Firefly Foundry, which provides enterprises with proprietary foundation models trained on their own intellectual property; interest in Firefly Foundry is strong; Firefly Foundry is operated as a managed service; a media and entertainment company that was an existing Adobe customer with $10 million of ARR signed a Firefly Services and Firefly Foundry deal for an additional $7 million; the media and entertainment company was able to train its proprietary model in 2-3 months, and is already seeing increased efficiency in content production; management’s vision for Firefly Foundry is for a Foundry to be created specifically for every single brand or franchise

We also announced Adobe Firefly Foundry at MAX, which delivers enterprises with proprietary foundation models trained on their own content, data and brand catalogs. Interest in Firefly Foundry has been strong from enterprise marketing teams and media and entertainment companies, where there is increasing desire to produce content faster and more cost-effectively…

…We introduced Foundry, like you mentioned, at MAX. And the core value is that we train on their content, their data and their brand guidelines. We’re able to generate images, videos, audio and 3D models and we operate it as a managed service. So marketing teams can then train on product shots, environment styles, brand guidelines and media companies actually train on their individual franchises, whether it’s a movie or whether it’s a series, they’ll train their characters, their sets, their props, their locations so they can generate the whole thing…

…Let’s take a media and entertainment company we’re working on, and I’m rounding the numbers here, but to give you a little bit of context, let’s say, that organization was spending $10 million with us ARR on our core creative products that we’ve been selling with them. We ran a sales process with them, engagement with them for about 6 months. We were able to sell them Firefly Services and Firefly Foundry for about $7 million, so a pretty significant step up in terms of the engagement that we have with the customer. We were able to train models within 2 or 3 months, and now we’re running some of those models specifically as managed services for them for ideation and production processes. They’re already seeing increased efficiency in content production. They’re able to generate more production content, and they’re now getting into opportunities that are revenue-bearing opportunities like increasing the types of content they produce for social shorts and personalizing more of it for fan engagement with integration with our real-time CDP…

…The vision clearly is that for every single brand, if you’re a consumer company or for every single TV show or a movie, we can create a Foundry specifically for that particular franchise, as David said, because the ability to help with the automation of that content and production is massive.

Adobe’s management sees Adobe as the only company that can close the loop from the creation of an AI-powered advertising campaign, the execution of that campaign, to the commerce impact of the campaign

There are trillions of dollars that are spent in marketing and our opportunity is to really say we can help you make sure that, that content is more personalized. I’ll have Anil also add after this and deliver it. And the fact is that since we can deliver that content through an ad network and then we understand through our analytics where that is resulting in traffic, where that is resulting in conversion, where that’s not, we’re the only company that can close the loop from the creation of a campaign, the execution of that campaign as well as then actually looking at what that causes in terms of commerce. And so I think our real value proposition in all of this is that as increasingly people are saying, “Hey, I want to use AI to create more.” We can not only optimize and accelerate the amount of content that they’re producing, but we’re the only company that can then help them say, “Hey, this caused so much traffic.”

MongoDB (NASDAQ: MDB)

MongoDB’s management thinks that the AI wave has yet to meaningfully impact MongoDB’s results; it’s still early days, but management is already seeing AI startups building applications on MongoDB; management is seeing large enterprises develop AI agents on MongoDB, but these agents are pilot projects and there are currently no AI agents running in production that can fundamentally transform businesses or serve customers better; management thinks there’s a lot of work needed to change an AI application prototype into one that is enterprise-ready; management is seeing that industries that are regulated have very different requirements for an AI agent to be in production compared to being in prototype; management is seeing enterprises trying out, and churning through, many different AI coding agents

All of this momentum in the core business is happening before the AI wave has meaningfully impacted our results. We are still early, but the signs are encouraging from AI-native start-ups building intelligent applications on MongoDB to large enterprises developing AI agents that will reshape how they operate…

…There are various co-pilots when it comes to productivity types of applications that are happening inside of an organization, whether it’s a bank or a health care organization or a manufacturing organization. But what I have not seen is truly AI agents running in production that fundamentally transform the business or serve customers better. There are many, many pilots still going on…

…We’re clearly seeing a lot of, I would say, prototyping and iteration. I would say the enterprise requirements still have a pretty strong and stringent requirements around security and durability and performance. So while there’s a big difference between coming out with the prototype and having a production-grade system that an enterprise can truly rely on trust. And so there is still a lot of work required to make those applications enterprise class…

…When I speak to customers who I’ve been speaking for a long time, in regulated industries, which is financial services, which is health care, which is public sector, the requirement for an AI agent to be in production versus prototype are vastly different, and they are looking for governance, auditability, this and that, while the innovation and the need for the speed is very high. So I have not seen — like customers will tell me, CJ I have 10 agents in production, 15 agents in production. And when I really asked them, I say, are they really customer-facing? Can they be audited on the probabilistic outcome they derive? The answer is, oh, we are still working through that…

…Even the environment on which they are building agents, they are telling me they try one, it doesn’t work, they move on to the next one. So the churn for some of these AI companies that deliver these tools is also very real.

MongoDB’s management thinks that AI applications must connect LLMs (large language models) with companies’ proprietary data, and this connection is an information retrieval problem that requires a very different architecture from the rigid tabular stores that traditional software depended on; management thinks that MongoDB’s document database model has a structural advantage with the architecture that AI applications require; the Voyage MongoDB models are #1 on the Hugging Face retrieval embedding benchmark; MongoDB’s database is the #1 vector database on DB-Engines; MongoDB’s improvements to its embedding and reranking models have driven meaningful accuracy gains and lowered LLM hallucinations; management is hearing from AI native companies that alternatives to MongoDB and relational databases do not scale for AI workloads 

AI applications must connect what LLMs know with what companies know, which is their proprietary data, systems and real-time context. This is fundamentally an information retrieval problem, and it requires a very different architecture than the last generation of software. Rapidly evolving AI models uncover new complex properties about entities and rigid tabular stores cannot deliver the real-time high accuracy performance that AI systems require. At the same time, AI is dramatically increasing the speed at which applications are built and iterated and fixed database schemas simply cannot keep pace.

This is where MongoDB has a structural advantage. Our document model, natively JSON, is built for diverse, fast-changing and interdependent data. Our integrated search, vector search and Voyage embeddings remove the need for brittle bolt-ons, and we are seeing industry-leading results: number one on the Hugging Face retrieval embedding benchmark with Voyage MongoDB models, and the #1 vector database on DB-Engines. Advances in our embedding and reranking models drive meaningful accuracy gains, enabling AI applications to deliver more grounded responses with fewer LLM hallucinations, while lowering storage cost and query cost through smaller, more efficient embeddings…

…Speaking to my network in Silicon Valley with AI-native companies or digital-native companies, what I hear from them is that certain alternatives on relational database just do not scale because AI workloads are fundamentally around unstructured and semi-structured data.
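The retrieval architecture management describes, storing documents as embeddings and answering a query by nearest-neighbour lookup, can be sketched in a few lines. The 3-dimensional vectors below are toy values chosen for illustration; real systems use embedding models (such as Voyage's) that produce hundreds or thousands of dimensions, with approximate-nearest-neighbour indexes instead of a linear scan.

```python
import math

# Minimal sketch of embedding-based retrieval: each document is stored
# as a dense vector, and a query is answered by returning the closest
# document by cosine similarity.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy corpus: document name -> embedding vector (values are illustrative).
docs = {
    "refund policy":   [0.9, 0.1, 0.0],
    "shipping times":  [0.1, 0.8, 0.2],
    "api rate limits": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]),
                    reverse=True)
    return ranked[:k]

# A query embedding that sits close to the "refund policy" document.
best = retrieve([0.85, 0.15, 0.05])
```

This is the step that grounds an LLM in proprietary data: the retrieved documents are passed to the model as context, so answer quality depends directly on embedding accuracy, which is why management emphasises the Voyage models' benchmark results.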

The AI-powered hiring startup Mercor is using MongoDB Atlas to store AI data behind its platform that directly connects professionals to AI model training and evaluation roles; Mercor is also using Voyage embeddings and Atlas Vector Search; MongoDB Atlas is able to support Mercor’s 50% monthly growth, whereas Mercor’s previous solution, Postgres, was not able to

Mercor, which is redefining hiring with its fully automated platform that uses AI to assess and match talent with the opportunities they are best suited for. Mercor uses MongoDB Atlas to store the AI data behind its platform that directly connects professionals to AI model training and evaluation roles. Originally a self-serve customer, the company is also utilizing Voyage embeddings and Atlas Vector Search. Atlas has scaled to support Mercor’s 50% month-over-month growth, allowing the company to keep its software engineering team lean and agile as it expands to over $10 billion in value…

…In my remarks, I shared that there is a super high growth AI company that is doing very, very well and will become a very large company. I have absolutely no doubts about that. They were not able to scale with Postgres and a few other technologies, Redis and so on, that they were using, and they moved completely to MongoDB, and seeing that week-over-week and month-over-month growth is super inspiring. And I spoke to the hyperscaler where this workload is running, and they are seeing the same: wow, this company is doing really well. So that’s built on MongoDB, because Postgres had scaling issues.

A global media company running multi-modal content recommendation workloads switched from Elasticsearch to MongoDB Atlas and MongoDB Atlas Vector Search after hitting a performance wall with Elasticsearch; the media company was able to integrate Voyage AI models in just weeks; MongoDB has helped the media company cut latency by 90%, reduce operational spend by 65%, and increase click-through rates by 35%

A highly influential global media company aimed to increase engagement via enhanced content recommendations for its vast repository of multimodal assets across its 70-plus websites. Their existing stack, powered by Elasticsearch, hit a performance wall, struggling with the complexity of new embedding models. Recognizing that [ rigid ] systems stifle innovation, the engineering team re-architected on MongoDB Atlas and MongoDB Atlas Vector Search. Working with MongoDB experts to deliver a proof of concept in just weeks, they integrated Voyage AI models directly alongside their data. The solution scaled effortlessly, cutting latency by 90%, reducing operational spend by 65%, and driving a 35% increase in click-through rates, ultimately providing millions of global readers with a seamless, deeply personalized discovery journey.

MongoDB’s management is seeing the multi-cloud or public cloud transformation trend continue to happen, and will do so for the next 5-7 years; management thinks it’s possible that the emergence of AI is driving higher demand for application modernisation

The modernization effort, whether it’s a workload that may be just running on-prem in a large enterprise, or a workload that is moving to the cloud, or sometimes to multiple clouds for resiliency, that transformation is still going on. In speaking to a large telecommunications company, a large health care company, a large tech company, and I can cite you many other examples, I was pretty overwhelmed to understand that those transformations are still underway. In just a recent conversation, the CTO of a large telecommunications company said that they are moving 1,300-plus applications to another hyperscaler and trying to determine which workloads are best suited for MongoDB. So the whole multi-cloud or public cloud transformation is still going on, and my intuitive sense, in speaking to these customers, is that it will be going on for at least the next 5 to 7 years…

…This is my personal experience in building AI technologies in the past: the AI team is typically a separate team from the core data team, and the AI team relies on the core data team. And if the core data team moves slowly, then AI teams get really frustrated, because innovation velocity is how they measure themselves. So my personal experience was, hey, when the core data team is not agile and their schemas are not flexible, it actually slows AI down. So there are definitely some facts behind your theory that the AI revolution, which we are still in the early stages of, is potentially driving modernization in the other parts of the enterprise.

A fast-growing AI startup that built its own vector database decided to give Voyage’s AI embedding models a try; if the startup sees good results with Voyage, it will switch from its in-house vector database to MongoDB; a very large customer of MongoDB has deep appreciation for the Voyage AI embedding models and has already approved them for 2 big workloads

I spoke to a fairly successful AI-native company that is doing decent ARR and growing very fast. And when I said, hey, have you considered MongoDB, to the founder-CEO, who is very technical, he said, CJ, we didn’t, we built our own vector database and so on. And while I was speaking to him, Alex, about 10 days ago, once he looked at the portfolio, he basically said, let me start with embeddings first. So we are going to try. Of course, we have to prove to him why our embeddings improve his accuracy on search and so on, and improve the performance. So he said, let’s start with the embedding models first from Voyage AI; once that works, CJ, I’m willing to replace my vector DB, which we have homegrown, with MongoDB, and oh, by the way, if that works well, eventually I’m willing to swap out my operational database as well and use MongoDB…

…I’m also seeing, in a very large customer of MongoDB, I spoke to somebody who is running the AI initiatives, and they love the Voyage AI embedding and reranking models, and they’ve already approved them for 2 big workloads.
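The accuracy gains management attributes to embeddings and reranking come down to vector similarity: documents whose embeddings lie closest to the query embedding are returned first. A toy sketch of that ranking step, with made-up vectors standing in for real embedding-model outputs:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank(query_vec, docs):
    """Return (doc_id, score) pairs sorted by similarity, best first."""
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in docs.items()]
    return sorted(scored, key=lambda p: p[1], reverse=True)

# Toy 3-dimensional vectors standing in for real embedding outputs.
docs = {
    "returns_policy": [0.9, 0.1, 0.0],
    "pricing_page":   [0.1, 0.9, 0.2],
}
query = [0.8, 0.2, 0.1]  # embedding of e.g. "how do refunds work?"
print(rank(query, docs)[0][0])  # "returns_policy" ranks first
```

A better embedding model places semantically related texts closer together in this space, which is why embedding quality translates directly into retrieval accuracy and fewer hallucinated answers downstream.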

MongoDB’s management sees AI coding tools as a tailwind for MongoDB because it increases the pace of software creation, and hence, drives demand for databases

Clearly, with the advent of code-gen tools, the rate and pace of software development is only going to increase. And as I think we’ve said in the past, that’s one of the big reasons why we think AI is a tailwind. It’s just the ability to produce more software and more databases, with more and more strategies being encapsulated in software. So from that point of view, we think that’s all good news for us.

NVIDIA (NASDAQ: NVDA)

NVIDIA’s management is seeing AI going everywhere, doing everything, all at once

AI is going everywhere, doing everything, all at once.

NVIDIA’s management is seeing off-the-charts demand for Blackwell; management has visibility to $0.5 trillion of revenue for its Blackwell and Rubin platforms from the start of 2025 through the end of 2026, with about $0.35 trillion of that expected over the next 14 months; management sees opportunities for Blackwell and Rubin revenue over that period to exceed $0.5 trillion

Blackwell sales are off the charts, and cloud GPUs are sold out…

…We currently have visibility to $0.5 trillion in Blackwell and Rubin revenue from the start of this year through the end of calendar year 2026…

…[Question] You talked about the $500 billion of revenue for Blackwell plus Rubin in ’25 and ’26 at GTC. At that time, you talked about $150 billion of that already having been shipped. So as the quarter is wrapped up, are those still kind of the general parameters that there’s $350 billion in the next kind of 14 months or so?

[Answer] Yes, that’s correct. We are working toward our $500 billion forecast, and we are on track for that, as we have finished some of the quarters and now have several quarters in front of us to take us through the end of calendar year ’26. The number will grow. And I’m sure we will see additional needs for compute that will be shippable by fiscal year ’26. So we shipped $50 billion this quarter, but we would not be finished if we didn’t say that we’ll probably be taking more orders… There’s definitely an opportunity for us to have more on top of the $500 billion that we announced.

NVIDIA’s management sees $3 trillion to $4 trillion of annual AI infrastructure build by 2030, with NVIDIA’s platforms being the superior choice; demand for AI infrastructure continues to exceed management’s expectations; management thinks the hyperscalers’ workload transitions would be half of the company’s long-term opportunity; management thinks the other half of NVIDIA’s long-term opportunity would come from higher compute spend by foundation model builders; the dollar-content of NVIDIA chips in AI data centers has been increasing with each successive generation

By executing our annual product cadence and extending our performance leadership through full stack design, we believe NVIDIA will be the superior choice for the $3 trillion to $4 trillion in annual AI infrastructure build we estimate by the end of the decade. Demand for AI infrastructure continues to exceed our expectations…

…We see the transition to accelerated computing and generative AI across current hyperscaler workloads contributing toward roughly half of our long-term opportunity. Another growth pillar is the ongoing increase in compute spend driven by foundation model builders such as Anthropic, Mistral, OpenAI, Reflection, Safe Superintelligence, Thinking Machines Lab and xAI, all scaling compute aggressively to scale intelligence…

…[Question] What assumptions are you making on NVIDIA content per gigawatt in that $500 billion number? Because we have heard numbers as low as $25 billion per gigawatt of content to as high as $30 billion or $40 billion per gigawatt. So I’m curious what power and what dollar per gig assumptions you are making as part of that $500 billion number.

[Answer] In each generation, from Ampere to Hopper, from Hopper to Blackwell, Blackwell to Rubin, our part of the data center increases. The Hopper generation was probably something along the lines of 20-some-odd, 20 to 25. The Blackwell generation, Grace Blackwell particularly, is probably, say, 30 plus or minus, and then Rubin is probably higher than that.

The installed base of NVIDIA GPUs, including the older generation Hopper and Ampere families, are fully utilised; NVIDIA’s GPUs have long useful lives, which gives them a significant TCO (total cost of ownership) advantage over competing chips; the long useful lives of NVIDIA’s GPUs is the result of the company’s CUDA software stack; NVIDIA’s 6-year-old A100 GPUs are still fully utilised today

Our GPU installed base, both new and previous generations, including Blackwell, Hopper and Ampere is fully utilized…

…The long useful life of NVIDIA’s CUDA GPUs is a significant TCO advantage over accelerators. CUDA’s compatibility across our massive installed base extends the life of NVIDIA systems well beyond their original estimated useful life…

…Most accelerators without CUDA and NVIDIA’s time-tested and versatile architecture become obsolete within a few years as model technologies evolve. Thanks to CUDA, the A100 GPUs we shipped 6 years ago are still running at full utilization today, powered by a vastly improved software stack.

NVIDIA’s Data Center revenue again had very strong growth in 2025 Q3 (FY2026 Q3), driven partly by the GB300 chip from the Blackwell family; GB300 was 2/3 of total Blackwell revenue in 2025 Q3 (FY2026 Q3); the Blackwell Ultra chip delivers 5x faster time to train than Hopper; Blackwell had the highest performance and lowest total cost of ownership across every model and use case in the InferenceMAX benchmark; Blackwell delivers 10x higher performance per watt and 10x lower cost per token compared to the H200 on the DeepSeek-R1 model; NVIDIA and TSMC celebrated the first Blackwell wafer produced on US soil in October 2025

Record Q3 data center revenue of $51 billion (sic) [ $51.2 billion ] increased 66% year-over-year, a significant feat at our scale. Compute grew 56% year-over-year, driven primarily by the GB300 ramp, while networking more than doubled, given the onset of NVLink scale up and robust double-digit growth across Spectrum-X Ethernet and Quantum-X InfiniBand…

…GB300 crossed over GB200 and contributed roughly 2/3 of total Blackwell revenue. The transition to GB300 has been seamless, with production shipments to the major cloud service providers, hyperscalers and [ GP clouds ], and it is already driving their growth…

…In the latest MLPerf training results, Blackwell Ultra delivered 5x faster time to train than Hopper. NVIDIA swept every benchmark. Notably, NVIDIA is the only training platform to leverage FP4 while meeting MLPerf’s strict accuracy standards. In SemiAnalysis’s InferenceMAX benchmark, Blackwell achieved the highest performance and lowest total cost of ownership across every model and use case. Particularly important is Blackwell’s NVLink performance on mixture of experts, the architecture for the world’s most popular reasoning models. On DeepSeek-R1, Blackwell delivered 10x higher performance per watt and 10x lower cost per token versus H200, a huge generational leap fueled by our extreme co-design approach…

…Last month, in partnership with TSMC, we celebrated the first Blackwell wafer produced on U.S. soil.

NVIDIA’s management is seeing the hyperscalers transitioning their workloads from classical machine learning to generative AI; management thinks NVIDIA’s CUDA software stack excels at both classical machine learning and generative AI; management is seeing the hyperscalers’ capex expectations for 2026 increase by $200 billion since the start of the year to $600 billion; management thinks the hyperscalers’ workload transitions would be half of the company’s long-term opportunity

The world’s hyperscalers, a trillion-dollar industry, are transforming search, recommendations and content understanding from classical machine learning to generative AI. NVIDIA CUDA excels at both and is the ideal platform for this transition, driving infrastructure investment measured in hundreds of billions of dollars…

…Expectations for the top CSPs and hyperscalers in 2026, aggregate CapEx have continued to increase and now sit roughly at $600 billion, more than $200 billion higher relative to the start of the year…

…We see the transition to accelerated computing and generative AI across current hyperscaler workloads contributing toward roughly half of our long-term opportunity.

NVIDIA’s management is seeing Meta Platforms increase users’ time spent on Facebook and Threads because its AI recommendation systems are surfacing better content; Meta Platforms’ generative AI foundation model for ad recommendations, GEM, drove a more-than-5% increase in ad conversions on Instagram and a 3% gain on Facebook feed in 2025 Q2

At Meta, AI recommendation systems are delivering higher quality and more relevant content, leading to more time spent on apps such as Facebook and Threads…

…Meta’s GEM, a foundation model for ad recommendations trained on large-scale GPU clusters exemplifies this shift. In Q2, Meta reported over a 5% increase in ad conversions on Instagram and 3% gain on Facebook feed driven by generative AI-based GEM. Transitioning to generative AI represents substantial revenue gains for hyperscalers.

NVIDIA’s management is seeing the 3 scaling laws of pre-training, post-training, and inference, to be intact

The 3 scaling laws, pre-training, post-training and inference, remain intact. In fact, we see a positive virtuous cycle emerging whereby the 3 scaling laws and access to compute are generating better intelligence and, in turn, increasing adoption and profits…

…Just today, I was reading a text from Demis, and he was saying that pre-training and post-training are fully intact, and that Gemini 3 takes advantage of the scaling laws and saw a huge jump in model performance.

NVIDIA’s management is observing a proliferation of AI agents; RBC is using agentic AI to reduce report generation time from hours to minutes

We are also witnessing a proliferation of agentic AI across various industries and tasks. Companies such as Cursor, Anthropic, OpenEvidence, Epic and Abridge are experiencing a surge in user growth as they supercharge the existing workforce, delivering unquestionable ROI for coders and health care professionals…

…RBC is leveraging agentic AI to drive significant analyst productivity, slashing report generation time from hours to minutes.

NVIDIA’s management continues to engage the US and China governments on the sale of American chips into China

While we were disappointed in the current state that prevents us from shipping more competitive data center compute products to China, we are committed to continued engagement with the U.S. and China governments and will continue to advocate for America’s ability to compete around the world. To establish a sustainable leadership position in AI computing, America must win the support of every developer and be the platform of choice for every commercial business, including those in China.

NVIDIA’s next generation of chips, the Rubin family, are on track for volume production in 2026 H2; 7 different chips go into the Vera Rubin platform; management sees Rubin delivering much better performance than Blackwell; Rubin’s manufacturing is compatible with Blackwell, and the manufacturing ecosystem is ready to ramp Rubin

The Rubin platform is on track to ramp in the second half of 2026. Powered by 7 chips, the Vera Rubin platform will once again deliver an X-factor improvement in performance relative to Blackwell…

…Rubin, our third-generation rack-scale system, substantially redefines manufacturability while remaining compatible with Grace Blackwell. Our supply chain, data center ecosystem and cloud partners have now mastered the build-to-installation process of NVIDIA’s rack architecture. Our ecosystem will be ready for a fast Rubin ramp.

NVIDIA’s networking revenue had very strong sequential as well as year-on-year growth in 2025 Q3 (FY2026 Q3), driven by strong demand across Spectrum-X Ethernet, InfiniBand and NVLink (networking revenue was $7.3 billion in 2025 Q2); the majority of AI deployments now include NVIDIA networking switches; NVIDIA Ethernet attach rates are now roughly on par with InfiniBand; major AI players are building gigawatt AI data centers with Spectrum-X Ethernet; management recently introduced Spectrum-XGS, a scale-across technology; NVIDIA is the only company with AI networking solutions for scale up, scale out, and scale across; NVIDIA recently announced a collaboration to link Fujitsu’s CPUs and NVIDIA GPUs via NVLink Fusion; NVIDIA has a partnership with Intel to connect Intel’s CPUs and NVIDIA GPUs with NVLink; Arm recently announced that it will be using NVLink IP for customers to connect Arm CPU designs with NVIDIA’s platforms; management sees NVLink as the only proven scale-up networking solution in the market today

Our networking business, purpose-built for AI and now the largest in the world, generated revenue of $8.2 billion, up 162% year-over-year, with NVLink, InfiniBand and Spectrum-X Ethernet all contributing to growth. We are winning in data center networking, as the majority of AI deployments now include our switches, with Ethernet GPU attach rates roughly on par with InfiniBand. Meta, Microsoft, Oracle and xAI are building gigawatt AI factories with Spectrum-X Ethernet switches, and each will run its operating system of choice, highlighting the flexibility and openness of our platform.

We recently introduced Spectrum-XGS, a scale across technology that enables gigascale AI factories. NVIDIA is the only company with AI scale up, scale out and scale across platforms, reinforcing our unique position in the market as the AI infrastructure provider.

Customer interest in NVLink Fusion continues to grow. We announced a strategic collaboration with Fujitsu in October, where we will integrate Fujitsu’s CPUs and NVIDIA GPUs via NVLink Fusion, connecting our large ecosystems. We also announced a collaboration with Intel to develop multiple generations of custom data center and PC products, connecting NVIDIA’s and Intel’s ecosystems using NVLink. This week at Supercomputing ’25, Arm announced that it will be integrating NVLink IP for customers to build CPU SoCs that connect with NVIDIA. Currently in its fifth generation, NVLink is the only proven scale-up technology available on the market today.

NVIDIA’s open source inference framework, NVIDIA Dynamo, has now been adopted by every major cloud service provider

NVIDIA Dynamo, an open source, low-latency, modular inference framework, has now been adopted by every major cloud service provider, leveraging Dynamo’s enablement of disaggregated inference. With the resulting increase in the performance of complex AI models, such as MoE models, AWS, Google Cloud, Microsoft Azure and OCI have boosted AI inference performance for enterprise cloud customers.

NVIDIA’s management is working on a strategic partnership with OpenAI to deploy AI data centers and for NVIDIA to invest in OpenAI; NVIDIA is serving OpenAI through Microsoft Azure, Oracle Cloud Infrastructure (OCI), and CoreWeave and will continue to do so in the future; management is happy to support OpenAI’s self-build AI infrastructure; management recently inked a partnership with Anthropic that will see Anthropic use NVIDIA for the first time; management will optimise Anthropic’s models for CUDA, and optimise future NVIDIA chips for Anthropic workloads; Anthropic’s initial commitment to NVIDIA is for up to 1 gigawatt of compute capacity; the investments NVIDIA has been making in the AI ecosystem are to expand the reach of CUDA; management expects NVIDIA’s investment in OpenAI to generate extraordinary returns

We are working on a strategic partnership with OpenAI, focused on helping them build and deploy at least 10 gigawatts of AI data centers. In addition, we have the opportunity to invest in the company. We serve OpenAI through their cloud partners, Microsoft Azure, OCI and CoreWeave, and we will continue to do so for the foreseeable future. As they continue to scale, we are delighted to support the company as it adds self-built infrastructure; we are working towards a definitive agreement and are excited to support OpenAI’s growth.

Yesterday, we celebrated an announcement with Anthropic. For the first time, Anthropic is adopting NVIDIA, and we are establishing a deep technology partnership to support Anthropic’s fast growth. We will collaborate to optimize Anthropic models for CUDA and deliver the best possible performance, efficiency and TCO. We will also optimize future NVIDIA architectures for Anthropic workloads. Anthropic’s compute commitment initially includes up to 1 gigawatt of compute capacity with Grace Blackwell and Vera Rubin systems…

…All of the investments that we’ve done so far are associated with expanding the reach of CUDA, expanding the ecosystem…

…That relationship we’ve had since 2016; I delivered the first AI supercomputer ever made to OpenAI. And so we’ve had a close and wonderful relationship with OpenAI since then. And everything that OpenAI does runs on NVIDIA today. All the clouds that they deploy in, whether it’s training or inference, run NVIDIA, and we love working with them. The partnership that we have with them is one where we could work even deeper from a technical perspective, so that we could support their accelerated growth. This is a company that’s growing incredibly fast. And don’t just look at what is said in the press; look at all the ecosystem partners and all the developers that are connected to OpenAI. They’re all driving consumption of it, and the quality of the AI that’s being produced is a huge step up since a year ago. The quality of responses is extraordinary. So we invest in OpenAI for a deep partnership in co-development, to expand our ecosystem and support their growth. And of course, rather than giving up a share of our company, we get a share of their company. We invested in one of the most consequential, once-in-a-generation companies, and we have a share of it. And so I fully expect that investment to translate to extraordinary returns.

NVIDIA’s management sees physical AI as a multi-trillion dollar opportunity; physical AI is already a multi-billion business for NVIDIA; leading US robotics companies are using NVIDIA’s products, including Omniverse; many enterprises, including TSMC, are building Omniverse digital twin factories; robotics companies, including Amazon Robotics, are using NVIDIA Cosmos World Foundation Models, Omniverse, and Jetson, to develop their robots

Physical AI is already a multibillion-dollar business addressing a multitrillion-dollar opportunity and the next leg of growth for NVIDIA. Leading U.S. manufacturers and robotics innovators are leveraging NVIDIA’s 3-computer architecture to train on NVIDIA, test in Omniverse and deploy real-world AI. PTC and Siemens introduced new services that bring Omniverse-powered digital twin workflows to their extensive installed base of customers. Companies including Belden, Caterpillar, Foxconn, Lucid Motors, Toyota, TSMC and Wistron are building Omniverse digital twin factories to accelerate AI-driven manufacturing and automation. Agility Robotics, Amazon Robotics, Figure and Skild AI are building on our platform, tapping offerings such as NVIDIA Cosmos World Foundation Models for development, Omniverse for simulation and validation, and Jetson to power next-generation intelligent robots.

NVIDIA is partnering with Uber for the world’s largest Level 4 ready autonomous fleet

We are partnering with Uber to scale the world’s largest Level 4 ready autonomous fleet built on the new NVIDIA Hyperion L4 robotaxi reference architecture.

NVIDIA’s management is not seeing an AI bubble; management sees 3 computing transformations happening in the world simultaneously and NVIDIA is addressing all of them; the 3 transformations are (1) the transition from CPUs to GPUs, (2) transformation of existing applications by AI, and (3) AI agents

There’s been a lot of talk about an AI bubble. From our vantage point, we see something very different…

…The world is undergoing 3 massive platform shifts at once, for the first time since the dawn of Moore’s Law, and NVIDIA is uniquely addressing each of the 3 transformations.

The first transition is from CPU general-purpose computing to GPU-accelerated computing as Moore’s Law slows. The world has a massive investment in non-AI software, from data processing to science and engineering simulations, representing hundreds of billions of dollars in cloud computing spend each year. Many of these applications, which once ran exclusively on CPUs, are now rapidly shifting to CUDA GPUs. Accelerated computing has reached a tipping point.

Secondly, AI has also reached a tipping point and is transforming existing applications while enabling entirely new ones. For existing applications, generative AI is replacing classical machine learning in search ranking, recommender systems, ad targeting, click-through prediction and content moderation, the very foundations of hyperscale infrastructure…

…Now a new wave is rising, agentic AI systems capable of reasoning, planning and using tools from coding assistance like Cursor and Claude Code to radiology tools like Aidoc, legal assistants like Harvey and AI chauffeurs like Tesla FSD and Waymo.  

NVIDIA’s management thinks the company excels at each phase of AI, from pre-training to inference

NVIDIA is unlike any other accelerator. We excel at every phase of AI from pre-training and post training to inference.

The pioneers of agentic AI that management is seeing are among the fastest-growing companies in the world today

These systems mark the next frontier of computing, the fastest-growing companies in the world today, OpenAI, Anthropic, xAI, Google, Cursor, Lovable, Replit, Cognition AI, OpenEvidence, Abridge, Tesla are pioneering agentic AI.

The fastest-growing applications in history are AI-powered coding applications

The fastest-growing applications in history are a combination of Cursor, Claude Code, OpenAI’s Codex and GitHub Copilot. These applications are the fastest-growing in history. And they’re not just used by software engineers; because of vibe coding, they’re used by engineers and marketeers and supply chain planners all over companies.

NVIDIA’s platform is the only one in the world that runs every AI model

NVIDIA’s architecture, NVIDIA’s platform, is the singular platform in the world that runs every AI model. We run OpenAI, we run Anthropic, we run xAI. Because of our deep partnership with Elon and xAI, we were able to bring that opportunity to Saudi Arabia, to the KSA, so that HUMAIN could also host xAI. We run xAI, we run Gemini, we run Thinking Machines. Let’s see, what else do we run? We run them all. And so, not to mention, we run the science models, the biology models, DNA models, gene models, chemical models and all the different fields around the world. It’s not just cognitive AI that the world uses; AI is impacting every single industry.

NVIDIA’s management hopes inference will become a large portion of the use case for NVIDIA GPUs because that will suggest that people are using AI in more applications

[Question] In the past, you’ve talked about roughly 40% of your shipments tied to AI inference. I’m wondering, as you look forward into next year, where do you expect that percentage could go in, say, a year’s time?

[Answer] Inference, because of chain of thought and reasoning capabilities, AIs are essentially reading and thinking before they answer. And the amount of computation necessary as a result has gone completely exponential. I think it’s hard to know exactly what the percentage will be at any given point in time and for whom. But of course, our hope is that inference is a very large part of the market, because if inference is large, then what it suggests is that people are using AI in more applications and they’re using it more frequently. And we should all hope for inference to be very large.

NVIDIA’s management sees a number of important constraints on the growth of the AI ecosystem, namely, power and financing, but they are all solvable problems 

[Question] Many of your customers are pursuing behind-the-meter power, but like what’s the single biggest bottleneck that worries you that could constrain your growth? Is it power? Or maybe it’s financing or maybe it’s something else like memory or even foundry?

[Answer] These are all issues, and they’re all constraints. And the reason for that is, when you’re growing at the rate that we are and the scale that we are, how could anything be easy?… On the one hand, we are transitioning computing from general-purpose, classical or traditional computing to accelerated computing and AI. On the other hand, we created a whole new industry called AI factories: the idea that in order for software to run, you need these factories to generate it, generating every single token instead of retrieving information that was pre-created. And so I think this whole transition requires extraordinary scale, all the way through the supply chain. Of course, the supply chain we have much better visibility and control over, because obviously we’re incredibly good at managing our supply chain. We have great partners that we’ve worked with for 33 years, so on the supply chain part of it, we’re quite confident. Now, looking down our supply chain, we’ve established partnerships with so many players in land and power and shell. And of course, financing. None of these things are easy, but they’re all tractable and they’re all solvable things.

NVIDIA’s management thinks it’s incredibly hard for ASICs (application specific integrated circuits) for AI workloads to compete against NVIDIA GPUs because NVIDIA’s GPU systems (1) are now incredibly complex and (2) can run every AI model

[Question] I’m curious if your thoughts around the role that AI ASICs or dedicated XPUs play in these architecture build-outs has changed at all? Have you seen, I think you’ve been fairly adamant in the past that some of these programs never really see deployments. But I’m curious if we’re at a point where maybe that’s even changed more in favor of just GPU architecture.

[Answer] Back in the Hopper and Ampere days, we would build one GPU; that was the definition of an accelerated AI system. But today, we’ve got to build entire racks and 3 different types of switches: scale-up, scale-out and scale-across switches. And it takes a lot more than 1 chip to build a compute node anymore. Everything about that computing system has changed, because AI needs to have memory; AI didn’t use to have memory at all. Now it has to remember things, and the amount of memory and context it has is gigantic. The memory architecture implication is incredible. The diversity of models, from mixture-of-experts models to dense models to diffusion models, not to mention biological models that are based on the laws of physics, the list of different types of models has exploded in the last several years. And so the complexity of the problem is much higher…

…We’re now the only architecture in the world that runs every AI model. Every frontier AI model, we run; open-source AI models, we run incredibly well. We run science models, biology models, robotics models. We run every single model. We’re the only architecture in the world that can claim that. It doesn’t matter whether you’re autoregressive or diffusion-based. We run everything and we run it for every major platform, as I just mentioned. So we run every model.

Okta (NASDAQ: OKTA)

Okta’s products help customers build more secure AI agents and manage their AI agents in a secure and scalable way; management thinks AI agents will redefine the identity security landscape; AI agents are also vulnerable without proper security governance, so it’s essential for enterprises to secure AI agents; Okta has been focusing on securing AI agents (it is the company’s #1 priority now) and management thinks the space will be the next growth leg in identity security; management thinks Okta is the best-positioned to be the identity layer for AI agents; management recently launched Auth0 for AI agents, which allows customers to build secure agents; management has seen a recent surge in inbound interest for Okta’s solutions to manage the security of AI agents; it’s still early days for Okta in securing AI agents, but the company is already working with over 100 current customers; management thinks Okta is the only company that is able to secure AI with a modern and neutral platform; the amount of interest in Okta’s solutions for securing AI agents is unlike anything management has seen; management is seeing a large number of enterprises getting stuck with AI projects because they’re unable to give the right level of access to AI agents; management thinks the market opportunity for the identity layer for AI agents is even bigger than Okta’s current opportunity set; only 10% of companies with AI agents in production are confident their agents are secured

The simple way to think about it is that Okta is helping customers both build more secure AI agents and manage their AI agents in a secure and scalable way. The emergence of agentic technology is redefining the identity security landscape. AI security is identity security. AI agents represent a new, powerful identity type. However, without proper security governance, they are also highly vulnerable. Securing AI agents and nonhuman identities is not a feature. It’s essential for any business looking to safely scale its adoption and deployment of AI. If an organization does not secure its agents today, it risks undoing years of security improvements and leaving itself vulnerable to new identity-based attacks.

Okta has prioritized our efforts to focus on helping customers solve this business imperative and capture what we believe will be the next catalyst for growth and meaningful market within the identity security space. Okta’s neutral and unified platform, coupled with our installed base of over 20,000 customers, positions us best to become the identity layer for AI agents. That’s why we’re so excited about the recent launch of Auth0 for AI agents. Auth0 for AI agents allows customers to build secure agents, APIs and users more effortlessly across their B2B, B2C and internal app ecosystem…

…Over just the past few months, we have experienced a surge in inbound interest for our Agentic Security solutions to manage agents, Okta for AI agents. These organizations are looking for a single control plane to observe and manage agents of all types in a way that offers flexibility as the technology continues to evolve. They also want a solution that gives them control like the ability to embed fine-grain access into every agent. Okta is here to deliver…

…It’s very early days on this front, but we have already been engaged with over 100 of our current customers, which combined represent over $200 million in existing ARR…

…Okta is the essential identity layer to help customers build, observe and manage AI agents. We’re the only company that is able to secure AI with a modern and neutral platform, allowing us to deliver even greater value to our customers…

…[Question] When you think about the full deployment of this, how do I think about the dollar potential here? When you have customers that are spending $100,000 with you, by how much can AI truly elevate that total bill for them?

[Answer] I’ve been personally and the entire company is blown away by how interested customers and prospects are in this capability. I haven’t seen anything like this in my experience at Okta with a new capability or a new product set. So it’s very, very exciting…

…You take all the company’s data and you put it in a big data warehouse like Snowflake or Databricks or Palantir, and then the agents have way too much access. They can just see everything and they do unintended things. And so people are stuck and they’ve paused and they’re saying, wait a minute, we’re not going to roll these things out. And there’s a huge, huge cohort of companies that are trying to do something with AI and they’re stuck…

…Longer term, if you look at our market, we have a $50 billion TAM for workforce identity, a $30 billion TAM for customer identity. Owning and governing the agentic identity layer and securing AI can be a bigger TAM than both of those…

…The company’s #1 priority now is to take advantage of this opportunity. So we’re very clear in our R&D and our go-to-market, we’re going to focus on this opportunity…

…We shared a survey that we had run of a few hundred enterprise customers reporting that 91% of them had agents in production and only 10% of them were confident they had them secured.

A financial services company that is an existing Okta customer selected Okta for AI agents when it was deploying AI agents across its operations; the financial services company deals with sensitive data, so the security of its AI agents is critical; the addition of Okta for AI agents represented a significant ACV (annual contract value) uplift for Okta compared to the prior contract

A great early win with Okta for AI agents. It’s with a financial services customer that is in the midst of deploying AI agents across their operations. Given the sensitive nature of their data and the need to remain compliant with the regulatory environment, securing these agents was not optional. It was critical. They selected Okta for AI agents to secure their AI footprint and provide them with enhanced visibility and remediation capabilities for the agent identities, enforce access control, identity governance and threat detection. It was a great win-win. Okta is helping the customer to safely deploy AI across their business and the addition of Okta for AI agents represented a significant ACV uplift compared to their prior contract.

Okta’s management recently introduced a new open standard, Cross App Access, that helps with securing AI; Cross App Access enables AI agents to safely connect with other technologies; Cross App Access is now an extension of the Model Context Protocol (MCP); customers using Auth0 for AI agents to build agents get Cross App Access out of the box

Last quarter, you heard me talk about Okta’s role in the development of Cross App Access, which brings visibility and control to both agent-driven and app-to-app interactions. This allows IT teams to decide what apps are connecting and what information AI agents can access. I’m excited to share that as of last week, Cross App Access is now an extension of the Model Context Protocol, known as MCP, which helps validate that identity providers like Okta will act as the indispensable control plane for the AI enterprise…

…Customers that are using Auth0 for AI agents to build agents will get support for Cross App Access out of the box, meaning any agents that they build with Auth0 for AI agents will be discoverable by an IDP that also supports the Model Context Protocol. And Okta’s IDP also supports Cross App Access and the Model Context Protocol. So customers developing agents with our technology will be producing agents that any company can secure more precisely. And the Okta platform will help customers discover agents that have been deployed and then manage those agents as well.

Okta’s management thinks that a key driver for customers to consolidate onto Okta is technological change; past technological changes have been cloud and mobile, but the recent change driving consolidation towards Okta has been AI; management has been working with a Fortune 50 customer on replacing a multitude of competing products with Okta, because (1) the consolidation will save costs for the Fortune 50 company, and (2) the Fortune 50 company is using 5,500 applications but only 1,500 are able to be hooked up to its central identity system with its existing solutions and this is not feasible for the Fortune 50 company’s agentic projects

[Question] From your perspective, what gets customers over the hump and convinces them to consolidate IAM, governance, PAM, customer identity and any other components to Okta?

[Answer] It’s always wrapped up in some other technological change. If you’re not changing your data center, if you’re not changing your apps, if you’re not investing in AI, you’re not going to change identity. So in all the customers I work with, it’s about some other catalyzing technological change. For many years, it was cloud and building mobile apps and still cloud transformation. And — but what we’re seeing more and more is companies are trying to move technology so they could take advantage of AI. They’re modernizing apps. They’re modernizing their security stack so they can give AI agents access to all of their data resources, and that’s been a catalyzer…

…We’re working with one of our largest Fortune 50 customers on a wholesale replacement of Ping Identity, SailPoint, CyberArk, and several other identity vendors across their whole stack to standardize on Okta products. And the driver there is 2 things. It’s cost. They wanted to have less cost in their environment, and they want to have better-functioning and greater products. That’s part of the driver. But the bigger driver was actually something very simple, which is this company has 5,500 applications. And after all these years with these legacy vendors, they only had 1,500 of them hooked up to their central identity system. And so they’re thinking about an agentic future where they want to give their agents and their agent infrastructure access to every application that they have, and they only had a paved path for 1,500 of them because they only were able to get that many on their identity platform with the old technology. So when they think about standardizing, they think about moving all 5,500 applications to Okta.

Okta’s management thinks agentic commerce will be a very big deal; management thinks Okta’s Auth0 for AI agent product is the right solution to secure agentic commerce

I think it’s a big deal, Agentic Commerce. If you have a website that’s doing customer support or e-commerce, you’re going to have some version of agents on there very quickly if you don’t already. And if you’re building those agents, Auth0 for AI agents is the right solution. It shortcuts the ability to have those agents connect to multiple systems on the back end. It helps you put Fine Grained Authorization inside of your agentic flow. So it’s purpose-built, and I think it’s a big trend we’re talking about here.

Companies have a few identity and security challenges when deploying AI agents, namely, (1) their agents need to be discovered, (2) ensuring agents are only authorised to do very specific things, and (3) knowing what agents have been deployed in their environments; Okta helps companies solve all the identity and security challenges that come with deploying AI agents

Builders of agents need to solve for at least 2 distinct challenges.

One is ensuring their agents can be discovered. And the second is ensuring that agents are only authorized to do specific things, that they have access to specific corporate assets and not others. And Auth0 provides the capabilities to solve both of those, with support for Cross App Access and the Model Context Protocol: agents built through Auth0 can be discovered and managed properly. And Auth0’s Fine Grained Authorization allows agents to be built in a way that their privileges can be very finely tuned, which is hugely important to our customers in that space.

But the second part of that challenge that our customers have is they don’t know. They tell us they don’t know what agents are deployed in their environment. They don’t know what their users have turned on and what their users’ agents have access to. And this is the challenge of discoverability and being able to discover agents. So, on the Okta platform side, our Identity Security Posture Management product scans corporate networks to find service accounts and the privileges of those service accounts, but it will also now help discover agents that are implemented and deployed, as long as they support the Cross App Access protocol, the extension to MCP.

So the problem of discoverability is something they need help with, and we’re well positioned to help them with that. And the other related challenge is not only knowing that they exist, but then protecting the identity of those agents to ensure the agents can’t themselves be impersonated by a threat actor and to ensure that those agents are properly authorized to take the actions that they’re attempting to access.

So the Auth0 platform on the build side is hugely important for our customers and the Okta platform on the Discover and manage side is important for them as well. That also includes things like privileged access, allowing the agents to have tokens that are appropriately vaulted and governance, having them provisioned and be provisioned based on just-in-time requirements.
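The pattern described above, a registry where agents are discoverable and each agent identity carries finely tuned, per-resource privileges with unknown agents denied by default, can be illustrated with a small sketch. This is a hypothetical toy model, not Okta’s or Auth0’s actual API; all names are invented for illustration:

```python
# Toy model of discoverable agent identities with fine-grained authorization.
# Each agent carries an explicit allow-list of (resource, action) pairs,
# and every attempted action is checked against it.

from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                                 # human or team the agent acts for
    grants: set = field(default_factory=set)   # allowed (resource, action) pairs

    def is_allowed(self, resource: str, action: str) -> bool:
        return (resource, action) in self.grants

class AgentRegistry:
    """Toy control plane: only registered agents are discoverable."""
    def __init__(self):
        self._agents = {}

    def register(self, agent: AgentIdentity):
        self._agents[agent.agent_id] = agent

    def discover(self):
        # enumerate every agent deployed in the environment
        return list(self._agents)

    def authorize(self, agent_id: str, resource: str, action: str) -> bool:
        agent = self._agents.get(agent_id)
        # unknown (undiscovered) agents are denied by default
        return agent is not None and agent.is_allowed(resource, action)

registry = AgentRegistry()
registry.register(AgentIdentity(
    agent_id="support-bot-1",
    owner="support-team",
    grants={("crm:cases", "read"), ("crm:cases", "comment")},
))

print(registry.authorize("support-bot-1", "crm:cases", "read"))    # True
print(registry.authorize("support-bot-1", "crm:cases", "delete"))  # False: not granted
print(registry.authorize("rogue-agent", "crm:cases", "read"))      # False: not registered
```

The key design choice mirrored here is deny-by-default: an agent that is not registered (not discoverable) gets no access at all, and a registered agent gets only the actions it was explicitly granted.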

Okta’s management is seeing that most of the agentic projects companies are taking on involve agents that are built in-house; the deployment of agents by software vendors is a little slower

I would say that the actual most concrete implementations are agents they built themselves. I think that the deployment from the — some of the packaged application vendors you talked about are maybe a little bit more behind in terms of deployments.

Okta’s management is currently pricing Okta’s agentic products similarly to the company’s other products; the agentic products are priced on a per-agent basis; management is open to changing the pricing model based on what they learn, as the agentic model is still something new

The agentic products are priced similarly to our current products. Our current products are priced per user, the agentic products are priced per agent. So sometimes that can be a one-to-many relationship. You might have a few agents for a person. Sometimes they might be agents on their own. So I think we’re set up in a way that gives us flexibility as these things evolve in terms of how companies want to deploy agents to augment headcount, what they want to — how they want to deploy agents at the front end of processes before it ever gets to a person. And this is one of the advantages we have with all these customers and all this interest, we can figure this out quickly. And we can iterate on this quickly, and that’s how we’ve gotten to this pricing model because this is a new thing.

Okta’s management is currently not seeing major seat reductions at companies because of AI-related reductions in workforce; management is confident that Okta’s customer identity and agentic identity businesses will more than offset any reductions in its workforce identity business from AI-related reductions in workforce if it were to happen; management thinks a human employee will typically be bound to 5-10 AI agents

We are not — like everyone, we’re looking at what changes will happen in the global workforce at companies as they lean more on AI and technology to run their businesses. We’re not yet feeling a material headwind from — you mentioned seat reductions in the business. But were we to see that, we’re confident in our customer identity business offsetting that. We’re confident in our agentic identity business offsetting that. So in the aggregate, we view this shift in the industry as net upside for Okta…

…I think a lot of companies think about agents like this: software engineering is a great example. As a software engineer, you’re going to have 10 of these agents working for you all the time. They’re going to be reviewing code. They’re going to be doing security reviews. They’re going to be checking code in. They’re going to be running tests. And all those agents are going to be working on your behalf in some cases and have their own identity in others, and it’s just having the flexibility to support all those different use cases in addition to agents that would just run on their own. Your customer support agents or your agents sitting on your website accepting commerce are going to be on their own. They’re going to need access control, but they’re not bound to a user until maybe it gets lower down in the workflow…

…[Question] What’s that relationship being like in the example that we’ve seen so far, what is it like 1 to 10, 1 to 20?

[Answer] I think it’s like 5 to 10 per person.

Okta’s Auth0 and Workforce agentic products are both experiencing similar traction from customers; the customer-profile of the Auth0 and Workforce agentic products are different

[Question] What’s getting more traction? Is it the Auth0 solution or the workforce side? And then what do you think represents the larger opportunity and why?

[Answer] They’re both getting about the same amount of traction. It’s a little bit different, though. I think a lot of the interest in Auth0 for AI agents is more online; people, AI developers, find out about it on the website. They do self-service, then upgrade to enterprise. It’s a little bit of a different motion. Okta for AI agents, which is for IT and security, is very much an enterprise motion, with a CISO or security-influencer buyer or an IT-influencer buyer.

Salesforce (NYSE: CRM)

Agentforce has delivered 3.2 trillion tokens to customers so far, exceeding management’s expectations; Agentforce and Data reached nearly $1.4 billion in ARR (annual recurring revenue) in 2025 Q3 (FY2026 Q3), up 114% year-on-year; Agentforce ARR reached $540 million in 2025 Q3 (FY2026 Q3), up 330% year-on-year; Agentforce is Salesforce’s fastest-growing product ever; management has integrated Agentforce into every Salesforce product; all of Salesforce’s data is unified for use in Agentforce; when an LLM is interacting with Agentforce, it’s getting strategic context from Salesforce’s data on customers, service, sales, marketing etc, and this data is unique because it makes business more valuable; 6 of Salesforce’s top 10 deals in 2025 Q3 (FY2026 Q3) were driven by companies who want to use Agentforce; Agentforce is only a year old, but Salesforce has already closed 18,500 Agentforce deals, 9,500 of which are paid; paid Agentforce deals were up 50% sequentially in 2025 Q3 (FY2026 Q3); Agentforce can rope in humans when necessary; Agentforce uses many different LLMs, including those from OpenAI, and will go with the lowest-cost option; Agentforce can control AI costs by knowing when to invoke LLMs for tasks; Salesforce is itself an Agentforce customer; customers in production with Agentforce were up 70% sequentially in 2025 Q3 (FY2026 Q3); more than 50% of new Agentforce bookings in 2025 Q3 (FY2026 Q3) came from existing Agentforce customers; management launched Agentforce IT Service in November 2025; management thinks Agentforce is uniquely positioned for the agentic era partly because of its scale; new bookings for the most premium SKU management has within Agentforce doubled sequentially in 2025 Q3 (FY2026 Q3); customers leveraging Salesforce’s forward-deployed engineers for Agentforce in 2025 Q3 (FY2026 Q3) saw 33% faster deployment times; 3 customers refilled their Agentforce tank in 2025 Q1, but 362 did so in 2025 Q3 (FY2026 Q3)

We have delivered incredible results with Agentforce. It’s really exceeding our expectations. You’re going to hear all the details, but I think that you could see 3.2 trillion tokens delivered for our customers…

…Agentforce and Data reached nearly $1.4 billion in ARR in the quarter, up 114% year-over-year, including Agentforce ARR of about $540 million, 330% year-over-year…

…This is our fastest-growing product ever…

…Every Salesforce app now not just sales, service, marketing, commerce, all of them, Tableau, Slack, our new ITSM, supply chain products, they’ve all been rebuilt, and Sreeni’s here, he’s going to talk about what we’ve done to bring Agentforce into every product we have and we transform Agentforce from being a product to a platform so that all of our apps can reason, learn, take action, collaborate with users, but it’s really about humans and apps and the AI and the data all working together. And that is what’s so exciting that every part of our platform is now so deeply integrated and because all of the data is unified. And every app shares the same metadata…

…When an LLM is interacting with Agentforce, it’s getting that strategic context from our data, from the data on the Internet as well, and from the data that it’s been trained on. And then it knows how your business operates; it’s really able to give you that. And that’s because Salesforce is unique in that we have data that makes business more valuable. It’s that customer data, the service data, the sales data, the marketing data, and then we’re able to deliver it in a tremendously friendly way…

…6 of our top 10 deals in the quarter are now driven by companies that just want to transform with Agentforce…

…A year since we introduced Agentforce, we’ve closed over 18,500 Agentforce deals. 9,500 of them are paid transactions, up 50% quarter-over-quarter…

…Across the apps, you’ve seen the omnichannel supervisor built into the Service Cloud, where all of a sudden, I’m a customer. I’m coming into the website, even like Salesforce’s help.salesforce.com or any of our customers’ websites. And I’m in there, and I’m working, and then all of a sudden, I’ve hit kind of the limit of what the LLM can do, and I can escalate immediately, right to a human. And that’s where the humans and the agents and the AI and the data all have to work together…

…We use all of the large language models. They’re all great. We love all of them. We love all of our children, but they’re also all just commodities, and we have the choice of choosing whichever one we want, whether it’s OpenAI or Gemini or Anthropic, or the other open-source ones. They’re all very good at this point. So we can swap them in and out, the lowest-cost, best one for us, making us basically the top user of these foundation models…

…As customers put Agentforce to work across their business, not every task or step in the workflow needs to call the LLM; we call that determinism. And determinism is really important because, for those of us who grew up in software, we used to call it if-then statements, but now we call it determinism. But determinism is that, hey, if I need to do this, go to the LLM, but if I probably don’t need to go to the LLM, just do that. So that is going to reduce our costs even further and not hit the LLM as much as we do. And that’s why we built hybrid reasoning and Agent Script, and our AI teams are just crushing it on that. And we’re getting customers the best of both worlds, combining LLM-driven reasoning and deterministic precision…
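The deterministic-first routing idea described above can be sketched in a few lines: cheap rule-based handlers run first, and the model is invoked only when no rule matches. This is a minimal hypothetical sketch, not Salesforce’s actual hybrid-reasoning or Agent Script implementation; the rule table and the `llm_call` stub are invented for illustration:

```python
# Minimal sketch of deterministic-first ("hybrid reasoning") routing:
# if-then rules handle known tasks without an LLM call; everything else
# falls through to the (expensive) model. All names are hypothetical.

def llm_call(prompt: str) -> str:
    # stand-in for an actual model invocation
    return f"LLM answer for: {prompt}"

DETERMINISTIC_RULES = {
    "order status": lambda q: "Look up the order in the fulfilment system.",
    "reset password": lambda q: "Send the standard reset-password link.",
}

def route(query: str) -> tuple[str, str]:
    """Return (handler, answer): 'rule' if a deterministic rule fired, else 'llm'."""
    for trigger, handler in DETERMINISTIC_RULES.items():
        if trigger in query.lower():
            return "rule", handler(query)       # no LLM cost incurred
    return "llm", llm_call(query)               # fall back to the model

print(route("Where is my order status update?"))       # handled by a rule
print(route("Draft an apology for a delayed shipment"))  # falls back to the LLM
```

The cost saving comes entirely from the first branch: every query a rule can answer is one fewer model invocation, which is exactly the "don’t hit the LLM as much" effect the quote describes.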

…We had strong performance across Agentforce Service, Agentforce Sales and Slack. And those 3 apps are just a powerful combination for [indiscernible] Salesforce. We use those every single day, we live on them. It is really the hat trick for Salesforce with large customers to say, “Let us show you what we’re doing in service. Let us show you what we’re doing in sales. Let us show what we’re doing in Slack. And it’s a Wow experience right now. It’s only going to get better…

…Customers in production with Agentforce have jumped 70% quarter-over-quarter…

…In the quarter, more than 50% of new Agentforce bookings, as well as 50% of Data 360 bookings, came from existing customers expanding their investment, which was awesome and really showed adoption…

…Last month, we launched Agentforce IT Service, or Agentforce ITSM, or, you know what company we’re targeting…

…We’re delivering this capability to a global customer base. More than 150,000 Salesforce customers and 1 million companies now on Slack have the immediate opportunity to work side-by-side with agents and Agentforce, and for the apps they’re already using every day to become elevated. And that’s why we’re uniquely positioned for this new era. We have the strategy, the platform, the global scale…

…New bookings for Agentforce One Edition and A for X, or as we call it, Agentforce for Apps, our most premium SKU, doubled quarter-over-quarter…

…Our top priority is accelerating Agentforce and Data 360 adoption. We are relentlessly reallocating our resources to high-growth areas and it’s paying off. Q3 was one of our biggest pipeline generation quarters ever and customers leveraging our forward-deployed engineers are seeing 33% faster deployment times…

…I don’t know if you remember, 2 quarters ago, I was super excited. I had to dig very deep to find that 3 customers came and refilled the tank in Q1. In Q3, 362 customers refilled the tank. That’s an incredible testimony to the success that Agentforce is having in a very short time frame.

Salesforce’s management has delivered employee agents through Slack and it is called Slackbot; Slackbot is able to go through all of Salesforce’s customers’ data and do so in a secured way; Slackbot is able to deliver analysis of a customer and recommendations for interactions with the customer; management sees Slack as a conversational interface for every app, agent, and workflow; Slackbot is currently only available for a small number of customers; Slackbot is built on Agentforce

Some of you have seen it, but probably a lot of you haven’t: employee agents. We’ve really delivered an incredible new framework deeply integrated into our Slack product. Every Salesforce employee already uses it every day, I do, and it’s the core of every demonstration we give to our customers to show how we have unleashed with Slack something new called Slackbot, which is really the heart of our employee agent strategy, and you’re going to see that. It’s incredible. It is able to go not only through Slack and the whole Internet, but also through all of our customers’ data that they have provisioned in a secure way through Salesforce, and deliver context…

…I was with a really good friend of mine just this weekend; I had lunch with him, and he’s a top venture capitalist and he had been a huge investor in Coinbase. And I’ll tell you, we’re just sitting there, just talking about, hey, tell me about everything with your venture capital company, tell me everything about this venture capitalist and then also tell me everything about Coinbase and the company and our relationship. And then it’s able to deliver to me an absolute and complete not only analysis, not only a summarization, not only all of the detail, but next steps, how to sell, what I should do exactly for the customer. And I love demoing this to customers because they don’t think it’s possible. And then when they see it, they say, “Wow, this is what AI was meant to be.”…

…And Slack is now where it’s coming all together, and that is this incredible conversational interface for every app, every agent, every workflow…

…They may not have Slackbot yet because we’ve only turned it on for a small number of customers, but we’re about to hit the switch, and everybody is going to see this employee agent power. Most people have seen the customer agent power. Now they’re going to see the employee agent power. And they’re going to see how it’s built on Agentforce, how it’s built on the apps and how it’s built on the data.

Williams-Sonoma used Agentforce to build a digital sous chef on its website; there are no hallucinations with Williams-Sonoma’s agent; Williams-Sonoma will be building voice agents soon; ride-hailing company Uber and consumer packaged food products company Conagra are customers of Agentforce; CVS Health, Telecom Argentina, TD Bank, the US IRS (Internal Revenue Service), and Costco have become Agentforce customers; General Motors is now an Agentforce customer and is using Agentforce to speed up case resolution for its call centers; PenFed has become a customer of Agentforce ITSM; Agentforce will help PenFed reduce operational expenses by 30%, and produce $2 million in savings; UK police departments recently launched Bobby, an Agentforce Service agent; Bobby is the UK public’s first point of contact for nonemergency calls and can provide instant responses; Bobby has already reduced nonemergency demand by 20%; Salesforce used Agentforce for STR Agent, which has generated tens of millions in incremental pipeline; Agentforce passed 2 million conversations in 2025 Q3 (FY2026 Q3) on Salesforce’s customer-help website; Agentforce took 9 months to reach the first million conversations on Salesforce’s customer-help website and 4.5 months for the next million

Williams-Sonoma’s version of Agentforce, which they call [ Olive ]. And if you haven’t been on Williams-Sonoma’s website and seen the sous chef that they call Olive and used it, I think the quality is what I’m most impressed with; it’s really very, very good. You don’t see hallucinations. You see really kind of the customer personality, the quality, the ability to deliver value, and they are saying that’s about 60% of their chats. We’ve got a whole other level to go with them with voice, which is coming, which is very exciting…

…Great companies like Uber, like Conagra, like LY, like Williams-Sonoma, like all these great companies that we’ve been talking about, and the consumption flywheel is gaining traction…

…We had incredible wins this quarter. Miguel is going to talk about CVS Health and Telecom Argentina and TD Bank and the IRS, somebody who’s going to be getting a big check from all of us; they are all now on Agentforce. So your IRS agents are Agentforce agents. And [ NG ] and so many more are becoming agentic enterprises. And Costco, we love Costco…

…We know General Motors, we love Mary; amazing, her new Escalade IQ, she’s tired of me telling her how much I love it. Expanding Salesforce across the Automotive Cloud, Data 360, MuleSoft, Agentforce Sales, Agentforce Service. But really cool, Agentforce across their other collaborative product. We won’t tell you what it is; you probably know the name. And they’re now using Slack…

…With Agentforce, Mary’s speeding up case resolution for her call centers. Slack is now the company’s primary communications hub, scaling to 96,000 employees in just 9 months…

…PenFed went live with ITSM with agents for IT service…

…You look at PenFed, I think they went live with agents for IT service as well as member service and collections. They’re projecting a 30% reduction in operational expenses and $2 million in savings. This product is killer…

…This week, we launched the U.K.’s first AI police officer. We work with multiple police departments to roll out Bobby. Everybody loves Bobby, it’s the Agentforce Service agent that is the public’s first point of contact for nonemergency calls and Bobby autonomously provides instant responses on more than 90 topics and police departments have already seen a 20% reduction in nonemergency demand, and they are just getting started, and this is what real enterprise adoption looks like…

…As Customer Zero, our SDR agent has worked hundreds of thousands of leads, generating tens of millions in incremental pipeline. We see that same velocity with Agentforce on help.salesforce.com, which passed 2 million conversations this quarter. It took 9 months to reach the first million and just half that time to double it, another clear example of our internal consumption flywheel taking off.

Salesforce’s management thinks that LLMs (large language models) are basically commodities

We use all of the large language models. They’re all great. We love all of them. We love all of our children, but they’re also all just commodities, and we have the choice of choosing whichever one we want, whether it’s OpenAI or Gemini or Anthropic, or the other open-source ones. They’re all very good at this point. So we can swap them in and out, the lowest cost, the best one for us, making us basically the top user of these foundation models. 
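Salesforce’s point about swapping models in and out is, in engineering terms, a thin provider-abstraction layer. Here is a minimal sketch of the idea in Python; the provider names and per-token prices are invented for illustration, not figures from the call:

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch of treating LLM providers as swappable commodities
# behind one interface. Provider names and prices are invented.

@dataclass
class Provider:
    name: str
    cost_per_1k_tokens: float
    complete: Callable[[str], str]

def make_stub_backend(label: str) -> Callable[[str], str]:
    # Stand-in for a real SDK call (OpenAI, Gemini, Anthropic, ...).
    return lambda prompt: f"[{label}] {prompt}"

PROVIDERS: List[Provider] = [
    Provider("model-a", 0.50, make_stub_backend("model-a")),
    Provider("model-b", 0.30, make_stub_backend("model-b")),
    Provider("model-c", 0.80, make_stub_backend("model-c")),
]

def cheapest(providers: List[Provider]) -> Provider:
    # "Swap them in and out, the lowest cost": route purely by price.
    return min(providers, key=lambda p: p.cost_per_1k_tokens)

choice = cheapest(PROVIDERS)
print(choice.name)                              # model-b
print(choice.complete("summarize this case"))   # [model-b] summarize this case
```

Because callers only see the `Provider` interface, switching foundation models is a one-line change to the registry rather than a rewrite of application code.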

90% of Forbes’ top 50 AI companies are using Salesforce, including the high profile AI companies such as Anthropic and OpenAI; the Forbes’ top 50 AI companies that use Salesforce average 4 clouds each; 80% of the Forbes’ top 50 AI companies that use Salesforce are using Slack

but nearly 90% now of all of the Forbes top 50 AI companies are using Salesforce. Let’s just think about that for a second. 90% of all the Forbes top 50 AI companies, those are the Anthropics and OpenAIs and the [ blah, blah, blah ] companies, okay, that is our Cognition, Cursor, Figure AI, okay. They all average about 4 clouds each already. And 80% of them are using Slack to run their business.

Agentforce has powered 1.2 billion LLM (large language model) calls to-date, with 200 million calls in 2025 Q3 (FY2026 Q3); Agentforce is on track to power 2 billion LLM calls in 2026 (FY2027); Agentforce’s weekly actions have risen 140% quarter-on-quarter; Agentforce token usage in October 2025 was 540 billion, up 25% month-on-month

Agentforce has powered 1.2 billion large language model calls, that’s interactions when agents invoke a model to understand context and decide the next best action…

…More than 200 million Agentforce LLM calls in Q3 alone, on track to power another 2 billion over the next year. And those LLMs are now calling these Agentforce actions such as updating opportunities, creating a case, handling service inquiries, and the number of average weekly actions has now risen about 140% quarter-over-quarter…

…In October alone, token usage was nearly 540 billion, up 25% month-over-month.

Data 360 was formerly known as Data Cloud; Data 360 is the foundation for Agentforce; Data 360 ingested 32 trillion records in 2025 Q3 (FY2026 Q3), up 119% year-on-year; the 32 trillion records included 15 trillion zero-copy data integrations, up 341% year-on-year; traditional enterprises and technology companies alike are using Data 360; Data 360’s ingestion of records in 2025 Q3 (FY2026 Q3) was up 38% sequentially; Data 360’s zero-copy data integrations in 2025 Q3 (FY2026 Q3) was up 52% sequentially

Data 360 is the foundation for every Agentforce deployment, and it’s accelerating in Q3. Data 360, the product formerly known as Data Cloud. In Q3, Data 360 ingested 32 trillion records. 32 trillion records, up 119% year-over-year, and that includes 15 trillion through zero-copy data integration up 341% year-over-year. So Dentsu, Moody’s, KPMG, Ferguson, Zoom and dozens more invested in Data 360 in the quarter…

…Quarter-over-quarter on Data 360, people have built their lake just in Data Cloud; our ingest has increased by 38%, and zero-copy has increased by 52% in terms of records.

Salesforce’s management sees the agentic enterprise as a new, very large, secular trend, after meeting many customers; management thinks that companies are finding it hard to build their own agentic solutions, so they need to turn to vendors such as Salesforce; management thinks the agentic trend will lead to customers using Salesforce in a different way; management thinks the monetisation opportunity for Salesforce in the agentic enterprise trend is 3x-4x higher than before; Salesforce has already seen AOV (annual order value) with some customers increase by 2x-5x because of the agentic opportunity; the companies that are really turning to Salesforce’s agentic solutions are the visionary ones who started building their own agents 2 years ago, and they turn to Salesforce because they realised how bad the pain points were; the visionaries 2 years ago were concerned with what LLMs Salesforce was using, but now they no longer care

This past quarter, I was in 3 continents, 12 countries; I talked to 400 customers, many one-on-ones, several dinners. And the reality is very different. There is something very large, very important, and I want to emphasize this, I don’t think we’ve done Marc and Robin enough justice on what is happening right now in front of us. There is a new very large secular demand trend, which is the agentic enterprise. Every single company in the world, small, medium, large, wants to become an agentic enterprise…

…The problem is they’ve been experimenting. They’ve been experimenting for 2 years. They’ve gone from experimentation now to a little bit of frustration. And now they are all saying, you know what, this is hard. This is much harder than we thought. They all want to go to scale because the opportunity, which is a multitrillion market cap opportunity, is in front of us. The TAM is multitrillion for us, and they want to go all in. They know it’s hard because LLMs cannot do this alone. And now to answer your question, the last mile is hard. And the last mile is hard because companies need the context. For enterprise AI to be successful and accurate in the enterprise, you need the context, you need the data, you need the metadata, you need deterministic workflows. You don’t want the agents to be executing based on what they found in an LLM; you want the agents to execute in a deterministic way the same workflows that that company has already qualified in the apps that humans have been using for years. And they need AI that is embedded where the humans are. That’s why it’s so important to have the data with the context, to have the apps, the deterministic workflows, to have the AI where the humans are, and only Salesforce can do that…

…The agentic enterprise is a new paradigm. Customers will use Salesforce in a totally different way. They will use Salesforce to be the platform for digital labor: for sales, for service, for marketing. And the impact on the way we can monetize those relationships is exponential. It’s not linear growth. It’s exponential. Robin alluded to that at Investor Day, [ we were ] talking about 3x, 4x the ability to multiply the monetization on customers because, by the way, they’re getting 3x or 4x or 10x more value from our products…

…The bookings that we do with them, the AOV had doubled, tripled, in some cases, multiplied by 4 and 5, and we are just getting started…

…When I talk to CIOs, I see 2 types. People who are really advanced, who are visionaries, who started 2 years back, do-it-yourself; they really understand the pain points. They are the ones who are moving fast to the platform…

…One more thing is our customers 2 years back, they would ask me, what model are you supporting, where is it, what hyperscaler do you run on. They don’t ask me any of those things now because we abstract all that complexity for them. That’s the original promise of Salesforce when we said no software.

Salesforce is not building data centers for AI, so its gross margin and cash flow is preserved

I just want to make sure everybody realizes we’re not building data centers at Salesforce. We’re preserving our gross margins and our cash flow.

Salesforce’s management thinks they have nailed down Agentforce’s pricing model by having a range of per-seat and per-consumption models; the per-seat Agentforce SKU doubled year-on-year in 2025 Q3 (FY2026 Q3)

The other thing that we’ve learned is pricing matters. It’s very complex. We’ve come a long way. We’ve had different ways of pricing the product. And now I think we have a whole portfolio of different commercial frameworks to meet customers where they are and where they want to be…

…You and me came up with the [ Agentic Enterprise License Agreement ] concept when we visited a few customers in Europe, from Unilever to [ P&I ]. We had great conversations. And we realized that they wanted to move. They wanted to transform, but they were afraid about all these metrics, consumption, et cetera. So what we’re doing now is very simple. We are putting the whole menu of options to them. We also have very successful SKUs that we launched, which are Agentforce for Sales or Agentforce for Service, that are seat-based SKUs. People talk about seat versus consumption-based pricing. The reality is there are a lot of customers that want seat-based because seat-based gives you the predictability. So we’ve sold a lot of seat-based licenses for Agentforce and Data Cloud in Q3. In fact, that SKU has doubled year-on-year. It’s a very massive success there. And we also have customers from the beginning that want to just pay per conversation or per agentic action. So we have the whole portfolio.
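The seat-versus-consumption trade-off described above comes down to a break-even point: below a certain usage level, paying per conversation is cheaper; above it, a predictable per-seat license wins. A toy sketch with invented prices (the call discloses no actual rates):

```python
# Toy break-even between seat-based and per-conversation pricing.
# All prices are invented; the call gives no actual figures.

def monthly_cost_seats(seats: int, price_per_seat: float) -> float:
    # Predictable: fixed cost regardless of agent usage.
    return seats * price_per_seat

def monthly_cost_consumption(conversations: int, price_per_conversation: float) -> float:
    # Scales with usage: cheaper at low volume, costlier at high volume.
    return conversations * price_per_conversation

SEATS, SEAT_PRICE = 100, 125.0       # hypothetical seat license
CONVERSATION_PRICE = 2.0             # hypothetical per-conversation rate

# Usage level at which both models cost the same per month:
break_even = monthly_cost_seats(SEATS, SEAT_PRICE) / CONVERSATION_PRICE
print(break_even)  # 6250.0 conversations per month
```

Under these made-up numbers, a customer expecting fewer than 6,250 agent conversations a month would prefer consumption pricing, which is why offering both frameworks widens the addressable set of deals.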

Salesforce is seeing both the number of seats and pricing increase

I think you guys always ask the same thing on whether the number of seats is increasing, the price is increasing. Well, for our clouds, we are seeing both increasing, which is exciting.

Veeva Systems (NYSE: VEEV)

The first AI agents under Veeva AI, an initiative launched in April 2025 that will see the company build industry-specific AI agents within its applications, are on track for a December 2025 launch; the first AI agents are for Vault CRM and commercial content; Veeva is on track for more agents in 2026, and these agents will be in all of Veeva’s software applications; early results of Veeva’s agents with early adopters have been very promising; management sees a lot of interest in Veeva AI from customers because they find value in specialised AI agents that fit seamlessly into their workflow; management thinks Veeva AI can be transformative for safety-related applications; management thinks AI agents will be transformative in clinical operations

The first Veeva AI agents will be available as planned in early December for CRM and commercial content. And we are on track for R&D, quality, and additional commercial agents in 2026. We started working with our first early adopters over the past few months, and early results are very promising…

…There’s a lot of interest in Veeva AI because of the clear business value in specialized AI agents working seamlessly in the user’s workflow. Customers are looking for practical solutions that address the specific needs of their functional areas and we are very excited about Veeva AI and what it can do for the industry…

…We are very pleased with our momentum in safety and the transformative potential of Veeva AI as applied to the safety area…

…We’re going to have agents in literally all of our software applications as we get through 2026. We started this year; we’ll have them in commercial and CRM and Commercial Content. Next year, in roughly the first quarter, April, it will be in Safety and Quality. And then through the end of the year, we’ll have agents in clinical operations and then by the end of the year, Clinical Data Management. We think it’s one of those potentially transformative areas in clinicals. It’s our largest single opportunity, the clinical business. There’s a lot of potential to just streamline a lot of core processes, eTMF, when you just intake a document and scanning through that and making sense of that with an agent as an example, just replacing core human labor with agents. So a lot of potential for productivity. That’s just one example, but I think we see that pretty consistently across the broader clinical area.

Veeva’s management thinks AI can change Vault CRM dramatically over the next few years, and customers are excited about it

Now we are entering the age of AI, probabilistic computing to really drive and change what a CRM system can do. So that’s giving people a lot of excitement. This — the Vault CRM of ’26 and ’27 and ’28, that’s not going to be like the Veeva CRM of 2022 and 2023. So that’s where the real excitement is.

Veeva’s management is seeing customers choosing AI partners based on where they think a particular partner can help them; management thinks Veeva can help customers to automate industry-specific applications with AI; customers want Veeva to go faster in AI, but the direction is very aligned; management thinks Veeva’s customers will require change management work to implement AI and this is where Veeva’s business consulting team can help; management thinks customers want an AI partner that can provide a one-stop-shop service for consulting, software, and AI

They want to use partners where partners can help them. So they want to use Microsoft where Microsoft can help them. They want to use Anthropic where Anthropic can help them. And they know where Veeva can help them is helping to automate industry-specific applications with AI, that deep domain knowledge and the business process consulting around it. So how do you enable insight generation in CRM through your field team by the use of compliant free text, okay? That’s a very specific thing. How do you dramatically increase the efficiency of Safety case processing for adverse events, okay? That’s very specific. So that’s what they’re looking to us for, and that’s what we deliver…

…They just want us to go faster, but there’s really strong alignment on direction…

…Customers also have to be able to adopt and do that change management work, which is that’s not easy either. That’s not going to happen overnight. That’s one of our advantages is we have a great business consulting team…

…The customers are not going to want to knit together consulting over here and software over there and AI over here. They’re not going to want to do that over the long term.

AI’s impact on reduction in sales reps in the pharma industry has been lower than what management predicted; management thinks sales headcount in the pharma industry is going to be stable for a few years

[Question] I think there’s been some debate broadly on AI and how that may impact sales reps or like how efficient sales reps could be. Like as you talk to some of your customers, like how are they thinking about the size of their sales force with the implementation of AI?

[Answer] We have seen some of the reductions that have played out over the past couple of years that we have talked about. We kind of predicted roughly about 10%. It ended up being a little bit less than that. The way to think about it is the customers they’re calling on, the HCPs, the number of doctors, hasn’t fundamentally changed. You still need people. You need a base level of sales reps to build those relationships, cover those doctors, deliver the information, the service that they need. So I think the industry is cautious and thoughtful about making significant changes or adjustments. So I think there is a lot of potential for productivity gains and effectiveness gains. But I think it will likely be stable, at least for the next couple of years. We’re not hearing of any AI-related reductions.

Wix (NASDAQ: WIX)

Wix’s management thinks of vibe coding as having 2 spheres, one where developers live in, and the other where non-developers live in; vibe coding allows non-developers to create software; management sees parallels between Wix’s important role in website creation in the past, and nascent role in vibe coding in the present; management sees the vibe coding market as being much bigger than the website creation market; management has seen the vibe coding market grow exponentially over the past year, with Wix taking a bigger piece of the pie

When I think about vibe coding, I try to simplify things by breaking the world apart into 2 categories. One is the developer sphere. This is Claude Code, Cursor, Windsurf and all these tools, which are great for engineers. These tools integrate directly with the source code of a project, enabling complex technical programming, which requires significant user expertise. The second sphere is where everyone else lives, the majority of humanity who don’t code or even think they can code. Suddenly, with vibe coding, they can create pieces of software that improve their personal lives or help to build their businesses, all by simply using natural language. For example, a school teacher can create a custom app to track attendance and post grades. A neighborhood restaurant can build an application to handle their staff schedule, another to manage vendors, another to sort inventory and so on and so forth…

…This story sounds exactly like Wix’ story back in 2006. We didn’t invent websites back then. They were already widely available but only to big companies with engineering budgets. There was an absolute barrier for the average person. We knew there was a way to enable an online presence for everyone. This was and still is the mission of Wix. We intend to do for software, what we did for websites, enabling everybody to build applications without any need for a developer…

…The software application market is many, many times bigger than the website creation market. Think about it. That same neighborhood restaurant needs only one website, which they likely built on Wix, but they may need many applications to successfully run their business…

…The AI-powered app building space has grown exponentially over the past year, and we are taking a bigger and bigger piece of this pie.

Wix acquired Base44 in June 2025 (Base44 is an AI-powered platform that allows users to build web applications using natural language prompts); Base44’s share of audience traffic has increased from almost nothing in June to more than 10% in October; Base44’s capabilities are getting better fast, driven by a fundamental architectural advancement towards an agentic coding environment; Base44’s business has done better than expected since being acquired; the growth in Base44’s share of audience traffic was partly the result of the application of Wix’s proven strategic playbook; the returns on management’s initial marketing investments for Base44 meaningfully exceeded expectations; Base44’s userbase has increased 7x from June 2025 to 2 million today; Base44 has 1,000 new paying subscribers joining daily; management now expects Base44’s ARR (annual recurring revenue) to be at least $50 million by end-2025, higher than before; management expects Base44 to have similar operating and free cash flow margins as Wix in the long-term; management thinks many vibe coded projects are currently only prototypes, but they are already seeing some users build production-grade software with Base44 today; management thinks there’s still some way to go before vibe coding can be used to build production-grade websites

BASE44’s share of audience traffic increased from almost nothing to more than 10% in October. Among local tools, BASE44 is quickly proving to be a leader and the best solution on the market today with enormous white space, still ahead. BASE44 is also getting better, fast. We recently launched our new builder transitioning BASE44 from a predominantly user-reliant tool to an expert developer partner for everyone. The new builder represents a fundamental architectural advancement moving to an agentic coding environment. With multi-agent layers, BASE44 can now validate, debug, refactor for performance and fix its own work, making app creation faster, smarter and more powerful than before…

…We also welcomed our first full quarter of new BASE44 cohorts under the Wix banner in Q3, which performed better than anticipated. As the vibe coding market has exploded this year, BASE44 has meaningfully outgrown most peers. We now estimate our share of audience traffic to AI-powered application builders to be more than 10%, up from low single digits in June. This growth in a matter of just months is a result of a fantastic product with organic reach supercharged by our expertise and investments as well as application of Wix’ proven strategic playbook to BASE44.

In addition to establishing a dedicated customer care team and expanding BASE44’s R&D capabilities, we focused on building up a comprehensive full-scale brand and marketing function. Remember, BASE44 did not have any marketing motion when we acquired it in June. On day 1 after the deal closed, we started to apply a marketing plan that has been fine-tuned and tested over the past 2 decades, a key competitive differentiator Wix brings to BASE44. This included refining the company identity, messaging and visual system to better reflect our market ambition. We also launched campaigns in key channels and core geographies. Compelling branding and effective marketing are crucial to growing BASE44’s reach beyond just early adopters and capturing the huge white space Avishai spoke about. Returns on our initial marketing investments meaningfully exceeded expectations as demand ramped through the quarter. As a result, we were able to confidently scale marketing efforts above our initial August plan.

Today, BASE44 serves over 2 million users around the world. This is more than 7x more users than we had at the end of June. Impressively, this translates into more than 1,000 new paying subscribers joining daily. We now anticipate BASE44 to achieve at least $50 million of ARR by year-end, an increase from our previous expectations…

…In the long term, I expect BASE44 to have similar operating and free cash flow margins to Wix…

…You’re right when you say a lot of it is just used for prototyping. And that’s great: people can actually build an application that is just a demo for a few people, and then the prototype is the application. It doesn’t need scale. It’s okay if it has some tiny bugs. But we are getting to a place today with BASE44 where you can really build more full applications. There’s still quite a way to go on what we can do there and how to make it even better. But we are getting there. And of course, we have some users that already built really large applications that have been deployed, and we can see that. So if a year ago you couldn’t do vibe coding for anything real, and a few months ago you could do vibe coding mostly for prototypes of applications, I think today we are starting to see more applications that are real and are being used at the commercial level.

For websites, it’s still different. I think for websites, there’s still a gap that needs to be closed for vibe coding to build real websites that are Google-friendly, that are LLM-friendly, that comply with the privacy rules required by law, and a bunch of other things. There’s still quite a distance to go, but we hope to close that early next year.

Wix’s management is already seeing AI costs decrease, and expects the trend to continue or even accelerate, as LLMs improve and competition ramps; management thinks there’s a lot Wix can do to lower AI costs, but it’s not a priority at the moment; the AI costs of new Base44 users is much higher compared to older users

Today, we’re already beginning to see AI cost decrease as LLMs improve and competition continues to ramp. I expect this to continue, if not accelerate…

…[Question] On the gross margins and the AI compute, is there anything that you can do within your control outside of LLM costs coming down to keep costs down, for example, using your own internal data to help build versus relying on third-party LLMs as much?

[Answer] I’m not going to go into all the details here, but yes, there’s a lot we can do on cost, okay? It’s not a priority at this stage. It’s something that we’re also investigating. I think the priority now is to build a better product and capture more market share. But I think that long term, and long term is not multiple years, we can dramatically improve the cost of AI for BASE44. There’s so much we can do, from training our own models to do part of it, to partnerships with the different vendors, to the simple reality that cost is always declining. And so I think there’s going to be a tremendous amount of opportunities for us to reduce the cost of AI for BASE44…

…New users coming to BASE44 are obviously consuming more AI tokens, more bandwidth, as they build their apps. So what we see is a big difference between the cost of newcomers and the cost of the ones that continue: they might modify, do some changes, but it’s really not the same.

Wix’s management thinks Base44 subscriptions will trend towards annual as users gain more trust; Base44’s churn rate is higher than core Wix at the moment, but management is optimistic this will improve with time; management sees Base44 monthly subscriptions performing similarly to core Wix monthly subscriptions; Base44 monthly subscribers are currently performing better than core Wix monthly subscribers in Wix’s early days

[Question] On BASE44. Can we just dive into the dynamics of monthly subs versus the sort of more traditional annual subs that you get for core Wix? What are you seeing there in terms of churn and those subscription dynamics? And as people sign up monthly, can you get them to sign up annually more often over time?

[Answer] At this stage, it leans a lot more towards monthly subscriptions than annual subscriptions. And we’ve also seen it in Wix in the beginning. It takes time for people to trust the platform, and then they will actually feel more comfortable paying an annual subscription. Vibe coding is still so new that we’re heading towards that direction… When it comes to churn, it’s very early to say, and it’s changing very quickly. So it’s very hard to say. Obviously, churn is higher than for standard Wix, where there’s almost no churn if you look on a cohort basis. But Base is better than we expected, and we know there’s so much more we can do. So we are very optimistic…

…[Question] Can you talk about the cohort retention trends of BASE44 and how it compares versus Wix on monthly customer plans?

[Answer] We’re seeing kind of similar behavior to what we know from the monthly on Wix. And I would actually dare to say that it’s better than what you used to see at Wix in the early days.

To prepare for an agentic future, management has made every Wix website indexable by LLMs, and has enabled agentic commerce functionalities; management thinks the user interface of websites will change in an agentic future

[Question] Wix is pretty well positioned to kind of reengineer the web for the AI era by making a lot of small business websites kind of agent ready, right? Like so they can be discovered by Gemini, ChatGPT and others more effectively versus the current web architecture, which includes a lot of total consumption for them. Can you talk about the vision you have for Wix for this era?

[Answer] The first thing that we’re doing in Wix in order to support and enable all our customers to enjoy that new mode is that every Wix website is now indexable by LLMs, right? So we make the data available to any LLM, and there’s a few formats for that. And so we ensure that ChatGPT can actually read your content and discover your website. That’s the first part. The second part is that we continuously add new standards for how to do e-commerce, the one that OpenAI released a few months ago, MCP, and a bunch of others in order to enable all the functionality to be available within LLMs or be discovered by LLMs and then run on your website. In addition to that, there’s a few more things that we think that how the user interface will change in the next couple of years. I’m not going to go into details, but I think that, that’s another super interesting opportunity for our customers.
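The answer mentions "a few formats" for making site content readable by LLMs without naming them. One commonly cited convention is the proposed llms.txt file, a Markdown summary served at the site root for LLM crawlers; the quote does not confirm this is the exact format Wix uses, and the site name and URLs below are invented:

```python
# Illustration of the proposed llms.txt convention: a Markdown file at
# the site root summarizing the site's pages for LLM crawlers. This is
# an assumption for illustration, not a confirmed Wix implementation.

def build_llms_txt(site_name: str, summary: str, pages: dict) -> str:
    # llms.txt is plain Markdown: an H1 title, a blockquote summary,
    # then a section of links to the pages an LLM should read.
    lines = [f"# {site_name}", "", f"> {summary}", "", "## Pages", ""]
    for title, url in pages.items():
        lines.append(f"- [{title}]({url})")
    return "\n".join(lines) + "\n"

doc = build_llms_txt(
    "Neighborhood Restaurant",
    "Family-run bistro: menu, hours, and online reservations.",
    {"Menu": "https://example.com/menu", "Hours": "https://example.com/hours"},
)
print(doc)
```

Serving a file like this alongside the regular HTML gives an LLM a compact, structured entry point into the site, which is the spirit of what the quote describes.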


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet (parent of Google), Amazon, Meta Platforms, MongoDB, Okta, Salesforce, Veeva Systems, and Wix. Holdings are subject to change at any time.

More Of The Latest Thoughts From American Technology Companies On AI (2025 Q3)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q3 earnings season.

Last week, I published The Latest Thoughts From American Technology Companies On AI (2025 Q3). In it, I shared commentary in earnings conference calls for the third quarter of 2025, from the leaders of US-listed technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2025’s third quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

With that, here are the latest commentary, in no particular order:

Airbnb (NASDAQ: ABNB)

A key focus for Airbnb’s management is integrating AI across the Airbnb app

We are driving this growth by focusing on 4 key areas: making our service better, bringing Airbnb to more parts of the world, expanding what we offer and integrating AI across our app.

Airbnb’s management has been laying the foundation for a more personalised Airbnb, powered by AI, over the past year; management’s end goal is to have the entire Airbnb app become an end-to-end AI agent for users to plan and book their trips

Over the past year, we’ve been laying the foundation for a more intelligent, more personalized Airbnb from rebuilding our tech stack to launching a series of new AI features. We now have more than a dozen AI work streams underway, and they’re all focused on really creating a more personal experience for guests and hosts and making it easier to discover what we offer…

…What we want to do is take AI search, which is conversational, AI customer service, and the messaging platform, which is conversational, and integrate them into one AI assistant or concierge. And eventually, the entire app will act like an AI agent, from the top of the funnel through your trip, managing your reservation and leaving a review, and then bringing you back through the app, end to end.

Airbnb’s management rolled out an AI customer support assistant in 2025 Q3 that can take actions for customers and deliver personalised responses; the assistant is a custom-built AI interface designed by Airbnb; the assistant was initially launched in the USA and it has reduced customers’ need to contact a human agent by 15%; management will soon expand the AI assistant to over 50 languages in 2026; management thinks AI-powered customer support is a very difficult problem to solve for Airbnb because (1) every single accommodation option on Airbnb is unique, and (2) the stakes are very high; with AI-powered customer support, management has found that problems that used to take hours to handle can be resolved in seconds

This quarter, we rolled out smarter and faster AI customer support. Our AI customer support assistant has smarter responses. It includes answers about your reservation or listing and also provides quicker, more personalized responses. It also lets you take common actions like canceling or changing reservation dates directly from the chat. So what we did is we designed this custom user interface that’s not just text-based, but it’s got rich user interface modules. So it’s a really custom-built AI interface built right into the messaging platform. Now we initially launched this in the United States, where it’s already reduced people’s need to contact a human agent by 15%. So now we’re going to expand it to more countries in more languages, and we expect this to be in over 50 languages next year…

…Most of our homes, most of our service experiences, they’re not SKUs, they’re one of a kind. And therefore, the issue types in customer service are really challenging, right? Oftentimes, a customer service agent will hear an issue that they’ve never heard before because it’s from a host that might be a first-time host. And the guest and host might be speaking different languages. They might be simply locked out in a small town in a foreign country. You can imagine how complicated some of this stuff is. So we decided with AI to start with the hardest single problem we could think of, which was customer service. Customer service, we think, is a lot harder than, say, travel search. And the reason why is because the stakes are highest. You can’t hallucinate. You have to handle sensitive customer data. You’ve got to be fast in real time. You’ve got to escalate to the agent if there’s a trust and safety incident. And we are finding that it’s working really well. And in fact, we can go from solving a problem in hours to solving a problem in seconds.

Airbnb’s management will be rolling out AI search on the Airbnb app in 2026; the AI search function will allow customers to have a conversation with the Airbnb app to design the perfect trip; Airbnb has access to all the leading AI models that are publicly available, such as Alphabet’s Gemini and OpenAI’s GPT, to power the AI search function; an example of personalisation for Airbnb’s management is knowing what the user’s purpose of travel is and suggesting the appropriate type of accommodation option; management is testing out AI search right now; the rollout of AI search will be in 2 phases; Phase 1 of the rollout will be the ability for users to key in searches in a free-text, natural language way, and receive responses in that manner; Phase 2 is when AI search can become truly conversational

We’re also building out AI-powered search. And this is a really, really big part of our AI strategy. You’re going to see this. We’re testing it now. You’ll see this rolling out through the app next year. And this will let people have a conversation with the app, just like a chatbot about what they’re looking for, so we can help them design the perfect trip. And remember that we have access to all the same models that every other chatbot and AI application has. So we think this is going to be a really delightful product to use…

…I think what we want to do in the future, and this is like now going back to our AI strategy is knowing more about the customer. understanding what their intent is. And if people are traveling for business last minute, one night, we should probably prioritize a hotel for them. Some people do want a more hotel-like experience. Other people are hardcore about the original philosophy of Airbnb. They want to feel like a local when they’re traveling. Those people probably should not see hotels very much…

…We have access to all the same frontier models as the leading AI companies. We have access to the same models as Google, OpenAI and the other companies because they’re all available by API…

…What we’re testing now is if you go to the search box in Airbnb, there’s where (location), when (dates), who (guests), and we’re testing a what box, and what is a free-text natural language input, which is similar to ChatGPT or Gemini. You’ll be able to type it in. And based on that, you’re going to see natural language results. So the search cards will not just be structured data, but will be essentially natural-language-generated copy and search results. That’s Phase 1. Phase 2, it’s going to become what I guess you’d call AI multi-turn. Multi-turn, I think, is just a fancy way of saying conversational. So you’ll be able to have a conversation. My vision for the information on the cards is that instead of saying, like, 2-bedroom, 2 bath, $60, 5 reviews, a pool, hot tub, no 2 people see the same copy, just like 2 people typing in ChatGPT see different outputs based on the memory and the type of question they have. So we want Airbnb to be the same way, where the output is also natural language. It’s unique. And you’re going to start to see this iteratively happen over the course of next year. Eventually, it will become more conversational.

Airbnb’s management thinks their approach to AI is different because they want to use AI to help people connect in the real world; management thinks that people will increasingly want real-life experiences in the age of AI, and this is especially so for the younger generation; management thinks a bet on Airbnb is akin to a bet that people will yearn for real-life connections as AI proliferates

What makes our approach different is that we’re not just using AI to pull people deeper into the screens. We’re using it to get them off their phones and help them connect to the real world. Because I believe in the age of AI, more and more, what’s going to happen is what’s on a screen will be artificial. You won’t know if it’s real or not. In the age of AI, people are going to increasingly want what’s real, and what’s real is in real life. They’re going to create real experiences with real people in the real world. And I think that’s especially true for younger generations who grew up on social media and are now surrounded by AI-generated content. So we think Airbnb is the best way to experience the magic of the real world. So while other companies are using AI to keep you online, we’re really trying to do the opposite, get you off your phone and into the real world…

…A bet on Airbnb is a bet on AI because it’s a bet that the more AI proliferates the content we consume on devices, the more people are going to yearn for real connection with real people in the real world.

Airbnb’s management thinks that Airbnb can benefit from AI more so than other companies, especially other travel-related companies; management thinks specialization will win in travel when it comes to AI, and Airbnb has many unique capabilities

I think that Airbnb probably more than most other companies, especially companies in travel can benefit from AI. Probably the reason why is because primarily, we don’t have SKUs. Most of our homes, most of our service experiences, they’re not SKUs, they’re one of a kind…

…We think that we’re going to be very successful at this because, number one, we have access to all the same frontier models as the leading AI companies. We have access to the same models as Google, OpenAI and the other companies because they’re all available by API. So really, you’re not going to win or lose on the model because they’re all available. You’re going to win or lose on what you do with them. And our thesis of AI is that specialization will win in travel. That’s our theory, that specialization will win. We have a lot of unique capabilities. We understand travel, we have one of the best design teams in the world, so we can design custom interfaces…

…We do think Airbnb could be a one-stop shop for travel. And then we have a lot of capabilities that no one else has built, and we don’t think AI companies will want to develop, say, a messaging platform, and the vast majority of people who book an Airbnb use the messaging platform.

Airbnb’s management thinks ChatGPT’s commerce integration was not ready, hence Airbnb was notably absent from ChatGPT’s recent launch of app integrations; management thinks being integrated with ChatGPT will cause Airbnb to become a commodity-like data layer; management is open to integrating Airbnb with other chatbots, but there are a number of things that need to happen, namely, (1) custom integrations for Airbnb, (2) not being a commodity data layer, and (3) the right presentation of Airbnb results to highlight the unique nature of the company’s offerings

[Question] Airbnb was notably absent from ChatGPT’s app integration launch when other major travel players were there. Can you just talk about your thought process here?

[Answer] We just didn’t think the integration was ready. We care a lot about how Airbnb shows up in the world. And when I looked at the demonstration, I thought it was a great concept. It was a little bit hard to discover; at the time, you had to actually download the app, the company’s application. We didn’t want to be positioned as essentially a data layer, like a commodity. There were certain tools that we had to build.

When you book an Airbnb, you want to make sure that you see results personalized to you, which means you have to have an account on Airbnb; messaging is core to our platform. So it’s really about making sure that we had enough features. But we are not at all opposed to integrating into, like, chatbots. And I would imagine in the future that you would see Airbnb across a large surface area of the Internet. We just have a couple of principles when we are integrating.

Number one, we want to make sure that while we like the idea of being a launch partner, we still have — we like to have custom integrations if we’re going to be a launch partner, and we want to make sure that, that integration is really well developed. Number two, we don’t want to appear as a commodity. Number three, we certainly don’t want to be a data layer. And number four, we really want to make sure that people understand the uniqueness of Airbnb when they’re seeing results. So for example, we chose not to integrate with Google Hotel Finder because Airbnbs were positioned like commodities next to hotels, and we just didn’t think that was the right presentation.

Airbnb’s management is currently holding off on building an advertising business on the Airbnb app because they think AI search is disrupting the old digital advertising paradigm, so they want to nail down AI search first before introducing advertising; it appears that Airbnb may be introducing advertising very soon after the launch of AI search

With regards to advertising, we’ve been looking at this for a long time. One of the things that’s really changed is the entire paradigm of search is changing in the age of AI. So what we didn’t want to do is design a kind of ad unit model around old search, only to then have AI search disrupt that ad model. So we really want to nail AI search so that as we think about advertising, we integrate into this new search paradigm, which we’re looking at right now. So that’s the status. I don’t have — and obviously, we don’t preannounce things. We are sharing that we are going to be launching AI search imminently. But beyond that, we’re not disclosing other pieces we’re launching, but expect more in this next year.

Arista Networks (NYSE: ANET)

Arista Networks’ management sees the company having superior AI networks that improves the performance of AI accelerators; Arista Networks’ strength in AI networking comes from a few sources, (1) superior hardware, (2) innovative fabric architecture, (3) AI-focused telemetry and provisioning automation, (4) high-quality software, (5) leadership of ethernet consortiums, and (6) partnerships with important AI players; Arista Networks’ Etherlink distributed switch fabric powers some of the largest AI fabrics

On September 11 at our Analyst Day, we showcased both networking for AI and AI for networking with our continued momentum across our data-driven network platforms. Unlike many others, our Etherlink portfolio highlights our accelerated networking approach, bringing a single point of network control for zero-touch automation, trusted security, traffic engineering and telemetry to dramatically improve compute and GPU utilization. Superior AI networks from Arista improves the performance of AI accelerators…

…Our success in AI has many sources, the sheer power and performance of our hardware platforms, our innovations in fabric architecture, our AI-focused telemetry and provisioning automation, our reputation for the highest quality software and our leadership in the Ultra Ethernet Consortium, the UEC, and our work in Ethernet Scale Up Networking or ESUN. And most importantly, the way we partner with the world’s largest AI companies…

…Our Etherlink distributed switch fabric powers some of the largest AI fabrics in the world. It’s also an excellent underlay for data centers of all sorts, providing a full line rate fabric with no hotspots at petabit scale for all workloads, including AI.

Arista Networks’ networking solutions are compatible with NVIDIA’s systems, but management is also keen to create an open ecosystem to build the AI stack, which includes compute, memory, and networking; the open ecosystems Arista Networks is participating in include the Ultra Ethernet Consortium (UEC) and Ethernet Scale Up Networking (ESUN); Arista Networks has unveiled its first ESUN specification together with 12 industry experts; UEC recently published its first specification; Arista Networks’ Etherlink portfolio is entirely compatible with the UEC; ESUN was started with 4 vendors including Arista Networks, but management expects 20-30 members over time

We interoperate with NVIDIA, the worldwide market leader in GPUs, but we also recognize our responsibility to create a broad and open ecosystem, including AMD, Anthropic, Arm, Broadcom, OpenAI, Pure Storage and VAST Data to name a few, and build that modern AI stack of the 21st century. This stack includes the trio of compute, memory storage and a solid network foundation to run training and inference models…

…Our leadership in the Ultra Ethernet Consortium, the UEC, and our work in Ethernet Scale Up Networking or ESUN…

…At the Open Compute Project conference, OCP, Arista unveiled its first Ethernet for Scale-Up Networks, or ESUN, specification, along with 12 important industry experts. While we began with 4 co-founders, we are now expanding to more members so that we can build the right interoperable scale-up standard…

…After 2 years of lots of hard work led by Hugh Holbrook and now Tom Emmons, UEC did publish their first specification, I believe it was 1.0 in June of 2025. Arista’s Etherlink portfolio is entirely UEC capable and compatible, and we will continue to add more and more compliance: packet trimming, packet spraying, dynamic load balancing. These are all important features that our switches support…

…We’ve been an early pioneer, 4 vendors started this together, including Broadcom, Arista and a couple of our cloud titan customers. I’m pretty sure it will be 20, 25, 30 over time. And having a standards-based OCP ESUN agreement will allow us to expand UEC into the scale up configuration as well, leveraging UEC and IEEE specs.

Arista Networks’ management is confident of hitting their previous goal of $1.5 billion in total AI-related networking revenue; management is targeting $2.75 billion in total AI-related networking revenue in 2026; management is now looking at the goal of $15 billion in revenue in the next few years, and a big chunk of the $15 billion will come from AI-related revenue

Our stated goal of $1.5 billion AI aggregate for 2025, comprising both back end and front end, is well underway. We are now committed to $2.75 billion out of our new target of $10.65 billion in revenue, representing 20% revenue growth in 2026…

…As we get now confident about exceeding our $10 billion goal next year, we’re looking at our next goal of $15 billion in the next few years. And I think AI will be a very large part of it

Arista Networks’ management is seeing unprecedented demand for AI build-outs; management sees a golden era in networking, driven by AI, and a growing total addressable market (TAM) exceeding $100 billion in the coming years

The demand and scale of AI build-outs is clearly unprecedented, as we look to move data faster across multiplanar networks…

…We find ourselves amid an undeniable and explosive AI megatrend. As AI models and tokens grow in size and complexity, Arista’s driving network scale of AI XPUs, handling the power and performance. Basically, the tokens must translate to terawatts, teraflops and terabits. We are experiencing a golden era in networking with an increasing TAM now of over $100 billion in forthcoming years.

Arista Networks’ Autonomous Virtual Assist (AVA) has agentic capabilities that help customers troubleshoot issues with their networks

Arista AVA, or Autonomous Virtual Assist, uses AI to help our customers design, build and operate their networks. AVA draws on both our internal knowledge base and on the customers’ data stored in NetDL, Arista’s network data lake. Plus, AVA has agentic capabilities to help troubleshoot proactively.

Arista Networks has a recent partnership with Oracle for Oracle Acceleron, which includes migrating Oracle’s Exadata platform from Infiniband to Ethernet

At Oracle AI World, Ken was invited to formally announce our collaboration with Oracle Acceleron. This builds upon a decade of partnership with Oracle, starting with our Exadata migration from InfiniBand to Ethernet for AI networks to RoCE, RDMA over converged Ethernet, and now multiplanar networking across cloud AI for on-time job completion in gigawatt scale AI data centers.

Arista Networks’ management continues to think that the company’s networking solutions can co-exist with white box solutions; management thinks white box solutions are suitable for companies with simple use cases, while Arista Networks’ solutions are suitable for more complex use cases; management has long seen Arista Networks as having 2 groups of competitors for AI networking, namely, NVIDIA’s bundle and white box solutions; management is seeing Arista Networks’ market share remaining stable relative to the other 2 groups of competitors; management is also seeing the entire networking market growing, benefitting all 3 groups; there was a recent case of a neocloud with non-NVIDIA GPUs that could not get a white box networking solution to work for mission-critical AI workloads, and had to adopt Arista Networks’ solution

Arista also continues to clarify our role in white box and how we will continue to coexist like we always have the past decade or more. The concept is clear. It’s all about good, better and best, where in some simple use cases, a commodity white box is good enough. Yet in other cases, customers seek the value of better Arista blue boxes with state-of-the-art hardware with built-in NetDI for signal integrity, physical, passive, active component and troubleshooting management. The best is, of course, the Arista branded EOS platform for the ultimate superiority…

…We always, as you know, coexist with 2 other types of competitors. One is the bundling strategy with NVIDIA and the other is the white box. So we have not seen any significant changes in share up or down at the moment, it’s stable. Having said that, it’s also a massive market. And we think rising tide rises all boats and this boat is feeling pretty good…

…I’ll give you one example where they were just not getting their white box to work. These are AI mission-critical workloads. And we’re seeing a neocloud come right in with, in this case, non-NVIDIA GPUs, in fact, where they’re looking to deploy Arista with its excellent hardware. And at first, they wanted to do an open NOS, but now they are adopting a hybrid strategy where it’s not only an open NOS, but Ken’s EOS is coming to shine in its full glory in this use case. So in this case, I think it’s a Blue Box to start with, but it’s quickly going into a hybrid state of blue and branded EOS box.

Arista Networks earns lower gross margins from its Cloud and AI Titans customers compared to other customer groups

We do have a mix of product margin where it’s significantly below 60% with our cloud and AI titans driving the volume and higher obviously, for the enterprise customers. The average of which, together with services is yielding that number. So when the mix tilts heavily towards the cloud and AI, you can expect some pressure on our gross margins.

Arista Networks is involved with the early designs of 5-7 AI accelerator projects (i.e. AI chip systems projects) at any point in time; management sees the possibility of 4-5 AI accelerators emerging over the next couple of years; management thinks the non-NVIDIA AI accelerators will emerge because the standards for Ethernet are getting stronger over time

I think at any given time, we have 5 to 7 projects with different accelerator options. Obviously, NVIDIA is the gold standard today, but we can see 4 or 5 accelerators emerging in the next couple of years. Arista is being sought to bring all aspects, the cabling, the co-packaging, the power, the cooling as well as the connection to different XPU cartridges, if you may, as the network platform of choice in many of these cases. So we are involved in a lot of early designs.

I think a lot of these designs will materialize as the standards for Ethernet are getting stronger and stronger. We now have a UEC spec. You heard me talk about the Scale-Up Ethernet spec, ESUN, where we can bring different work streams onto the same Ethernet headers, transport headers, data link layer, et cetera. So I think a lot of this will be underway in 2026 and really emerge in 2027 as Scale-Up Ethernet becomes a more important part of that.

In terms of deciding the networking platform of choice for GPU clusters, it’s a joint decision between the AI model builders and the cloud computing infrastructure providers, and Arista Networks works closely with both groups

[Question] You mentioned large language model providers like OpenAI, Anthropic, and they have announced partnerships with your cloud titans. Can you share with us who is driving the decision-making on networking hardware on these announcements?

[Answer] Specific to who makes the decision, it’s really a combination. We intimately work with the software and LLM players because they certainly guide the design, but we also work with the cloud titans, and it’s a shared responsibility between both of them and where the responsibility for procuring the large data centers and the power and the location and the cooling is clearly done by our cloud Titans, but the specifications on exactly what’s required on the scale up, scale out network is done by the partners like OpenAI and Anthropic. So it’s really a joint decision.

Arista Networks is progressing well with its 4 major AI customers, with 3 of them having crossed the 100,000 GPUs mark in their clusters; the remaining major AI customer will be crossing the 100,000 GPU mark soon; Arista Networks’ work with the major AI customers has mostly been scale out; management thinks Arista Networks as being seen as a very important participant in the buildout of massive AI clusters

All 4 are doing well on the 100,000 mark. 3 have already crossed it. The fourth one, I don’t know if they’ll cross it by end of the year or next year, but they’re getting there. So we’re feeling pretty good on our large GPU deployments…

…Until now, majority of how we’ve measured our AI success through our Cloud and AI Titans has been number of GPUs and how much are they installing and can we verify that the Ethernet network works. The majority of it to date has been scale out…

…How are these being built? Clearly, they’re being driven by large language models, tokens transformers, inference use cases, you name it all. So the influence is clearly coming from these players you named. But the way they are driving the infrastructure, and I can’t keep track of the gigawatts myself, it’s 10 gigawatts here, 10 there, 30 there. It’s adding up to a lot. But I can just tell you, no matter what it is, Arista has been looked at as a very important and relevant participant, especially right now in the scale out and scale across. We will participate in the scale up. It will take a little longer. 

Arista Networks’ management is seeing AI demand taking longer to reach a stage where they have a sense of predictability on when the contracts land

The only other thing I’d add to this just generally as a topic is that when you think about the large AI use cases, there are acceptance clauses, and it really comes down to those coming together and the timing of that. That doesn’t follow a seasonality model…

…Good point. It lands when it lands. That is a very good point that Chantelle is making that in the cloud, we started having predictability of how they landed and how they got constructed. In AI, it’s taking longer.

As Arista Networks’ large customers focus on AI, the other parts of the company’s business are growing slower

It doesn’t leave the core business with a lot of opportunity. But that’s not to say it may be flattish, it may be grow. It’s to say that our customers are putting more attention there and that the existing business, which is already on very large numbers, will have lesser growth. We don’t yet know if it’s flattish or single digit or whether more will go to AI. We frankly can’t predict the mix this early in the game on 2026, but we think we’re in for a great ride in 2026.

Arista Networks’ management sees 3 big use cases for the company’s networking technologies for AI, namely, (1) scale up, (2) scale out, and (3) scale across; Arista Networks is also participating in scale across; management sees Arista Networks eventually participating in scale up, but it will take time; management thinks scale up deployments will have lower margins for Arista Networks, but they will carefully balance scale up, scale out, and scale across to achieve the overall appropriate margins; management thinks Arista Networks will be meeting scale up demand mostly with blue box solutions that come with lower software content from the company

There are 3 big use cases sitting in front of us, scale up, scale out and scale across. Arista’s participation to date has largely been in scale out. So we’ve got 2 major use cases in addition to augmenting this…

…Arista has been looked at as a very important and relevant participant, especially right now in the scale out and scale across. We will participate in the scale up. It will take a little longer. Today, it is largely a set of proprietary technologies like NVLink or PCIe, and I think that will happen more in ’27…

…As we go to significant scale up volume, we expect more margin and economic capability coming together. In other words, the volume of these things will be larger, the pressure on margins will be greater. So — but we will carefully have a mix of scale up, scale out and scale across to not affect the overall margins, but definitely take our fair share in that…

…What I think the evolution of the blue box will be, I think it will be more significant in the scale up use cases where there’s a higher dependency on the strength of our hardware and our NetDI capability and a lower requirement for software.

Arista Networks’ management is seeing a sea-change happening in back-end networks where the use of Infiniband previously is now switching to the company’s Ethernet solutions; management is seeing back-end and front-end networking converging more and more; management is seeing that Arista Networks is the only networking company outside of China that is successfully selling both front-end and back-end networking; management thinks the convergence of front-end and back-end networking is really advantageous for Arista Networks

I think a year or maybe even 2 years ago, Meta, I may have told you this, we were literally outside looking in at all these back-end networks that were largely being constructed with InfiniBand. We’ve seen a sea change, particularly this year, where obviously, more and more times we’re being invited to construct their 800 gig, last year was more 400 gig. And I think next year will be a combination of 800 gig and 1.6 terabit on the back end. The back end is putting pressure on the front end, which is why it’s getting more and more difficult for us to say, okay, what’s the back-end number that natively connects to GPUs and what is the front end. But we know of concrete cases in our cloud titans, where not only is it putting pressure on the AI number, but they’re having to go and upgrade their cloud infrastructure to deal with it. That part is happening in a small sort of way, but what’s happening in a big sort of way is the back and front are coalescing and converging more. And it’s really becoming hard to tell, and it’s probably six of one half a dozen of the other…

…We’re seeing that Arista, I think, is the only successful vendor outside of China selling both front end and back end. And this is where our engineering alignment is so important because we can offer the customer a consistent solution across their entire infrastructure. I think this is a unique differentiator that will really help us succeed as these networks become more and more mainstream…

…In terms of the front end and back end converging, this is truly advantageous to us because the front end requires a massive number of features. It’s incredibly mission-critical and supports a whole variety of applications, not just the straightforward but demanding communication patterns of the AI back end. So we see that our ability to tackle both of them effectively is a significant source of strength and a real differentiator and something that’s not easy for competitors to replicate. If you look at NVIDIA, for example, the sales volume is small in the front end and Cisco is small in the back end. And so I think we’ll see that kind of convergence being beneficial to us.

Arista Networks is doing well in both disaggregated scheduled fabrics and nonscheduled fabrics; management has no preference over one or the other and is happy to support whichever is best suited for customers’ needs

[Question] How should we think about your market opportunity between disaggregated scheduled fabrics versus nonscheduled fabrics, which appear to be used in the largest AI accelerator clusters at one of your largest customers?

[Answer] We’re not religious. We jointly developed the DSF architecture with one of our leading cloud titans, Meta. And we’ve been selling the nonscheduled fabric for a very long time. So we’ve never been religious about this. And both are doing very, very well at our cloud titans and specifically the one we co-developed with…

…We’ve had both architectures in massive production scale for, I think, 15 years now. And we’ll continue to offer this range of choice to our customers, offering them their choice between the highest value fabric with deep buffers, no hotspots, congestion-free, loss-free or an unscheduled fabric, which is maybe lower cost, but also can be more difficult to operate. And they both run the same software. So it gives the customer a range of options and a consistent operating model.

In the earlier days of the AI data center buildout, there were 2 of the larger neoclouds that did not even consider Arista Networks’ networking solutions because they wanted to go with NVIDIA’s GPU & networking bundle, but now, management is increasingly seeing more neoclouds wanting to work with Arista Networks

[Question] You mentioned neocloud is an area where you’re getting more momentum. I think you guys actually said at the Analyst Day as well. I’m just curious like what are you seeing with that customer set? I guess, from my perspective I’ve historically kind of thought of that customer as being more focused on the bundle, which isn’t necessarily your game, but it sounds like you’re maybe talking a bit more positively.

[Answer] In the beginning, we were looking at them bundling. I can think of 2 examples where we weren’t even invited to the party because you want my GPU, you’ve got to get the network from me, so we weren’t there. But leaving those 2 aside, and I think even those 2 might get open-minded over time, there are many more neoclouds worldwide coming up that are really looking for Arista’s help, not only on the product, but on the network design and the software capability. They just don’t have the staff and expertise to do everything themselves, and they would rather let us satisfy their network needs. So we are taking down many neoclouds and smaller enterprises, admittedly with smaller numbers of GPU clusters as well.

Arista Networks’ management sees power as being a really important asset in AI data center buildouts

But if they start with 1,000 to a few thousand, then we’re hopeful they’ll grow because the one advantage they seem to all have is colo space and power, which, as you know, is a very prestigious asset going forward.

The size of AI data centers is much larger than the traditional data centers that used to be Arista Networks’ bread-and-butter; companies’ attention is all on new buildouts for AI, and not on the refreshment of existing CPU-based data centers into AI data centers

[Question] We’ve seen a lot of the deals with the hyperscalers or the AI model companies with new data center build-outs, probably not a level since we’ve seen with the cloud build-out. So I’m just curious, is there a way to think about Arista’s opportunity with new network builds versus refreshing or upgrading existing networks?

[Answer] That’s exactly the way to think about it because in the past, with the cloud, we rarely got to talk about gigawatts and beyond. So much of them were multi-megawatts. So these are newly constructed AI build-outs as opposed to the traditional CPU or storage-driven cloud build-outs. Of course, they will have refresh too. But frankly, they’re not getting the attention. All the attention is going to the new build-outs for AI. So that’s the right way to look at it.

Coupang (NYSE: CPNG)

Coupang’s management is focused on building Coupang’s internal AI computing infrastructure; management is running small tests on opening the infrastructure to 3rd-party usage, but there are no concrete plans to do so at the moment; management’s focus with AI is to generate practical savings for Coupang; management is seeing AI deliver tangible benefits across Coupang’s operations, such as in demand forecasting, automating fulfillment processes, optimizing delivery routes, and more; management is confident that AI will deliver significant savings for Coupang; management also sees AI as an opportunity to improve Coupang’s service quality and customer satisfaction

We are focused on building our own internal AI computing infrastructure to support our operations and improve performance and cost efficiencies. We have some small efforts to test and learn on making parts of that technology available externally. But we’re not at the stage of having or discussing any real customer demand or capital plans there. I think in all that we do, we’ll focus on practical applications, practical savings for the company primarily, and remain disciplined in how we allocate resources…

…AI has always been very central to our operations, and that’s only becoming more true. AI is delivering tangible benefits across our operations, including in areas that relate to demand forecasting, automating fulfillment processes, and optimizing delivery routes, among many other applications. These advances are helping us reduce waste, improve productivity and enhance the customer experience. We’re confident that AI will deliver significant savings and improve our P&L over time. And we have many efforts underway that we expect to bear fruit along those lines…

…AI is also more than just about efficiency. It provides an exciting opportunity to raise the bar for service quality and customer satisfaction. And we’re just as eager to expand our investment and experimentation cycles on that front.

Datadog (NASDAQ: DDOG)

Datadog experienced strong revenue growth from AI native customers in 2025 Q3; management saw an acceleration of growth in the AI cohort in 2025 Q3 when excluding the largest customer (likely to be OpenAI); management is seeing AI native customers broaden in number and size; Datadog has more than 500 AI native companies, of which 100 are spending more than $100,000 annually (was 80 in 2025 Q2), and 15 are spending more than $1 million (was 12 in 2025 Q2); management sees the activity of the AI native customers primarily as an indication of what’s to come as companies of every size and industry incorporate AI; AI native customers accounted for 12% of Datadog’s revenue in 2025 Q3 (was 11% in 2025 Q2); management thinks the percentage of Datadog’s revenue from AI native customers will be a less relevant metric over time as AI usage in production broadens to non-AI natives; Datadog’s larger AI native customers encompass a fairly broad group of AI companies

We also experienced strong revenue growth for our AI native customers and a broadening contribution to growth among those customers. There, too, we saw an acceleration of growth in our AI cohort in Q3 when excluding our largest customer…

…We continue to help AI native customers big and small to grow and scale their businesses. And we continue to see this group broaden in number and size, with more than 500 AI native companies in this group, about 100 of which are spending more than $100,000 annually with Datadog and more than 15 of which are spending more than $1 million annually with us. While we know there’s a lot of attention on this cohort, we primarily see it as an indication of what’s to come as companies of every size and every single industry incorporate AI into their cloud applications…

…In Q3, this group represented 12% of our revenue, up from 11% last quarter and about 6% in the year ago quarter. I will note that over time, we think this metric will become less relevant as AI usage in production broadens beyond this group of customers…

…[Question] On the AI side, and I don’t want to talk about the customer, but more the other ones, like 15 customers over 1 million. That’s like a big number and 100 over 100,000. How do we have to think about the nature of those?

[Answer] It’s actually fairly broad. So there are model vendors, and the models can be LLMs, can be video, can be sound generation; it can be all of the various parts of the stack you see as independent companies. There are quite a few companies that do work on the coding side, so coding assistants and vibe coders and everything in that range. Some of these are very new companies. Some of these are not very new companies; some of these started 5, 7, 8 years ago and were not necessarily AI native from day 1, but very quickly pivoted to AI, which gives them the growth they see today. So we see a little bit of that. We have companies that are in other parts of the stack in AI, on, say, the server side, the other components of the infrastructure. And we have other companies that are purely applications built with AI. So we have a bit of everything in there.

Datadog’s management is seeing high customer interest in Datadog’s Bits AI agents; the Bits AI SRE agent already has thousands of customers in preview-access; management is getting very enthusiastic feedback from customers on the time and cost savings Bits AI is delivering; a RUM (Real User Monitoring) product user has used Bits AI SRE Agent to significantly improve meantime to resolution; management is currently unsure if Bits AI will have a bigger impact on Datadog’s business from direct monetisation, or indirect monetisation; management thinks Bits AI has been a differentiator for Datadog; management is improving Bits AI in a very aggressive manner; Bits AI has helped Datadog land some large Cloud SIEM (security information and event management) deals

We are seeing high customer interest in our Bits AI agents, which we announced at our DASH user conference in June. We have now onboarded thousands of customers for preview access to the Bits AI SRE agent. And as we prepare for general availability, we are getting very enthusiastic feedback on the time and cost savings enabled by Bits AI. As a RUM user recently told us, with Bits AI SRE being on call 24/7 for us, meantime to resolution for our services has improved significantly. For most cases, the investigation is already taken care of well before our engineers sit down and open their laptops to assess the issue. And this is not an isolated comment. We see the potential here for our agents to radically transform observability and operations…

…In terms of the impact for next year, on the packaging side, I’m not completely sure yet whether the biggest impact will be seen from what we charge Bits AI itself or for the rest of the platform, that it gets benefits from the differentiation of Bits AI…

…But what we can tell is this is differentiating, this is good. It works significantly better than anything else we’ve seen or heard of in the market, and we are doubling down on it. We have many, many teams now working on deepening Bits AI SRE, making sure it goes further into the resolution so it doesn’t just point to the issue but fixes the code, all these kinds of things; we’re working hard on that. We’re also working on breadth, making sure that we train it on many more types of data and many more types of sources, sometimes even observing systems that are not Datadog, so we can cut across to the other systems our customers are using. So we are very, very aggressively developing Bits AI SRE…

…The Bits AI agent really has a wow factor for customers. So what works really well, and we’ve seen that a number of times, is we set it up for them. It’s running on their alerts, and they go through an outage and they still go through the motions, so they still set up a bridge and they have 20 people and they spend 2 hours and in the end, they have an idea of what went wrong. And then they go to Datadog and they see, oh, there’s an investigation that had run. And 3 minutes into the outage, it got to the same conclusion that they got to 2 hours later with 20 people on the call. And that’s completely eye-opening for customers when they see it. So that’s why we get many quotes about it…

…That’s what helped us win some large land deals for our Cloud SIEM products, because the combination of a SIEM that runs extremely efficiently on top of observability data, that runs very efficiently on top of Flex Logs, and that also saves an immense amount of time by getting 90% of the issues out of the way with automated investigation, is extremely attractive to customers.

Datadog’s management recently launched LLM Experiments and Playgrounds for general availability, so companies can rapidly iterate on LLM applications and AI agents; management also recently launched LLM-as-a-judge evaluations for general availability for customers to assess their AI applications’ quality and safety; the number of LLM spans customers are sending to Datadog has more than quadrupled in the past few months; management is seeing a lot of interest in Datadog MCP servers; the Datadog MCP server bridges Datadog and AI agents from 3rd parties; preview customers of Datadog MCP servers are using real-time production data for troubleshooting, root cause analysis and automation in AI agents; management sees MCP as a way to cement Datadog into customers’ workflows; management continues to see customer interest grow for next-gen AI observability; 5,000 customers are sending AI data to one or more of Datadog’s AI integrations; Datadog’s foundational model for time series forecasting, TOTO OpenWave, is one of the top downloads on Hugging Face over the past few months; Datadog currently has products getting into market for GPU monitoring, but those are not generating any significant revenue; all of the details on AI-related revenues management has shared are not for GPU monitoring

In LLM observability, we recently launched LLM experiments and playgrounds for general availability, helping teams to rapidly iterate on LLM applications and AI agents. We also launched custom LLM-as-a-judge evaluations for general availability, which lets customers write evaluation prompts to assess application quality and safety. As an illustration of growth and adoption in the past few months, the number of LLM spans customers are sending to Datadog has more than quadrupled.
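The "LLM-as-a-judge" pattern Datadog describes is a common evaluation technique: a second model is given an evaluation prompt and grades an application's output. As a rough, hypothetical sketch of the pattern only (this is not Datadog's API; `call_judge_model` stands in for any LLM client, and the prompt and verdict format are made up for illustration):

```python
# Hypothetical sketch of the LLM-as-a-judge pattern, not Datadog's API.
# `call_judge_model` is a stand-in for any prompt -> text LLM client.

JUDGE_PROMPT_TEMPLATE = """You are evaluating an AI application's answer.
Criterion: {criterion}
Question: {question}
Answer: {answer}
Reply with exactly one word: PASS or FAIL."""

def build_judge_prompt(criterion: str, question: str, answer: str) -> str:
    """Render the evaluation prompt the judge model will receive."""
    return JUDGE_PROMPT_TEMPLATE.format(
        criterion=criterion, question=question, answer=answer
    )

def parse_verdict(raw: str) -> bool:
    """Map the judge model's raw text to a pass/fail boolean."""
    return raw.strip().upper().startswith("PASS")

def evaluate(question: str, answer: str, criterion: str, call_judge_model) -> bool:
    """Run one evaluation with any callable that takes a prompt and returns text."""
    prompt = build_judge_prompt(criterion, question, answer)
    return parse_verdict(call_judge_model(prompt))

# Trivial stand-in judge that fails empty answers, just to exercise the flow:
fake_judge = lambda prompt: "FAIL" if "Answer: \n" in prompt else "PASS"
print(evaluate("What is 2+2?", "4", "Answer must be factually correct", fake_judge))
```

In production the stand-in judge would be replaced by a real model call, and verdicts would be logged alongside the traced LLM spans so quality can be monitored over time.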

We are seeing a lot of interest in the Datadog MCP servers. Our MCP server acts as a bridge between Datadog and AI agents, such as Codex by OpenAI, Claude by Anthropic, Cursor, GitHub Copilot, Goose by Block and many more. Our preview customers are using real-time production data context to drive troubleshooting, root cause analysis and automation in these agents. One user told us, “The Datadog MCP server is a great tool. It enables me to get the last 5 of my app and follow the spans and traces all the way to the root cause. I have never been more hooked on Datadog.” So we see MCP adoption as a great way to cement Datadog even further into our customers’ workflows…
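Conceptually, an MCP server like the one described above exposes a catalog of named tools that an agent can call to fetch live data. A minimal, purely illustrative sketch of that bridge idea (this is not Datadog's MCP server; the tool name, request shape, and data are invented):

```python
# Conceptual sketch of an MCP-style bridge: observability queries exposed
# as named tools an AI agent can call. Not Datadog's actual server; the
# tool names and trace data here are hypothetical.

FAKE_TRACES = [
    {"service": "checkout", "status": "error", "span": "db.query"},
    {"service": "checkout", "status": "ok", "span": "http.request"},
    {"service": "search", "status": "error", "span": "cache.get"},
]

def tool_recent_error_spans(service: str) -> list[str]:
    """A 'tool' the agent can invoke: error spans for one service."""
    return [t["span"] for t in FAKE_TRACES
            if t["service"] == service and t["status"] == "error"]

# The bridge is essentially a registry mapping tool names to callables;
# the agent sends {"tool": ..., "args": ...} and gets structured data back.
TOOLS = {"recent_error_spans": tool_recent_error_spans}

def handle_agent_request(request: dict):
    """Dispatch one agent tool call to the matching callable."""
    return TOOLS[request["tool"]](**request["args"])

print(handle_agent_request({"tool": "recent_error_spans",
                            "args": {"service": "checkout"}}))
```

A real MCP server adds protocol plumbing (tool discovery, schemas, transport), but the core value is the same: the agent gets live production context instead of working blind.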

…We continue to see rising customer interest in next-gen AI observability, with over 5,000 customers sending us AI data to one or more of our AI integrations…

…A shout out to our AI research team for the amazing work they have published. Our TOTO OpenWave time-series forecasting model has been one of the top downloads on Hugging Face over the past few months, and that is across all categories. It is very impactful as, among other things, the high quality of this work allows us to attract world-class AI researchers and engineers…

…We have products that are getting into the market now for GPU monitoring. But these don’t generate any significant revenue yet. So all the revenues we’ve shared, like the acceleration, et cetera, that’s not related to us capitalizing more on GPUs, that’s a future opportunity.

Example of a 7-figure expansion deal with a heavy equipment company; the heavy equipment company will replace its open source log solution with Datadog’s products; the heavy equipment company also plans to adopt Datadog’s LLM Observability

We signed a 7-figure annualized expansion with a Fortune 500 heavy equipment company. With this expansion, this customer will replace its open source log solution with Datadog log management and Flex logs. They plan to adopt LLM Observability and their IT team is using cloud cost management to improve cost visibility and governance.

Example of a 9-figure expansion deal with a leading AI company; the AI company has expanded its usage of multiple Datadog products and has committed to an early renewal with higher commitments to secure better terms; the AI company is Datadog’s largest AI native customer (and is likely OpenAI)

We signed a 9-figure annualized expansion with a leading AI company. This company has been a long-time Datadog customer and has expanded their usage of multiple products, securing better economics for a higher commitment with an early renewal…

…We extended the contract of our largest AI native customer.

Datadog’s management continues to believe that digital transformation, cloud migration, and AI adoption are long-term growth drivers of Datadog’s business; management sees the market opportunity in cloud and AI growing rapidly into trillions of dollars

There is no change to our overall view that digital transformation and cloud migration are long-term secular growth drivers of our business. Meanwhile, we are advancing rapidly in AI, where we are incredibly excited about our opportunities. We’re building a comprehensive set of AI Observability products to help our customers tackle the higher complexity that comes with these technologies. And we are building AI into Datadog…

…The market opportunity in cloud and AI is expected to grow rapidly into the trillions of dollars and companies of every size and industry are looking to adopt AI to deliver value to their customers and drive positive business outcome. So we’re moving fast to help our customers develop, deploy and grow into the cloud and into the AI world.

Datadog’s management’s current guidance for 2025 is significantly higher than the guidance given in the 2024 Q4 earnings call, and the biggest surprise has been the adoption of AI growing faster than expected

[Question] If we go back to the beginning of the year, Datadog was expecting 19% revenue growth. It looks like you’re tracking to something over 26% growth now, and that’s just the high end of your guidance. So I guess my question is, what surprised you the most this year?

[Answer] I think the biggest surprise for us has been that AI adoption has grown faster than we thought it would at the beginning of the year. We’ve seen that across our AI cohort. We’ve also seen some of our new products, and the changes we’re making on the go-to-market side, click perhaps earlier than we would have thought otherwise. So all in all, we saw the leading part of the business, with AI, grow faster, and the slower-growing, more traditional part of the business also accelerate, and that gets us where we are today.

Datadog’s management thinks that customers will eventually want agentic monitoring capabilities in a unified platform for observability because (1) it’s not practical for customers to manage so many integrations that each have their own management control and observability control, and (2) the AI parts and non-AI parts cannot be separated

[Question] When you look at some of the independent software vendors that are releasing Agentic solutions, Agentic portfolios. A number of them are including observability as part of their sort of value proposition. Is there any work you think Datadog has to do to sort of infiltrate that market or make sure that customers look to Datadog as that Agentic monitoring capability as some of these independent software vendors try to bundle in observability into their solutions.

[Answer] There’s absolutely no doubt to us that customers will want a unified platform for observability for all of this. There are 2 parts to that. One is, historically, every single piece of software we integrate with, whether that’s SaaS or things that customers run themselves, also has its own management controls and observability controls. But you’re not going to log into [ 70 ] of those separately; in the case of the customers we mentioned, they use 60 integrations for the smaller customers and 150 integrations for the larger ones. It’s not practical to actually go and manage that separately. So we think all of that belongs in a central place, and that’s the historical trend we’ve seen. We also think that you can’t separate the AI parts from the non-AI parts of the business. So you’re not going to look at your agents separately from the way you look at your web hosting and your database and everything else you have in your stack. So all of that, in the end, will be attached to observability.

Datadog’s management thinks AI technology allows Datadog to build capabilities into the On-Call product that it otherwise could not; management thinks the future of On-Call, when infused with AI in areas such as incident prediction, is very exciting

We entered the field with On-Call because we wanted to own the end-to-end incident resolution. Before that, we were detecting the incidents and sending the alerts, and then the resolution pretty much happened elsewhere after that. Customers were spending their time in Datadog to diagnose and understand what was going on. So we wanted to own the full cycle.

And we thought that with AI, in particular, we’d have the ability to do things, if we own the whole cycle, that we couldn’t do otherwise. So what you see right now is that this resonates with customers; they are adopting the product. We’ve mentioned some exciting customers, say, [ one ] with 5,000 seats for On-Call, which is very exciting. But in the future, there are many more things we can do, and are working on, for that product.

If we both detect the incident and notify, we can do things such as even predicting the incident and notifying early, or rerouting early, or telling people before the incident actually takes place how they can potentially fix it. So these are all things we’re working on. I mean, look, if you look at the various product announcements we’ve made, whether that’s Bits AI SRE or the time series forecasting model we have released, when you assemble all that, you get to a very, very interesting picture of what we can do in the future. So we’re excited by that.

Nu Holdings (NYSE: NU)

Nu Holdings’ management’s vision for Nu Holdings is to be AI-first, where foundational models are deeply integrated into the company’s operations; management thinks Nu Holdings is uniquely positioned to be a leader in the use of AI in financial services, ahead of incumbent banks and regional fintechs; management developed Nuformer in the past 12-15 months; Nuformer is Nu Holdings’ proprietary approach for building large generalizable models that are based on principles similar to those behind leading LLMs (large language models); Nuformer has 330 million parameters and was trained on approximately 600 billion tokens, which is already a leading scale of data for the financial services industry, but Nu Holdings’ full data set is much larger; management believes Nu Holdings’ full data set gives Nuformer a unique edge in improving its capabilities; the adoption of foundational models has delivered an improvement about 3x higher than what’s typically observed in successful machine learning upgrades for credit models; the adoption of foundational models has helped Nu Holdings meaningfully increase credit limits for eligible customers while maintaining the same level of risk; management is scaling the use of foundational models to Mexico and every other part of Nu Holdings’ business; management thinks that embedding AI into Nu Holdings is a once-in-a-lifetime opportunity to further differentiate the company from traditional banks

Our vision is to become AI first, which means integrating foundation models deeply into our operations to drive an AI-native interface to banking, while creating meaningful benefits for both our customers and our business…

…We believe Nubank is uniquely positioned to become AI first and a leader in the use of AI in financial services globally, and we’re already starting to see the first breakthroughs. Since our early days, we’ve known that technology and data will be our strongest competitive advantage, being cloud native and built entirely on modern architecture enables us to simulate, experiment, train and deploy foundation models at scale. Coupled with our proven ability to attract world-class talent, this puts us ahead of incumbent banks and regional fintech competitors and places us in a unique position globally.

Over the past 12 to 15 months, we developed Nuformer, our proprietary approach for building large generalizable models based on advanced transformer architectures and self-supervised learning principles similar to those powering world-class LLMs. These models provide a deeper understanding of customer behaviors and can be deployed across our critical risk and personalization engines. To reach this level of performance, the first generation of our Nuformer model was built with 330 million parameters and trained on approximately 600 billion tokens, an unprecedented scale of data by financial industry standards. That data represents only a fraction of our full data set, which spans trillions of tokens and reflects the vast scale and diversity of Nubank’s platform. Our business model with principality at its core generates a deep repository of high-quality transactional and behavioral data, giving us a distinctive edge by enabling Nuformer to learn from richer context and continuously strengthening its predictive power.
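Nubank has not disclosed how transactional and behavioral data becomes the "tokens" Nuformer trains on. Purely as an illustration of the general idea behind tokenizing transactions for a transformer-style model, here is a sketch of one plausible discretization scheme; every field name and bucket below is hypothetical:

```python
# Illustrative only: one plausible way transactional data could be turned
# into token sequences for a transformer, in the spirit of the Nuformer
# description. Nubank's actual scheme is not public; all fields and
# buckets here are invented.
import math

def amount_bucket(amount: float) -> str:
    """Discretize a transaction amount into a coarse log-scale bucket token."""
    return f"AMT_{int(math.log10(max(amount, 1.0)))}"

def tokenize_transaction(txn: dict) -> list[str]:
    """One transaction becomes a short sequence of categorical tokens."""
    return [
        f"MCC_{txn['merchant_category']}",   # merchant category
        amount_bucket(txn["amount"]),        # bucketed amount
        f"HOUR_{txn['hour']}",               # time of day
        f"CH_{txn['channel']}",              # online / in-store / transfer
    ]

def tokenize_history(transactions: list[dict]) -> list[str]:
    """A customer's history becomes one long token sequence with separators,
    on which a model could be trained self-supervised, e.g. by predicting
    the next transaction's tokens."""
    tokens = []
    for txn in transactions:
        tokens.extend(tokenize_transaction(txn) + ["<SEP>"])
    return tokens

history = [
    {"merchant_category": "grocery", "amount": 84.0, "hour": 19, "channel": "in_store"},
    {"merchant_category": "transport", "amount": 7.5, "hour": 8, "channel": "online"},
]
print(tokenize_history(history))
```

At a few tokens per transaction, the quoted scale of roughly 600 billion training tokens gives a sense of how many customer interactions such a model would see.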

Historically, gains in credit performance have come from four main fronts: incorporating more and better data sources into models; expanding training samples or reducing bias within them; optimizing positive frameworks, including the use of complementary models that evaluate different dimensions of credit risk; and finally, refining modeling techniques, from definition of targets to model architecture and feature engineering. The adoption of foundation models represents a radical expansion of this last frontier. It brings a research-driven approach that moves the needle through advances in model architecture and training processes, enabling rapid and continuous improvement as AI researchers push the boundaries of what’s possible. When we applied this approach, the models were built to deliver an average improvement about 3x higher than what’s typically observed in successful machine learning model upgrades. Translating this into business outcomes, our initial models enabled a major upgrade to credit card limit policies in Brazil, allowing us to meaningfully increase limits for eligible customers while maintaining the same overall risk appetite. This successful breakthrough within an already robust underwriting model, like credit card in Brazil, underscores the significant potential of these advanced approaches. We’re now focused on scaling this innovation beyond Brazil, already in motion in Mexico, and extending it across every part of Nubank, from personalization and cross-sell to fraud and collections, further reinforcing both the strength of our model and our ability to execute at scale.

That said, we’re still just scratching the surface. As always, at Nubank, it’s still day 1, but we believe that embedding AI into our business represents a once-in-a-lifetime opportunity to further differentiate Nubank from traditional banks.

Nu Holdings’ management sees AI improving Nu Holdings’ understanding of each customer; management sees AI changing the way users interact with the company; management sees significant opportunity to use agentic workflows across Nu Holdings’ products

For our customers, AI is enhancing our understanding of each individual and their financial needs, allowing us to deliver personalized recommendations, contextual offers and products, and proactive insights at the right moment. It will also transform the way people interact with Nubank, be it through a simpler and seamless app or through a number of additional channels embedding conversational user interfaces. We think there is a significant opportunity to include agentic workflows across most products and services, improving customer experiences across the board.

Nu Holdings’ management thinks AI is helping Nu Holdings improve risk management and scale efficiently; AI is helping Nu Holdings reduce credit losses and fraud losses; AI is helping Nu Holdings improve productivity

For our business, AI is strengthening how we manage risk and scale efficiently. It is helping us to design safer and more precise financial solutions, reducing credit and fraud losses, and enabling tailored collection strategies that drive better recoveries. At the same time, it is enhancing productivity across the company, from leaner operations to faster development cycles and higher engineering throughput.

Paycom Software (NYSE: PAYC)

Paycom’s management has now enabled IWant, Paycom’s new command-driven AI product, across the entire client base; IWant has successfully responded to millions of queries from employees, managers and executives; management is seeing a dramatic uptick in usage of IWant, especially among new users (and this includes C-suites); management is particularly encouraged by the engagement with IWant among C-suites; IWant is hosted by Paycom and draws from a single database, which minimises errors; management sees IWant as a new way of accessing and navigating Paycom’s software ecosystem; IWant is changing how new and existing Paycom users are accessing Paycom’s software ecosystem and deriving value; IWant gives C-suites access to information about their companies that they previously did not have directly; management is currently seeing sticky user behaviour with IWant, but it’s still early for IWant since it was launched in July 2025

We also executed the launch of our award-winning and industry-first command-driven AI product, IWant. Now enabled across our entire client base, IWant is transforming how our clients and their employees engage with their HR and payroll data.

IWant has already successfully responded to millions of queries from employees, managers and executives, extending the power of our full solution automation. We are seeing a dramatic uptick in usage, especially among new users, which include the C-suite and newly onboarded employees of our clients. The intuitive nature of IWant means new employees no longer need training on the system and are able to utilize the full solution upon hire. I’m particularly encouraged by the engagement we are seeing among the C-suite. Traditionally, executives have not been daily users of HCM solutions. With IWant, thousands of C-suite executives are already pulling data and insights directly from the Paycom system and the feedback has been phenomenal…

…IWant hosted by Paycom only draws from Paycom’s single database, which eliminates conflicts created by inconsistent or duplicative external data sets, significantly improving data integrity and the quality of the user experience…

…If you’re a new user being added on to our system, meaning you’re a new employee, meaning you’re just now gaining access to this system, it’s your predominant way to use our software. And so as we look into the future, I would expect we would see more and more people utilizing IWant as a way to access and navigate through our system in order to make changes and receive information than what you would — those that are actually navigating through the traditional way…

…With IWant, the more of our product that you have and are utilizing, the more access to information you have. So that becomes important; and with IWant, you’re eliminating all navigation as well. So you don’t really need training on the system. Most new employees would come into our system and they would have some level of training on how to use the system. With IWant, we’re just not seeing that with new employees coming on to the system. You just tell it what you want, and it takes you there. And so again, sometimes usage patterns are hard to change. And I don’t think someone should change their usage pattern unless there’s an opportunity to be more efficient or get something — get there quicker. And we’re seeing that with new people that are onboarded in the system. And then we’ve also seen that with traditional users that may not have been achieving full value for all the modules that they have…

…I, as a CEO, I’m not set up on our benefit system to go run benefit information. I’m not set up on our applicant tracking or talent acquisition system. I’m not set up on our payroll to run all the payroll stuff or HR or any of it, expenses, any of it. With IWant, I can go in and I get access to everything. I don’t need to know how to use it. I don’t need to know how to do anything. I just tell it the information that I want…

…[Question] We see some AI systems out there that users may use initially, but then go back to how they’ve operated before. Can you share a little bit about the ramp and consistency of usage you’re seeing so far?

[Answer] We’re not seeing people use it a couple of times and then stop using it. I will say that when you looked at it in the early days, people didn’t know how to use it. If you ask IWant where the closest pizza restaurant is to you, it’s not going to be real successful in answering that question. And so people had to kind of learn how to use it to their benefit. And it’s been a short period of time. Again, we’ve had IWant out since July. And every client we have has it and all their employees do now.

Paycom’s management thinks that command-driven functionality will be the future for all software

I’m confident that command-driven functionality is the future for all software.

Paycom’s management has significantly expanded Paycom’s data center capabilities to support the company’s push for automation and future AI developments; management frontloaded $100 million in capex in 2025 Q3 to match the IWant rollout; the $100 million capex provides Paycom with multiyear capacity to support its AI initiatives; management has to do extensive optimisation to run IWant on Paycom’s own infrastructure; management thinks it’s more expensive to rent AI compute capacity than to build Paycom’s own capacity, especially since Paycom has been operating its own data centers for the last 27 years

To facilitate the automation experience, including IWant and future AI developments in the pipeline, we significantly expanded our data center capabilities, spending roughly $100 million of AI-focused CapEx on our Phoenix and Oklahoma City data centers. We front-loaded this CapEx to match the timing of our IWant rollout in Q3…

…More specifically, we invested approximately $100 million into our data centers, and that spend is now largely complete. This investment provides us a multiyear capacity runway to support our AI initiatives…

…[Question] Are you guys doing anything to optimize the usage of GPUs to better handle the millions of queries you’re already seeing, whether it be in the underlying LLM or just teaching users what they can and can’t do?

[Answer] There’s a lot you have to do to optimize. It matters how many times you’re hitting it. It matters how you’re filtering through. We use these things to also look at nonresponse rates and everything else. So there’s a lot that we go through to be able to analyze. And this is a daily analyzation of what’s going on within our product. So I don’t want to describe everything that we’re doing. It does matter though, how you develop something to how much capacity of the GPU you’re going to actually utilize or need…

…We also looked at utilizing public cloud type data centers, if you will, to be able to host for us and utilizing their GPUs. And with where we see ourselves going in the future and what the costs were associated with just being able to handle our current load, initial load for IWant, we felt it better for us to go ahead and just set up and buy our own plus that way we have control over it, and it’s operating just as all the rest of our business has for the last 27 years of operating our own data centers. So it’s really worked for us.

Paycom’s management does not see the need for major capex again in the next few years after the $100 million capex for AI infrastructure

We did have to make a spend in order to have that capacity for both what we’re doing now and into the future. So we’re in this business now. I don’t expect that we would have any level even close to this type of spend over the next couple of years…

…I was just saying I don’t know of any major CapEx opportunities for next year or even the year after from a CapEx perspective…

Paycom’s management is not seeing Paycom’s competitors coming up with AI-powered solutions

To the extent our competitors do have AI, we’re not running into it when we talk to their clients. And I don’t know how they’re paying for it because when we looked into it, it was pretty expensive to rent.

Sea Ltd (NYSE: SE)

Sea’s management is seeing Shopee’s AI efforts contribute meaningful monetisation gains in 2025 Q3; Shopee’s AI efforts include (1) smarter search, (2) better recommendations, (3) more personalized content, (4) enhanced product discovery for shoppers, and (5) generative AI tools for sellers to make their product listings more appealing; Shopee’s AI efforts have led to the following in 2025 Q3: (1) a 10% year-on-year increase in purchase conversion rate, (2) a 12% year-on-year increase in buyer purchase frequency, and (3) a 15% year-on-year increase in average monthly active buyers; Shopee will not be building foundational large language models or data centers like what Big Tech is doing; management wants Shopee to utilise AI technologies developed by Big Tech, and focus on applications; the majority of Shopee’s customer service is now handled by AI and customer satisfaction is very high

Our AI efforts have already begun to bear fruit, contributing meaningfully to our monetization gains in the third quarter. Smarter search, better recommendations and more personalized content have made Shopee easier and more enjoyable to shop on. We have also used AI to enhance product discovery beyond search, helping buyers find relevant and interesting items even when they arrive without a specific purchase in mind. We empowered sellers with AI tools, enabling them to generate images, videos, text descriptions and virtual showrooms to make their product listings more appealing. These initiatives have increased buyer engagement, improving our purchase conversion rate by 10% year-on-year in the third quarter. Taken together, all these efforts have resonated with our customers. Buyer purchase frequency across our markets continued to improve, going up a further 12% year-on-year in the third quarter. Average monthly active buyers also increased 15% year-on-year in the third quarter…

…We’re not going to like develop — trying to make some fundamental large language model breakthrough. We’re not going to build data centers. I think like for that part, we are very much like open to work with all the like big tech like who are kind of — we have a lot of admiration with respect to how much effort and how much they can do to continually have the breakthrough of the technology and make technology more powerful and more useful. And what we are going to more focus on applications and how that technology built in Silicon Valley or anywhere in the world transform to a consumers’ daily life, a small business like in Indonesia, in Vietnam, in Brazil. So that will be specially what we are good at…

…Now the majority of our customer service is handled by AI, like a chatbot, and the satisfaction rate is very, very high.

Shopify (NASDAQ: SHOP)

Shopify’s management sees Shopify holding the data-advantage in the AI revolution in commerce; Shopify has structured data across billions of products, which helps its AI partners surface relevant products quickly

If AI is fueled by data, then Shopify has a clear advantage. We power millions of merchants and billions of transactions. That gives us access to a world of data across a spectrum of commerce. And we’re using that data to create better shopping experiences for both merchants and shoppers…

…We’ve structured data across billions of products so our partners can surface the most relevant items in seconds.

Shopify’s management is thinking of AI in 3 ways, (1) how AI helps merchants sell better, (2) how AI helps merchants operate better, and (3) how AI helps Shopify operate better

We think about the evolution of AI in 3 ways: how AI will help our merchants sell everywhere, how AI will help our merchants operate smarter, and how we, as a company, will use AI to build better.

Shopify’s management thinks agentic commerce will fundamentally change how consumers shop; Shopify has a number of tools for merchants to thrive with agentic commerce, namely, Catalog, Universal Cart, and Checkout Kit; management thinks agentic commerce has 3 layers, (1) discovery, (2) the purchasing experience, and (3) the post-purchase journey; management is building for a seamless and intuitive shopping experience across the 3 layers; leading AI players, including ChatGPT and Perplexity, are already using Shopify’s Catalog tool to power product discovery directly inside their chat interfaces; Universal Cart and Checkout Kit are powering in-chat shopping flows within ChatGPT and Microsoft Copilot; Shopify is building tools that help AI agents keep customers engaged and informed throughout the entire post-purchase experience; management believes that Shopify is helping its merchants be primed for success in agentic commerce; management has seen AI-driven traffic to Shopify stores grow 7x, and orders attributed to AI searches up 11x, since January 2025; a recent survey of shoppers by Shopify showed 64% of shoppers are likely to use AI for their buying in BFCM (Black Friday – Cyber Monday); management thinks agentic commerce is still really early; Shopify’s AI agent tools were built only last year

AI is helping our merchants sell everywhere, what’s known as agentic commerce. Put simply, AI is able to fundamentally change how we shop, moving from search to conversation, helping all consumers purchase more efficiently. And that’s why we built the commerce for agents tools that we introduced on our last call, Catalog, Universal Cart and Checkout Kit. These tools make it easier for agents to shop across merchant stores on a buyer’s behalf…

…Agentic commerce is so much more than just the last click. Think about it in 3 layers: product discovery, purchasing experience and the post-purchase journey. Now if you’re only looking at the payment or checkout layer, you’re missing the bigger picture of what we’re building: a seamless and intuitive shopping experience end to end.

First, let’s talk discovery. We’ve structured data across billions of products so our partners can surface the most relevant items in seconds. It’s clear where this is going. Shopping is becoming more conversational, more personalized and much more efficient. And that’s why the leading AI partners are already using Catalog to power product discovery inside their experiences. I’m sure you all saw the announcement about our partnership with ChatGPT, which is a strategic play that we’re really excited about. But let me be clear, we’re also partnered with other leaders in conversational AI like Perplexity, and our goal is to power product discovery for all agents, making us the standard across the Internet…

…On purchasing experience. Once a shopper finds what they want, Universal Cart and Checkout Kit make add to cart and checkout seamless inside the conversation. ChatGPT, along with Microsoft Copilot have already partnered with us here to make in-chat shopping flows possible.

And finally, post purchase. We’re investing in tools that help agents keep customers engaged and informed, order status, return, support, reorder prompts, so the experience stays smooth and merchants build durable relationships with their customers…

…What all this should tell you is that our merchants are primed for success in the new world of agentic commerce…

…Since January, we’ve seen AI-driven traffic to Shopify stores up like 7x. And we’ve actually seen orders attributed to AI searches up like 11x since that. So the data is showing it’s already growing. And we actually just recently did a survey for — to consumers to better understand some BFCM trends, and something like 64% of shoppers told us they’re likely to use AI to some extent in their buying…

…It’s still obviously very, very early. But what we’re really trying to do is laying the rails for agentic commerce…

…We built AI agent tools last year; now we’re partnering with everyone that matters.

Shopify’s AI assistant for merchants, Sidekick, saw over 750,000 shops using it for the 1st time in 2025 Q3; to-date, Sidekick has had almost 100 million conversations with merchants, with 8 million in October 2025; merchants’ conversations with Sidekick can go 50-100 turns deep, covering a wide range of topics; Sidekick will get better over time; Sidekick was built 2 years ago, before there was hype about AI assistants

Sidekick, our on-platform intelligent assistant, is a prime example of that commitment. And frankly, the rate of adoption speaks for itself. In Q3 alone, over 750,000 shops used Sidekick for the first time. And to date, Sidekick has had almost 100 million conversations with merchants, with 8 million in October alone. And it’s quickly becoming the default way merchants get things done. Hundreds of thousands of merchants are running core parts of their business using Sidekick. In fact, conversations can go from 50 to 100 turns deep, covering everything from analytics and building new customer segments, to automating better SEO and so much more…

…At this scale, Sidekick will only get smarter and more powerful…

…We built Sidekick 2 years ago, well before any of the hype around that.

Shopify’s management is using AI to drive Shopify to build better products; Shopify has an internal tool known as Scout, which is a voice of the customer system that indexes hundreds of millions of merchant feedback items and makes them searchable; anyone in Shopify can use Scout to get grounded answers in seconds when similar requests would have taken weeks in the past; Shopify is developing other similar tools to Scout to make faster, better, decisions

The last thing I’ll touch on with AI is how we’re using it to build better products. For years, we’ve been honing our internal capabilities in the same way we’ve been empowering our merchants: shipping fast, measuring what matters and scaling what works using AI…

…We’re turning vast amounts of raw signal into shipped products and features quickly and relentlessly…

…We have a tool affectionately known as Scout. Now Scout is an internal voice of the customer system that indexes hundreds of millions of merchant feedback items, making them searchable within our tools. Any PM, designer, engineer or, frankly, anyone at the company, including myself and Jeff, can ask a question and get grounded answers in seconds. That used to take weeks. Patterns emerge by market, vertical and merchant size, allowing us to write clear specs, prioritize better and ship with confidence. And Scout is just one of many tools we’re developing to turn our own signals, whether it’s support tickets, usage data, reviews, social interactions or even Sidekick prompts, into fast informed decisions.

Shopify Campaign has seen 9x year-on-year increase in budget commitments in 2025 Q3; Shopify Campaign has seen 4x year-on-year increase in merchant adoption of Shopify Campaign in 2025 Q3; management has delivered product-improvements to Shopify Campaign, including an AI-powered ranking improvement

We’ve seen 9x year-on-year increase in budget commitments from merchants this quarter for Campaigns. In fact, if you just look at Q3 2024 to Q3 2025, we’ve actually seen a 4x year-on-year growth in merchant adoption of Campaigns…

…On the product side, this thing keeps getting better and better. We introduced Gross Sales, which is this new default high-reach objective in campaigns. We just shipped an AI-powered ranking improvement, which is showing some really good early results in terms of performance gains.

Tencent (OTC: TCEHY)

Tencent’s investments in AI are benefiting its ad targeting, game engagement, coding, and gaming and video production activities; management is currently seeing AI efficiency gains in the form of growth in revenue and gross profit; management thinks that AI enables Tencent to build more, instead of reducing costs

Our strategic investments in AI are benefiting us in business areas such as ad targeting and game engagement, as well as efficiency enhancement areas such as coding and game and video production…

… And if you look at the benefit of AI, at this stage, a lot of the efficiency gains are more on the revenue side and the gross profit side. So you see pretty good growth in those items. But in terms of the cost item, I would say we have already done a pretty big organizational optimization a few years back. And the organization that we have is actually efficient, and AI adoption actually allows our team to do more, instead of to reduce cost, which I think some other companies you are probably comparing with.

Tencent’s management is upgrading the team and architecture of Tencent’s Hunyuan foundation model; management believes that Hunyuan’s imaging and 3D generation models are industry-leading; management is hiring more top-notch research talent for Hunyuan; management believes that Hunyuan’s capabilities will improve, and that all the models in China are currently pretty similar 

We are upgrading the team and architecture of our Hunyuan foundation model, whose imaging and 3D generation models are now industry-leading…

…In AI, we enhanced Hunyuan large language models’ complex reasoning capabilities, especially in coding, mathematics, and science. Our Hunyuan image generation model is ranked first globally among text-to-image models by LMArena. And our Hunyuan 3D model is the top-ranked generative model on Hugging Face…

…In terms of the Hunyuan team and the Hunyuan architecture, we are actually hiring more top-notch talent, especially in the research area, in order to complement our existing strong engineering team, and they are complementary to each other. And we have also been improving the Hunyuan overall architecture across different dimensions, such as improving the hardware and software infrastructure in order to support better data preparation, to support better pretraining of the model, as well as to support reinforcement learning across different knowledge domains at scale. So these are the improvements that we are making more specifically on the Hunyuan team as well as the Hunyuan architecture…

…And I would say we are actually happy with the progress we have made already. And if you wait a little bit for our next model, you can see meaningful improvement in terms of the Hunyuan capability. And I believe with the new improvements that we have been making, we’ll continue to pick up pace on the Hunyuan capability. And at this point in time, we actually do not believe that there is a decisively better model in China, as everybody is actually locked in a pretty close range, and different models may be better in different use cases as well. So we don’t believe we are really behind.

Tencent’s management thinks Weixin’s adoption will gain further traction as Hunyuan becomes more capable, Yuanbao becomes more widely used, and more agentic AI capabilities are introduced by Tencent; Tencent’s AI assistant Yuanbao has new features to serve Weixin users better, such as Yuanbao-generated content in Tencent News Feed; management wants to add more functionalities from Yuanbao into Weixin; management thinks as Weixin users get exposed to Yuanbao’s capabilities through Weixin, they will become Yuanbao app users too; management is currently seeing a pretty good ramp in Yuanbao engagement; management’s blue-sky scenario for agentic AI is an AI agent that can help users perform a multitude of tasks within the Weixin ecosystem; Tencent is still very, very early in building a capable AI agent; management is also starting to work on vertical agentic capabilities

With Hunyuan’s capabilities continuing to improve, our investment in growing Yuanbao adoption, and our efforts in developing agentic AI capabilities, we think Weixin will gain further traction…

…Now in terms of how Yuanbao and Weixin complement each other. I would point to the fact that Weixin has actually introduced a number of AI features based on Yuanbao’s capability…

And we also enriched the Tencent News Feed in Weixin with Yuanbao-generated content and allowed a lot of users to use that as a way to explore more news content, related news content, as well as ask questions on the news content. And we’re actually adding more and we are planning to add more functionalities of Yuanbao into Weixin. So those functionalities actually, one, serve the Weixin users better; and two, actually help Yuanbao to gain a larger audience and more and more of these audiences find Yuanbao’s capability through Weixin and eventually become a Yuanbao app user…

…We actually have been also seeing quite a good ramp in terms of Yuanbao engagement. So I think you see both the model capability as well as our AI products keep on improving…

…I think the blue sky scenario is that eventually, Weixin will come up with an AI agent that can actually help the user to essentially do a lot of tasks within Weixin, leveraging AI. Because if you look at the ecosystem of Weixin, it has a very strong communications and social ecosystem, and it has a lot of data that allows the agent to understand the users’ feeds as well as their intentions and interests. It has a very strong content ecosystem in the form of official accounts and video accounts. It has a mini-program ecosystem, which essentially includes most of the use cases on the Internet. It has a commerce ecosystem, which allows people to buy stuff, and a payment ecosystem, which actually allows people to pay for it almost immediately. So that is almost the ideal assistant for users, one that understands the users’ needs and can actually perform all the tasks within the ecosystem. So that’s the blue sky scenario.

Now I think, how do we get there, right? At this point in time, it’s actually very early stage in terms of development. Weixin is doing a number of things. In parallel, for example, it’s introducing Yuanbao capabilities into Weixin so that we can test out a lot of the AI features on a stand-alone basis with innovation. It’s also enhancing search with AI so that we can serve the users search and information collecting as well as analysis needs more efficiently.

We are also starting to work on vertical agentic capabilities. And that’s something that we are working on. We have not launched it yet. But then very likely, we’ll be sort of working on a functionality one by one.

Tencent’s management has introduced AI Marketing Plus, Tencent’s automated ad campaign solution for targeting, bidding, placement, and ad creation; AI Marketing Plus helps improve advertisers’ return on marketing spend; increases in commercial query volume and click-through rates contributed to notable revenue growth in Weixin Search; management has increased the relevance of Weixin search ads through the upgrade of LLM capabilities; AI Marketing Plus helps advertisers reach inventories and user profiles automatically rather than manually, and SMBs are the most eager to adopt AI Marketing Plus; management is also seeing large enterprises being interested in AI Marketing Plus, in a similar manner to how enterprises are adopting Meta Platforms’ Advantage+ automated ad campaign platform; when AI Marketing Plus was initially released, large advertisers had some trust issues, but they started adopting it when they tested it and saw superior ROI (return on investment); the percentage of Tencent’s advertisers and the percentage of Tencent’s advertising spending that is going through AI Marketing Plus are steadily increasing

We introduced our automated ad campaign solution, AI Marketing Plus through which advertisers can automate targeting, bidding and placement, as well as optimize ad creation, improving their return on marketing investment…

In terms of the AI Marketing Plus automated campaign solution, we believe the automated ad campaign solution benefits all the advertisers who deploy it by enabling them to automatically reach inventories, as well as user profiles, that are more performant than the inventories and user profiles they were manually targeting. You’re right to say that small- and medium-sized businesses are the first — or the most eager to adopt this kind of product because they have the least legacy process to replace, and that’s what we’re experiencing right now. But we’re also seeing bigger advertisers adopting AI Marketing Plus too, which parallels the experience of Meta’s Advantage+ automated ad solution overseas…

…In terms of the advertising revenue, roughly half of the growth, or about 10 points, was due to a higher CPM, which was contributed to primarily by AI-supported ad tech as well as by closed-loop benefits. And then the other half was due to increased impression volume, which reflects increased user engagement. In terms of the commercial payment volume trends, there is a measured improvement…

…[Question] On the AI Marketing Plus product, are there any early data points on the performance and ROI for merchants?

[Answer] When you introduce this automated ad campaign system, the biggest sort of leap for the advertisers is allowing us as the platform operator to actually manage the bidding process on their behalf. And of course, there’s a degree of internal conservatism within the bigger advertisers as to whether to entrust the platform to manage the price or not. And typically, the larger advertisers will run the automated and the manual processes in parallel for a period of time and compare the ROI to verify whether the automated process is delivering more performance or not. And we’ve turned on that automated bidding tool relatively recently. But the early results are positive, those advertisers who are adopting the automated solution are enjoying superior returns. And therefore, the percentage of our advertisers and the percentage of our advertising spending that is going through AIM+ are steadily increasing. 

Within the Fintech and Business services segment, Business Services revenue grew in the teens year-on-year in 2025 Q3, despite supply constraints with GPUs; management thinks Tencent’s current amounts of GPUs are sufficient for the company’s internal usage, although there are some shortages for external usage; Tencent’s cloud revenue would have grown more if not for GPU constraints, because management prioritises internal consumption of GPUs

Turning to Business Services. Despite supply chain constraints on sourcing GPUs, revenue grew at a teens rate year-on-year in the third quarter, benefiting from higher cloud services revenues and increased technology service fees generated from rising Mini Shop e-commerce transaction volumes. Revenue from our Cloud Storage and Data Management products, namely Cloud Object Storage, TCHouse, and VectorDB, grew notably year-on-year due to increased demand, including from leading automotive and Internet companies. And for WeCom, we launched an AI summarization feature to generate project recaps and provide advice based on users’ e-mails and conversations to enhance project collaboration efficiency…

In terms of Yuanbao adoption and also the CapEx spending at this point in time, we actually believe that there’s no insufficiency of GPUs for us at this moment. All our GPUs are actually sufficient for our internal use. And there is some limiting factor for external cloud revenue…

…In terms of cloud business, I think we have been increasing our revenue finally this year, right? In the past few years, our revenue has not grown that much, but our gross profit has grown very significantly. And this year, we’re growing both the revenue as well as the gross profit, and the business is actually profitable. One constraint of cloud business growth is availability of AI chips, because when AI chips are actually in short supply, we actually prioritize internal use as opposed to renting them out externally. And the other way to say it is, if there were not an AI chip supply constraint, our cloud revenue should be growing more.

Tencent’s operating capex in 2025 Q3 was down 18% year-on-year because of supply changes; non-operating capex was down 59% year-on-year; free cash flow was largely stable year-on-year and up 36% sequentially; there is a difference between capex and cash-capex because of timing differences; management has lowered the capex target for 2025 from the previous level of a low-teens percentage of revenue; the lower capex target for 2025 is because of AI chip availability

Operating CapEx was RMB 12 billion, down 18% year-on-year, primarily due to supply changes. Nonoperating CapEx was RMB 1 billion, down 59% year-on-year, reflecting a higher base last year related to construction. Free cash flow was RMB [ 38.5 ] billion, largely stable year-on-year, as operating cash flow growth was offset by higher CapEx payments. On a quarter-on-quarter basis, free cash flow was up 36% due to higher games gross receipts…

…[Question] This quarter, CapEx was around RMB 13 billion, but the cash payment for CapEx was RMB 20 billion. So how should we interpret the difference between these 2 figures?

[Answer] In terms of CapEx, the difference reflects a timing gap between the accrual of server-related expenditure and the cash payment, which can cause temporary mismatches between the two. In particular, the credit period for us to pay server suppliers is usually 60 days…
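
The mismatch described here, where CapEx is accrued when servers are delivered but the cash goes out roughly 60 days later under supplier credit terms, can be sketched with a toy calculation. All quarterly figures below are hypothetical and are not Tencent’s actual numbers:

```python
# Toy illustration of accrued CapEx vs. cash CapEx under 60-day supplier
# credit terms: CapEx accrued in one quarter is paid in cash the next.
# All figures are hypothetical (RMB billions), not Tencent's actual numbers.

accrued = {"Q1": 18, "Q2": 20, "Q3": 13}  # CapEx recognised on delivery

def cash_paid(accrued_by_quarter, opening_payable=0):
    """Assume each quarter's accrued CapEx is settled one quarter later."""
    paid = {}
    payable = opening_payable
    for quarter, amount in accrued_by_quarter.items():
        paid[quarter] = payable  # settle last quarter's invoices
        payable = amount         # this quarter's accruals roll forward
    return paid

payments = cash_paid(accrued, opening_payable=15)
print(payments)  # cash paid lags accruals by one quarter
```

Under this one-quarter payment lag, a quarter in which accruals fall (the hypothetical Q3 accrues 13 but pays 20) can still show a larger cash outflow, because it is settling the prior quarter’s bigger invoices.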

…In terms of the CapEx for 2025, to share with you: in 2024, our total CapEx grew by 221% year-on-year and was about 12% of revenue. Previously, for 2025, we guided total CapEx as a percentage of revenue to be at low teens. The 2025 CapEx will be lower than our previously guided range, but the amount will be higher than that of 2024…

…[Question] The capex for 2025 will be lower than the previous guidance, but higher than the ’24 actual capex spending, if I got that right. Does it reflect a change in AI chip availability, a change in investment strategy, or a change in your expectation of future token consumption?

[Answer] It’s not a change in terms of expectation of future token consumption. It is indeed a change in terms of AI chip availability.

The Trade Desk (NASDAQ: TTD)

Trade Desk’s management sees AI improving the effectiveness of the open Internet by allowing vastly superior price discovery

AI is accelerating the improved effectiveness of the open Internet. Of course, every significant AI innovation and AI product needs quality data. The most valuable data to an advertiser is their own conversion and customer data. We will win long term because we built a business where buyers can own their future, which requires them to own, protect and use their own data. AI is fast-tracking progress for companies that are eager to put their data to work and can leverage automation intelligently…

…AI is accelerating the path to the open Internet having vastly superior price discovery and fungibility. A world with better price discovery and better open Internet supply chains for the quality content will decrease the value of user-generated content destinations and other similar apps and sites that are full of ads and unsafe content.

Nearly all of Trade Desk’s clients have already tried Kokai, and 85% are using Kokai as the default; management thinks Kokai is a significant upgrade over Trade Desk’s previous platform, Solimar, when Solimar was already the most performant DSP (demand side platform) in the world; compared to Solimar, Kokai has delivered 26% better cost per acquisition, 58% better cost per unique reach, and 94% better click-through rate; Kokai has a distributed AI architecture where every function has a separate AI model; management thinks the distributed AI architecture allows Kokai to parallelize all AI efforts and enables checks and balances between disparate functions; Bayer used Kokai to advertise on Spotify and saw 15% growth in incremental reach; Specsavers used Kokai in the UK and saw a 43% reduction in cost of securing customer appointments, and a 50% reduction in conversion time; Danone used Kokai and achieved a 33% increase in conversion rates

Today, nearly all of our clients have tried Kokai with nearly 85% using Kokai as their default experience…

…Kokai is the best upgrade we have ever made to our product relative to all previous versions and certainly relative to Solimar. Campaigns that have switched to Kokai are seeing impressive results. Since its launch, Kokai has delivered on average 26% better cost per acquisition, 58% better cost per unique reach and a 94% better click-through rate compared to Solimar. These are incredible performance improvements on top of what was already considered the most performant DSP in the world.

Kokai has a number of features in it that are game changers for our clients and for the open Internet. We’ve used the industry’s most advanced AI to enhance our system with an architecture we call distributed AI. We break down every function and create separate AI models for each of them from valuing impressions to managing identity to choosing supply paths to predicting a price required to clear and to forecast the performance and reach of a campaign before a single dollar is even spent. This effort to distribute allows us to parallelize all AI efforts and enables checks and balances between these disparate functions…
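
As a rough illustration of the distributed-AI idea described above, where each function gets its own model and the functions provide checks and balances on one another, here is a minimal sketch. The models, weights, and numbers are invented for illustration and are not The Trade Desk’s actual systems:

```python
# Minimal sketch of a "distributed AI" bidding pipeline: each sub-decision has
# its own small model, and a forecasting check can veto the final bid.
# All models, weights, and numbers here are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Impression:
    base_value: float   # predicted value of this impression to the advertiser
    floor_price: float  # publisher's floor price

def value_model(imp: Impression) -> float:
    """Values the impression for the campaign."""
    return imp.base_value

def clearing_price_model(imp: Impression) -> float:
    """Predicts the price required to clear the auction."""
    return max(imp.floor_price, 0.8 * imp.base_value)

def forecast_model(imp: Impression, bid: float) -> bool:
    """Sanity check before any dollar is spent: never bid above value."""
    return bid <= value_model(imp)

def decide_bid(imp: Impression):
    bid = clearing_price_model(imp) * 1.05  # small headroom over the clear
    return bid if forecast_model(imp, bid) else None  # checks and balances

print(decide_bid(Impression(base_value=2.0, floor_price=1.0)))  # economic bid
print(decide_bid(Impression(base_value=1.0, floor_price=2.0)))  # vetoed: None
```

The point of the structure is that the forecasting step can veto a bid produced independently by the other models, which is the kind of checks and balances between disparate functions the quote describes.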

…Bayer recently added Spotify to their omnichannel campaigns on Kokai and saw a 15% growth in their incremental reach…

…Specsavers in the U.K. saw a 43% reduction in the cost of securing customer appointments using Kokai while also cutting the conversion time by almost 50%. Danone saw conversion rates go up by 1/3 for their Actimel yogurt product, leveraging the retail data marketplace and omnichannel strengths of Kokai.

Trade Desk’s management has launched and grown several AI-powered products in 2025 that upgrade the supply chain so that advertisers get more bang for buck; OpenPath has grown by hundreds of percent in 2025 9M; OpenPath gives advertisers a clearer picture of the inventory they are buying, and gives publishers a better sense of what advertisers are willing to pay; management has just launched OpenAds, an auction that Trade Desk developed and sometimes hosts as an option for publishers; Trade Desk enables other buyers or DSPs to use OpenAds to bid into a fair auction; Trade Desk is working to integrate OpenAds with more than 20 of the web’s biggest publishers; management expects OpenAds to dramatically improve the supply chains of mobile in-app ads and browser-based ads; Trade Desk has launched Pubdesk, which is improving the supply chain by publishing data for the sell side; the data includes what advertisers paid the supply chain and what signals advertisers value; Pubdesk was largely fueled by Sincera, which Trade Desk acquired earlier in 2025; Deal Desk helps advertisers better manage one-to-one deals; Deal Desk is powered by AI and can predict how a deal will perform relative to the open market; management thinks Deal Desk can replace outdated upfronts; deals on Deal Desk are performing 35% better than those running on Solimar; OpenPath connects directly into premium publisher auctions; Disney is using OpenPath because it wants its premium inventory to be correctly assessed and valued; management will open source key elements of OpenAds; Hearst used OpenPath and achieved a 4x improvement in ad fill rates and a 23% revenue increase; SSPs (supply side platforms) are integrating with Deal Desk; Trade Desk’s use of AI in supply path optimization is finding better paths to publishers with double-digit percentage efficiency gains

It cannot be overstated how much AI has changed and will change our business and the open Internet. This year, we’ve launched and grown several products that are solely focused on substantially upgrading supply chains so that buyers get more for their money.

We have grown OpenPath by many hundreds of percentage points this year, which means our clients are getting clear views of exactly what they’re buying and publishers have a clearer sense of what advertisers are willing to pay when they describe their inventory in a transparent and accurate way.

OpenAds is an auction that we develop and sometimes host as an option for publishers. We then bid into a fair auction and even enable other buyers or DSPs to do the same thing, too. The market needs a healthy auction and some sell-side players have continually weakened the integrity of the auction. So we’re developing an open source option that raises that bar. We just launched it, and we’re already working to integrate with more than 20 of the biggest publishers on the web. We expect this to dramatically improve the supply chains of mobile in-app ads and browser-based ads, which, of course, can use the help in an AI scraping world.

Pubdesk is improving the supply chain by publishing data for the sell side. Resellers, sellers and publishers can log into the platform and see what we paid the supply chain, what signals we value and adjust their sites and inventory to get more. This is largely fueled by the Sincera team and data that we acquired earlier in the year.

Deal Desk is a better way to manage one-to-one deals. Not only does it facilitate the buy, but using AI, it predicts how a deal will perform relative to the open market. This product enables them to do deals, but also gives them the unprecedented data and tools to avoid bad deals. It is important to note that this product will be foundational to a healthy forward market that can replace the outdated upfronts. So far, deals on Deal Desk are performing about 35% better than those running on Solimar, which is more similar to the way they run everywhere else in the programmatic ecosystem…

…OpenPath connects directly into many of these premium publisher auctions and companies like Disney do this because they want to ensure that their premium inventory can be correctly assessed and valued…

…We will open source key elements of OpenAds, and we will expose its mechanics for review. Just like recent innovations such as UID2 or OpenPath or Ventura, our intent here is to incentivize a more transparent, competitive marketplace for all…

…Publishers like Hearst are seeing a 4x improvement in ad fill rates and 23% revenue increase when integrating OpenPath…

…On the supply side, SSPs such as PubMatic are integrating with Deal Desk using the new price discovery provisioning API that helps sellers better understand and identify how they can increase the quality of their inventory.

The injection of AI into our supply path optimization is finding better paths to publishers with double-digit percentages of efficiency.

Trade Desk’s management thinks that the emergence of agentic search will not change much of the premium open Internet, but it will result in more search-like inventory being available; this inventory will be new premium advertising opportunities that are up for grabs for Trade Desk

[Question] Are you seeing an impact from agentic search on available publisher inventory?

[Answer] We look at roughly 20 million ad impression opportunities every single second. That’s about 1.7 trillion every single day… When you look at that many impressions, and just to be open, we buy a low single-digit percentage of that total, you’d have to when it’s that big. So that means that if we take 20 million down to 15 million per second because of AI, there’s not really much different about our business model, nothing at all…

…I actually see whatever effect the AI search world is having on inventory supply, given that I’ve said over and over again in this call that there’s more supply than demand and it is more of a buyer’s market than ever. Whatever effect it’s having on it, first of all, is fairly de minimis as it relates to the open Internet at large. We shouldn’t define the open Internet as just what happens in a browser. It is much bigger than that. It’s everything that happens in CTV, movies, sports, journalism, everything, and that’s both in an app and in a browser, that’s in every form of media that touches the Internet. If you look at it from that perspective, I don’t think it’s had any meaningful effect. And because I don’t think CTV is going anywhere, I don’t think music is going anywhere. I don’t think sports is going anywhere. I think that the premium open Internet will continue to play the most significant role in building brands and doing actual advertising. And I don’t think that AI will change that.

I do actually think that there will be more search-like inventory available, which I think is really premium advertising opportunities. In the past, companies like ours have not had access to companies like Google’s ad inventory as it relates to search. I think in a world where that’s much more competitive and there isn’t a winner-take-all outcome, which I don’t think there will be, I think there’s going to be a bunch of opportunities for us to buy into their inventory. I think it will actually look a lot like CTV in the sense that fragmentation will be nearly perfect in the sense that there are enough players that there’s competition and that no one is big enough to have a monopoly or be draconian, but it’s consolidated enough that everybody will be rational and highly competitive. And I anticipate that, that will create new advertising opportunities that will have really amazing results and efficacy.
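The scale cited in the answer above can be verified with simple arithmetic. A quick sketch, illustrative only and using just the figures quoted in the call:

```python
# Back-of-envelope check of the impression volumes quoted in the call:
# ~20 million ad impression opportunities per second, described as
# "about 1.7 trillion every single day".
impressions_per_second = 20_000_000
seconds_per_day = 24 * 60 * 60  # 86,400

impressions_per_day = impressions_per_second * seconds_per_day
print(f"{impressions_per_day / 1e12:.2f} trillion per day")  # 1.73 trillion

# The hypothetical in the answer: even if AI search shrank supply to
# 15 million opportunities per second, the daily pool stays enormous.
print(f"{15_000_000 * seconds_per_day / 1e12:.2f} trillion per day")  # 1.30 trillion
```

Buying "a low single-digit percentage" of either pool still leaves tens of billions of purchased impressions per day, which is the point of the answer.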


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Coupang, Datadog, Meta Platforms, Nu Holdings, Paycom Software, Sea, Shopify, Tencent, and The Trade Desk. Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI (2025 Q3)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q3 earnings season.

The way I see it, artificial intelligence (or AI) really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

We’re thick in the action of the latest earnings season for the US stock market – for the third quarter of 2025 – and I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:

With that, here are the latest commentary, in no particular order:

Alphabet (NASDAQ: GOOG)

Alphabet’s 1st-party AI models, including Gemini, now process 7 billion tokens per minute from direct APIs; the Gemini App now has 650 million monthly active users; queries on the Gemini App have tripled from 2025 Q2; management sees Alphabet’s AI models as world-leading; 230 million videos have been created with Veo 3; 13 million developers have built with Alphabet’s generative AI models; management will release Gemini 3 in 2025 Q4; the number of tokens per month processed by Alphabet has increased from 980 trillion in July 2025 to 1.3 quadrillion, up 20x from a year ago; Alphabet is applying Gemini internally and this has increased the productivity of its sales team by over 10%, leading to hundreds of millions in incremental revenue; Alphabet’s customer support division has used Gemini to manage over 40 million customer sessions year-to-date; management thinks the pace of frontier model development is still phenomenal

Our first-party models, like Gemini, now process 7 billion tokens per minute via direct API used by our customers. The Gemini app now has over 650 million monthly active users, and queries increased by 3x from Q2…

…Our models are world-leading. Gemini 2.5 Pro, Veo, Genie 3 and viral sensation Nano Banana are among the very best in class. Over 230 million videos have been generated with Veo 3, and more than 13 million developers have built with our generative models. We are looking forward to the release of Gemini 3 later this year…

…In July, we announced that we processed 980 trillion monthly tokens across all our surfaces. We are now processing over 1.3 quadrillion monthly tokens, more than 20x growth in a year, phenomenal…
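The token figures quoted in this section (980 trillion monthly in July, over 1.3 quadrillion now, 20x growth in a year, and 7 billion tokens per minute via direct API) hang together; a rough cross-check, illustrative only:

```python
# Rough cross-check of the token figures quoted above.
current_monthly = 1.3e15   # "over 1.3 quadrillion monthly tokens"
july_monthly = 980e12      # disclosed in July
direct_api_per_min = 7e9   # "7 billion tokens per minute via direct API"

# "20x growth in a year" implies roughly this level a year earlier:
print(f"~{current_monthly / 20 / 1e12:.0f} trillion tokens/month a year ago")  # ~65

# Growth from the July disclosure to the current figure:
print(f"{current_monthly / july_monthly - 1:.0%} since July")  # 33%

# The direct-API stream alone, scaled to a 30-day month:
print(f"~{direct_api_per_min * 60 * 24 * 30 / 1e12:.0f} trillion/month via direct API")  # ~302
```

Note the direct-API stream accounts for only a fraction of the 1.3 quadrillion all-surfaces figure; the rest comes from Alphabet's own products such as Search and YouTube.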

…We’re also applying Gemini internally to help us serve customers with increased speed, intelligence and efficiency. Our sales teams use Gemini enriched with ads knowledge to streamline customer interactions. This increased productivity by over 10% led to hundreds of millions in incremental revenue and frees up sellers to engage with more customers at a deeper, more strategic level. In our customer support division, Gemini-powered solutions have managed over 40 million customer sessions so far this year and resolved hundreds of thousands of customer inquiries, and we’re just getting started…

…On the pace of frontier model research and development. Look, I think 2 things are both simultaneously true. I’m incredibly impressed by the pace at which the teams are executing and the pace at which we are improving these models. But it also is true at the same time that each of the prior models you’re trying to get better over is now getting more and more capable. So I think both the pace is increasing, but sometimes we are taking the time to put out a notably improved model, and that may take slightly longer. But I do think the underlying pace is phenomenal to see.

Google Cloud saw accelerating growth in 2025 Q3 with AI as a key driver; Google Cloud backlog grew 46% sequentially to $155 billion in 2025 Q3 (was $106 billion in 2025 Q2); Google Cloud is signing new customers faster, with a 34% year-on-year increase in new GCP (Google Cloud Platform) customers in 2025 Q3; Google Cloud signed more deals over $1 billion in 2025 9M than in 2023 and 2024 combined; more than 70% of existing Google Cloud customers use Alphabet’s AI products; Google Cloud has 13 product lines that have an annual run rate of more than $1 billion each; management thinks Google Cloud offers the widest array of chips, and 9 of the top 10 AI labs are on Google Cloud; revenue from products built on Alphabet’s generative AI models in 2025 Q3 was up more than 200% year-on-year; nearly 150 Google Cloud customers have each processed around 1 trillion tokens in the last 12 months; WPP is using Alphabet’s AI models to improve efficiency by up to 70% when creating advertising campaigns; Swarovski is using Alphabet’s AI models to raise e-mail open rates by 17% and accelerate campaign localization by 10x; management recently launched Gemini Enterprise and it is seeing strong adoption of agents; the packaged enterprise agents in Gemini Enterprise have already exceeded 2 million subscribers across 700 companies

Cloud had another great quarter of accelerating growth with AI revenue as a key driver. Cloud backlog grew 46% quarter-over-quarter to $155 billion…

…Next, Google Cloud. Our complete enterprise AI product portfolio is accelerating growth in revenue, operating margins and backlog. In Q3, customer demand strengthened in 3 ways. One, we are signing new customers faster. The number of new GCP customers increased by nearly 34% year-over-year. Two, we are signing larger deals. We have signed more deals over $1 billion through Q3 this year than we did in the previous 2 years combined. Third, we are deepening our relationships. Over 70% of existing Google Cloud customers use our AI products, including Banco BV, Best Buy and FairPrice Group…

…Today, 13 product lines are each at an annual run rate over $1 billion…

…We have a decade of experience building AI accelerators and today, offer the widest array of chips. This leadership is winning customers like HCA Healthcare, LG AI Research and Macquarie Bank, and it’s why 9 of the top 10 AI labs choose Google Cloud…

…In Q3, revenue from products built on our generative AI models grew more than 200% year-over-year. Over the past 12 months, nearly 150 Google Cloud customers each processed approximately 1 trillion tokens with our models for a wide range of applications. For example, WPP is creating campaigns with up to 70% efficiency gains. Swarovski has increased e-mail open rates by 17% and accelerated campaign localization by 10x…

…Earlier this month, we launched Gemini Enterprise, the new front door for AI in the workplace, and we are seeing strong adoption for agents built on this platform. Our packaged enterprise agents in Gemini Enterprise are optimized for a variety of domains, are highly differentiated and offer significant out-of-box value to customers. We have already crossed 2 million subscribers across 700 companies.

Alphabet has a full-stack approach to AI, spanning infrastructure, research, products, and platform; management continues to see Alphabet’s AI infrastructure as a key differentiator; Alphabet is the only company scaling both NVIDIA’s GPUs as well as its own TPUs; Alphabet is now shipping the new A4X Max instances powered by NVIDIA GB300; the 7th-generation of Alphabet’s TPU will be available soon; management is seeing tremendous demand for TPUs; AI startup Anthropic recently announced that it would access up to 1 million TPUs

Our full stack approach spans AI infrastructure, world-class research including models and tooling, and our products and platforms that bring AI to people everywhere…

…Our extensive and reliable infrastructure, which powers all of Google’s products, is the foundation of our stack and a key differentiator. We are scaling the most advanced chips in our data centers, including GPUs from our partner, NVIDIA, as well as our own purpose-built TPUs. And we are the only company providing a wide range of both. As we announced yesterday at NVIDIA GTC, we are now shipping the new A4X Max instances powered by NVIDIA GB300 to our cloud customers. We are investing in TPU capacity to meet the tremendous demand we are seeing from customers and partners, and we are excited that Anthropic recently shared plans to access up to 1 million TPUs.

Alphabet’s management sees AI expanding Google Search; the growth in overall queries and commercial queries seen in 2025 Q2 accelerated in 2025 Q3, driven by AI Overviews and AI Mode; the acceleration of growth from AI Overviews in Google Search in 2025 Q3 was more pronounced with younger people; AI Mode has seen strong and consistent week-over-week growth in usage since launch in the USA and queries doubled sequentially; AI Mode has been rolled out globally in 40 languages; AI Mode now has 75 million daily active users; AI Mode is driving incremental total query growth for Google Search, including commercial queries; Google Search users can now shop conversationally in AI Mode; all US users of Google Search now have access to try-on capabilities for clothing items; management sees agentic experiences as additive to the way Google Search users seek information; management is working on agentic experiences across key verticals and they think it’s important that Alphabet also creates value for its partners when building these experiences; Alphabet has introduced agentic checkout and partnerships for agentic commerce; AI Overviews now has 2 billion users; AI Overviews continue to monetise at a similar rate as traditional Google Search, but management sees the opportunity for the monetisation to improve; Google Search’s paid clicks and CPCs were both up 7% year-on-year in 2025 Q3; management sees the opportunity in AI Mode to take queries that are not fully commercial and yet still serve attractive advertising offerings

AI is driving an expansionary moment for Search. As people learn what they can do with our new AI experiences, they’re increasingly coming back to Search more. Search and its AI experiences are built to highlight the web, sending billions of clicks to sites every day. During the Q2 call, we shared that overall queries and commercial queries continue to grow year-over-year. This growth rate increased in Q3, largely driven by our AI investments in Search, most notably AI Overviews and AI Mode…

…AI Overviews drive meaningful query growth. This effect was even stronger in Q3 as users continue to learn that Google can answer more of their questions, and it’s particularly encouraging to see the effect was more pronounced with younger people.

We’re also seeing that AI Mode is resonating well with users. In the U.S., we have seen strong and consistent week-over-week growth in usage since launch and queries doubled over the quarter. Over the last quarter, we rolled out AI Mode globally across 40 languages in record time. It now has over 75 million daily active users, and we shipped over 100 improvements to the product in Q3, an incredibly fast pace. Most importantly, AI Mode is already driving incremental total query growth for Search…

…Our investments in new AI experiences, such as AI Overviews and AI Mode, continued to drive growth in overall queries, including commercial queries, creating more opportunities for monetization. These AI experiences are enhancing how people connect with businesses and shop on Search. We recently added shopping capabilities in AI Mode, which now help people shop conversationally in Search, and we expanded try-on capabilities to more clothing items, now available to anyone in the U.S…

…This is all early, but we see agentic experiences really as additive to the way people seek information. It helps us answer people’s tough questions. It helps us — it helps people get stuff done, and it helps businesses in the process…

…We’re working on multiple agentic experiences across key verticals such as travel, commerce, shopping and so on, and we’re paying a lot of attention to creating a seamless user experience but also to the fact that we need to integrate different partner ecosystems in a way that it creates value for them…

…At I/O, we also introduced new agentic checkout, which will let shoppers use agentic AI to buy products from merchant sites and so on. We have a partnership with PayPal to help merchants build agentic commerce experiences. We have new open protocols for agent-to-agent transactions and so on and so on…

…AI Overviews is scaling up and working for our entire user base. We’re now scaled to over 2 billion users here, and we’re continuing to expand ads in AI Overviews in English to more countries, across desktop, mobile and so on. And as I’ve shared before, for AI Overviews, even at our current baseline of ads below and within the AI’s response, overall, we see the monetization at approximately the same rate…

…We’re excited about the opportunity of richer experiences in AI Mode and AI Overviews to basically open up then the opportunity for also much richer placements…

…As you will see in the 10-Q, paid clicks were up 7% year-on-year and CPCs were up 7% year-on-year…

…There is the question of whether queries actually increase with AI Mode, and Sundar actually talked about it and mentioned the opportunity that he sees here. So I think it’s important to separate those 2 things. And I personally also see this, what I just said in my last remarks, that I think, over time, there’s an opportunity to actually take, let’s say, queries that are not fully commercial but could have an adjacent commercial relationship to basically expand this into more attractive ads offerings without — while really creating a really interesting user experience at the same time.

Alphabet’s management recently rolled out AI features that help Youtube content creators streamline their entire content creation workflow; Youtube can now automatically identify products in content creators’ videos to make them more shoppable; Alphabet’s recommendation systems are driving watch time growth in Youtube; the use of Gemini in Youtube is driving improvement in content discovery; management is excited about the revenue growth powered by Demand Gen in Youtube’s direct response advertising business; Alphabet has improved Demand Gen’s performance on Youtube, where it can now increase conversion value by more than 40%; Demand Gen is helping Youtube further monetise shopping-related categories; more advertisers are adopting interactive direct response ads on Youtube in the living room, with an annual revenue run rate exceeding $1 billion globally; management has introduced Veo 3 integration and speech to song for content creators in Youtube; Youtube Shorts has a lower revenue-share than traditional Youtube

At our Made on YouTube event, we rolled out a number of AI-powered features that are helping creators supercharge creation and build their businesses. AI is now streamlining the entire content creation workflow, from generative video tools and more efficient editing to AI-powered insights that help creators optimize their channels. We are also using AI to expand monetization, automatically identifying products to make their videos more shoppable…

…Our recommendation systems are driving robust watch time growth in our key monetization areas like Shorts and Living Room. As we leverage Gemini models, we’re seeing further discovery improvement…

…On direct response, we’re excited about the growth in revenue we’re seeing, especially from small and medium advertisers adopting Demand Gen. We also improved performance on Demand Gen with over 100 launches helping to increase conversion value by more than 40% for advertisers using target-based bidding on YouTube. The retail vertical continues to lead our growth on YouTube with Demand Gen helping us further monetize shopping-related categories.

Looking at the living room, our long-term bet, more advertisers are adopting interactive direct response ads, leading to an annual revenue run rate exceeding $1 billion globally for this format…

…We continue to invest in AI-powered features that are helping creators supercharge creation and build their businesses. With Veo 3 integration and speech to song, creators go from idea to iteration quicker, and new channel insights help them better understand performance…

…Shorts, which has a lower revenue share than in stream that helps to improve some of our gross margins.

Alphabet’s management intends to launch Waymo in London and Tokyo in 2026; Waymo has expanded operations in a number of US cities, and testing in New York City continues to scale; Waymo now has the option for enterprises to offer Waymo as a work-travel option; management launched Waymo teens accounts in Phoenix recently and usage is growing steadily; management thinks there’s a real opportunity to infuse Gemini into Waymo to improve the in-vehicle experience for users in 2026

Next year Waymo aims to open service in London, and they are working to bring service to Tokyo. They’ve also announced expansions to Dallas, Nashville, Denver and Seattle and secured permission to operate fully autonomously at San Jose and San Francisco Airports. Autonomous testing continues to scale in New York City. The new Waymo for Business allows enterprises to offer Waymo as a work travel option. And we launched Waymo teens accounts in Phoenix this summer. We are pleased to see usage steadily increase with positive feedback from teens and their parents alike…

…[Question] How far are we from an integration of Waymo into more of the core Gemini capabilities and the users on the platform taking your user data of where I’m going, what hotel I’m staying at, what airport I’m staying at and having integrated that into Waymo?

[Answer] Waymo clearly is scaling up, particularly in 2026. And I think the possibility, as you said, of Gemini, particularly with the multimodal experience as well as services like YouTube, I think there’s a real opportunity to make the in-car experience dramatically better. Definitely something we are excited about, and you’ll see newer experiences in 2026 for sure.

Alphabet’s management recently rolled out AI Max in Search for businesses and it can understand and predict consumer intent in Google Search; AI Max is already used by hundreds of thousands of advertisers and is Alphabet’s fastest-growing AI-powered Search ads product; AI Max unlocked billions of net new queries in 2025 Q3; AI Max helps advertisers discover new customers at the exact moment they need their product or service; Kayak used AI Max in Search and grew conversion value by 12%

Businesses can now tap into our most powerful AI search experiences. Using our most advanced AI models, we can understand and predict intent like never before, unlocking entirely new commercial pathways to provide valuable new consumer connections and helping us monetize even more efficiently. Rolled out globally in September, AI Max in Search is already used by hundreds of thousands of advertisers, currently making it the fastest-growing AI-powered search ads product. In Q3 alone, AI Max unlocked billions of net new queries. By delivering the most relevant ad across surfaces and matching advertisers against additional queries they weren’t reaching before, AI Max helps advertisers discover new customers at the exact moment they need their product or service. Kayak, for example, looked to grow conversions while staying within their ROAS goals. After turning on AI Max in Search, they grew their conversion value by 12% in early tests.

Alphabet’s management notes that GCP is seeing strong demand for enterprise AI infrastructure and enterprise AI solutions; management notes that GCP will be in a tight demand/supply situation going into 2026; management now expects capex of $91 billion to $93 billion in 2025 (up 66% from $55.4 billion in 2024 and 2024’s capex was up 69% from 2023), up from previous guidance of $85 billion; management expects capex to increase significantly in 2026; management expects the growth rate in depreciation to accelerate in 2025 Q4; when management makes capex decisions, they go through a rigorous process of assessing the return on the investment

In Cloud, demand for our products remains high as evidenced by the accelerating revenue growth and the $49 billion sequential increase in Cloud backlog in Q3. In GCP, we see strong demand for enterprise AI infrastructure, including TPUs and GPUs, enterprise AI solutions driven by demand for Gemini 2.5 and our other AI models, and core GCP infrastructure and other services such as cybersecurity and data analytics. As I’ve mentioned on previous earnings calls, while we have been working hard to increase capacity and have improved the pace of server deployments and data center construction, we still expect to remain in a tight demand-supply environment in Q4 and 2026.

Moving to investments. We’re continuing to invest aggressively due to the demand we’re experiencing from Cloud customers as well as the growth opportunities we see across the company. We now expect CapEx to be in the range of $91 billion to $93 billion in 2025, up from our previous estimate of $85 billion, keeping in mind that the timing of cash payments can cause variability in the reported CapEx number. Looking out to 2026, we expect a significant increase in CapEx, and we’ll provide more detail on our fourth quarter earnings call.

In terms of expenses, first, as I’ve mentioned on the previous earnings calls, the significant increase in our investments in technical infrastructure will continue to put pressure on the P&L in the form of higher depreciation expenses and related data center operations costs such as energy. In the third quarter, depreciation increased $1.6 billion year-over-year to $5.6 billion, reflecting a growth rate of 41%. Given the overall increase in CapEx investments, we expect the growth rate in depreciation to accelerate slightly in Q4. Second, we expect sales and marketing expenses to be more heavily weighted to the end of the year in part to support product launches and the holiday season…

…When we make a decision on investment in the long term, we go through a very rigorous process of assessing what the return could be and over what time frame we will see that return to give us the high level of confidence to then invest and make those investments for the long term.

Nearly half of all code in Alphabet is now generated by AI

The percentage of code generated by AI is now nearly half of all code; that’s a way for us to leverage AI to drive further productivity across the business.

Amazon (NASDAQ: AMZN)

AWS grew 20.2% year-on-year in 2025 Q3, and is now growing at a pace last seen in 2022; AWS’s run rate has reached $132 billion (was $123 billion in 2025 Q2), and management thinks 20% growth off such a huge base is more impressive than what competitors have achieved (faster growth off a much smaller base); AWS’s backlog is $200 billion in 2025 Q3 (was $195 billion in 2025 Q2, up 25% year-on-year) and is higher now given unannounced new and large deals in October 2025; AWS has been a Gartner Magic Quadrant leader for 15 consecutive years; management sees AWS continuing to be the destination for most big enterprises’ and governments’ cloud migrations; AWS is where most companies’ data and workloads reside, and why most companies want to run AI in AWS; AWS operating income in 2025 Q3 was $11.4 billion, reflecting a 34.6% operating margin (was 32.9% in 2025 Q2 and 38.1% in 2024 Q3); the AI portion of AWS’s growth in 2025 Q3 came from both training and inference; a broad base of AWS’s AI products also contributed to AWS’s AI-driven growth; cloud migrations by enterprises were also a strong contributor to AWS’s growth in 2025 Q3; management thinks AWS can continue growing at a similar clip as in 2025 Q3 for a while

AWS is growing at a pace we haven’t seen since 2022, reaccelerating to 20.2% year-over-year, our largest growth rate in 11 quarters…

…It’s very different having 20% year-over-year growth on a $132 billion annualized run rate and to have a higher percentage growth rate on a meaningfully smaller annual revenue, which is the case with our competitors. 

Backlog grew to $200 billion by Q3 quarter end and doesn’t include several unannounced new deals in October, which together are more than our total deal volume for all of Q3…

…Gartner has named AWS leader in its strategic cloud platform services Magic Quadrant for 15 consecutive years…

…Because of its advantaged capabilities, security, operational performance and customer focus, AWS continues to earn most of the big enterprise and government transformations to the cloud. As a result, AWS is where the preponderance of companies’ data and workloads reside and part of why most companies want to run AI in AWS…

…Moving next to our AWS segment. Revenue was $33 billion, up 20.2% year-over-year. This is an acceleration of 270 basis points compared to last quarter, driven by strong growth across both our AI and core services and more capacity, which has come online to support customer demand. AWS revenue increased $2.1 billion quarter-over-quarter and now has an annualized revenue run rate of $132 billion. AWS operating income was $11.4 billion, and reflects our continued growth, coupled with our focus on driving efficiencies across the business…

…We see the growth in both our AI area, where we see it in inference. We see it in training. We see it in the use of our Trainium custom silicon. Bedrock continues to grow really quickly. SageMaker continues to grow quickly…

…I think the other place we see a lot of growth in AWS also is just the number of enterprises who are — who have gotten back to moving from on-premises infrastructure to the cloud. And we continue to earn the lion’s share of those transformations. And I look at the momentum we have right now, and I believe that we can continue to grow at a clip like this for a while.

Amazon’s management thinks a lot of the value companies will derive from AI will come from agents and AWS is investing heavily in agents; management thinks companies will both create their own agents and use 3rd-party agents; management has launched Strands in AWS to make it easier for companies to build their own agents; management has launched AgentCore in AWS for companies who have built agents to deploy them in a secure and scalable way; Ericsson, Sony and Cohere Health are all users of AgentCore; Cohere Health is using AgentCore to deploy agents that reduce medical review times by up to 30% to 40%; AgentCore’s SDK (software development kit) has been downloaded over 1 million times; AWS has the coding agent Kiro, which attracted more than 100,000 developers in its first days of launch and that number has since doubled; AWS’s migration agent, Transform, has saved customers 700,000 hours of manual effort in 2025 9M; Thomson Reuters used Transform to transform 1.5 million lines of code per month to complete tasks faster than with other migration tools; customers have already used Transform to analyse 1 billion lines of mainframe code; AWS’s business agent, Quick Suite, has delivered 80% time savings and 90% cost savings to users; AWS’s contact center agent, Amazon Connect, is at a $1 billion annualised revenue run rate and has handled 12 billion minutes of customer interactions in the last year; customers of Amazon Connect include Capital One, Toyota, American Airlines and Ryanair

A lot of the future value companies will get from AI will be in the form of agents. AWS is heavily investing in this area and well positioned to be a leader.

Companies will both create their own agents and use agents from other companies. For those building their own, it’s been harder to build than it should be. It’s why we launched Strands to make it much easier to create agents from any foundation model that builders desire. For companies who successfully built agents, they’ve hesitated putting them into production because they lack secure, scalable runtime services or memory or observability, built specifically for agents. It’s why we launched AgentCore, a set of infrastructure building blocks that allow builders to deploy secure, scalable agents. Ericsson used AgentCore to deliver AI agents across their workforce, Sony used it to build an agentic AI platform with enterprise-level security, observability and scalability. And Cohere Health is using AgentCore to deploy agents that will reduce medical review times by up to 30% to 40%. AgentCore’s SDK has already been downloaded over 1 million times, and our builders are excited about it…

…For coding, we’ve recently opened up our agentic coding IDE called Kiro. More than 100,000 developers jumped into Kiro in just the first few days of preview and that number has more than doubled since. It’s processed trillions of tokens thus far, weekly actives are growing fast, and developers love its unique spec and tool calling capabilities.

For migration and transformation, we offer an agent called Transform. Year-to-date, customers have already used it to save 700,000 hours of manual effort, the equivalent of 335 developer years of work. For example, Thomson Reuters used it to transform 1.5 million lines of code per month, moving from Windows to open source alternatives and completing tasks faster than with other migration tools. Customers have also already used Transform to analyze nearly 1 billion lines of mainframe code as they move mainframe applications to the cloud.

For business customers, we’ve recently launched Quick Suite to bring a consumer AI-like experience to work, making it easy to find insights, conduct deep research, automate tasks, visualize data and take actions. We’ve already seen users turn months-long projects into days, get 80%-plus time savings on complex tasks and realize 90%-plus cost savings…

…For contact centers, we offer Amazon Connect which creates a more personalized and efficient experience for contact center agents, managers and their customers. Connect has recently crested $1 billion annualized revenue run rate with 12 billion minutes of customer interactions being handled by AI in the last year and is being used by large enterprises like Capital One, Toyota, American Airlines and Ryanair.

AWS has added 3.8 gigawatts of capacity in the last 12 months, more than any competitor; AWS now has double the capacity it had in 2022, and is on track to doubling capacity again by 2027; management expects to add 1 gigawatt of capacity in 2025 Q4; management is growing AWS capacity very aggressively because they see the demand; as soon as capacity is added to AWS, it is monetised

We’ve been focused on accelerating capacity the last several months, adding more than 3.8 gigawatts of power in the past 12 months, more than any other cloud provider…

…We’re now double the power capacity that AWS was in 2022, and we’re on track to double again by 2027. In the last quarter of this year alone, we expect to add at least another 1 gigawatt of power. This capacity consists of power, data center and chips, primarily our custom silicon, Trainium, and NVIDIA…

…You’re going to see us continue to be very aggressive in investing in capacity because we see the demand. As fast as we’re adding capacity right now, we’re monetizing it. It’s still quite early and represents an unusual opportunity for customers in AWS.

Project Rainier, a massive AWS AI compute cluster consisting of 500,000 of AWS’s in-house Trainium 2 chips, is now online; AI startup Anthropic is using Project Rainier to build and deploy the next generation of its leading AI model; management expects Anthropic to use up to 1 million Trainium 2 chips by end-2025; Trainium 2 is currently fully subscribed, and is a multi-billion dollar business that grew 150% sequentially in 2025 Q3; Trainium is currently used by only a small number of very large customers, but management expects more customers to use Trainium once Trainium 3 comes online; the token usage of Amazon Bedrock, AWS’s fully-managed service for companies to leverage frontier models to build generative AI apps, is mostly on Trainium; even as AWS scales Trainium, management continues to order significant amounts of chips from NVIDIA, AMD, and Intel; management sees Trainium as 30%-40% more price-performant than other options; management thinks that as companies start to scale production AI workloads, they will care a lot about price performance, and this will lead to strong demand for Trainium; Trainium 3 should preview at end-2025, with full volume coming in early-2026; there are many large and medium-sized customers who are interested in Trainium 3; management thinks that AWS will always have multiple chip options for customers and that has been true for every major technology building block; management thinks the chip team behind Trainium, Annapurna, is really strong; management expects Trainium 3 to be 40% better than Trainium 2; it was not easy to build Project Rainier to be able to scale from 500,000 chips to 1 million; Project Rainier is specific for Anthropic

We’ve recently brought Project Rainier online, our massive AI compute cluster spanning multiple U.S. data centers and containing nearly 500,000 of our Trainium2 chips. Anthropic is using it now to build and deploy its industry-leading AI model Claude, which we expect to be on more than 1 million Trainium2 chips by year-end. Trainium2 continues to see strong adoption, is fully subscribed, and is now a multibillion-dollar business that grew 150% quarter-over-quarter.

Today, Trainium is being used by a small number of very large customers but we expect to accommodate more customers starting with Trainium3.

We’re building Bedrock to be the biggest inference engine in the world and in the long run, believe Bedrock could be as big a business for AWS as EC2, and the majority of token usage in Amazon Bedrock is already running on Trainium.

We’re also continuing to work closely with chip partners like NVIDIA, with whom we continue to order very significant amounts as well as with AMD and Intel. These are very important partners with whom we expect to keep growing our relationships over time…

…Because Trainium is 30% to 40% more price performant than other options out there, and because customers, as they start to contemplate broader scale of their production workloads moving to being AI-focused and using inference, care a lot about price performance, we have a lot of demand for Trainium. Trainium3 should preview at the end of this year with much fuller volumes coming in the beginning of ’26. We have a lot of customers, both very large and, I’ll call it, medium-sized, who are quite interested in Trainium3…

…We’re always going to have multiple chip options for our customers. It’s been true in every major technology building block or component that we’ve had in AWS. Really in the history of AWS, it’s never just one player that over a long period of time has the entire market segment and then it can satisfy everybody’s needs on every dimension…

…We’re different from most technology companies in that we have our own very strong chip team, and this is our Annapurna team. And you saw it first on the CPU side with what we built with Graviton which is about 40% better price performance than the other x86 processors, and you’re seeing it again on the custom silicon on the AI side with Trainium, which is about the same amount of price performance benefit for customers relative to other GPU options…

…As we think about Trainium3, I expect Trainium3 will be about 40% better than Trainium2 and Trainium2 is already very advantaged on price performance…

…It’s not simple to be able to build a cluster that has 500,000-plus chips going to 1 million. That’s an infrastructure feat that’s hard to do at scale…

…Project Rainier is something that is specific for Anthropic.

Rufus, Amazon’s AI shopping assistant, has 250 million active customers in 2025 9M; Rufus monthly users are up 140% year-on-year and interactions are up 210%; customers using Rufus are 60% more likely to complete a purchase; Rufus is pacing towards $10 billion in incremental annualized sales; management is very excited about agentic commerce; management thinks agentic commerce will be very useful for consumers who don’t know what they want to buy; management sees Rufus as a part of Amazon’s agentic commerce efforts; Amazon has a Buy For Me agentic feature where products will be surfaced for consumers, even items that Amazon does not stock but that other merchants have; management is also looking to partner with 3rd-party agents; search engines are a very small part of Amazon’s traffic today, and 3rd-party agents are an even smaller part; management thinks the current agentic commerce experience is not good for consumers; management thinks agentic commerce will expand the amount of online shopping and this bodes well for Amazon

Rufus, our AI-powered shopping assistant has had 250 million active customers this year with monthly users up 140% year-over-year, interactions up 210% year-over-year and customers using Rufus during a shopping trip being 60% more likely to complete a purchase. Rufus is on track to deliver over $10 billion in incremental annualized sales…

…As a business, we’re very excited about in the long term the prospect of agentic commerce. And it has a chance to be good for customers, it has a chance to be really good for e-commerce…

…If you know what you want to buy, there are few experiences that are better than coming to Amazon. But if you don’t know what you want, a physical store with a physical salesperson still has some advantages. Obviously, lots of people do it on Amazon all the time. But you very often want to ask questions and get help narrowing what you’re going to look for, and as you keep asking new questions, have a whole bunch of different options presented to you. And I think AI and agentic commerce are going to change the experience online, where that experience of narrowing what you want when you don’t know is going to get better online than it even is in physical environments…

…We obviously have our own efforts here in agentic commerce. We have Rufus, which I talked about in my opening comments, which is continuing to get better and better and used more broadly. And we have features like Buy for Me where we will surface on Amazon, even items that we don’t stock that other merchants have. And then if customers want us to go and buy it for them on those merchants’ websites, we will do that. And both of those have been successful for us. But we’re also having conversations with and expect over time to partner with third-party agents…

…Today, search engines are a very small part of our referral traffic and third-party agents are a very small subset of that…

…We have to find a way, though, that makes the customer experience good. Right now, I would say the customer experience is not. There’s no personalization, there’s no shopping history, the delivery estimates are frequently wrong, the prices are often wrong…

…I do think that the exciting part of this and the promise is that AI and agentic commerce solutions are going to expand the amount of shopping that happens online. And I think that’s really good for customers, and I think it’s really good for Amazon because at the end of the day, you’re going to buy from the outfit that allows you to have the broadest selection, great value and continues to deliver for you very quickly and reliably. And I think that bodes well for us.

Customers are talking to Alexa+ 2x more compared to the classic Alexa experience; customers are talking to Alexa+ for longer compared to classic Alexa; compared to classic Alexa, customers are using Alexa+ in Fire TV 2.5x more, to discover audio content 4x more, to engage with photos 4x more, and to complete shopping conversations (that end with a purchase) 4x more

We continue to be energized by the response to Alexa+. Compared to what we call the classic Alexa experience, Alexa+ customers are talking to Alexa 2x more. Those interactions are much longer, and they’re covering a broader range of topics. So using Alexa+ in Fire TV at 2.5x the rate of classic, using natural conversation to discover audio content 4x more, engaging with photos 4x more, and customers are completing 4x more shopping conversations that end in a purchase.

More than 1.3 million sellers have used Amazon’s generative AI capabilities to speed up the launch of high-quality listings; 3rd-party seller unit mix was 62% in 2025 Q3 (62% in 2025 Q2)

Our millions of global third-party sellers continue to be important contributors to our vast selection, which helps customers find the items they need at competitive prices. We’re committed to building innovative services and features for our sellers, including our ongoing advancements in generative AI. Today, more than 1.3 million sellers have used our generative AI capabilities to more quickly launch high-quality listings. Better listings translate into better traction with customers. And in Q3, worldwide third-party seller unit mix was 62%, up 200 basis points from Q3 of last year.

The majority of Amazon’s capital expenditure (capex) in 2025 Q3 was for AWS’s technology infrastructure, including the Trainium chips

Now turning to our cash CapEx, which was $34.2 billion in Q3. We’ve now spent $89.9 billion so far this year. This primarily relates to AWS as we invest to support demand for our AI and core services and in custom silicon, like Trainium as well as tech infrastructure to support our North America and international segments. We’ll continue to make significant investments, especially in AI, as we believe it to be a massive opportunity with the potential for strong returns on invested capital over the long term. Additionally, we continue to invest in our fulfillment and transportation network to support the growth of the business, improve delivery speeds and lower our cost to serve. These investments will support growth for many years to come.

Apple (NASDAQ: AAPL)

Apple’s management sees Apple’s silicon as the heart of the company’s efforts in AI; management thinks the A19 Pro chip and M5 chip make Apple products the very best place to experience the power of AI; management has introduced dozens of new features in Apple Intelligence, including Live Translation, Visual Intelligence, Workout Buddy, Clean Up in photos, and more; management is seeing developers build on the foundation models on Apple’s devices; management expects to release the new, more personalised version of Siri next year; Apple is using Private Cloud Compute (PCC) to handle some of Siri’s queries, and the company continues to build that out; there were capital expenditures in FY2025 that were related to the build out of PCC; management intends to continue using internal foundation models together with other LLMs in building the personalised version of Siri

As we continue to expand our investment in AI, we’re bringing intelligence to more of what people already love about our products and services, making every experience even more personal, capable and effortless. At the heart of it all is Apple silicon, and we were thrilled to launch new products powered by the A19 Pro chip and M5. These incredibly advanced chips make Apple products the very best place to experience the power of AI.

With Apple Intelligence, we’ve introduced dozens of new features that are powerful, intuitive, private and deeply integrated into the things people do every day, features like Live Translation, which help users communicate across languages in real time; and Visual Intelligence, which opens new ways to learn about and explore the world. We also introduced Workout Buddy, a new experience that uses AI to provide personalized motivational insights based on a user’s workout data and fitness history. And these joined so many others from Clean Up in photos and new image creation tools to powerful writing tools. We’re also seeing developers take advantage of our own device foundation models to create entirely new experiences for users around the world. We’re also excited for our more personalized Siri. We’re making good progress on it, and as we’ve shared, we expect to release it next year…

…We’re obviously using PCC, our Private Cloud Compute, today for a number of queries for Siri, and we will continue to build it out. In fact, the manufacturing plant that makes the servers used for Apple Intelligence just started manufacturing in Houston a few weeks ago, and we’ve got a ramp plan there for use in our data centers and it’s robust…

…In ’25, we did have CapEx costs associated with building out our Private Cloud Compute environment in our first-party data centers. So you would have seen that in some of the CapEx investment in the year…

…[Question] Good to know that the personalized Siri is making good progress and on track for next year. Will you continue to use a three-pronged approach with your own foundation models and partner with other LLM providers and maybe potential M&A?

[Answer] We’re obviously creating Apple foundation models within Apple. We ship them on device and use them in Private Cloud Compute as well. And we’ve got several in development. We also continually surveil the market on M&A and are open to pursuing M&A if we think that it will advance our road map.

The Apple Watch Series 11 has the most comprehensive set of health features yet, and these health features are powered by AI and advanced machine learning; the latest Apple Watch now has hypertension notifications that were developed using large-scale machine learning models; AirPods Pro 3 can pair very well with Live Translation to deliver an incredibly new and exciting experience for users

Apple Watch Series 11 brings our users the most comprehensive set of health features yet. And Apple Watch SE 3 delivers advanced capabilities at an incredible value. AI and advanced machine learning are at the core of powerful health features like heart rate monitoring, fall detection, crash detection and more. With our latest Apple Watch lineup, we were proud to introduce hypertension notifications, developed using large-scale machine learning models. Hypertension is one of the leading risk factors for heart attack and stroke affecting more than 1 billion adults worldwide, and we expect to notify more than 1 million users of this life-threatening condition…

…With Live Translation powered by Apple Intelligence, AirPods deliver an incredibly new and exciting experience for users around the world.

Apple’s management has committed to invest $600 billion over the next 4 years (was $500 billion in 2025 Q2; Apple has $190 billion in gross profit per year, for perspective) in the USA in areas such as advanced manufacturing, silicon engineering and artificial intelligence; Apple already built a new factory in Houston for advanced AI servers

A great example is the work we’re doing in the U.S. where we’re committed to invest $600 billion over the next 4 years with a focus on innovation in strategic areas like advanced manufacturing, silicon engineering and artificial intelligence. These commitments build on our long-standing investments in America while supporting more than 450,000 jobs with thousands of suppliers across all 50 states. We built a new factory in Houston for advanced AI servers, for example, which just started shipping its first products off the line, and we’re leading the creation of an end-to-end silicon supply chain across the country.

It’s still too early to tell for sure, but management thinks Apple Intelligence has been a driver of demand for Apple devices, and the driving force will become greater over time

[Question] With all the hype now around AI, are you seeing evidence that AI capabilities or features are a material purchase consideration for consumers?

[Answer] I think that there are many factors that influence people’s purchasing considerations and so — and we don’t have a great in-depth survey yet on the current iPhone 17 because it’s very new in the cycle, and we give it some time to formulate. But I would say that Apple Intelligence is a factor, and we’re very bullish on it becoming a greater factor. And so that’s the way that we look at it.

Apple’s management will continue with Apple’s hybrid approach when it comes to data centers, of using its own data centers as well as those of 3rd-parties

[Question] In the wake of nearly every other large tech company massively increasing their CapEx in advance of AI demand and also mentioning that there’s scarce capacity, do you anticipate Apple altering its sort of long-standing hybrid approach to your own and third-party data centers?

[Answer] As we’ve talked about before, we are expecting increases in our CapEx spending related to AI investments. For example, as I mentioned earlier, we did end up having investments this year to build out our Private Cloud Compute environment. And we do believe this hybrid model has served us very well, and we continue to want to leverage it. And so I don’t see us moving away from this hybrid model where we leverage both first-party capacity as well as leverage third-party capacity. We’ll continue to want to build out Private Cloud Compute, as Tim outlined, as we have more usage there over time. But I think, in general, we want to continue to have this hybrid model.

ASML (NASDAQ: ASML)

Some of the recent uncertainties hampering ASML’s business have eased, driven by positive AI news; management thinks there will be continued investment in advanced Logic and DRAM because of AI; management thinks AI demand will benefit a larger part of ASML’s customer base than previously expected; management is a little careful with how all the big AI-related announcements can eventually translate into real capacity, but has nonetheless been preparing for growth; the broadening of ASML’s customer base because of AI is a big positive development for the company

I think we have seen a flow of positive news in the last few months that has helped to reduce some of the uncertainties we discussed last quarter. First, we continue to see strong news about commitment to AI, which means, we think, investment in advanced Logic and DRAM. Second, and it’s very important for us, it looks like AI is going to benefit a larger part of our customer base. Third, we continue to make very good progress with our litho intensity, especially with EUV, which continues to be adopted by DRAM and advanced Logic customers…

…If you look at the sum of the announcement, I would say this creates a pretty positive backlog of opportunity for AI moving forward…

…I think we are a bit careful with how the big announcement can translate into real capacity need on the ground.

I think the one thing I’d still like to stress one more time is that we see the broadening of the customer base as very important news in that matter, because I think we can all agree that we need to make sure that the market will not be supply limited. And this has always been a risk with a limited amount of customers supplying AI chips, both in logic and DRAM…

…We have said for a few quarters that we have been preparing for growth. So we were following those dynamics. And I think we know now that EUV most probably will be stronger next year. So we’ve been preparing for that. We have, as you know, also worked on longer-term capacity. So we continue to track basically the market carefully, having in mind that we want to be able to follow the demand.

ASML recently invested in AI startup Mistral and management sees it as a strategic partnership that will help ASML improve the software in its systems and also speed up product development; ASML invested €1.3 billion in Mistral AI 

We entered into a partnership with Mistral AI. I think Mistral is really recognized on a number of fronts. They’re recognized for their business-to-business approach. They’re also recognized for the quality of their large language models, particularly when it comes to software coding and software development. So, they’re recognized for that. That’s the reason why we entered into the partnership with them. Because many people look at ASML, look at our products and are really looking at hardware. But increasingly I think people appreciate the very significant software content that is within those systems. People really understand that if you get to the level of precision and the level of speed that we have in our scanners, but also quite frankly, what we need in metrology and inspection, it’s pretty clear that the software content therein becomes increasingly important.

So, that’s the reason why this is very strategic to us. Why it’s very strategic to improving the performance. Improving the precision and the speed of our tools as we bring them to our customers. So, therefore, this collaboration is truly a strategic choice for us. I would also say that, on top of the significance that it has for our products, AI is also a great way to improve the speed of our product development. To improve the speed of our time-to-market of any product development to our customers. That’s another big area that we’re collaborating with Mistral on.

So, all in all, we believe a very strategic partnership. We also, to underscore that strategic partnership, we were the lead investor for their Series C funding round. By being the lead investor we took approximately an 11% share in Mistral. We also have a seat on their Strategic Committee…

…ASML has invested EUR 1.3 billion in Mistral AI’s Series C funding round as lead investor. 

ASML’s management continues to see strong growth for the semiconductor market in the long term, driven by AI; management thinks the shift of ASML’s customers towards advanced Logic and Memory chips will drive demand for advanced lithography and higher lithography intensity; management thinks the shift from 6F-squared to 4F-squared in DRAM will not cause the number of EUV layers to drop; it’s hard to tell exactly how all the AI announcements will translate into business for ASML in the next few years

[Question] Can I ask you to remind us of the long-term opportunities for ASML and a little bit the market you see there?

[Answer] We said that most probably AI will drive more advanced applications in semiconductors. So advanced DRAM, advanced Logic. This is happening and this is driving more advanced litho, higher litho intensity. We expect that to continue. As we just discussed, we see that 3D integration will become a new opportunity which we are going to pursue. As Roger explained very nicely, we also see that AI could create a lot of value in our products moving forward. So we continue to see a very strong opportunity on our technology roadmap… 

…[Question] There is a view that when you go into 4 F-squared from 6 F-squared for DRAM architecture, that’s actually negative for EUV, the EUV layer count comes down. Can you just help us understand that?

[Answer] The short answer is no. If we look at the number of EUV layers going from 6F-squared to 4F-squared, we do not expect the number of layers to drop. In fact, as the 4F-squared roadmap continues after the transition, we expect the number of EUV layers to continue to grow. And I make that statement after many discussions with our customers. On top of that, what I’d like to add is that 4F-squared has a bit of a more complex structure, so it’s in fact adding overall more litho masks, more advanced litho masks. So there is a benefit also, to some extent, to advanced deep UV. So in any case, if you still have doubts about it: 4F-squared is in no way bad news for ASML…

…I think we wish we had a formula to translate all the announcements into what they mean exactly for us in the next few years. But I think no one has that.

The entire semiconductor supply chain does not have very clear visibility into AI-driven demand

[Question] It feels like you wake up every day to another massive announcement from somewhere within the AI food chain. And you spent a lot of time talking about how that creates a theoretical backlog for you, not yet orders. But I’m just wondering, you are a critical supplier into this market. You have the potential to be a bottleneck for the market. Now of course, you don’t want that to be the case. You’re preparing for growth, et cetera. But do you feel like there’s sufficient understanding through the chain, whether it’s of where you sit, or perhaps among your customers?

[Answer] I think we wish we had a formula to translate all the announcements into what they mean exactly for us in the next few years. But I think no one has that…

…[Question] Do you feel like your customers are giving you that heads up, right? Is there a sufficient acknowledgment through the chain?

[Answer] I think they do their very best. I will say it this way because they have the same challenge as we do.

Cloudflare (NYSE: NET)

A large digital media platform expanded its relationship with Cloudflare because the media company saw Cloudflare as the only company building the essential platform to protect and manage content for the emerging AI-driven web; the media company is looking to ramp up Cloudflare’s Pay Per Crawl service; the media company thinks Pay Per Crawl could turn Cloudflare from an expense line into a revenue generator

A Global 2000 digital media platform expanded its relationship with Cloudflare, signing a 3-year $22.8 million pool of funds contract for application services and Workers. This contract marks the culmination of a powerful comeback story. We actually lost this customer to a competitor in 2016, but the Internet and Cloudflare evolved. We earned their trust back in 2023, starting with our Zero Trust portfolio. During 8 months of testing before signing this deal, our world-class security, unmatched product breadth and powerful Workers platform ran circles around the incumbent. But that’s not the whole story. The decisive factor of the win was AI. This customer looked at the landscape and correctly identified Cloudflare as the only company building the essential platform to protect and manage content for the emerging AI-driven web. This strategic win established us as the customer’s clear forward-looking partner and creates a direct on-ramp for Pay Per Crawl, which could transform Cloudflare from a vendor they pay for services into a powerful revenue generator for their business.

A web infrastructure platform signed a contract with Cloudflare for AI Crawl Control and Bot Management after seeing a huge surge in visits from AI scrapers and bots, leading to cost inflation without corresponding growth in revenue; management thinks the web infrastructure platform could become a Pay Per Crawl customer in the future

A global web infrastructure platform expanded its relationship with Cloudflare, signing a 14-month $1.2 million contract for AI Crawl Control and Bot Management. This customer is experiencing a massive surge in AI scrapers and malicious bots hitting their origin servers, inflating costs without revenue conversion and obscuring visibility into legitimate traffic. They selected Cloudflare for our innovative, best-in-class bot-blocking capabilities, in addition to seamless, expedited deployment enabled by our deep platform integration. We’re already exploring a much larger opportunity with this customer for Pay Per Crawl.
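
At its core, the kind of bot management described above starts with classifying traffic before it ever reaches origin servers. The toy sketch below illustrates the simplest form of that idea, screening on the User-Agent header; the crawler tokens listed are real, publicly documented AI crawler identifiers, but the function and rule structure are hypothetical illustrations, not Cloudflare's implementation (real bot management relies on far richer signals than User-Agent alone, which is trivially spoofed).

```python
# Known AI crawler User-Agent substrings (publicly documented tokens).
AI_CRAWLER_TOKENS = ("GPTBot", "ClaudeBot", "CCBot", "PerplexityBot", "Bytespider")

def classify_request(user_agent: str, ai_crawlers_allowed: bool) -> str:
    """Return 'allow' or 'block' for a request based on its User-Agent.

    A purely illustrative rule: block declared AI crawlers when the site
    owner has opted out, keeping scrapers off origin servers and cutting
    the serving costs the quote describes.
    """
    is_ai_crawler = any(token in user_agent for token in AI_CRAWLER_TOKENS)
    if is_ai_crawler and not ai_crawlers_allowed:
        return "block"   # declared AI crawler, owner opted out
    return "allow"       # legitimate (or permitted) traffic passes through
```

For example, `classify_request("Mozilla/5.0 (compatible; GPTBot/1.1)", False)` returns `"block"`, while ordinary browser traffic is allowed through.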

Cloudflare’s management sees the company having emerged as a strategic partner to media companies in managing the new business model of the internet in the AI world; management thinks AI is a massive information consumption platform shift that will also change the business model of the internet from the previous long-standing model of (1) create content, (2) generate traffic, and (3) sell products or advertising; management thinks there are many questions that will arise with the new business model of the internet and they do not even know what this new business model will look like, but they think Cloudflare will be an important force in shaping the conversation; 80% of leading AI companies are relying on Cloudflare’s infrastructure; management sees Cloudflare as a thought leader in what the future business model of the Internet looks like; even companies such as the research departments of banks are speaking with Cloudflare to figure out a new business model of the internet in the AI world, and the same goes for brands and small businesses

We talked last quarter about how the rise of AI would impact media companies. Cloudflare has emerged as a strategic partner to these firms as they work through what the new business model of the Internet will be. But it goes beyond just media. Businesses of all shapes will be transformed by the rise of AI. I don’t think people yet appreciate how AI is another massive information consumption platform shift. Just as we moved from consuming information via a browser on a desktop to social media and then to apps on mobile devices, AI is another information consumption platform shift. It changes where and how we will consume and interact with information.

With the last 3 platform shifts, the business model of the Internet remains the same: create content, generate traffic and then sell things, subscriptions or ads. With AI, for the first time in a long time, the fundamental business model is going to change. Human eyeball traffic is unlikely to be the currency of the Internet’s future. We already can see glimpses of that future. It’s represented in SciFi. When George Jetson asks his helpful robot Rosie for a recipe for cookies, the response isn’t 10 blue links to hunt through. It’s a recipe for cookies. Most of us are increasingly living in some version of that future now with tools like ChatGPT, and it seems inevitable that more and more commerce will be facilitated by AI-powered agents working on our behalf. 

As that happens, new questions will arise. What happens to small businesses? What happens to brands? Brands, of course, are just shortcuts for humans to be able to assess quality and value. What do they mean in the world of agentic commerce? I don’t know what the future business model of the Internet will look like, who the winners and losers will be, but I do believe Cloudflare will help shape it. We estimate 80% of the leading AI companies already rely on us. A huge percentage of the Internet sits behind us. The agents of the future will inherently have to pass through our network and abide by its rules. And as they do, we will help set the protocols, guardrails and business rules for the Agentic Internet of the future…

…One of the things that has really set us apart is — and this is thanks to our over time, just significant investment in public policy and the side of the house that maybe doesn’t always get as much attention. But I think we have been thought leaders in thinking about what does the future business model of the Internet look like…

…At banks, the research departments they’re a little nervous because they’re seeing ticks down in the amount of research that people are paying for because the AI companies are slowing that up. So that’s open conversations with financial services companies. We’re seeing challenges with brands that are worried about what does a brand mean in the future of agentic commerce. We’re seeing challenges from small businesses. And I think one of the things that I am passionate about is how do we make sure that as this new paradigm, as this new platform emerges, how do we make sure that everybody has a fair shot to be able to participate in it.

Cloudflare’s management is seeing companies increasingly adopting Cloudflare’s Workers developer platform for running AI inference, and building AI agents and applications; management has always been investing behind demand for Cloudflare, not ahead of it; Workers is not facing any form of capacity constraint

Our Workers developer platform continues to deliver outsized growth with the world’s most innovative companies increasingly adopting Workers for running AI inference tasks as well as building AI agents and full stack applications…

…[Question] do you think that you’re capacity constrained in Workers?

[Answer] I don’t think we’re capacity constrained because of somewhat the nature of how we’ve architected Cloudflare and the philosophy of how we make CapEx and network investments. We always have tried to invest behind demand, not ahead of demand.

Cloudflare’s management believes Cloudflare can get the utilisation rate of its GPUs up to 70%-80%, given the company’s excellent track record with utilising CPUs; Cloudflare has been able to generate revenue from its hardware deployment even before it starts paying for the equipment

It’s been remarkable to see over the last 15 years, how our team has been able to squeeze as much as possible out of the CPU capacity that we have, where we can run that CPU capacity at 70% to 80% utilization and get more out of every CapEx dollar we spend. But what’s fascinating is we’re sort of speed running the last 15 years now with GPUs, where we’re figuring out how to make GPUs multi-tenant, how to make them load and unload models more quickly and driving the utilization of GPUs up substantially. And so that is still well below what we have with CPUs, but we see no reason that we can’t get GPUs also up to that 70%, 80% utilization…

…The supply chain within Cloudflare is so optimized to a large degree because we use off-the-shelf equipment and parts that we can deploy hardware, especially in Tier 1 cities and generate revenue even before we start to pay for the equipment. So not only do we have the flexibility that Matthew described really well at length, our reaction time to deploy hardware where we need it is really, really fast.

Cloudflare’s management sees the biggest competition for the company from winning inference workloads is the hyperscalers; management thinks Cloudflare can show much better TCO (total cost of ownership) than the hyperscalers when it comes to inference workloads; Cloudflare can become very sticky for inference workloads once customers realise there’s a different way to run these workloads from the traditional way of doing it with the hyperscalers; AI inference is still a tiny portion of Cloudflare’s revenue today, even though management is excited about its potential; management does not see any concentration risk in its AI-native business; management has found that the first product from Cloudflare that AI companies are often interested in is security-related, because the AI companies’ cost-to-serve queries is high, so they want to block out fraudulent queries; of the 80% of leading AI companies that rely on Cloudflare’s infrastructure, many of them are using Cloudflare’s security products; management thinks a particular strength of Cloudflare is being able to bring the inference workloads close to users, resulting in lower latency; management thinks that many inference workloads in the future will be run on the edge (i.e. on-device) and if it can’t be done, then it will be run on the network, which suits Cloudflare’s strength

[Question] On competition for Cloudflare in the enterprise for securing those inference workloads and winning those inference workloads in particular. Matthew, I would love to hear you comment how do you think competition is evolving in the enterprise as you build out some of the breadth and depth of your functionality?

[Answer] I think that the primary competition for inference workloads continues to be the hyperscalers. And it continues to be the model of do you want to do this work yourself and have to optimize yourself or do you want to hand it off to Cloudflare. And I think in the cases where we’re in the conversation, we’re able to show that there’s just a much better TCO, total cost of ownership, a much lower cost, much better performance when we manage that for you. And so there’s kind of a standard way people do things, which is the hyperscaler way. We’re having to teach them that there is a different way that’s out there…

…I think that we are finding, though, that once somebody learns that there’s a better way that Cloudflare is very, very sticky, and we keep those customers over the long term…

…Even though we’re excited about AI and AI inference, it is still a relatively de minimis portion of our overall revenue, growing fast, but not — I don’t see any current concentration risk that’s there. And what we’re seeing is actually sometimes it’s not the inference products that initially get interest from the AI native companies. It’s actually the security products. And the reason why is the cost of AI, every query can be so high that making sure that you don’t have fraudulent queries running through your system is critical in order to make sure that you can continue to operate cost effectively. And so many of the AI companies, we estimate that about 80% of AI companies use us in one way or another. But a lot of the times, that’s using us for actually securing some of our — really our Act 1 products. And then we are working on getting more and more of them to use the inference products as well.

In terms of what we can do that others can’t do, I think you’re absolutely right that being able to be close to users is important for a latency perspective. And that’s — and when you have human computer interaction, especially with something that is seems almost alive when you’re interacting with it. Every millisecond counts because it breaks that illusion if things slow down, especially as you get to things like voice communication and other things that need to have kind of a natural rhythm to them. And so I think we’re well positioned for that…

…It’s clear to me that there is something very, very real here that it is going to be transformative that a lot of inference will run on your handset or your driverless car directly there, but that if it can’t run there, it needs to run somewhere else, the next best place for it to run is in the network. And Cloudflare is the only network that gives you that capability on a global basis today. And I think that, that’s going to continue to allow us to win workloads regardless of what happens to AI generally.

Cloudflare’s management started the NET Dollar project because they think a common currency would be needed in agentic commerce transactions; management thinks NET Dollar fits well with the regulatory regimes of the US and other parts of the world; Cloudflare has other irons in the fire apart from NET Dollar when it comes to facilitating payments in agentic transactions; management believes there will be multiple different ways to pay in agentic transactions, and they want Cloudflare to be in the center of that

So as we have really interacted with AI companies, but also the merchants and media companies and the real long tail of the Internet, much of which sits behind us. What we realized was that as we move into a world of agentic commerce, we’re going to need a currency to pay for the commerce that is done between agents that is really designed specifically for that task. And that’s the spirit with which we started the NET Dollar project…

…I think we’re approaching it in a thoughtful way and are confident that we can execute in a way that is both going to help facilitate agent-to-agent commerce and be something that it fits well within any of the regulatory regimes that we have both in the U.S. and around the rest of the world…

…We want to be the Babel fish of AI, sort of the universal translator, whether you’re using MCP, the Anthropic protocol or Google’s version of it or Microsoft’s version of it, Cloudflare supports all of those. And so I think in addition to the excitement that we’ve seen around NET Dollar, I am equally excited about the partnerships that we’re doing with Coinbase around X402, with Visa, Mastercard, American Express, around how you can create agent-to-agent payments. And I think that Cloudflare is a network, and what you want networks to be able to do is facilitate the ability for connection to happen and do it regardless of what makes sense. So we think there are potentially some advantages to what we’re building with NET Dollar, but we’re not all in on any one of these things…

…We also believe that there are going to be multiple different ways to pay. There are going to be multiple different agentic protocols, and they are going to be hopefully many, many, many AI companies interacting with many media and businesses to create a more frictionless and AI-powered future of commerce. And I think that we see ourselves in the center of that.

Cloudflare’s management is seeing good progress with Pay Per Crawl; media companies have gotten markedly better deals with AI companies through Pay Per Crawl

I think you’re asking about the product around us thinking about how do we help media companies figure out a new business model for the future. I think that, yes, I think that’s going just extremely well. Like the number of media companies that are signed up and engaged is powerful. We’re hearing from them about how the deals that they are able to do with AI companies have gotten markedly better, and we are getting a lot of praise for that.
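
Cloudflare has publicly described Pay Per Crawl as reviving the long-dormant HTTP 402 "Payment Required" status code: a crawler that has not arranged payment receives a 402 instead of the content. The sketch below shows the crawler side of such an exchange; the `crawler-price` header name and the budget logic are hypothetical illustrations, not Cloudflare's actual API.

```python
def handle_crawl_response(status: int, headers: dict, budget_per_page: float) -> str:
    """Decide how a paying crawler should react to a fetch response.

    Illustrative only: assumes the publisher signals a per-page price in a
    hypothetical 'crawler-price' response header alongside HTTP 402.
    """
    if status != 402:
        return "fetched"                                  # content served normally
    price = float(headers.get("crawler-price", "inf"))    # hypothetical header
    if price <= budget_per_page:
        return "retry-with-payment"                       # resend with payment intent
    return "skip"                                         # too expensive for this crawl
```

The design point is that 402 lets the publisher quote a price in-band, so the negotiation happens per request rather than through bespoke licensing deals.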

Mastercard (NYSE: MA)

Mastercard is building the foundation for agentic commerce in partnership with key players such as OpenAI and Google; the Mastercard Agent Pay feature enables agents to facilitate transactions over Mastercard’s payment network; Mastercard processed its first agentic payment in 2025; US Bank and Citibank cardholders can now use Agent Pay, with more US issuers able to use Agent Pay in November, followed by a global rollout in early 2026; merchants can use Agent Pay without any significant need for integration; Mastercard has a partnership with Walmart for Agent Pay; agents can use Mastercard’s inside tokens to deliver personalised agentic commerce experiences to consumers; management thinks the runway for agentic commerce is long

With our global acceptance reach, trusted brand and services capabilities, we’re instrumental in creating the foundation for agentic commerce. We’re now working with key players such as OpenAI on their agentic commerce protocol and with Google and Cloudflare to set industry standards, all to drive safety and security.

Through Mastercard Agent Pay, we’re enabling agents to facilitate transactions over Mastercard’s payment network in a secure and scalable way. We already have agents registered and have tools in place for easy onboarding as others are ready. Our first agentic transaction took place on our network this quarter at a pivotal moment in payments, and that’s just the start. U.S. Bank and Citibank cardholders can now use Agent Pay. The rest of our U.S. issuers will be enabled in November with a global rollout to follow early next year.

And the beauty of it all, we’ve made it easy for merchants across the globe to benefit on day 1 with the same trust and security they are used to from us today. Our acceptance framework enables any Mastercard merchant to participate without significant development or integration, a no-code approach…

…We have strong partnerships with the players I just mentioned and many more, including Walmart, to accelerate the adoption of agentic commerce using cards through Mastercard Agent Pay…

…Agents through Mastercard’s inside tokens can make agentic commerce even more personalized. By harnessing our proprietary data, we will be able to provide agents with predictive insights to help drive smarter decisions and recommendations.

The shift we’re seeing in commerce is creating further opportunity for our capabilities, more consulting, more loyalty, more security and so on. The runway for agentic focused services in consumer and business use cases is long, and we’re well positioned to capture this opportunity.

Mastercard’s management is seeing consumer search behaviour change because of AI chatbots; management thinks agentic commerce is a significant paradigm shift for the payments ecosystem because the agent is now an extra party that has entered the loop and this increases complexity for merchants; in agentic commerce, it’s important to determine the identity of an agent, and this is what Mastercard Agent Pay can do; in agentic commerce, the consumer identity also needs to be determined, as in a traditional online transaction; there are tricky aspects of agentic commerce to solve, such as handling a challenged transaction, and this is something Mastercard can do; management thinks agentic commerce will be very hard to handle for local payment networks, and this will be an opportunity for Mastercard to win share; the transition from physical payment to online payment unlocked a new suite of services Mastercard could provide, and management expects a similar thing to happen with the transition to agentic commerce

What we’re seeing is behavioral change, driven and powered by generative AI and bots and so forth, where search behavior is changing. So, that’s on the consumer side, if we start right there. So, consumers are migrating their search increasingly so to their favorite chatbot and they’re asking their queries there, and they get potentially better answers, who knows…

…It’s really quite a significant paradigm shift for the payment ecosystem, because in the payment ecosystem, what happens is there’s now an extra party that has entered the realm, and that is the agent. So, that comes with a lot of those aspects you just talked about in your question, is there’s legal questions, there’s a security question…

…Some of the things that need to happen in a world of agentic commerce is, the first is, is this a real bot? Is this a bot that we believe matches up to Mastercard’s safety and security standards? So, we will certify and register bots out there. So that’s what Mastercard Agent Pay does. So, nothing really new from us on a perspective, but it’s a new party. Not really visible to the consumer in that way, but certainly driving some complexity potentially for merchants, for issuers, for every other party because that is just a new flow for the transaction…

…The merchant needs to know that the agent on the other side that we have certified is actually the agent. So, we have to pass through that information and ensure that the circle closes. We’re doing that. Well, there’s still the question of what is in focus today very much so is the consumer, the person they claim to be. So, consumer authentication needs to continue, but it now needs to flow through a somewhat more complicated transaction. So, all of this is happening…

…If you have asked an agent to buy you something in a chat, and then in the end, you challenge that transaction, who can prove who’s right? Is it the consumer? Is it the merchant? What happens? What do you do on return policies and various other things? Those are all complexities that we’re pretty good at solving in today’s world, and we’re pretty busy solving them in the future world, and that comes down to some of the aspects that you’ve talked about in your question. Where is the legal and regulatory framework on this? This is not something that’s specifically contemplated yet, but that will evolve over time…

…On the point of challenging a transaction. We bought a company a couple of years ago called Ethoca, and what they do is they provide transaction detail at the moment of a chargeback to a consumer that says, “Hey, you actually did this transaction because you were here at this time doing the following.” And the same can be done with this audit trail that would be capturing out of the chat that I talked about earlier. That is one example…

…One thing that I think is a pretty obvious opportunity is, this is going to be very hard to do for local payment networks. So, if you look around various kind of local payment systems that exist in Europe, in Asia and so forth. Big markets for us is an opportunity for us to continue to drive up our switching ratio as we’ve done in years, and this gives us another, kind of, field to execute on. I think that’s the first thing to say…

…You think back about the days where everything was in store and what kind of services portfolio we had and the opportunities we had to apply services and drive differentiation for us versus others. And then it went online. There was a whole different set of solutions that were suddenly needed to keep the online transaction safe. And agentic, it’s going to be even more opportunity for us to do that.
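
The checks management walks through above (certify and register the agent, confirm the merchant is dealing with that same certified agent, and still authenticate the consumer) can be reduced to a toy sketch. Everything below is a hypothetical illustration for intuition only; the registry, identifiers, and function are invented, not Mastercard's actual Agent Pay protocol.

```python
# Toy network-side registry of certified agents (hypothetical).
CERTIFIED_AGENTS = {"agent-123": "ShopBot"}

def authorize_agentic_payment(agent_id: str,
                              agent_id_seen_by_merchant: str,
                              consumer_authenticated: bool) -> bool:
    """Approve an agent-initiated payment only if all three checks pass."""
    agent_certified = agent_id in CERTIFIED_AGENTS            # 1. bot is registered/certified
    identity_matches = agent_id == agent_id_seen_by_merchant  # 2. merchant sees the same agent
    # 3. the consumer behind the agent is still who they claim to be
    return agent_certified and identity_matches and consumer_authenticated
```

The point of the sketch is the extra party: a traditional online transaction only needs check 3, while agentic commerce adds checks 1 and 2 for the agent itself.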

MercadoLibre (NASDAQ: MELI)

MercadoLibre’s management is very excited about the potential of infusing AI agents within MercadoLibre’s ecosystem; management recently launched Seller Assistant, a chatbot that provides personalized advice and recommendations to sellers; in MercadoPago, management just launched an AI assistant that can help users with a wide range of tasks; management thinks it’s still early in terms of determining OpenAI’s impact on e-commerce but what MercadoLibre needs to do is to develop agentic capabilities first so that it can be utilised if needed

We are extremely excited about the potential of agents to enhance discovery, service and productivity within our ecosystem. There are several examples of things that we are doing in that regard. We just launched our own Seller Assistant, which is a conversational tool that gives sellers personalized advice and recommendations on how to manage their activity in our platform. In FinTech, as you probably know, we just launched our first AI assistant that can help our users with a wide range of tasks like making or scheduling money transfers through a conversational platform, asking questions on the user’s operations and so on…

…[Question] I wanted to hear from you how you are thinking about OpenAI’s recent move into e-commerce?

[Answer] We need to continue to focus ourselves on building the best agentic experience within our platform, and that will give us optionality on what to do next and how to move forward. I think it’s early to make comments on OpenAI and their partnerships with Etsy, Shopify, and so on. We need to understand how this will develop in the long run, and what role agents will play in the relationship with consumers. And eventually, decide if there’s something different that we need to do. For sure, we need to put the technology in place in order to have an agentic experience in MercadoLibre and in Mercado Pago in the near term.

Meta Platforms (NASDAQ: META)

Meta’s management is building an industry-leading amount of compute to be ready for whenever superintelligence arrives; if superintelligence takes longer than expected, the extra compute can be used to accelerate Meta’s core business; Meta’s core business has been able to profitably use much more compute than what’s available; management is seeing very high demand for compute; the worst case for building compute now is that Meta will be growing into the compute that it’s building; management recognises the possibility that Meta could overshoot on building compute capacity, and if so, it will lead to the worst case scenario

We’re also building what we expect to be an industry-leading amount of compute. Now there’s a range of time lines for when people think that we’re going to get superintelligence. Some people think that we’ll get there in a few years. Others think it will be 5, 7 years or longer. I think that it’s the right strategy to aggressively frontload building capacity so that way we’re prepared for the most optimistic cases. That way, if superintelligence arrives sooner, we will be ideally positioned for a generational paradigm shift in many large opportunities. If it takes longer, then we’ll use the extra compute to accelerate our core business which continues to be able to profitably use much more compute than we’ve been able to throw at it. And we’re seeing very high demand for additional compute, both internally and externally. And in the worst case, we will just be slow building new infrastructure for some period while we grow into what we build…

…Now I mean, it’s, of course, possible to overshoot that, right? And if we do… the kind of the very worst case would be that we effectively have just prebuilt for a couple of years, in which case, of course, there would be some loss and depreciation, but we’d grow into that and use it over time.

AI recommendation systems are improving the content delivered across Facebook, Instagram, and Threads; AI recommendation systems have led to 5% more time spent on Facebook in 2025 Q3, and 10% on Threads; AI recommendation systems have led to 30% more time spent on video in Instagram in 2025 Q3; improvements in Meta’s AI recommendation systems will also benefit the company with the coming growth of AI-generated content; Facebook is now surfacing twice as many Reels published that day than at the start of 2025; management expects to evolve Instagram’s recommendation systems in 2026 to surface broader content that cater to diverse interests of each person; Meta has produced promising results in creating foundational ranking models and management expects to significantly scale up data and compute for training recommendation models in 2026 to yield better recommendations; management expects Meta to leverage LLMs (large language models) in 2026 to improve understanding of content by the recommendation systems; ranking optimisations made in 2025 Q3 alone led to a 10% increase in time spent on Threads

Across Facebook, Instagram and Threads, our AI recommendation systems are delivering higher quality and more relevant content, which led to 5% more time spent on Facebook in Q3 and 10% on Threads. Video is a particular bright spot with video time spent on Instagram up more than 30% since last year…

…Improvements in our recommendation systems will also become even more leveraged as the volume of AI-created content grows. Social media has gone through 2 eras so far. First was when all content was from friends, family and accounts that you followed directly. The second was when we added all of the creator content. Now as AI makes it easier to create and remix content, we’re going to add yet another huge corpus of content on top of those. Recommendation systems that understand all this content more deeply and can show you the right content to help you achieve your goals are going to be increasingly valuable…

…On Facebook, our systems are now surfacing twice as many Reels published that day than at the start of the year.

Looking to 2026, we expect to advance our recommendation systems across several dimensions. On Instagram, one focus is evolving our systems to surface content across a broader set of topics that cater to the diverse interests of each person. This follows a similar approach we’ve implemented on Facebook that has driven good results. We also expect to make significant progress on our longer-term ranking innovations in 2026. We’re seeing promising new results from our research efforts to create foundational ranking models and expect the new model innovations we’re developing as part of this will enable us to significantly scale up the amount of data and compute we use to train our recommendation models in 2026, yielding more relevant recommendations.

Another large focus next year is leveraging LLMs to improve content understanding. We expect this is going to enable our systems to more precisely label the keywords and topics within videos and posts, which will allow our systems to both develop deeper intuition about a person’s interest and retrieve the content that matches them…

…The ranking optimizations we made in Q3 alone drove a 10% increase in time spent on Threads.

Meta’s advertising business has benefited from improvements in AI ranking systems; the unification of different models into simpler, general models has led to improvements in the advertising business in 2025 Q3; management rolled out Lattice, its unified model architecture for advertising ranking models, to app ads in 2025 Q3 and drove a nearly 3% gain in conversions; since the introduction of Lattice and other improvements in 2023, Meta has reduced the number of ads ranking and recommendation models by around 100, and the reductions have led to performance improvements; management expects Meta to achieve additional gains as it consolidates another 200 models over the coming years; management is innovating on run time models used for advertising inference; a new run time advertising ranking model was piloted in 2025 Q3 that uses more compute and data than prior models, and it drove a lift in conversions on Instagram of more than 2%; management has improved the performance of the Andromeda model architecture in 2025 Q3, driving a 14% increase in advertising quality on Facebook surfaces

Our ads business continues to perform very well, largely due to improvements in our AI ranking systems as well. This quarter, we saw meaningful advances from unifying different models into simpler, more general models, which drive both better performance and efficiency…

…We are driving performance gains through ongoing improvements in our larger scale ads ranking models. For example, we continue to broaden the adoption of Lattice, our unified model architecture. In Q3, we rolled out Lattice to app ads, which drove a nearly 3% gain in conversions for that objective. 

Since introducing Lattice back in 2023, along with other back-end improvements, we have now cut the number of ads ranking and recommendation models by approximately 100 as we consolidated smaller and more specialized models into larger ones that use the Lattice architecture to generalize learnings across surfaces and objectives. We continue to observe performance improvements as we combine models and expect to drive additional gains as we consolidate another 200 models over the coming years into a smaller number of highly capable models…

…We’re innovating on our run time models we use downstream of them for ads inference. For example, we began piloting a new run time ads ranking model in Q3 that leverages more compute and data than our prior models to select more relevant ads. In testing, we’ve seen this new model drive a more than 2% lift in conversions on Instagram.

We also significantly improved performance of Andromeda in Q3 by combining models across retrieval and early-stage ranking into a single model, driving a 14% increase in ads quality on Facebook surfaces.

Meta’s end-to-end AI-powered advertising tools, which fall under Advantage+, are now handling $60 billion in annualised run rate revenue; management rolled out a streamlined campaign creation flow for Advantage+ lead campaigns in 2025 Q3, so end-to-end automation is turned on from the beginning; the number of advertisers using at least 1 of Advantage+’s video generation features grew 20% sequentially in 2025 Q3; management has added more generative AI features to Advantage+ to help advertisers optimise and improve ad creatives; management introduced AI-generated music in Advantage+ in 2025 Q3; management continues to think a fully automated AI advertising product, where advertisers just have to tell the system what their objectives are, and the AI figures out everything else, is still important; advertisers who run lead campaigns using Advantage+ are seeing a 14% lower cost per lead; a lot of advertisers only use Advantage+ for a portion of their campaigns, so management thinks there are share gains to be made

Now the annual run rate going through our completely end-to-end AI-powered ad tools has passed $60 billion…

…In Q3, we completed the rollout of our streamlined campaign creation flow for Advantage+ lead campaigns. So now advertisers running sales app or lead campaigns have end-to-end automation turned on from the beginning, allowing our systems to look across our platform to optimize performance by automatically choosing criteria like who to show the ads to and where to show them. The annual run rate of revenue running through our end-to-end automated solutions has now reached $60 billion following the implementation of the new streamlined creation flow, as we continue to see more advertisers leverage the performance benefits of our solutions.

Within our Advantage+ creative suite, the number of advertisers using at least 1 of our video generation features was up 20% versus the prior quarter as adoption of image animation and video expansion continues to scale. We’ve also added more generative AI features to make it easier for advertisers to optimize their ad creatives and drive increased performance. In Q3, we introduced AI generated music so advertisers can have music generated for their ad that aligns with the tone and message of the creative…

…I mean there’s one opportunity that we just usually talk about on these calls, but hasn’t come up as much here is just the ability to make it so that advertisers are increasingly just going to be able to give us a business objective and give us a credit card or bank account and like have the AI system basically figure out everything else that’s necessary. Including generating video or different types of creative that might resonate with different people that are personalized in different ways, finding who the right customers are. All of these — all of the capabilities that we’re building, I think, go towards improving all of these different things. So I’m quite optimistic about that…

…Advertisers who run lead campaigns using Advantage+ are seeing a 14% lower cost per lead on average than those who are not…

…A lot of advertisers only use our end-to-end automated solutions for a portion of their campaigns so we can grow share there. And to capture that opportunity, we’re focused on driving continued performance improvements and addressing some of the key use cases that we still need in order to grow adoption.

Meta AI has more than 1 billion monthly actives, with usage increasing as the underlying models improve; the majority of Meta AI’s responses to Facebook Deep Dive queries in the US now show related Reels; users have created over 20 billion images with Meta’s products; the launch of Vibes within Meta AI in September has led to a 10x increase in media generation in Meta AI; Meta AI is still powered by Llama 4

More than 1 billion monthly actives already use Meta AI and we see usage increase as we improve our underlying models…

…We’re increasingly leveraging first-party content into Meta AI results with the majority of Meta AI’s responses to Facebook Deep Dive queries in the U.S. now showing related Reels. We’re also seeing a lot of traction with media generation. People have created over 20 billion images using our products. And since launching Vibes within Meta AI in September, we have seen media generation in the app increased more than tenfold…

…A lot of people use Meta AI today. I mean, as I said in my comments upfront, there’s more than 1 billion people who use it on a monthly basis. And what we see is that as we improve the quality of the model, primarily for post-training Llama 4 at this point. We are — we continue to see improvements in usage.

Meta sees more than 1 billion active threads happening every day with business accounts across its messaging platforms; management thinks Meta’s Business AI will help tens of millions of businesses scale these conversations and improve sales at low cost; business messaging continues to be a significant opportunity for Meta; Click-to-WhatsApp ads revenue was up 60% year-on-year in 2025 Q3; management has broadened Business AI access in the initial test markets of the Philippines and Mexico, and strong usage has been seen, with millions of conversations between people and Business AIs taking place since July; in the US, management is rolling out the ability for merchants to add their Business AIs to their websites

Every day, people have more than 1 billion active threads with business accounts across our messaging platforms, ranging from product questions to customer support. Our business AIs will enable tens of millions of businesses to scale these conversations and improve their sales at low cost and the better our models get, the better this is going to work for all businesses…

…Business messaging remains a significant opportunity for us. We’re seeing strong growth across our portfolio of solutions, including with Click-to-WhatsApp ads, which grew revenue 60% year-over-year in Q3.

We’re also making good progress on our business AI efforts, where we’ve been focused on building a turnkey AI that helps businesses generate leads and drive sales. We’ve been opening access in recent months to more businesses within our initial test markets, the Philippines and Mexico. And we’ve seen strong usage with millions of conversations between people and Business AIs taking place since July. This month, we expanded availability within WhatsApp and Messenger to all eligible businesses in Mexico and the Philippines, respectively. In the U.S., we’re also starting to roll out the ability for merchants to add their Business AIs to their website so we can support the full sale funnel from ad to purchase.

Retention at Vibes is looking good so far, with usage growing quickly week over week; management sees Vibes as a new content type enabled by AI; the launch of Vibes within Meta AI in September has led to a 10x increase in media generation in Meta AI

This quarter, we also launched Vibes which is the next generation of our AI creation tools and content experiences. Retention is looking good so far. And its usage keeps growing quickly week over week…

…I think that Vibes is an example of a new content type enabled by AI, and I think that there are more opportunities to build many more novel types of content ahead as well…

…And since launching Vibes within Meta AI in September, we have seen media generation in the app increased more than tenfold.

The response to Meta’s 2025 line of AI glasses has been great; sales of the new Ray-Ban Meta glasses and Oakley Meta Vanguards are both good; the new Meta Ray-Ban Display glasses, which come with the neural band as an interaction touch-point, sold out within 48 hours; management wants to invest in increasing manufacturing of the Meta Ray-Ban Display glasses; management thinks there’s huge opportunity ahead with the Meta Ray-Ban Display glasses; management thinks that if the smart glasses continue on their current trajectory, then Meta’s ongoing investments in Reality Labs (via operating losses) will generate a good return; the return on investment of the smart glasses will come from both the hardware sales and new AI-enabled services that are layered on top; management will continue investing in earlier-stage glasses hardware, such as a product version of the full-field-of-view Orion prototype

At Connect, we announced our 2025 line of AI glasses, and the response so far has been great. The new Ray-Ban Meta glasses and Oakley Meta Vanguards are both selling well as people love the improved battery life, camera resolution, new AI capabilities and the great design.

And there’s our new Meta Ray-Ban Display glasses, our first glasses with a high-resolution display and the Meta Neural Band to interact with them. They sold out in almost every store within 48 hours with demo slots fully booked through the end of next month. So we’re going to have to invest in increasing manufacturing and selling more of those. This is an area where we are clearly leading and have a huge opportunity ahead…

…[Question] On wearables, in particular, do you think you’ll be able to sell enough hardware to recoup your investment?

[Answer] The work on Ray-Ban Meta and the Oakley Meta product is going very well. I think, yes, I mean, at some point, if these continue going as well as it has been, then I think it will be a very profitable investment. I think that there’s some revenue that we get from basically selling the devices and then some that will come from additional services from the AI on top of it. So I think that there’s a big opportunity. Certainly, the investment here is not just to kind of build just the device. It’s also to build these services on top. Right now, a lot of people get the devices for a range of things that don’t even include the AI even though they like the AI. But I think over time, the AI is going to become the main thing that people are using them for and I think that that’s going to end up having a big business opportunity by itself.

But as products like the Ray-Ban Meta and Oakley Metas are growing, we’re also going to keep on investing in things like the more full field of view, product form of the Orion prototype that we showed at Connect last year. So those things are obviously earlier in their curve towards getting to being a sustaining business. And our general view is that we want to build these out to reach many hundreds of millions or billions of people and that’s the point at which we think that this is going to be just an extremely profitable business.

Meta’s management is focused on preserving maximum long-term flexibility for Meta’s AI capex; Meta Superintelligence Labs’ compute needs account for the largest chunk of Meta’s capex growth in 2026; when management was planning for 2025’s capex, they had investments they thought would be paying off in 2026, and those are already paying off through the course of 2025; one of the ways management looks at the ROIC (return on invested capital) of AI capex is growth in conversions relative to impressions, and Meta’s conversions are growing faster than its impressions; the new model architectures Meta has been deploying in its advertising systems have enabled Meta to use more data and compute to drive ads performance, and management expects this to continue in 2026; management wishes they had more compute capacity today than what’s available and they know that at least some of the capacity can be put towards positive ROI use-cases in the core business

Our primary focus is deploying capital to support the company’s highest order priorities including developing leading AI products models and business solutions. As we make significant investments in infrastructure to support this work, we are focused on preserving maximum long-term flexibility to ensure we can meet our future capacity needs while also being able to respond to how the market develops in the years ahead. We’re doing so in several ways, including staging data center sites so we can spring up capacity quickly in future years as we need it as well as establishing strategic partnerships that give us option value for future compute needs…

…I will say that the growth in 2026 CapEx relative to 2025 comes from growth in each of the core areas, MSL, core AI as well as non-AI spend. So all of those areas are growing, but the MSL AI needs are growing the most…

…[Question] Can you help us a little to understand some of the early quantifiable signals you’re seeing on AB tests from some of these improvements to come that sort of make you most excited and give you confidence you’re going to get ROIC from all this CapEx?

[Answer] In terms of the core AI pipeline, I think, we talked about last year when we were going into the 2025 budget process, we had a road map of resource investments across both head count and compute that we thought would pay off in 2026. And it’s really a very broad range of sort of different ads ranking and performance efforts. And we’re continuing to see that those have paid off through the course of the year. There is a long list of specific efforts, but 1 of the measures that we look at to monitor this is how are we driving ad performance, how are conversions growing?

Conversions is a complex metric for us because advertisers optimize for so many different conversions on different values. But when we control for that and look at value-weighted conversion rates, we’re seeing very strong year-over-year growth in conversion — weighted conversions continue to grow faster than impressions.

We also talked about some of the new model architecture over the course of the year and the degree to which the new model architecture is enabling us also to take advantage of having more data and more compute to drive ads performance. So we expect that, that’s going to be a continued story in 2026. We are, in fact, at the beginning of our 2026 budgeting process now, and we see a similar list of revenue investments that we’re excited to be able to invest in. And so we think that, that’s going to be a big part of our ability to continue to drive strong revenue performance throughout the year…

…We’re certainly seeing that we wish we had more capacity today than we do. We would be able to put it towards good use: not only would the MSL team appreciate having more capacity, but we’d be able to put it towards good and ROI-positive use in the core business as well.

Meta’s management has repeatedly seen a pattern of Meta building compute capacity based on an aggressive assumption, only to see even higher demand for compute; Meta’s core business keeps finding profitable ways to use more compute than is available

To date, we keep on seeing this pattern where we build some amount of infrastructure to what we think is an aggressive assumption. And then we keep on having more demand to be able to use more compute, especially in the core business, in ways that we think would be quite profitable, than we end up having compute for.

Meta does not use its large models for inference work because that is too expensive; Meta gets the large models to transfer knowledge to smaller models for inference work

We don’t use our larger model architectures like GEM for inference because their size and complexity would make it too cost prohibitive. The way that we drive performance from those models is by using them to transfer knowledge to smaller lightweight models that are used at run time.
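The knowledge-transfer approach management describes is commonly implemented as knowledge distillation: a small "student" model is trained to match the softened output distribution of a large "teacher" model, so that only the cheap student has to run at inference time. A minimal sketch of the distillation loss follows; the function names, logits, and temperature are illustrative assumptions, not Meta's actual setup:

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature: a higher temperature softens the
    # distribution, exposing the teacher's relative preferences among
    # classes rather than just its top pick.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy between the teacher's softened distribution and the
    # student's. Minimizing this trains the student to mimic the teacher,
    # letting a much smaller model serve traffic at run time.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(p * math.log(q) for p, q in zip(p_teacher, p_student))

# The loss is smaller when the student agrees with the teacher.
aligned = distillation_loss([4.0, 1.0, 0.5], [4.0, 1.0, 0.5])
misaligned = distillation_loss([4.0, 1.0, 0.5], [0.5, 1.0, 4.0])
assert aligned < misaligned
```

In practice the student is usually trained on a weighted mix of this distillation loss and the ordinary loss against ground-truth labels, with the temperature controlling how much of the teacher's information about near-miss classes is transferred.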

Meta’s management is unsure of the margin profile of the new products Meta may develop with AI

[Question] You mentioned the prior 2 content cycles, and obviously, you’ve been able to generate very attractive margins on them. As we get into the AI cycle, obviously, some concerns on the investment. But can you talk a little bit about how you’re thinking about tools that could be coming out for users? I know there’s some new competition. And then secondly, how do you think about margins in this content cycle? Any reason to think they would be different versus prior cycles.

[Answer] I think it’s too early to really understand what the margins are going to be for the new products that we build. I mean, I think certainly, every — each product has somewhat different characteristics. And I think we’ll kind of understand how that goes over time. I mean, my general goal is to build a business that maximizes value for the people who use our products and maximizes profitability, not margin. So I think we’ll kind of just try to build the best things that we can and try to deliver the most value that we can for most people.

Meta’s management thinks being the best at a given capability in the AI world will drive the greatest returns; management thinks it’s unlikely that one company will become the best at all capabilities; management wants Meta to develop novel capabilities with AI

I think the art of product development here is looking at the list of technology capabilities and figuring out what new products are going to be useful and prioritizing those. But fundamentally, I would sort of expect this exponential curve in new technology capabilities that are going to become available. And the other thing that I expect is that I think being the best in a given area will drive great returns rather than — this is not like a check-the-box exercise of like, okay, we can generate some kind of content and someone else can. I think that like the company that is the best at each of these capabilities, I think, will get a large amount of the potential value for doing that. So there are lots of different capabilities to build. I’m not sure that any one company is going to be the best at all of them. I doubt that’s going to be the case. But a lot of what we’re trying to do is not like not kind of do some things that others have done. We’re really trying to build novel capabilities.

Meta’s management thinks a lot of AI apps today are still really small, but there’s huge opportunity

But if you look at it today, the companies that are building apps, I mean, a lot of the apps are still relatively small. And I think that’s obviously going to be a huge opportunity.

Meta’s management thinks AI is different from past technological developments because AI allows new capabilities to be introduced fast, and new products and businesses can be built around these capabilities

I think what we haven’t really seen as much in the history of the technology industry is the rate of new capabilities being introduced because around each of these capabilities, you can build many new products that I think each will turn into interesting businesses.

Microsoft (NASDAQ: MSFT)

Microsoft and OpenAI have a new agreement; Microsoft’s investment in OpenAI has 10x-ed in value; under the new agreement, OpenAI has a $250 billion contract with Azure while Microsoft has model and product IP rights through 2032; management does not think AGI will be achieved any time soon, but a lot of value from AI can still be derived

We closed a new definitive agreement with OpenAI, marking the next chapter in what is one of the most successful partnerships and investments our industry has ever seen…

Already, we have roughly 10x-ed our investment. OpenAI has contracted an incremental $250 billion of Azure services, our rev share, exclusive IP rights and API exclusivity for Azure continue until AGI or through 2030. And we have extended the model and product IP rights through 2032…

…I don’t think AGI as defined at least by us in our contract is ever going to be achieved anytime soon. But I do believe we can drive a lot of value for customers with advances in AI models by building these systems.

Azure has the most expansive data center fleet for the AI era and is adding capacity at scale; Azure will increase AI capacity by >80% in FY2026 and will double its total data center footprint in 2 years as management sees strong demand; Azure announced Fairwater in Wisconsin, the most powerful AI data center in the world, in 2025 Q3, and it will start operations in 2026 and scale to 2 gigawatts; Azure has the world’s first large-scale cluster of NVIDIA GB300s; Azure is building a fungible GPU fleet that’s continuously modernised for all stages of the AI lifecycle (from pretraining to inference) and for workloads that go beyond generative AI; management thinks Azure has the best ROI (return on investment) and TCO (total cost of ownership) for customers; Azure increased the token throughput of GPT-4.1 and GPT-5 by 30% per GPU in 2025 Q3 (FY2026 Q1); Azure is supporting sovereign AI needs; Azure has customers in 33 countries who are developing their AI capabilities within local borders, such as OpenAI and SAP in Germany; Azure has Azure AI Foundry to help customers build their own AI apps and agents; Azure AI Foundry offers enterprises access to 11,000 models (including GPT-5 and Grok 4) which is more than any competitor; Azure AI Foundry has 80,000 customers, including 80% of the Fortune 500; Azure AI Foundry also provides other tools beyond models for developers to customize and manage AI applications and agents; real production-scale AI deployments are driving Azure’s overall growth; Azure took share again in 2025 Q3 (FY2026 Q1)

We have the most expansive data center fleet for the AI era, and we are adding capacity at an unprecedented scale. We will increase our total AI capacity by over 80% this year and roughly double our total data center footprint over the next 2 years, reflecting the demand signals we see. Just this quarter, we announced the world’s most powerful AI data center, Fairwater in Wisconsin, which will go online next year and scale to 2 gigawatts alone.  And we have deployed the world’s first large-scale cluster of NVIDIA GB300s. We are building a fungible fleet that’s been continuously modernized and spans all stages of the AI life cycle from pretraining to post training to synthetic data generation and inference. And it also goes beyond GenAI workloads to recommendation engines, databases and streaming. We’re optimizing this fleet across silicon systems and software to maximize performance and efficiency. 

It’s this combination of fungibility and continuous optimization that allows us to deliver the best ROI and TCO for us and our customers. For example, during the quarter, we increased the token throughput for GPT-4.1 and GPT-5, two of the most widely used models by over 30% per GPU.

We also have the most comprehensive digital sovereignty platform. Azure customers in 33 countries are now developing their own cloud and AI capabilities within their borders to meet local data residency requirements. In Germany, for example, OpenAI and SAP will rely on Azure to deliver new AI solutions to the public sector…

We are building Azure AI Foundry to help customers build their own AI apps and agents. We have 80,000 customers, including 80% of the Fortune 500. We offer developers and enterprise access to over 11,000 models, more than any other vendor, including as of this quarter, OpenAI’s GPT-5 as well as xAI’s Grok 4…

…Beyond models in Foundry, we are providing everything developers need to design, customize and manage AI applications and agents at scale. Our new Microsoft Agent Framework helps developers orchestrate multi-agent systems with compliance, observability and deep integration out of the box…

…These kinds of real production scale AI deployments are driving Azure’s overall growth. And once again, this quarter, Azure took share.

Ralph Lauren used Azure AI Foundry to build a conversational shopping experience; Open Evidence used Azure AI Foundry to build a clinical assistant; KPMG used the Microsoft Agent Framework in Azure AI Foundry to connect agents with internal data

For example, Ralph Lauren used Foundry to build conversational shopping experience in its app, enabling customers to describe what they’re looking for and get personalized recommendations. And OpenEvidence used Foundry to create its AI-powered clinical assistant which surfaces relevant medical information to physicians and help streamline charting…

…KPMG used the framework to modernize the audit process, connecting agents to internal data with enterprise-grade governance and observability.

Microsoft has 900 million MAU (monthly active users) of AI features across its products; Microsoft’s family of Copilot apps now has 150 million MAU (was 100 million in 2025 Q2); management sees Copilot becoming the UI (user interface) for agentic AI; a chat feature released in Microsoft 365 just 9 months ago already has tens of millions of users; adoption of chat is up 50% sequentially in 2025 Q3 (FY2026 Q1), and usage intensity is increasing; management introduced Agent Mode in 2025 Q3 (FY2026 Q1), which can turn prompts into export-quality Word documents, Excel spreadsheets, or PowerPoint presentations; Agent Mode is ranked best-in-class by 3rd-party benchmarks; Microsoft 365 Copilot is being adopted at a faster rate than any other new Microsoft 365 suite; more than 90% of the Fortune 500 are using Microsoft 365 Copilot; a number of large companies each purchased over 15,000 Microsoft Copilot seats in 2025 Q3 (FY2026 Q1); Lloyds Banking Group deployed 30,000 Microsoft Copilot seats in 2025 Q3 (FY2026 Q1), saving each employee 46 minutes daily; enterprises are coming back to purchase even more seats of Microsoft 365 Copilot after the first purchase; PwC employees interacted with Microsoft 365 Copilot over 30 million times in 6 months, saving millions of hours on employee productivity

We now have 900 million monthly active users of our AI features across our products. And our first-party family of Copilots now has surpassed 150 million monthly active users across the information work, coding, security, science, health and consumer.  

When it comes to information work, we continue to innovate with Microsoft 365 Copilot. Copilot is becoming the UI for the agentic AI experience. We have integrated chat and agentic workflows into everyday tools like Outlook, Word, Excel, PowerPoint and Teams. Just 9 months since release, tens of millions of users across Microsoft 365 customer base are already using chat. Adoption is accelerating rapidly, growing 50% quarter-over-quarter, and we continue to see usage intensity increased. 

This quarter, we also introduced Agent Mode, which turns single prompts into export quality Word documents, Excel spreadsheets, PowerPoint presentation and then iterate to deliver the final product much like agent mode in coding tools today. We’re thrilled by the early response, including third-party benchmarks that rank it best-in-class…

…Customers continue to adopt Microsoft 365 Copilot at a faster rate than any other new Microsoft 365 suite. All up more than 90% of the Fortune 500 now use Microsoft 365 Copilot. Accenture, Bristol-Myers Squibb, EY Global and the U.K.’s Tax and Payments and Customs Authority all purchased over 15,000 seats this quarter. Lloyds Banking Group has deployed 30,000 seats, saving each employee an average of 46 minutes daily. And a large majority of our enterprise customers continue to come back to purchase more seats. Our partner, PwC, alone added 155,000 seats this quarter and now has over 200,000 deployed across its global operations. In just 6 months, PwC employees interacted with Microsoft 365 Copilot over 30 million times, and they credit this agentic transformation with saving millions of hours on employee productivity.  

Microsoft’s management is observing a growing list of software companies, including Adobe and Asana, building their own agents that connect with Copilot; management is seeing customers building their own agents that connect with Copilot; the number of agent users doubled sequentially in 2025 Q3 (FY2026 Q1); management has announced App Builder, a new Copilot agent that turns prompts into apps and agents in Microsoft 365

We are seeing a growing Copilot agent ecosystem with top ISVs like Adobe, Asana, Jira, LexisNexis, SAP, ServiceNow, Snowflake and Workday, all building their own agents that connect to Copilot. And customers are also building agents for their mission-critical business processes and workflows using tools like Copilot Studio and integrating them into Copilot. The overall number of agent users doubled quarter-over-quarter. And just yesterday, we announced App Builder, a new Copilot agent that lets anyone create and deploy task-specific apps and agents in minutes grounded in Microsoft 365 context.

GitHub Copilot is the most popular AI pair programmer now with >26 million users; tens of thousands of developers at AMD use GitHub Copilot and they are saving months of developer time; GitHub now has 180 million developers, and is growing at its fastest rate ever; 80% of new developers start on GitHub with Copilot; GitHub Copilot had 500 million pull requests merged over the past year; management has released Agent HQ; management sees GitHub Copilot and Agent HQ as the organising layer for all coding agents

GitHub Copilot is the most popular AI pair programmer now with over 26 million users…

…Tens of thousands of developers at AMD use GitHub Copilot, accepting hundreds of thousands of lines of code suggestions each month and crediting it with saving months of development time…

GitHub is now home to over 180 million developers and the platform is growing at the fastest rate in its history, adding a developer every second. 80% of new developers on GitHub start with Copilot within the first week. Overall, the rise of AI coding agents is driving record usage with over 500 million pull requests merged over the past year.

And just yesterday, at GitHub Universe, we introduced Agent HQ. GitHub Copilot and Agent HQ is the organizing layer for all coding agents, extending GitHub primitives like PRs, issues, actions to coding agents from OpenAI, Anthropic, Google, Cognition, xAI as well as OSS and in-house models. GitHub now provides a single mission control to launch, manage and review these agents, each operating from its own branch with built-in controls, observability and governance.

Half of Microsoft’s cloud and AI-related capex in 2025 Q3 (FY2026 Q1) is for long-lived assets that will support monetisation over the next 15 years and more, while the other half is for CPUs and GPUs, driven by strong AI- and Azure-related demand; there is a difference between Microsoft’s total capital expenditure and cash expenditure because of the use of finance leases; Microsoft’s AI capital expenditure for CPUs and GPUs is backed by signed contracts and the useful lives of the GPUs are closely matched with the duration of the contracts; Microsoft’s AI capital expenditure for long-lived assets is not backed by contracts, but management is confident these assets will be useful over their lifespans; when building AI infrastructure, management’s priority is Microsoft’s internal workloads, such as Copilot and AI research

Capital expenditures were $34.9 billion, driven by growing demand for our Cloud and AI offerings. This quarter, roughly half of our spend was on short-lived assets, primarily GPUs and CPUs, to support increasing Azure platform demand, growing first-party apps and AI solutions, accelerating R&D by our product teams as well as continued replacement for end-of-life server and networking equipment. The remaining spend was for long-lived assets that will support monetization for the next 15 years and beyond, including $11.1 billion of finance leases that are primarily for large data center sites. And cash paid for PP&E was $19.4 billion. As a reminder, the difference between total CapEx and cash paid for PP&E is primarily due to finance leases as well as the normal timing of goods received, but not yet paid…

…Increasingly, we’ve talked about these short-lived assets, both GPUs and CPUs, and about all these workloads using both in terms of app building. Now when that happens, short-lived assets are generally bought to match the duration of the contracts or the duration of your expectation of those contracts. And so I sometimes think when people think about risk, they’re not realizing that the lifetimes of these assets and the lifetimes of the contracts are very similar. And so when you think about the revenue and the bookings coming on the balance sheet, and the depreciation of short-lived assets, they’re actually quite matched, Mark…

… We’re continuing to do that also using leases. Those are very long-lived assets, as we’ve talked about 15 to 20 years. And over that period of time, do I have confidence that we’ll need to use all of that, it is very high…

…Because when you think about real priorities that you have to fill first, it’s obviously the increasing usage, adoption and sales we’ve seen of M365 Copilot and the usage of Copilot chat, where we’ve seen very different patterns, which we’re encouraged by. It’s the adoption of security features. It’s the GitHub momentum. And so when you’re thinking about it, that is where it is a priority for us to allocate resourcing first. And so you are right to ask how do I think about that. We’ve worked very hard to try to mitigate it as best we can, but we have been short in Azure, and we’ve been clear on it. And I would say the other 2 priorities that I haven’t mentioned as much before are making sure our product teams and the AI talent that we’ve been able to hire into the company over the past 1.5 years also have access to significant capacity, because we’re seeing it make the product better in a loop that is adding great benefit into products people are using today for real-world work. And so we are making that a priority for our research teams as well as our product engineering teams. And yes, it does impact Azure directly. That is the place where you see that prioritization. But I think it’s probably hard for me to give an exact number, but it is safe to say that the number could be higher.

Azure grew revenue by 40% in 2025 Q3 (FY2026 Q1) (was 39% in 2025 Q2); Azure’s core infrastructure business had better than expected growth; Azure’s AI services revenue was in line with expectations; Azure was capacity-constrained in 2025 Q3 (FY2026 Q1) despite bringing more capacity online; management expects Azure to be capacity-constrained through at least FY2026; management will continue to balance capacity additions between Azure’s revenue growth and Microsoft’s internal needs for compute; the demand signals that management is seeing are accelerating faster than they expected; management is seeing demand increasing across many places and is investing in capacity with confidence in usage patterns and in bookings

In Azure and other Cloud services, where we continue to see accelerating demand, revenue grew 40% and 39% in constant currency. Results were ahead of expectations, driven by better-than-expected growth in our core infrastructure business, primarily from our largest customers. Azure AI services revenue was generally in line with expectations, and this quarter, demand again exceeded supply across workloads, even as we brought more capacity online…

…In Azure, we expect Q2 revenue growth of approximately 37% in constant currency as demand remains significantly ahead of the capacity we have available. And while we’re accelerating the amount of capacity we’re bringing online, we will continue to balance Azure revenue growth with the growing needs across our first-party apps and AI solutions, our own R&D efforts and the end-of-life server replacements. Therefore, we now expect to be capacity constrained through at least the end of our fiscal year…

…Demand signals across bookings, RPO and product usage are accelerating faster than we expected. We’re investing in infrastructure, AI talent and product innovation to capture that momentum and expand our leadership position…

…Demand is increasing. It is not increasing in just one place. It is increasing across many places. We’re seeing usage increases in products. We are seeing new products launch that are getting increasing usage, and increasing usage very quickly. When people see real value, they actually commit real usage. And I sometimes think this is where this cycle needs to be thought through completely is that when you see these kind of demand signals and we know we’re behind, we do need to spend. But we’re spending with a different amount of confidence in usage patterns and in bookings, and I feel very good about that. 

Azure is expected to grow revenue by 37% in 2025 Q4 (FY2026 Q2) in constant currency, driven by demand that remains significantly ahead of capacity; management now expects capital expenditure in FY2026 to have a higher growth rate than in FY2025 (previous guidance was for capital expenditure growth in FY2026 to moderate from FY2025’s level) because of an increase in spend on GPUs and CPUs

For Intelligent Cloud, we expect revenue of USD 32.25 billion to USD 32.55 billion or growth of 26% to 27%. In Azure, we expect Q2 revenue growth of approximately 37% in constant currency as demand remains significantly ahead of the capacity we have available… As a reminder, there can be quarterly variability in the year-on-year growth rates depending on the timing of capacity delivery and when it comes online as well as from in-period revenue recognition depending on the mix of contracts…

…Capital expenditures. With accelerating demand and a growing RPO balance, we’re increasing our spend on GPUs and CPUs. Therefore, total spend will increase sequentially, and we now expect the FY ’26 growth rate to be higher than FY ’25. 

Microsoft’s management thinks AI models, even as they become more powerful over time, will have spiky intelligence (being really good in only certain areas), and software systems such as GitHub Agent HQ, M365 Copilot, or Azure AI Foundry will be needed to smooth out the spikiness

I think your question touches on something that’s pretty important, which is how are these AI systems going to truly be deployed in the real world and make a real difference and make a return for both the customers who are deploying them and then obviously, the providers of these systems. And I think the best way to characterize the situation is that even as the intelligence capability increases, let’s even say, exponentially like model version over model version, the problem is it’s always going to still be jagged, right? I think the term people use is the jagged intelligence, even — or spiky intelligence, right? 

So you may even have a capability that’s fantastic at a particular task, but it may not uniformly grow. So what is required is in fact, these systems, whether it is GitHub Agent HQ or the M365 Copilot system. Don’t think of this as a product. Think of it as a system that in some sense smooths out those jagged edges, and really helps the capability…

…If I am in M365 Copilot, I can generate an Excel spreadsheet. The good news is now an Excel spreadsheet does understand Office JS, has the formulas in it. It feels like, wow, it is a great spreadsheet created by a good model. The more interesting thing is I can go into agent mode in Excel and iterate on that model. And yet, it will stay on rail. It won’t go off rail, it will be able to do the iteration. Then I can even give it to the analyst agent, and then it will even make sense of it like a data analyst would of our Excel model. The reason I say all of that is because that’s the type of construction that will be needed even when the model is magical, all powerful. I think we will be in this jagged intelligence phase for a long time. So one of the fundamental things that these — whether it’s GitHub, whether it’s security, whether it’s M365, the 3 main domains we’re in, we feel very, very good about building these as organizing layers for agents to help customers.

And by the way, that’s the same thing that we want to put into Foundry for our third-party customers. So that’s kind of how people will build these multi-agent systems.

Microsoft’s management believes that AI software can grow the overall revenue-pie for Microsoft, in a similar manner as how cloud computing expanded the overall server market

I should also say, one of the things I like about Copilot is Copilot ARPU compared to M365 ARPUs, right? It’s expansive. The same thing happened between server and cloud: we used to always ask, is it zero-sum, and it turned out that the cloud was so much more expansive than the server market. The same thing is happening in AI, because first, you could say, hey, our ARPUs are too low when it comes to M365, or you could say we have the opportunity with AI to be much more expansive. Same thing with tools, right? The tools business was not a leading business, whereas the coding business is going to be one of the most expansive AI systems. And so we feel very good about being in that category.

To deal with customer-concentration risk from OpenAI, in the event OpenAI cannot follow through on its spending commitments, management is (1) building fungible data centers that can serve a broad base of customers, including Microsoft itself, (2) only selectively building out data centers for OpenAI, and (3) relying on internal demand for AI infrastructure, such as Copilot; management walked away from building certain capacity for OpenAI (which Oracle won the contract for) because they wanted to avoid customer concentration, and they did not want to build capacity that was specific to only one company

[Question] We seem to be entering into a new era where the contractual commitments from a small number of AI natives are just incredibly large, not only in absolute terms, but sometimes relative to the size of the companies themselves. For instance, contracts worth hundreds of billions of dollars that are 20x their current revenue scale. Philosophically, how do you evaluate the ability of those companies to follow through on these commitments?

[Answer] It’s great to have hit first-party apps in the beginning because you can build scale that is fungible, and that’s the key. You don’t want to build for a digital native as if you’re just doing hosting for them. That’s where, I think, some of our decision-making is probably getting better understood: what we say yes to, what we say no to. I think there was a lot of confusion; hopefully by now, anyone who is switched on would figure this out. And so that’s, I think, one thing we’re doing on the third party. But first party is probably where a lot of our leverage comes from, and it’s not even about one hit app on our first party. Our portfolio of stuff, which I just walked through in the earlier answer, gives us, again, the confidence that between that mix, we will be able to use our fleet to the maximum. And remember, these assets, especially the data centers and so on, are long assets, right? There will be many refresh cycles for any one of these when it comes to the gear. So I feel that once you think about all those dimensions, the concentration risk gets mitigated by being thoughtful about how you really ensure the build is for the broad customer base…

…When you think about concentration risk or delivering to any customer, you have to remember that because we’re talking about this very large flexible fleet that can be used for anyone and for any purpose, 1P, 3P, and including our commercial cloud, by the way, which I should be quite clear on, it is pretty flexible in every regard…

…[Question] There’s talk that another hyperscaler came in and took away the business that was rightfully Microsoft’s. I’m sure that there is a different point of view here. I’m wondering if you could offer some perspective.

[Answer] It just always goes back, I think, to the core principle, which is to build a fleet that is fungible across the planet and works for third party and first party and research. So that’s essentially what we have done. And so when some demand comes in a shape that doesn’t fit that goal, where it’s too concentrated, not just by customer, but by location, by type of SKU, right? I think Amy mentioned some very key things. When you think about the margin profile of a hyperscaler, you’ve got to remember there is the AI accelerator piece, but there’s compute, there’s storage. And so if all of the demand just comes for just one [ meter ], that’s really not a long-term business we want to be in, even from a third party. We have to balance it with all of our first-party stuff because that’s, after all, a different margin stack for us. And then we have to fund our own R&D and model capability because, in the long run, that’s what’s going to differentiate us. And so I look at all of those. We use all of that to make sure we are saying yes to all the demand that we want, and we say no to some demand that we could serve but that is not in our long-term interest. And so that’s the decision-making we have done, and we feel very, very good about the decisions. In some sense, each time we say no, the day after, I feel better.

Netflix (NASDAQ: NFLX)

Netflix has been using ML (machine learning) and AI (artificial intelligence) for years to recommend titles to viewers; management thinks Netflix’s data, products, and business processes give the company a great position to leverage AI; Netflix is beta-testing a conversational search experience for titles that is powered by GenAI (generative AI); Netflix is using GenAI to localise promotional assets; Netflix productions are starting to use GenAI tools when creating content; management has created guidelines for content producers when using AI tools; Netflix is using AI to test new ad formats

For many years now, ML and AI have been powering our title recommendations as well as production and promotion technology. Given our significant data assets and at-scale products and business processes, we are very well positioned to effectively leverage ongoing advances in AI…

…We’re leveraging GenAI to further enhance the member experience by improving the quality of our recommendations and content discovery features. One example is our beta testing of a conversational search experience that allows members to use natural language to explore the catalog and discover the perfect title for that moment. Another is the way we’re using GenAI to localize promotional assets in a variety of languages so titles can more easily travel to audiences who will love them around the globe…

…For example, in Happy Gilmore 2, filmmakers used GenAI coupled with ML and Eyeline’s proprietary volumetric capture technologies to de-age characters during the opening flashback scene. And the producers of Billionaires’ Bunker used various GenAI tools during pre-production, including for pre-visualization to explore wardrobe and set designs. To help our creative partners use these new technologies responsibly, we recently released production guidance for creators…

…In Q4, we are using AI to test new ad formats, to generate the most relevant ad creative and placement for members, and for faster development of media plans. With these advancements, we’ll be able to test, iterate, and innovate on dozens of ad formats by 2026. 

Netflix’s management thinks that video-generating AI apps such as Sora will mostly impact UGC (user-generated content) platforms in the near term; management thinks AI will mostly help great story-tellers better tell their stories, but it will not make lousy story-tellers great, just like how listeners still gravitate largely towards human-created music rather than AI-created music

[Question] What are your thoughts on the impact from Sora 2 and other new AI content creation apps in terms of increased competition from short-form video, do you think it creates new competition from an engagement standpoint?

[Answer] What we’ve seen so far from these content creation apps is that they’re likely to have the most impact on UGC creators in the near term. In other words, AI content replacing viewing of existing user-generated content, that starts to make sense. Beyond that, it takes a great artist to make something great. Writing and making shows and films well is a rare commodity, and it’s only done successfully by very few people. So AI can give creatives better tools to enhance the overall TV and movie experience for our members. But it doesn’t automatically make you a great storyteller if you’re not. If music is a leading indicator of all this, AI-generated music has been around for a long time, and there’s a lot of it. But it’s a pretty small part of total listening, and established artists like Taylor Swift continue to be more popular than ever. So even in a world filled with AI music, AI seems to be mostly a tool for musicians to take their sound in new directions. And so we’re confident that AI is going to help us and help our creative partners tell stories better, faster and in new ways; we’re all in on that. But we’re not chasing novelty for novelty’s sake here, and we’re investing in what we believe delivers value for creators and members alike. So we’re not worried about AI replacing creativity, but we’re very excited about AI creating tools to help creativity.

PayPal (NASDAQ: PYPL)

PayPal has partnerships with Perplexity, Google, and OpenAI for agentic commerce; PayPal has its own agentic commerce service where they can access consumers through multiple LLMs (large language models) with one integration; management thinks agentic commerce will take time but that consumer behaviour will shift; the presence of agentic commerce has not changed any of PayPal’s priorities; management thinks PayPal is well positioned to win in payments for agentic commerce from the merchant perspective (with the agentic commerce service), the consumer perspective (with the largest wallet ecosystems), and the LLM perspective (it would take a long time for LLMs to build the merchant ecosystem that PayPal has already built); some investment from PayPal would be needed for the agentic commerce partnerships

We continue to partner with leaders across the agentic space, including Perplexity earlier this year. And in September, we announced our expansive multiyear partnership with Google to create new AI shopping experiences. This morning, we announced a significant partnership with OpenAI to expand payments and commerce in ChatGPT, including adding PayPal branded checkout for shoppers and payment processing for merchants using Instant Checkout. This is a big win for PayPal and our customers. Today, we also announced our own agentic commerce services, which help merchants sell through multiple AI platforms, including Google, OpenAI and Perplexity. Merchants will have one integration to access consumers through multiple LLMs. Agentic commerce will take time, but we do believe consumer behavior will shift. PayPal is building for that future…

…Our strategy we’ve laid out very clearly is that we want PayPal to be available anywhere and everywhere that consumers want to pay. And we want merchants to be able to sell to consumers anywhere and everywhere. And we’ve talked about this even back at Investor Day where we laid out we want it to be online. We want it to be in-person and we want it to be agentic. And so agentic is just an evolution of this strategy…

…We actually think we’re extremely well positioned to win here. Let me just lay out a couple of the different components. So first, on the merchant side, merchants are going to need to figure out how to integrate with each of these LLMs. And that’s hard because there’s multiple LLMs that are out there. And whether you’re a large enterprise or a small business, you really don’t have the bandwidth to go figure out how to integrate with each and every one of these LLMs, make your catalog available, understand the identity and fraud protection that comes with each of these different elements. And so what we announced today was our PayPal agentic commerce services… We give them seller protection. We give them the ability to scale across all the different LLMs.

From the consumer standpoint, we’re, again, very well positioned. We’ve got the largest wallet ecosystems that are out there and our ability to give consumers the trust, the safety, the buyer protection and the ability to get access and make purchases on any of the LLMs they want to is a huge win. They get to use the wallet that they know and love and have a great end-to-end experience, which includes not only the purchase through the LLM, but also then all the things that happen afterwards, whether it’s package tracking or customer service or returns. So that’s again, a big win for consumers…

…For the LLMs themselves, it would take over a decade if they wanted to go and try to build the same kind of merchant ecosystem of the head, the torso and tail of merchants that PayPal has established over the last couple of decades. And so instead, they get to partner once with us and get access to tens of millions of merchants with identity, authentication, fraud protection and payment processing on a global scale…

…These partnerships do entail some level of investment, whether that’s in product and tech or around co-marketing, things that really drive usage and habituation around the product. And I mentioned in my prepared remarks that we would begin reinvesting some of our margin dollars in the fourth quarter to really amplify some of our product initiatives. Between the push into agentic and some of those other investments, they are likely to be a near-term headwind to how fast TM dollars or earnings grow next year.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

Demand from AI continues to be very strong and management wants to invest to support TSMC’s customers’ growth; management now expects capex for 2025 to be US$40 billion to US$42 billion, a narrower range than the previous expectation of US$38 billion to US$42 billion (2024’s capex was US$29.8 billion); most of the capex for 2025 will be for advanced process technologies; TSMC’s capital expenditure is always in anticipation of growth in future years

As the structural AI-related demand continues to be very strong, we continue to invest to support our customers’ growth. We are narrowing the range of our 2025 CapEx to be between USD 40 billion and USD 42 billion as compared to USD 38 billion to USD 42 billion previously. About 70% of the capital budget will be allocated for advanced process technologies, about 10% to 20% will be spent for specialty technologies, and about 10% to 20% will be spent for advanced packaging, testing, mask making and others.

At TSMC, a higher level of capital expenditures is always correlated with higher growth opportunities in the following years…

TSMC’s management thinks recent developments in the AI market are very positive; management sees explosive growth in token volume, and they think this shows increasing consumer AI model adoption and thus more leading-edge silicon demand; TSMC is using AI internally to improve productivity, and management thinks enterprise AI is another source of demand; management is seeing the emergence of sovereign demand for AI; management has received very strong demand signals from TSMC’s customers and the customers’ customers; management’s conviction in the AI megatrend is strengthening

Recent developments in the AI market continue to be very positive. The explosive growth in token volume demonstrates increasing consumer AI model adoption, which means more and more computation is needed, leading to more leading-edge silicon demand. Companies such as TSMC are leveraging AI internally to drive greater productivity and efficiency and to create more value. As such, enterprise AI is another source of demand. In addition, we continue to observe the rising emergence of sovereign AI. We are also happy to see a continued strong outlook from our customers. In addition, we directly received very strong signals from our customers’ customers, requesting capacity to support their business. Thus, our conviction in the AI megatrend is strengthening, and we believe the demand for semiconductors will continue to be very fundamental.

TSMC’s management is disciplined when planning for capacity; TSMC’s lead-time has now increased to 2-3 years because of heightened complexity in process technologies; management thinks TSMC has the deepest and widest look at demand in the semiconductor industry; when planning for AI capacity, management is talking to TSMC’s customers’ customers, which is different from past capacity-planning exercises for other platforms such as smartphones and PCs, where TSMC would talk to only its customers

To address a structural increase in the long-term market demand profile, TSMC employs a disciplined capacity planning system. Externally, we work closely with our customers and our customers’ customers to plan our capacity. We have more than 500 different customers across all the market segments. In addition, as process technology complexity increases, the engagement lead time with customers is now at least 2 to 3 years in advance. Therefore, we probably get the deepest and widest look possible in the industry…

…[Question] Now cloud AI is growing a lot faster than prior opportunities like smartphones and PCs, and I think the demand for cloud AI may also be harder to forecast. So I just wanted to get a bit more color from you: compared to the prior rounds of capacity expansions, what is TSMC doing differently versus before?

[Answer] I believe we are just in the early stage of AI application, so it is very hard to make the right forecast at this moment. What do we do differently? There’s a big difference because right now, we pay a lot of attention to our customers’ customers. We talk to and discuss with them and look at their applications, maybe in search engines or in social media applications. We talk with them and see how they view the AI application of those functions. And then we make a judgment about how AI is going to grow. So this is quite different. Before, we only talked to our customers and did an internal study. This is different.

TSMC’s A16 process technology is best suited for specific HPC (high-performance computing) products, which means it is best suited for AI-related workloads; A16 is scheduled for volume production in 2026 H2

We also introduced A16, featuring our best-in-class Super Power Rail, or SPR. A16 is best suited for specific HPC products with complex signal routes and dense power delivery networks.

TSMC’s management now sees the possibility of the revenue CAGR from AI accelerators in the five years ending 2029 to be higher than previous guidance of mid-40s percent because demand is “insane”

[Question] I think we gave a guidance of mid-40s data center AI growth CAGR earlier this year until 2029. Anything that you see which should kind of change that number?

[Answer] The demand actually continues to be very strong, even stronger than what we saw 3 months ago, okay? So in today’s situation, we have talked to customers and also to customers’ customers. The CAGR we previously announced is about mid-40s, but it is a little bit better than that. We will update you probably at the beginning of next year, when we have a clearer picture. Today, the numbers are insane.

TSMC’s management continues to see very strong demand for CoWoS (chip on wafer on substrate), driven by AI; management is working hard to narrow the gap between supply and demand for CoWoS; advanced packaging is already close to 10% of TSMC’s revenue

Talking about the CoWoS capacity, all I can say is that, continuing from 3 months ago, we are working very hard to narrow the gap between demand and supply. We are still working to increase the capacity in 2026. The real numbers, we will probably update you on next year. Today, all I want to say is that for everything AI-related, front-end and back-end capacity is very tight. We are working very hard to make sure that the gap will be narrowed…

…Advanced packaging revenue is approaching 10% and is a significant part of our revenue, and it’s important for our customers.

TSMC’s management thinks AI’s growth will still be very positive for TSMC even without access to the China market

I have confidence in my customers, both in graphics and in ASICs; they are all performing well. So even if the China market is not available, I still think AI’s growth will be very dramatic and, as I said, very positive. I have confidence in our customers’ performance, and they will continue to grow, and we will support them…

…[Question] So even with the immediate uncertainty from China for the time being, you are still confident that a mid-40s CAGR or even higher can be achieved in the coming years?

[Answer] You are right.

The amount of TSMC wafer content in a 1-gigawatt AI center differs from project to project

When customers say that for 1 gigawatt, they need to invest about $50 billion, how much TSMC wafer content is inside? We are not ready to share with you yet because it’s different for different projects…

…I just want to say that right now, it’s not only 1 chip. Actually, it’s many chip together to form a system, right?

It makes no difference to TSMC’s revenue and gross margin whether it’s helping its customers manufacture GPUs or ASICs (application specific integrated circuits) for AI

[Question] From a TSMC angle, does it matter whether it’s — that demand is coming through a GPU or an ASIC? Does it have an impact on your revenue or gross margin mix?

[Answer] Whether it’s a GPU or it’s an ASIC, it’s all using our leading-edge technologies. And from our perspective, we are working with our customers, and we all know that they are going to grow strongly in the next several years. So no differentiation in front of TSMC. We support all kinds of types.

Tesla (NASDAQ: TSLA)

Tesla’s management thinks Tesla is the leader in real-world AI; management thinks Tesla vehicles have the highest intelligence density of any car; there’s no other company apart from Tesla that is designing AI chips as well as vehicles

I think it’s important to emphasize that Tesla really is the leader in real-world AI. No one can do what we can do with real-world AI. I have pretty good insight into AI in general. I think that Tesla has the highest intelligence density of any AI out there in the car, and that is only going to get better…

…I don’t think there really is anyone else that’s doing this — the entire stack all the way through the real world — kind of calibrating against the real world where you’ve got cars and robots in the real world, where we know what the chip needs to do, and, just as importantly, we know what the chip doesn’t need to do…

…Obviously, you can do reasoning on the server, that takes whatever. But then in a car, you need to make real-time decisions. So putting all that into the computer that’s in the car, that’s the challenge…

…I’m confident in saying that Tesla — Tesla AI has the highest intelligence density. When you look at the intelligence per gigabyte, I think Tesla AI is probably an order of magnitude better than anyone else. And it doesn’t have any choice because that AI has got to fit in the AI4 computer.

Millions of existing Tesla vehicles can become fully autonomous with a software update; management now has clarity on achieving full autonomy; version 14 (v14) of Tesla’s FSD software is broadly available now and current users have been amazed by it; Tesla’s Robotaxi service is now operating in 2 markets; Robotaxi’s coverage area in Austin has expanded by 3x since the initial launch; management thinks Tesla’s Robotaxi fleet blends in with other vehicles, unlike those of its competitors with many extra sensors; management thinks that demand for Tesla vehicles will expand significantly as people experience FSD at scale; FSD’s adoption is making decent progress, with the total paid FSD customer base being 12% of the current fleet; Tesla groups Robotaxi’s costs within the Services and Other revenue line; management expects to have no safety drivers for Robotaxi in large parts of Austin by end-2025, even when operating with an abundance of caution; management expects Robotaxi to be in 8-10 metro areas by end-2025; management expects Robotaxi to be in Nevada, Florida, and Arizona by end-2025; Robotaxis in Austin without anyone in the driver seat have covered more than 0.25 million miles; Robotaxis in the Bay Area have crossed more than 1 million miles; customers are happy with Robotaxi and there are no notable issues; total miles driven by supervised FSD has crossed 6 billion and overall safety remains excellent; Tesla will be working on a V14-light version of the FSD software that is compatible with Hardware 3; a big reason why autonomy is safer than human driving is that a large part of human driving accidents are caused by texting-while-driving; the autonomous driving software shipped to customers and Robotaxi are very similar; updated editions of V14 FSD will have reasoning capabilities

We have millions of cars out there that with a software update become full self-driving cars…

…We see now as a clarity on achieving full self-driving, unsupervised full self-driving…

…With version 14 of the — of self-driving, which people — you can see the reactions of people online. They’re quite amazed. Actually, anyone in the U.S. can get version 14 if they just go and select, I want the advanced software in their car. So if you’re listening right now and you’d like to try it out, just go in Settings and say, I want the advanced software, and you will get version 14…

…We’re now operating our Robotaxi in 2 markets, Austin and most Bay Area cities. We’ve already expanded our coverage area in Austin 3x since the initial launch and are on pace to continue expanding further.

Unlike our competitors, our Robotaxi fleet blends in the markets we operate in since they don’t have extra sensor sets or peripherals, which make them stick out. This is an underappreciated aspect of our current vehicle offerings, which are all designed for autonomous driving.

We feel that as experience — as people experience the supervised FSD at scale, the demand for our vehicles, like Elon said, would increase significantly.

On the FSD adoption front, we’ve continued to see decent progress. However, note that total paid FSD customer base is still small, around 12% of our current fleet. We’re moving — we’re working with regulators in places like China and EMEA to obtain approvals so that we can get FSD in those regions as well…

…Note that while small, our Robotaxi costs are included within Services and Other, along with our other businesses like paid supercharging, used car, parts and merchandise sales, et cetera…

…We are expecting to have no safety drivers in at least large parts of Austin by the end of this year. So within a few months, we expect to have no safety drivers at all at least in parts of Austin. We’re obviously being very cautious about the deployment. So our goal is to be actually paranoid about deployment because obviously, even one accident will be front page headline news worldwide. So it’s better for us to take a cautious approach here. But we do expect to have no safety drivers in the car in Austin within a few months. I think that’s perhaps the most important data point.

And then we do expect to be operating Robotaxi in, I think, about 8 to 10 metro areas by the end of the year. It depends on various regulatory approvals…

…We expect to be operating in Nevada and Florida and Arizona by the end of the year…

…We continue to operate our fleet in Austin without anyone in the driver seat, and we have covered more than 0.25 million miles with that. And then in the Bay Area, where we still have a person in the driver seat because of the regulations, we crossed more than 1 million miles. So — and we continue to see that the fleet — Robotaxi fleet works really well. Customers are really happy, and there’s no notable issues…

…Customers have used FSD supervised for a total of 6 billion miles as of yesterday. So that’s like a big milestone. And overall, the safety continues to be very good…

…Once the V14 release series is fully done, we are planning on working on a V14 light version for Hardware 3 probably expected in Q2 next year…

…The reason you’ve seen like there’s been an uptick in accidents pretty much worldwide is because people are texting and driving. So Autopilot actually dramatically improves the safety here because if somebody is looking down their phone, they’re not driving very well. So that’s really the game changer…

…In terms of like what we ship to customers versus Robotaxi, it’s mostly the same. Obviously, customers have some more features like they can choose the car wants to park in a spot or drive something like that, which is not super relevant for Robotaxi. But there’s only a few minor changes like those ones. But the majority of the algorithms and the architecture, everything is the same between those 2 platforms…

…We’ll be adding reasoning to — I don’t know, Ashok, is that like reasoning in like 14.3, maybe 14.4, something like that?… Yes, by end of this year for sure.

Tesla’s management is still very optimistic about the potential of Tesla’s Optimus autonomous robot; Tesla will unveil Optimus V3 in 2026 Q1; most of the real-world AI Tesla has developed for fully autonomous driving can be transferred to Optimus; management thinks Optimus can be a great surgeon; management thinks bringing Optimus to market is incredibly difficult; Optimus robots are already walking around Tesla’s offices; it’s really difficult engineering-wise to create the hands and fingers of Optimus that can mimic human hands and fingers; it’s hard to manufacture Optimus at scale because the supply chain currently does not exist, so Tesla has had to be very vertically integrated and manufacture very deep into the supply chain; management thinks Tesla is uniquely positioned to win in autonomous robots because success in autonomous robots depends on 3 things, namely, scaled manufacturing technology, real-world AI, and a dextrous hand, and Tesla is the only company that can achieve all 3; management thinks Optimus can be 5x more productive than humans; many of the people working on Optimus in Tesla now were working on Tesla vehicles in the past; Optimus’s management reviews involve a tight loop between manufacturing and engineering design so that the overall manufacturing processes for Optimus can be good; Optimus 2 was almost impossible to manufacture; Tesla will have rolling changes for the Optimus design even after start of production

We’re also on the cusp of something really tremendous with Optimus, which I think is likely to be or has potential to be the biggest product of all time…

…We look forward to unveiling Optimus V3 probably in Q1. I think it will be ready for — to show off…

…The real-world intelligence we’ve developed for the car, most of that transfers to Optimus. So it’s a very good starting point…

…Optimus will be an incredible surgeon, for example, I imagine everyone had access to an incredible surgeon…

…Bringing Optimus to market is an incredibly difficult task to be clear…

…We do have Optimus robots that walk around our offices at our engineering headquarters in Palo Alto, California, basically 24 hours a day, 7 days a week. So any visitors that come by, you actually — you can stop one of the Optimus robots and ask it to take you somewhere, and it will literally take you to that meeting room or that location in the building…

…It’s difficult to create a hand that is as dextrous and capable as the human hand, which is an incredible — the human hand is an incredible thing that the more you study the human hand, the more incredible you realize the human hand is and why you need 5 — 4 fingers and a thumb, why the fingers have certain degrees of freedom, why the various muscles are of different strengths, the fingers are of different lengths. And it turns out actually that those are all there for a reason. And so making the hand and forearm, because most of the actuator — just like the human hand, the muscles that control your hand are actually primarily in your forearm. The Optimus hand and forearm is an incredibly difficult engineering challenge. I’d say it’s more difficult than the rest of — from an electromechanical standpoint, the forearm and hand is more difficult than the entire rest of the robot…

…Trying to make 1 million Optimus robots per year, that manufacturing challenge is immense, considering that the supply chain doesn’t exist. So with cars, you’ve got an existing supply chain. With computers, you’ve got an existing supply chain. With a humanoid robot, there is no supply chain. So in order to manufacture that, Tesla actually has to be very vertically integrated and manufacture very deep into the supply chain, manufacture the parts internally because there just is no supply chain…

…If I put myself in the position of a start-up trying to make a humanoid robot, I’m like, I don’t know how to do it without an immense amount of manufacturing technology. So — that’s why I think like Tesla is in almost a unique — I think a unique position when you consider manufacturing technology scaling, real-world AI and a truly dextrous hand. Those are generally the things that are missing when you read about other robots that just don’t have those 3 things. So I think we can achieve all those things — those 3 things with an immense amount of work. And that is the game plan…

…Optimus at scale is the infinite money glitch. It’s like this is — it’s difficult to express the magnitude of — like if you’ve got something like that — like if Optimus, I think, probably achieve 5x the productivity of a person per year because it can operate 24/7, it doesn’t even need to charge. It can operate tethered. So it’s plugged in the whole time…

…4-plus years back, we were in a finance meeting with Elon and Elon said, hey, our car is a robot on wheels. And that’s where we started developing. In fact, most of the engineering team, which is working on Optimus has come from the vehicle side. And that’s why when we talk about manufacturing prowess, we have the wherewithal because the same engineers who worked back in the day on drive units are working on actuators now. So that’s where we can — if there is any company which can do it at scale, that is going to be us…

…The Optimus reviews at this point are there’s the engineering review and then there’s the manufacturing review being done simultaneously with an iterative loop between engineering design and manufacturing because then we see — we design something and we say like, oh man, that’s really difficult to make. We need to change that design to make it easier to manufacture. So we’ve made radical improvements to the design of Optimus while increasing the functionality but making it actually possible to manufacture. 

Like I’d say, Optimus 2 is almost impossible to manufacture, frankly…

…The hardware design will not actually be frozen even through start of production. There will be continued iteration because a bunch of the things that you discover are very difficult to make. You only find that pretty late in the game. So we’ll be doing rolling changes for the Optimus design even after start of production.

Tesla’s AI4 chip is manufactured by Samsung; Tesla is going to manufacture its AI5 chip with both TSMC and Samsung; the AI5 chip has 40x better performance than AI4 because Tesla designed the hardware to address all the pain points in software; the AI5 chip deleted a lot of components that were in the AI4 chip and this has greatly improved the performance of the AI5 chip; management thinks Samsung’s US fab is slightly more advanced than TSMC’s US fab; management wants to have an oversupply of AI5 chips because the chips that do not go into vehicles and Optimus can be used for Tesla’s data centers; Tesla uses a combination of its AI-series chips and NVIDIA chips for AI training; Tesla is not looking to replace NVIDIA, but management notes that NVIDIA’s chips need to accommodate a wide range of use cases, which disadvantages it against Tesla’s self-designed chips which need to accommodate only Tesla’s use cases; management thinks Tesla’s AI5 chip will have the best performance per watt and best performance per dollar for AI

Samsung is worth noting, does manufacture our AI4 computer and does a great job doing that. So now with the AI5, and here’s I need to make a point of clarification relative to some comments I’ve made publicly before, which is we’re actually going to focus both TSMC and Samsung initially on AI5…

…By some metrics, the AI5 chip will be 40x better than the AI4 chip, not 40%, 40x because we have a detailed understanding of the entire software and hardware stack. So we’re designing the hardware to address all of the pain points in software…

…With the AI5, we deleted the legacy GPU or the traditional GPU, which is — it’s in AI4. But AI5 does not have — we just deleted the legacy GPU because it basically is a GPU. So we also deleted the image signal processor. And there’s like a long list actually of deletions that are very important. As a result of these deletions, we can actually fit AI5 in a half reticle and with good margin for the traces from the memory to the Tesla Trip accelerators, the ARM CPU cores and the PCI-X sort of the PCI blocks. So this is a beautiful chip. I’ve poured so much life energy into this chip personally. And I’m confident this will be — this is going to be a winner next level…

…Technically, the Samsung fab has slightly more advanced equipment than the TSMC fab. These will both be made in the U.S., one — TSMC in Arizona, Samsung in Texas…

…Our goal — explicit goal is to have an oversupply of AI5 chips because if we have too many AI5 chips for the cars and robots, we can always put them in the data center…

…We already use AI for training in our data centers. So we use a combination of AI5 and NVIDIA hardware. So we’re not about to replace NVIDIA to be clear, but we do use both in combination, AI4 and NVIDIA hardware. And the AI5 excess production, we can always put in our data centers…

…The challenge that they have is that they’ve got to satisfy a large range — a lot of requirements from a lot of customers, but Tesla only has to satisfy requirements from one customer, that’s Tesla. That makes the design job radically easier and means we can delete a lot of complexity from the chip. Like I can’t emphasize how important this is. So like when you look at the various logic blocks in the chip, as you increase the number of logic blocks, you also increase the interconnections between the logic blocks. So you can think of it like there’s highways, like how many highways do you need to connect the various parts of the chip. And especially if you’re not sure how much data is going to go between each logic block on the chip, then you kind of end up having giant highways going all over the place. It’s a very — like it becomes an almost impossibly difficult design problem. And NVIDIA has done an amazing job of dealing with almost an impossibly difficult set of requirements. But in our case, we’re going for radical simplicity…

…I think AI5 will be the best performance per watt, maybe by a factor of 2 or 3 and the best performance per dollar for AI, maybe by a factor of 10.
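The “highways between logic blocks” analogy in the quote above is, at heart, the quadratic growth of pairwise connections. A minimal sketch of that arithmetic (the function name is mine, for illustration only):

```python
# Worst-case wiring problem: if any logic block on a chip may need to
# exchange data with any other, the number of potential point-to-point
# interconnect paths grows quadratically with the block count.

def pairwise_links(num_blocks: int) -> int:
    """Number of distinct block-to-block links among num_blocks blocks."""
    return num_blocks * (num_blocks - 1) // 2

for n in (4, 8, 16, 32):
    print(n, pairwise_links(n))
```

Doubling the block count roughly quadruples the potential wiring, which is why deleting blocks (as described for AI5) simplifies the design so disproportionately.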

Tesla has a world simulator for reinforcement learning for autonomous driving that is indistinguishable from actual video; Tesla will be increasing the parameter count for its autonomous driving AI model by an order of magnitude

Our world simulator for reinforcement learning is pretty incredible, like — when you see the Tesla Reality Simulator, it’s — you can’t tell a difference between the video that’s generated by the Tesla Reality Simulator and the actual video, it looks exactly the same. So that allows us to have a very powerful reinforcement learning loop to further improve the Tesla AI.

We’re going to be increasing the parameter count by an order of magnitude. That’s not in 14.1. There are also a number of other improvements to the AI that are quite radical. So this car will feel like it is a living creature. That’s how good the AI will get with the AI4 computer just before AI5.

Tesla’s management thinks Tesla vehicles can become a giant distributed AI inference fleet

We could actually have a giant distributed inference fleet and say like, well, if they’re not actively driving, let’s just have a giant distributed inference fleet. At some point, if you’ve got like tens of millions of cars in the fleet or maybe at some point, 100 million cars in the fleet, and let’s say they had at that point, I don’t know, a kilowatt of inference capability of high-performance inference capability, that’s 100 gigawatts of inference distributed with power and cooling — with cooling and power conversion taken care of. So that seems like a pretty significant asset.
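The 100-gigawatt figure in the quote above is simple back-of-envelope arithmetic. A sketch, using management’s own illustrative assumptions (a 100-million-car fleet and roughly 1 kilowatt of inference capability per car, neither of which is a product specification):

```python
# Back-of-envelope arithmetic for the distributed-inference-fleet idea:
# total fleet inference capability, converting per-car kilowatts to gigawatts.

def fleet_inference_power_gw(num_cars: int, kw_per_car: float) -> float:
    """Total inference capability of an idle fleet, in gigawatts."""
    return num_cars * kw_per_car / 1_000_000  # 1 GW = 1,000,000 kW

# 100 million cars, each with ~1 kW of inference capability:
print(fleet_inference_power_gw(100_000_000, 1.0))  # -> 100.0
```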

The AI models Tesla and xAI are developing are very different, with Tesla’s models being much smaller

So the xAI, Grok is like a giant model that you could not possibly squeeze Grok onto a car. That’s for sure. It is a giant beast of a model. It’s — with Grok is trying to solve for artificial general intelligence with a massive amount of AI training compute and inference compute. So for example, Grok 5 will actually only run effectively on a GB300. That’s how much of a beast that Grok 5 is. So — whereas Tesla’s models are, I don’t know, maybe about less than 10% of the size, maybe closer to 5% the size of Grok. So yes, they’re really coming at the problem from very different angles. xAI and Grok are — they’re competing with Google Gemini and OpenAI ChatGPT and that kind of thing. So — and some of it is complementary. I mean for example, for Grok voice, being able to interact with Grok in the car is cool. Grok — for Optimus voice recognition and voice generation is Grok. So that’s helpful there. But they are coming at it from kind of opposite ends of the spectrum.

Visa (NASDAQ: V)

Visa has begun deployment of the next generation of VisaNet, its core processing platform; more than half of the new code base for VisaNet was written with the help of generative AI

We have begun deployment of the next generation of VisaNet, the core processing platform in our Visa as a Service stack. It offers a cloud-ready micro services distributed modular architecture that uses open languages and technologies, enabling easier scaling, configuration and faster feature deployment. Over half of the new code base was built with the assistance of generative AI, improving development speed, security and maintainability. We have specific modules in market today with plans to roll out additional modules and markets.

The Visa Scam Disruption product detects scam activity at the network level and uses AI to monitor merchants; Visa Scam Disruption has been launched for only a year, but it has helped law enforcers to dismantle more than $1 billion in fraud attempts from 25,000 scam merchants

We continue to enhance our risk management capabilities, including Visa scam disruption, which proactively detects scam activity at the network level that no single issuer, acquirer or a merchant could see alone and leverages AI-enhanced merchant monitoring, external intelligence feeds and our global expertise. Just a year since launch, we have worked closely with our clients and law enforcement to dismantle more than 25,000 scam merchants representing more than $1 billion in fraud attempts.

Visa is now powering live agentic transactions; Visa recently released the Visa Trusted Agent Protocol to help merchants verify agents and avoid malicious bots; there is minimal integration required from merchants to utilise Visa Trusted Agent Protocol; Visa recently launched its MCP (model context protocol) server, which allows AI systems to interface with the Visa Intelligent Commerce APIs; management thinks Visa is leading in setting standards for agentic commerce; with Visa Intelligent Commerce, management has put out a set of capabilities for AI-ready cards to allow consumers to easily set spending limits and conditions for agentic transactions; the Visa Trusted Agent Protocol is an open standard; management thinks agentic commerce will accelerate adoption of traditional e-commerce and mobile commerce, and be a net positive for Visa in both the transactions-driven business and the value-added services business; management thinks there will be 3 phases to agentic commerce: (1) consumers using agents for discovery, then making purchases on merchant sites, (2) consumers using agents for discovery and making purchases with the agents, and (3) consumers empowering agents to search for things on their behalf and buy; management thinks Visa Trusted Agent Protocol can be the base layer for everyone in the agentic commerce ecosystem to leverage; see Point 32 for more on agentic commerce

I’m pleased to announce that we are now powering live agentic transactions and recently released a merchant agent toolkit to make it easy for developers to embed our solutions into workflows and agentic processes. Just 2 weeks ago, we announced the Visa Trusted Agent Protocol, a framework that enables safer agent-driven checkout by helping merchants verify agents and avoid malicious bots. And since it’s built on existing messaging standards, minimal integration is required for merchants…

…We recently launched our MCP server, providing access for AI systems to interface with our Visa Intelligent Commerce APIs…

…In this third wave of agentic commerce, we’ve been leading in terms of our role of setting the standards. I think one great example of that is Visa Intelligent Commerce, where we put out a set of capabilities for AI-ready cards, leveraging tokenization, AI-powered personalization, leveraging our data token service. We put out a set of standards with payment instructions that are going to allow customers like you and I to easily set spending limits and conditions to provide clear guidance for agent transactions and also our payment signals, which are going to share those data payloads in real time with Visa, enabling us to help set transaction controls, manage disputes and Chargebacks and those types of things…

…I think what differentiates the Visa Trusted Agent Protocol is 2 things. One is it’s open. It’s an open set of standards, and we think that an open framework is critical to drive mass adoption in the way that’s needed for agentic commerce. And the second is it’s easy to integrate. We built it on existing web infrastructure so that it’s going to be easy for merchants to integrate into existing messaging standards and get up and running quickly…

…[Question] To what extent do you see agentic as more of a substitute for traditional e-commerce versus being additive to the TAM of the overall payments industry?

[Answer] I think the base case is it continues to accelerate the adoption of e-commerce and mobile commerce as we all know it. I think there’s an upside case on that where you could actually see users buying from a much larger and more diverse set of merchants than they do today in traditional e-commerce given the power of these agents and their ability to go out and search the world’s inventory based on whatever it is that you prefer for your agent. That might be value. That might be price. That might be inventory. That might be speed of delivery and so on and so forth. I think that could ultimately result in consumers buying more things from more merchants, which ultimately means more transactions on Visa. I also think there’s a significant upside in the delivery and the relevance of our portfolio of value-added services for the entire ecosystem, especially as you said, they have to work through a number of things that involve potential fraud and disputes and chargebacks and things like that…

…It’s still early days. And I think what you’re likely to see in the evolution of agentic commerce is not different or dissimilar to what we saw in e-commerce. I think early on, you’re seeing consumers use these agents and these platforms for discovery. They’re shopping. They’re looking for what might be available for any given gift I’m trying to buy or any clothing item that I might try to buy. But then I might jump to the actual merchant site to make the purchase.

Then the next step of what you’re starting to see is the integration of the buy capabilities into that shopping journey. We’re just starting to see that in the marketplace today. We’ve been working on that for many, many months with the ecosystem.

And then I think the ultimate kind of user experience and the promise of agentic commerce will be truly empowering agents to go out to search for things on our behalf and ultimately make purchases and buy things without human intervention. That, we haven’t really seen in the marketplace today, but we’re working very hard with the platform players to ensure that the capabilities are in place to enable that…

…I think it’s where the Visa Trusted Agent Protocol can form a base layer for everyone to build on and everyone to ultimately leverage.
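The “spending limits and conditions” idea described above can be illustrated with a toy mandate check. This is a hypothetical sketch: the `AgentMandate` data model and `authorize` function are my own illustrative names, not Visa’s actual payment-instruction format, which is not detailed in the quotes.

```python
# Hypothetical sketch of a consumer-set mandate for an AI shopping agent:
# the consumer grants the agent a spending limit and a category scope,
# and each agent-initiated purchase is checked against that mandate.

from dataclasses import dataclass

@dataclass
class AgentMandate:
    max_amount_usd: float          # per-purchase spending limit
    allowed_categories: set[str]   # what the agent may buy

def authorize(mandate: AgentMandate, amount_usd: float, category: str) -> bool:
    """Approve an agent-initiated purchase only if it fits the mandate."""
    return (amount_usd <= mandate.max_amount_usd
            and category in mandate.allowed_categories)

mandate = AgentMandate(max_amount_usd=200.0,
                       allowed_categories={"groceries", "books"})
assert authorize(mandate, 45.0, "books")       # within limit and scope
assert not authorize(mandate, 500.0, "books")  # exceeds the spending limit
assert not authorize(mandate, 45.0, "watches") # outside allowed categories
```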

Visa Protect for A2A (Account-to-Account), which enables consumers to pay businesses directly from their bank accounts, is using AI to reduce fraud in Brazil; Visa Protect for A2A’s pilot in Brazil scored nearly $500 billion of Pix volume from Visa’s bank partner over a 6-month period and identified over $90 million of fraud; the fraud could have been prevented with a detection rate of more than 80% with Visa Protect for A2A

Our award-winning product, Visa Protect for A2A, is delivering value with AI. Our pilot in Brazil scored nearly $500 billion of our bank partner’s Pix volume over a 6-month period and identified over $90 million of fraud, which could have been prevented with a detection rate of more than 80%. We believe Visa Protect for A2A can play an important role in Brazil by providing real-time fraud monitoring on Pix, helping to reduce fraud for our bank partners and ensure a safer payment experience for buyers and sellers.
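The preventable-fraud arithmetic implied by the quote above is straightforward: applying the stated detection rate of more than 80% to the roughly $90 million of identified fraud gives at least ~$72 million preventable. A sketch (function name mine, for illustration):

```python
# Arithmetic behind the Visa Protect for A2A pilot figures quoted above:
# fraud that a given detection rate would have caught, in $ millions.

def preventable_fraud_usd_m(identified_usd_m: float, detection_rate: float) -> float:
    """Fraud (in $M) caught at the given detection rate."""
    return identified_usd_m * detection_rate

# >80% detection applied to the ~$90M of identified fraud:
print(preventable_fraud_usd_m(90.0, 0.80))  # -> 72.0, i.e. at least ~$72M
```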

Visa’s management thinks tokenisation is the critical building block for agentic commerce

Tokenization, I think, is the critical building block that ultimately will help Agentic commerce reach its promise. And if you go back — I know you asked about the Trusted Agent Protocol, but if you go back to the Visa Intelligent Commerce set of products and standards that we put out, tokenization as a platform is what enables the bulk of that functionality and ultimately is what’s going to enable us all to have safe, secure, trusted transactions with agents on our behalf. So tokenization, critical building block of that.
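The tokenization idea above can be made concrete with a toy example. This is a generic conceptual sketch, not Visa’s actual token service: the class and method names are mine, and the key point is only that the agent or merchant holds an opaque token instead of the real card number, and the token can be domain-restricted so it is useless elsewhere.

```python
# Toy token vault (conceptual illustration of payment tokenization):
# the merchant/agent never holds the real card number (PAN); it holds an
# opaque token that only the vault can map back, and each token is bound
# to a single merchant (a "domain restriction").

import secrets
from typing import Optional

class TokenVault:
    def __init__(self) -> None:
        self._vault: dict[str, tuple[str, str]] = {}  # token -> (pan, merchant)

    def tokenize(self, pan: str, allowed_merchant: str) -> str:
        token = secrets.token_hex(8)  # opaque; carries no card data itself
        self._vault[token] = (pan, allowed_merchant)
        return token

    def detokenize(self, token: str, merchant: str) -> Optional[str]:
        entry = self._vault.get(token)
        if entry is None:
            return None
        pan, allowed = entry
        # Domain restriction: the token only resolves for its bound merchant.
        return pan if merchant == allowed else None

vault = TokenVault()
tok = vault.tokenize("4111111111111111", allowed_merchant="shop.example")
assert vault.detokenize(tok, "shop.example") == "4111111111111111"
assert vault.detokenize(tok, "evil.example") is None  # useless if stolen
```

A stolen token in this scheme reveals nothing and spends nowhere else, which is the property that makes token-based agent transactions safer than sharing raw card credentials.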


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, Apple, ASML, Mastercard, MercadoLibre, Meta Platforms, Microsoft, Netflix, PayPal, TSMC, Tesla, and Visa. Holdings are subject to change at any time.

The View On Consumer Spending From The Largest Payments Companies

Mastercard and Visa can feel the pulse of consumer spending – what are they seeing now?

Mastercard (NYSE: MA) and Visa (NYSE: V) are two of the largest payments companies in the world. As a result, they have a great view on the consumer spending that’s taking place. With both companies reporting their earnings results for the third quarter of 2025 last week, the bottom line is that consumer spending remains strong in the USA and other parts of the world. Here’s what they are seeing.

*What’s shown in italics between the two horizontal lines below are quotes from Mastercard and Visa’s management teams that I picked up from their earnings conference calls.


From Mastercard

1. Management sees consumer and business spending remaining healthy, supported by steady inflation, a balanced labour market, wage growth, and rising financial markets, although there remains macro uncertainty; management remains positive about Mastercard’s growth outlook

We continue to see healthy consumer and business spending in the quarter with the macroeconomic environment still generally supportive. Inflation levels have remained fairly steady and labor markets remain well balanced. Financial markets were near record highs, further contributing to the wealth effect, which helps stimulate spend. Given this backdrop and our diversified business, we are positioned well for ongoing success…

…The macroeconomic environment is supportive with balanced unemployment rates, wage growth continuing to outpace the rate of inflation for the most part and the wealth effect remaining intact. That said, there continues to be some ongoing geopolitical and economic uncertainty.

2. Worldwide GDV (gross dollar volume) was up 9% year-on-year on a constant-currency basis; cross-border volume was up 15% globally in constant-currency, driven by both travel and non-travel cross-border spending (cross-border volume growth was 15% in 2025 Q2); switched transactions were up 10% year-on-year; card growth was 6% in 2025 Q3, with Mastercard ending the quarter with 3.6 billion cards in circulation (there were 3.6 billion cards in 2025 Q2, and year-on-year growth was 6% then); on a currency-neutral basis, domestic assessments were up 6%, cross-border assessments were up 16% and transaction processing assessments were up 15%

Let’s first look at some of our key volume drivers for the third quarter on a local currency basis. Worldwide gross dollar volume or GDV increased by 9% year-over-year. In the U.S., GDV increased by 7% with credit growth of 7% and debit growth of 7%. Outside of the U.S., volume increased 10% with credit growth of 10% and debit growth of 9%. Overall, cross-border volume increased 15% globally for the quarter, reflecting continued growth in both travel and non-travel related cross-border spending…

…Switched transactions grew 10% year-over-year in Q3…

…Card growth was 6%. Globally, there are 3.6 billion Mastercard and Maestro-branded cards issued…

…Again, all growth rates are described on a currency-neutral basis, unless otherwise noted. Looking quickly at each key metric. Domestic assessments were up 6%, while worldwide GDV grew 9%. The 3 ppt difference is primarily driven by mix. Cross-border assessments increased 16%, while cross-border volumes increased 15%. The 1 ppt difference is driven by pricing in international markets, partially offset by mix. Transaction processing assessments were up 15%, while switch transactions grew 10%. On an unrounded basis, the 4 ppt difference is primarily due to favorable mix as well as some benefit from pricing and revenue from FX volatility.

3. In 2025 Q3, Mastercard's operating metrics remained strong; in the first four weeks of October 2025, Mastercard's operating metrics continued to be strong, with worldwide switched volume growth of 9% (5% in the USA, and 12% outside the USA), switched transactions growth of 10%, and cross-border volume growth of 15%; US switched volume growth declined sequentially in October 2025 compared to 2025 Q3 and September 2025 (5% versus 8% and 7%) because of the expected migration of debit volume by Capital One and tougher comps from 2024's weather impacts; management sees consumer and business spending remaining healthy; management is seeing steady growth across both affluent and mass market consumers, although the composition of spend between discretionary and non-discretionary differs

Starting with Q3, all our switch metrics are generally in line with Q2 and remained strong. As we look to the first 4 weeks of October, our metrics continue to remain strong, generally in line with the third quarter. Of note, U.S. switched volumes saw a sequential decline, primarily due to the expected Capital One debit migration as well as some tougher comps related to weather impacts in 2024. Overall, we continue to see healthy consumer and business spending…

…When we do our analysis based on looks at the various products we have out in the market, which serve the affluent population versus the mass market population, as well as when we look at the amount of spend which is taking place across different categories of products that we have, what we're seeing is continued steady growth, both across affluent and mass market, true in the U.S., true across the globe. So overall, the consumer continues to spend…

…You can expect that consumers at different income levels make different decisions on their spend, discretionary versus non-discretionary. What matters for us is, it has to be carded and that plays in, and that adds up to the resilient trends that Sachin just talked about…

…When I was talking about the first 4 weeks of October on U.S. volumes, right? It’s certainly the Capital One piece as well as the lapping effect due to weather impacts we had in 2024. So, it’s a combination of both of those, which reflects on the 8% number that you’re seeing in Q3 going to 5%. But it’s important to also look at what the growth rate in September was, because 8% is the average across all of Q3. So, it’s kind of this step change, which takes place as cards migrate that you’re going to start to see the volume come down.

From Visa

1. US payments volume grew a healthy 8% in 2025 Q3 (FY2025 Q4), slightly above the previous quarter, with e-commerce growing faster than physical spend; credit and debit volumes were both up 8%, reflecting a resilient consumer; growth across consumer spend bands remained relatively consistent with the previous quarter, with the highest spend band continuing to grow the fastest

U.S. payments volume was up 8%, slightly above Q3 with e-commerce growing faster than face-to-face spend. Credit and debit were both up 8%, reflecting resilience in consumer spending. When we look at quarterly spend category data in the U.S., we saw broad-based strength, including improvements in retail services and goods, travel and fuel. Both discretionary and nondiscretionary spend were up from Q3. And growth across consumer spend bands remained relatively consistent with Q3 with the highest spend band continuing to grow the fastest.

2. Visa's cross-border volume growth remained strong at 11% year-on-year in 2025 Q3 (FY2025 Q4), relatively stable compared to 2025 Q2; there was strong performance from e-commerce and travel

Q4 total cross-border volume was up 11% year-over-year, relatively stable to last quarter, with e-commerce up 13%, and travel improving sequentially to 10%. eCommerce remains strong as it has for the last 8 quarters now and still represented about 40% of our total cross-border volume. Travel spend continued to grow above pre-COVID levels. The slight step-up from Q3 was led by a combination of factors, including increased commercial volumes, helped by our efforts in virtual card and some improvement in CEMEA outbound due to holiday timing.

3. Payments volume on Visa's network continued to grow in October 2025, with US payments volume up 7%, cross-border volume up 12%, and e-commerce cross-border volume up 14%

Moving to Q1. Through October 21, with volume growth in constant dollars, U.S. payments volume was up 7%, with credit and debit both up 7%. Processed transactions grew 9% year-over-year. For constant dollar cross-border volume, excluding transactions within Europe, total volume grew 12% year-over-year, with eCommerce up 14% and travel up 11%.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have a vested interest in Mastercard and Visa. Holdings are subject to change at any time.

What The USA’s Largest Bank Thinks About The State Of The Country’s Economy In Q3 2025

Insights from JPMorgan Chase’s management on the health of American consumers and businesses in the third quarter of 2025.

JPMorgan Chase (NYSE: JPM) is currently the largest bank in the USA by total assets. Because of this status, JPMorgan is naturally able to feel the pulse of the country's economy. The bank's latest earnings conference call – for the third quarter of 2025 – was held earlier this week and contained useful insights on the state of American consumers and businesses. The bottom line is this: the US economy remains resilient, but uncertainty has heightened.

What’s shown between the two horizontal lines below are quotes from JPMorgan’s management team that I picked up from the call.


1. The US economy remained generally resilient in 2025 Q3 but job growth softened and uncertainty heightened; consumers and small businesses remain resilient, and credit delinquencies were stable and slightly better than management expected; a deterioration in the labour market is a risk that management is watching

While there have been some signs of a softening, particularly in job growth, the U.S. economy generally remained resilient. However, there continues to be a heightened degree of uncertainty stemming from complex geopolitical conditions, tariffs and trade uncertainty, elevated asset prices and the risk of sticky inflation…

…Consumers and small businesses remain resilient based on our data. While we are closely watching the potentially softening labor market, our credit metrics, including early-stage delinquencies remain stable and slightly better than expected…

…Now talking to our economists, I was struck by something that Mike Farley said about thinking about the current labor market in this moment of what people are describing as a low hiring, low firing moment. You can think of that as potentially explained by employers experiencing high uncertainty. And so if you believe that, and you think about this moment as a moment of high uncertainty, I think tipping point is a little bit too strong a word. But certainly, as you look ahead, there are risks. We already have slowing growth. There are a variety of challenges and sources of volatility and uncertainty. And so it's pretty easy to imagine a world where the labor market deteriorates from here.

2. Net charge-offs for the whole bank (effectively bad loans that JPMorgan can't recover) rose to US$2.6 billion, from US$2.1 billion a year ago; the increase is partly related to a case of fraud involving Tricolor

Credit costs were $3.4 billion with net charge-offs of $2.6 billion and a net reserve build of $810 million. In Wholesale, charge-offs were slightly elevated as a result of a couple of instances of apparent fraud in certain secured lending facilities. Otherwise, in both Wholesale and Consumer, credit performance remains in line with our expectations…

…Given the amount of public attention the Tricolor thing has gotten in particular, I think it’s worth just saying that, that’s contributing $170 million of charge-offs in the quarter, which we call out on the wholesale side.

3. JPMorgan’s investment banking fees had good growth in 2025 Q3, with strength in equity underwriting; management sees a robust pipeline for capital markets activities among companies and the outlook continues to be upbeat; management is seeing revived animal spirits among companies for credit

IB fees were up 16% year-on-year, reflecting a pickup in activity across products with particular strength in equity underwriting as the IPO market was active. Our pipeline remains robust and the outlook, along with the market backdrop and client sentiment continues to be upbeat…

…[Question] My question is on both demand and credit fundamentals. What are you seeing in terms of drivers of client demand on the lending side on the Wholesale front?

[Answer] From the perspective of our franchise, this kind of moment of revived animal spirits, let’s say, is driving demand. We’re seeing very healthy deal flow. We’re seeing acquisition finance come back.

4. Management now expects credit card net charge-offs for 2025 to be 3.3% (was previously expected to be 3.6%) 

On credit, we now expect the 2025 card net charge-off rate to be approximately 3.3% on favorable delinquency trends driven by the continued resilience of the consumer.

5. The savings rate of consumers is currently a little lower than what management expected back in May 2025 because the consumer’s spending is robust even though income is lower

[Question] I wanted to ask about the retail deposit assumptions that were embedded in that. At Investor Day, you discussed an expectation for deposits to grow 3% year-over-year by the fourth quarter and I think accelerating to 6% next year. It looks like they were flat this quarter. So I just wanted to see if you’re still expecting those kind of previously expected growth rates of 3% and 6%.

[Answer] You’re referring specifically to a page that was presented at Investor Day [in May 2025] by Marianne for the CCB with some illustrative scenarios for what we might expect CCB deposit growth to do as a function of some different potential macroeconomic scenarios… So as we sit here right now and we sort of update the macro environment, a few things are true. One is the personal savings rate is a little bit lower than expected. Consumer spending remained robust, while income was a bit lower. So that’s all else equal, decreasing balances per account in CCB.

6. Management thinks subprime auto loans have been very challenging lately for organisations that are lending there

Subprime auto has been a challenging space for people in that industry.

7. The AI theme is overwhelming the US’s financial markets; management thinks the return on investment from AI spending needs to show up in terms of slowing down growth in the bank’s expenses, but it’s hard to measure, and management is seeing some productivity tailwinds

I think the risk is because of how incredibly overwhelming the AI theme is for the whole marketplace right now and all the various effects that it’s having in terms of equity market performance, MAG 7, data center build-out, electricity costs, like it’s an overwhelming thing…

…We're spending a lot of money on it. We have very deep experts. As Jamie always says, we've been doing it for a long time, well before the current generative AI boom. But in the end, the proof is going to be in the pudding in terms of actually slowing the growth of expenses. And so what we're doing is kind of rather than saying you must prove that you're generating this much savings from AI, which turns out to be a very hard thing to do, hard to prove and might, at the margin, result in people scrambling around to use AI in ways that are actually not efficient and that distract you from doing underlying process reengineering that you need to do. What we're saying instead is let's just do old-fashioned expense discipline and constrain people's growth, constrain people's headcount growth…

…Even if we can’t always measure it that precisely, there are definitely productivity tailwinds from AI.

8. Management thinks nonbank financial institutions in the USA have higher credit risks than banks

I would just add that it’s a very large category of nonbank financial institutions and probably a number like half of it, we would consider very traditional, not like different. There is a component, which is different today than it was years ago, and there’s a component which isn’t that different. But if you look at like COs, CLOs and lending to leveraged entities that are underwritten with leveraged loans, so there’s kind of a little bit of double leverage in there.

I would say that, yes, there will be additional risk in that category that we will see when we have a downturn. I expect it to be a little bit worse than other people expect it to be because we don't know all the underwriting standards that all of these people did. Jeremy said these are very smart players. They know what they're doing. They've been around a long time but they're not all very smart. And we don't even know the standards that other banks are underwriting to with some of these entities. And I would suspect that some of those standards may not be as good as you think. Hopefully, we are very good, though we make our mistakes, too, obviously.

So yes, I think it'd be a little bit worse. We've had a benign credit environment for so long that I think you may see credit in other places deteriorate a little bit more than people think when, in fact, there's a downturn. And hopefully, it will be a fairly normal credit cycle. What always happens is something is worse than a normal credit cycle, than a normal downturn. So we'll see.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.