The way I see it, artificial intelligence (or AI) really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E 2 and ChatGPT. Both are software products from OpenAI that use AI to generate art and writing, respectively, often at astounding quality. Since then, developments in AI have progressed at a breathtaking pace.
We’re thick in the action of the latest earnings season for the US stock market – for the first quarter of 2025 – and I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:
With that, here is the latest commentary, in no particular order:
Airbnb (NASDAQ: ABNB)
Airbnb’s management thinks that designing end-to-end travel is very difficult because travelers find planning travel to be very complicated and they do it very infrequently; management thinks that a great user interface is the key to designing a great end-to-end travel experience for Airbnb users, and AI will be an important way to deliver it
I think a lot of companies have tried to like design an end-to-end travel. I think designing end-to-end travel is very, very hard. It’s funny — there’s this funny thing. One of the most common start-up ideas for entrepreneurs is to do a travel planning app. And yet travel planning apps almost always fail. So it’s almost like a riddle, why do travel planning apps fail, and everyone really tries to do it? And the reason why is because to plan travel is very complicated. In fact, it’s so complicated many people have assistants and a big part of their job is to plan travel for them. And yet you use it infrequently. So it’s a very difficult thing to do and you do it infrequently. And so therefore, a lot of companies have failed to design like a so-called connected trip. So I think to do this, a lot of it is to design a really good user experience. And I think that’s one of the things that we’re going to try to do to really design a great end-to-end experience, to be able to book your entire trip, and much more. I think the user interface will be important. I think AI will be an important way to do this as well…
…We’re focused on making everything instant book and easy to use. We’re trying to make sure that the end-to-end travel experience is really, really wonderful with great Airbnb design, and we’re going to bring more AI into the application so that Airbnb, you can really solve your own problems with great self-solve through AI customer service agents.
Airbnb’s management recently rolled out an AI customer service agent; 50% of Airbnb’s US users are already using the customer service agent and it will soon be rolled out to 100% of Airbnb’s US users; management thinks Airbnb’s AI customer service agent is the best of its kind in travel, having already led to a 15% reduction in users needing to contact human agents; the AI customer service agent will be more personalised and agentic in the years ahead
We just rolled out our AI customer service agent this past month. 50% of U.S. users are now using the agent, and we’ll roll it out to 100% of U.S. users this month. We believe this is the best AI-supported customer service agent in travel. It’s already led to a 15% reduction in people needing to contact live human agents and it’s going to get significantly more personalized and agentic over the years to come.
Alphabet (NASDAQ: GOOG)
AI Overviews in Search now has more than 1.5 billion monthly users; AI Mode has received an early positive reaction; usage growth of AI Overviews continues to increase nearly a year after its launch; management is leaning heavily into AI Overviews; management released AI Mode in March as an experiment; AI Mode searches are twice as long as traditional search queries; AI Mode is getting really positive feedback from early users; the volume of commercial queries on Google Search has increased with the launch of AI Overviews; AI Overviews is now available in 15 languages and 140 countries; AI Overviews continues to monetise at a similar rate to traditional Search; reminder that ads within AI Overviews were launched on mobile in the USA in late-2024; an example of longer search queries in AI Mode is product comparisons; management sees AI Overviews in Search and Gemini as 2 distinct consumer experiences; management thinks of AI Mode as a way to discover how the most advanced users are using AI-powered search
AI Overviews is going very well with over 1.5 billion users per month, and we are excited by the early positive reaction to AI Mode…
…Nearly a year after we launched AI Overviews in the U.S., we continue to see that usage growth is increasing as people learn that Search is more useful for more of their queries. So we are leaning in heavily here, continuing to roll the feature out in new countries to more users and to more queries. Building on the positive feedback for AI Overviews, in March, we released AI Mode, an experiment in labs. It expands what AI Overviews can do with more advanced reasoning, thinking and multimodal capabilities to help with questions that need further exploration and comparisons. On average, AI Mode queries are twice as long as traditional search queries. We’re getting really positive feedback from early users about its design, fast response time and ability to understand complex, nuanced questions…
…As we’ve mentioned before, with the launch of AI Overviews, the volume of commercial queries has increased. Q1 marked our largest expansion to date for AI Overviews, both in terms of launching to new users and providing responses for more questions. The feature is now available in more than 15 languages across 140 countries. For AI Overviews, overall, we continue to see monetization at approximately the same rate, which gives us a strong base in which we can innovate even more…
…On the ads of — in AI Overviews, last — late last year, actually, we launched them within the AI Overviews on mobile in the U.S. And this builds on our previous rollout of ads above and below. So this was a change that we have…
…I mentioned people typing in longer queries. There’s a lot more complex, nuanced questions. People are following through more. People are appreciating the clean design, the fast response time and the fact that they can kind of be much more open-ended, can undertake more complicated tasks. Product comparisons, for example, has been a positive one, exploring how tos, planning a trip…
…On AI-powered search and how do we see our consumer experience. Look, I do think Search and Gemini, obviously, will be 2 distinct efforts, right? I think there are obviously some areas of overlap, but they’re also — like expose very, very different use cases. And so for example, in Gemini, we see people iteratively coding and going much deeper on a coding workflow, as an example. So I think both will be around…
…AI Mode is the tip of the tree for us pushing forward on an AI-forward experience. There will be things which we discover there, which will make sense in the context of AI Overviews, so I think will flow through to our user base. But you almost want to think of what are the most advanced 1 million people using Search for, the most advanced 10 million people and then how 1.5 billion people use Search.
Alphabet’s management rolled out Alphabet’s latest foundation model, Gemini 2.5, in 2025 Q1; Gemini 2.5 is widely recognised as the best model in the industry; Gemini 2.5 Pro debuted at No.1 on the Chatbot Arena in 2025 Q1 by a significant margin; active users in AI Studio and the Gemini API are up 200% since the start of 2025; Alphabet introduced Gemini 2.5 Flash in April 2025; Gemini models are now found in all of Alphabet’s 15 products with at least 0.5 billion users each; Alphabet is upgrading Google Assistant on mobile devices to Gemini, and will also upgrade tablets, cars, and devices that connect to phones later this year; the Pixel 9a phone with Gemini integration was launched to strong reviews; the Gemini Live camera feature, among others, will soon be rolled out to all Android devices
This quarter was super exciting as we rolled out Gemini 2.5, our most intelligent AI model, which is achieving breakthroughs in performance, and it’s widely recognized as the best model in the industry…
…We released Gemini 2.5 Pro last month, receiving extremely positive feedback from both developers and consumers. 2.5 Pro is state-of-the-art on a wide range of benchmarks and debuted at #1 on the Chatbot Arena by a significant margin. 2.5 Pro achieved big leaps in reasoning, coding, science and math capabilities, opening up new possibilities for developers and customers. Active users in AI Studio and Gemini API have grown over 200% since the beginning of the year…
…Last week, we introduced 2.5 Flash, which enables developers to optimize quality and cost…
…All 15 of our products with 0.5 billion users now use Gemini models…
…We are upgrading Google Assistant on mobile devices to Gemini. And later this year, we’ll upgrade tablets, cars and devices that connect to your phones such as headphones and watches. The Pixel 9a launched to very strong reviews, providing the best of Google’s AI offerings like Gemini Live and AI-powered camera features. And Gemini Live camera and screen sharing is now rolling out to all Android devices, including Pixel and Samsung S25.
Google Cloud is offering the industry’s widest range of TPUs and GPUs; Alphabet’s 7th generation TPU, Ironwood, has 10x better compute power and 2x better power efficiency than the previous generation TPU; Google Cloud is the first cloud provider to offer NVIDIA’s Blackwell family of GPUs; Google Cloud will be offering NVIDIA’s upcoming Rubin family of GPUs
Complementing this, we offer the industry’s widest range of TPUs and GPUs and continue to invest in next-generation capabilities. Ironwood, our seventh-generation TPU and most powerful to date, is the first designed specifically for inference at scale. It delivers more than 10x improvement in compute power over our recent high-performance TPU while being nearly twice as power efficient. Our strong relationship with NVIDIA continues to be a key advantage for us and our customers. We were the first cloud provider to offer NVIDIA’s groundbreaking B200 and GB200 Blackwell GPUs, and we’ll be offering their next-generation Vera Rubin GPUs.
Alphabet’s management is rolling out the company’s latest image and video generation models; Alphabet has launched its open-sourced Gemma 3 model in March 2025; Gemma models have been downloaded more than 140 million times; Alphabet is developing robotics AI models; Alphabet has launched a multi-agent AI research system called AI Co-Scientist; the AlphaFold model has been used by more than 2.5 million researchers
Our latest image and video generation models, Imagen 3 and Veo 2, are rolling out broadly and are powering incredible creativity. Turning to open models. We launched Gemma 3 last month, delivering state-of-the-art performance for its size. Gemma models have been downloaded more than 140 million times. Lastly, we are developing AI models in new areas where there’s enormous opportunity, for example, our new Gemini Robotics models. And in health, we launched AI Co-Scientist, a multi-agent AI research system, while AlphaFold has now been used by over 2.5 million researchers.
Google Cloud’s AI developer platform, Vertex AI, now has more than 200 foundation models available, including Alphabet’s in-house models and third-party models
Our Vertex AI platform makes over 200 foundation models available, helping customers like Lowe’s integrate AI. We offer industry-leading models, including Gemini 2.5 Pro, 2.5 Flash, Imagen 3, Veo 2, Chirp and Lyria, plus open-source and third-party models like Llama 4 and Anthropic.
Google Cloud is the leading cloud platform for building AI agents; Google Cloud has an open-source framework for building AI agents and multi-agent systems called Agent Development Kit; Google Cloud has a low-code agent-building tool called Agent Designer; KPMG is using Google Cloud to deploy AI agents to employees; Google Cloud has the Google Agentspace product that helps employees in organisations use AI agents widely; Google Cloud offers pre-packaged AI agents across various functions including coding and customer engagement; Alphabet is working on agentic experiences internally and deploying them across the company; Alphabet’s customer service teams have deployed AI agents to dramatically enhance the user experience, and Alphabet is teaching Google Cloud customers how to do the same
We are the leading cloud solution for companies looking to the new era of AI agents, a big opportunity. Our Agent Development Kit is a new open-source framework to simplify the process of building sophisticated AI agents and multi-agent systems. And Agent Designer is a low-code tool to build AI agents and automate tasks in over 100 enterprise applications and systems.
We are putting AI agents in the hands of employees at major global companies like KPMG. With Google Agentspace, employees can find and synthesize information from within their organization, converse with AI agents and take action with their enterprise applications. It combines enterprise search, conversational AI or chat and access to Gemini and third-party agents. We also offer pre-packaged agents across customer engagement, coding, creativity and more that are helping to provide conversational customer experiences, accelerate software development, and improve decision-making…
…Particularly with the newer models, I think we are working on early agentic workflows and how we can get those coding experiences to be much deeper. We are deploying it across all parts of the company. Our customer service teams are deeply leading the way there. We’ve both dramatically enhanced our user experience as well as made it much more efficient to do so. And we are actually bringing all our learnings and expertise in our solutions through cloud to our other customers. But beyond that, all the way from the finance team preparing for this earnings call to everything, it’s deeply embedded in everything we do.
Waymo is now serving 250,000 trips per week (was 150,000 in 2024 Q4), up 5x from a year ago; Waymo launched its paid service in Silicon Valley in 2025 Q1; Waymo has expanded in Austin, Texas, and will launch in Atlanta later this year; Waymo will launch in Washington DC and Miami in 2026; Waymo continues to make progress in airport access and freeway driving; management thinks Alphabet will not be able to scale Waymo by themselves, so partners are needed
Waymo is now safely serving over 0.25 million paid passenger trips each week. That’s up 5x from a year ago. This past quarter, Waymo opened up paid service in Silicon Valley. Through our partnership with Uber, we expanded in Austin and are preparing for our public launch in Atlanta later this summer. We recently announced Washington, D.C. as a future ride-hailing city, going live in 2026 alongside Miami. Waymo continues progressing on 2 important capabilities for riders, airport access and freeway driving…
More businesses are adopting Alphabet’s AI-powered campaigns; Alphabet’s recent work with AI is helping advertisers reach customers and searches where their ads would previously not have been shown; Alphabet is infusing AI at every step of the marketing process for advertisers, for example, (1) advertisers can now generate a broader variety of lifestyle imagery customized to their business, (2) in PMax, advertisers can automatically source images from their landing pages and crop them, (3) on media buying, AI-powered campaigns continue to help advertisers find new customers, (4) in Demand Gen, advertisers can more precisely manage ad placements and understand which assets work best at a channel level; users of Demand Gen now see an average 26% year-on-year increase in conversions per dollar spent; when Demand Gen is paired with Product Feed, advertisers see double the conversions per dollar spent year-over-year on average; Royal Canin used Demand Gen and PMax campaigns and achieved a 2.7x higher conversion rate, a 70% lower cost per acquisition for purchases, and an 8% higher value per user
More businesses, big and small, are adopting AI-powered campaigns, and the deployment of AI across our Ads business is driving results for our customers and for our business. Throughout 2024, we launched several features that leverage LLMs to enhance advertiser value, and we’re seeing this work pay off. The combination of these launches now allows us to match ads to more relevant search queries. And this helps advertisers reach customers and searches where we would not previously have shown their ads.
Focusing on our customers, we continue to solve advertisers’ pain points and find opportunities to help them create, distribute and measure more performant ads, infusing AI at every step of the marketing process. On Audience Insights, we released new asset audience recommendations, which tell businesses the themes that resonate most with their top audiences. On creatives, advertisers can now generate a broader variety of lifestyle imagery customized to their business to better engage their customers and use them across PMax, demand gen, display and app campaigns. Additionally, in PMax, advertisers can automatically source images from their landing pages and crop them, increasing the variety of their assets. On media buying, advertisers continue to see how AI-powered campaigns help them find new customers. In Demand Gen, advertisers can more precisely manage ad placements across YouTube, Gmail, Discover and Google Display Network globally and understand which assets work best at a channel level. Thanks to dozens of AI-powered improvements launched in 2024, businesses using Demand Gen now see an average 26% year-on-year increase in conversions per dollar spend for goals like purchases and leads. And when using Demand Gen with Product Feed, on average, they see more than double the conversion per dollar spend year-over-year…
…Royal Canin combined Demand Gen and PMax campaigns to find more customers for its cat and dog food products. The integration resulted in a 2.7x higher conversion rate, a 70% lower cost per acquisition for purchases and increased the value per user by 8%.
Google Cloud still has more AI demand than capacity in 2025 Q1 (as it did in 2024 Q4)
Recall I’ve stated on the Q4 call that we exited the year in Cloud specifically with more customer demand than we had capacity. And that was the case this quarter as well.
30% of new code at Alphabet is now generated by AI (it was 25% in 2024 Q3)
We’re continuing to make a lot of progress there in terms of people using coding suggestions. I think the last time I had said, the number was like 25% of code that’s checked in. It involves people accepting AI-suggested solutions. That number is well over 30% now. But more importantly, we have deployed more deeper flows.
Amazon (NASDAQ: AMZN)
AWS grew 17% year-on-year in 2025 Q1, and is now at a US$117 billion annualised revenue run rate (was US$115 billion in 2024 Q4); management used to think AWS could be a multi-hundred billion dollar revenue run rate business without AI, and now that there’s AI, they think AWS could be even bigger; AWS’s AI business is now at a multi-billion annual revenue run rate and is growing triple-digits year-on-year; the shift from on-premise to the cloud is still a huge tailwind for AWS, and now even more so as companies that want to realize the full potential of AI will need to shift to the cloud; AWS is currently still supply constrained, but a lot of new chips will be coming online in the coming months; management thinks that the supply chain issues with chips will get better as the year progresses
AWS grew 17% year-over-year in Q1 and now sits at a $117 billion annualized revenue run rate…
…Before this generation of AI, we thought AWS had the chance to ultimately be a multi-hundred billion dollar revenue run rate business. We now think it could be even larger…
…Our AI business has a multibillion-dollar annual revenue run rate, continues to grow triple-digit year-over-year percentages and is still in its very early days…
…Infrastructure modernization is much less sexy to talk about than AI, but fundamental to any company’s technology and invention capabilities, developer productivity, speed and cost structure. And for companies to realize the full potential of AI, they’re going to need their infrastructure and data in the cloud…
…During the first quarter, we continued to see growth in both generative AI business and non-generative AI offerings as companies turn their attention to newer initiatives, bring more workloads to the cloud, restart or accelerate existing migrations from on-premises to the cloud and tap into the power of Generative AI…
…We — as fast as we actually put the capacity in, it’s being consumed. So I think we could be driving — we could be helping more customers driving more revenue for the business if we had more capacity. We have a lot more Trainium2 instances and the next generation of NVIDIA’s instances landing in the coming months…
…I do believe that the supply chain issues and the capacity issues will continue to get better as the year proceeds.
Management is directing Amazon to invest aggressively in AI; Amazon is building 1000-plus AI applications across the company; the next generation of Alexa is Alexa+; Amazon is using AI in its fulfilment network, robotics, shopping, and more
If you believe your mission is to make customers’ lives easier and better every day, and you believe that every customer experience will be reinvented with AI, you’re going to invest very aggressively in AI, and that’s what we’re doing. You can see that in the 1,000-plus AI applications we’re building across Amazon. You can see that with our next generation of Alexa, named Alexa+. You can see that in how we’re using AI in our fulfillment network, robotics, shopping, Prime Video and advertising experiences. And you can see that in the building blocks AWS is constructing for external and internal builders to build their own AI solutions.
AWS’s in-house AI chip, Trainium 2, is starting to lay in capacity in larger quantities with significant appeal and demand; AWS will always be offering AI chips from multiple providers, but Trainium 2 offers a compelling option with 30%-40% better price performance; management believes that the price of inference needs to be much lower for AI to be successful, and they think the price of inference will go down; Anthropic is still building its next few models with Trainium 2
Our new custom AI chip Trainium2 is starting to lay in capacity in larger quantities with significant appeal and demand. While we offer customers the ability to do AI in multiple chip providers and will for as long as I can foresee, customers doing AI at any significant scale realize that it can get expensive quickly. So the 30% to 40% better price performance that Trainium2 offers versus other GPU-based instances is compelling. For AI to be as successful as we believe it can be, the price of inference needs to come down significantly…
…I would say that we’ve been bringing on a lot of P5, which is a form of NVIDIA chip instances, as well as landing more and more Trainium2 instances as fast as we can…
…Anthropic is running — building the next few training models on top of our Trainium2 chip on AWS…
…As they’re waiting to see the cost of inference continue to go down, which it will.
The latest premier Amazon Nova model was launched yesterday and it delivers frontier intelligence and industry-leading price performance; thousands of customers are already using Amazon Nova models; Amazon Nova Sonic, a speech-to-speech foundation model, was recently released and it enables developers to build voice-based AI applications; Amazon Nova Sonic has lower word error rates and higher win rates over other comparable models; AWS recently released a research preview of Amazon Nova Act, a new AI model that can perform actions within a web browser; Amazon Nova Act aims to move the current state-of-the-art accuracy of multi-step agentic actions from 30%-60% to 90%-plus
We offer our own Amazon Nova state-of-the-art foundation models in Bedrock with the latest premier model launching yesterday. They deliver frontier intelligence and industry-leading price performance, and we have thousands of customers already using them, including Slack, Siemens, Sumo Logic, Coinbase, FanDuel, Glean and Blue Origin. A few weeks ago, we released Amazon Nova Sonic, a new speech-to-speech foundation model that enables developers to build voice-based AI applications that are highly accurate, expressive and human-like. Nova Sonic has lower word error rates and higher win rates over other comparable models for speech interactions…
…We’ve just released a research preview of Amazon Nova Act, a new AI model trained to perform actions within a web browser. It enables developers to break down complex workflows into reliable atomic commands like search or checkout or answer questions about the screen. It also enables them to add more detailed instructions to these commands where needed, like don’t accept the insurance upsell. Nova Act aims to move the current state-of-the-art accuracy of multistep agentic actions from 30% to 60% to 90-plus percent with the right set of building blocks to build these action-oriented agents.
Amazon’s management sees question-and-answer as virtually the only current use-case for AI agents, but they want AI agents to be capable of performing a wide variety of complex tasks and they have built Alexa+ to be such an agent; management launched a new lightning fast AI agent coding experience in Amazon Q in 2025 Q1 and customers are loving it; management has made generally available GitLab Duo with Amazon Q, which enables AI agents to assist multi-step tasks; Alexa+ is meaningfully smarter and more capable than the previous Alexa; Alexa+ is free with Prime and available for non-Prime customers at $19.99 per month; Alexa+ is just starting to be rolled out in the USA and will be introduced to other countries later in 2025; users really like Alexa+ thus far; Alexa+ is now with more than 100,000 users; Amazon already has 0.5 billion devices in people’s homes and cars that can easily distribute Alexa+; management thinks users will have to relearn a little how to communicate with Alexa+, but the communication experience is now much better; management asked Alexa+ about good Italian restaurants in New York and Alexa+ helped to make a reservation
To date, virtually all of the agentic use cases have been of the question-answer variety. Our intention is for agents to perform wide-ranging complex multistep tasks by organizing a trip or setting the lighting, temperature and music ambience in your house for dinner guests or handling complex IT tasks to increase business productivity. There haven’t been action-oriented agents like this until Alexa+…
…This past quarter, Amazon Q, the most capable generative AI-powered assistant for accelerating software development and leveraging your own data, launched a lightning fast new agent coding experience within the command line interface that can execute complex workflows autonomously. Customers are loving this. We also made generally available GitLab Duo with Amazon Q, enabling AI agents to assist multi-step tasks such as new feature development, code-based upgrades for Java 8 and 11, while also offering code review and unit testing, all within the same familiar GitLab platform…
…We introduced Alexa+, our next-generation Alexa personal assistant, which is meaningfully smarter and more capable than her prior self, can both answer virtually any question and take actions, and is free with Prime or available to non-Prime customers for $19.99 a month. We’re just starting to roll this out in the U.S., and we’ll be expanding to additional countries later this year. People are really liking Alexa+ thus far…
…So we’ve worked hard on that in Alexa+. We’ve been — we started rolling out over the last several weeks. It’s with now over 100,000 users with more rolling out in the coming months. And so far, the response from our customers has been very, very positive…
…We’re very fortunate in that we have over 0.5 billion devices out there in people’s homes and offices and cars. So we have a lot of distribution already…
…To some degree, there will be a little bit of rewiring for people on what they can do because you get used to patterns. I mean even the simple thing of not having to say “Alexa” anymore; we’re all used to saying “Alexa” before we want every action to happen. And what you find is you really don’t have to do it after the first time, and then really the conversation is ongoing where you don’t have to say Alexa anymore. And I’ve been lucky enough to have the alpha and the beta that I’ve been playing with for several months, and it took me a little bit of time to realize I didn’t have to keep saying Alexa; it’s very freeing when you don’t have to do that…
…When I was in New York, when we were announcing — we did the event way downtown — I asked her what were great Italian restaurants or pizza restaurants. She gave me a list and asked me if I wanted her to make a reservation. I said yes. And she made the reservation and confirmed the time, just like that. When you get into those types of routines and you have those types of experiences, they’re very, very useful.
The majority of Amazon’s capital expenditure (capex) in 2025 Q1 was for AWS’s technology infrastructure, including the Trainium chips
Turning to our cash CapEx, which was $24.3 billion in Q1. The majority of this spend is to support the growing need for technology infrastructure. It primarily relates to AWS as we invest to support demand for our AI services and increasingly in custom silicon like Trainium as well as tech infrastructure to support our North America and International segments. We’re also investing in our fulfillment and transportation network to support future growth and improve delivery speeds and our cost structure. This investment will support growth for many years to come.
The vast majority of successful startups are built on AWS; high-profile startups building AI coding agents are on AWS
If you look at the start-up space, the vast majority of successful start-ups over the last 10 to 15 years have run on top of AWS…
…If you just look at the growth of these coding agents in the last few months, these are companies like Cursor or Vercel, both of them run significantly on AWS.
Amazon’s management thinks that AI has yet to reach many of the customer experiences that are going to be reinvented and the many other agents that are going to be built; it is still very early days for AI
What’s interesting in AI is that we still haven’t gotten to all the other customer experiences that are going to be reinvented and all the other agents that are going to be built. They’re going to take the role of a lot of different functions today. And those are — they’re — even though we have a lot of combined inference in those areas, I would say we’re not even at the second strike of the first batter in the first inning. It is so early right now.
AWS operating margin improved from 37.6% in 2024 Q1 to 39.5% in 2025 Q1, but margins will fluctuate from time to time; AWS’s margin strength is from the business’s strong growth, the impact of some continued investments, and AWS’s custom chips; the investments include software optimisations for server capacity, low-cost custom networking equipment, and power usage in data centers
AWS operating income was $11.5 billion and reflects our continued growth coupled with our focus on driving efficiencies across the business. As we said before, we expect AWS operating margins to fluctuate over time, driven in part by the level of investments we’re making at any point in time…
…We had a strong quarter in AWS, as you mentioned, the margin performance. I would attribute it to the strong growth that we’re seeing, coupled with the impact of some continued investment we’re making in innovation and technology. I’ll give you some examples. So we invest in software and process improvements that end up optimizing our server capacity, which helps our infrastructure cost. We’ve been developing a more efficient network using our low-cost custom networking gear. We’re working to maximize the power usage in our existing data centers, which both lowers our costs and also reclaims power for other newer workloads. And we’re also seeing the impact of advancing custom silicon like Graviton. It provides lower cost not only for us, but also for our customers, better price performance for them.
Apple (NASDAQ: AAPL)
Apple is currently shipping an LLM (large language model) on the iPhone 16 where some of the queries are being handled on the device itself
As you know, we’re shipping an LLM on the iPhone 16 today. And there are — some of the queries that are being used by our customers are on-device, and then others go to the private cloud where we’ve essentially mimicked the security and privacy of the device into the cloud. And then others, for world knowledge, are with the integration with ChatGPT.
The new Mac Studio has Apple’s M4 Max and M3 Ultra chips, and it can run large language models with over 600 billion parameters entirely in memory
The new Mac Studio is the most powerful Mac we’ve ever shipped, equipped with M4 Max and our new M3 Ultra chip. It’s a true AI powerhouse capable of running large language models with over 600 billion parameters entirely in memory.
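As a back-of-envelope illustration of why a 600-billion-parameter model can sit “entirely in memory”: the footprint scales with bytes per parameter, which depends on quantization. Apple did not specify a quantization level, so the figures below are my assumptions; the 512 GB figure is the Mac Studio’s maximum unified memory configuration with the M3 Ultra.

```python
# Back-of-envelope memory math for fitting a 600B-parameter model in RAM.
# Bytes-per-parameter depends on quantization; the levels below are
# illustrative assumptions, not Apple's published configuration.
PARAMS = 600e9

for name, bytes_per_param in [("FP16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    gb = PARAMS * bytes_per_param / 1e9
    print(f"{name}: ~{gb:,.0f} GB")

# FP16 (~1,200 GB) would not fit, but a 4-bit quantized model (~300 GB)
# fits within the Mac Studio's maximum 512 GB of unified memory.
```

In other words, the claim is plausible only for a heavily quantized model — which is the norm for running frontier-scale open models on workstation hardware.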
Apple has released VisionOS 2.4 which unlocks the first set of Apple Intelligence features for Vision Pro users
VisionOS 2.4 unlocks the first set of Apple Intelligence features for Vision Pro users while inviting them to explore a curated and regularly updated collection of spatial experiences with the Spatial Gallery app.
Apple’s management has released iOS 18.4, which brings Apple Intelligence to more languages (including localised English for Singapore and India); Apple has built its own foundation models for everyday tasks; new Apple Intelligence features in iOS 18 include Writing Tools, Genmoji, Image Playground, Image Wand, Clean Up, Visual Intelligence, and a seamless connection to ChatGPT
Turning to software. We just released iOS 18.4, which brought Apple Intelligence to more languages, including French, German, Italian, Portuguese, Spanish, Japanese, Korean and simplified Chinese as well as localized English to Singapore and India…
…At WWDC24, we announced Apple Intelligence and shared our vision for integrating generative AI across our ecosystem into the apps and features our users rely on every day. To achieve this goal, we built our own highly capable foundation models that are specialized for everyday tasks. We designed helpful features that are right where our users need them and are easy to use. And we went to great lengths to build a system that protects user privacy whether requests are processed on-device or in the cloud with Private Cloud Compute, an extraordinary step forward for privacy and AI.
Since we launched iOS 18, we’ve released a number of Apple Intelligence features from helpful Writing Tools to Genmoji, Image Playground, Image Wand, Clean Up, visual intelligence and a seamless connection to ChatGPT. We made it possible for users to create movies of their memories with a simple prompt and added AI-powered photo search, smart replies, priority notifications, summaries for mail, messages and more. We’ve also expanded these capabilities to more languages and regions.
Apple’s in-house chips are designed with a neural engine that powers AI features across Apple’s products and 3rd-party apps; management thinks the neural engine makes Apple products the best devices for generative AI
AI and machine learning are core to so many profound features we’ve rolled out over the years to help our users live a better day. It’s why we designed Apple silicon with a neural engine that powers so many AI features across our products and third-party apps. It’s also what makes Apple products the best devices for generative AI.
Apple still needs more time to work on the more personalised Siri that was unveiled by management recently
With regard to the more personal Siri features we announced, we need more time to complete our work on these features so they meet our high-quality bar. We are making progress and we look forward to getting these features into customers’ hands.
Apple has low capital expenditures for AI relative to other US technology giants because it partly uses 3rd-party data centers, so those costs show up mostly as operating expenses; Apple’s new $500 billion investment in the USA could signal more capital expenditures and data center investments
On the data center side, we have a hybrid strategy. And so we utilize third parties in addition to the data center investments that we’re making. And as I’ve mentioned in the $500 billion, there’s a number of states that we’re expanding in. Some of those are data center investments. And so we do plan on making investments in that area
Arista Networks (NYSE: ANET)
Arista Networks’ management remains confident of reaching $750 million in back-end AI revenue in 2025 even with the uncertainty surrounding US tariffs; the 1:1 ratio between front-end and back-end AI spending for Arista Networks’ products still remains, but management thinks it’s increasingly hard to parse between front-end and back-end
Our cloud and AI momentum continues as we remain confident of our $750 million front-end AI goal in 2025…
…Just a quick clarification before we go into Q&A. Jayshree meant we were reiterating our back-end goal of $750 million, not front-end AI…
…[Question] Is that 1:1 ratio for the front-end back is still intact in your perspective?
[Answer] On the front-end ratio, yes, we’ve said it’s generally 1:1. It’s getting harder and harder to measure front end and back end. Maybe we’ll look at the full AI cluster differently next year. But I think 1:1 is still a good ratio. It varies. Some of them just build a cluster and don’t worry about the front end and others worry about it entirely holistically. So it does vary, but I think the 1:1 is still a good ratio…
…[Question] You reiterated the $750 million back-end target, but you’ve kind of had this $1.5 billion kind of AI target for 2025. And just wondering, is the capability of that more dependent on kind of the tariffs given kind of some of the front-end spend?
[Answer] Regarding tariffs, I don’t think it will have a material difference on the $750 million number or the $1.5 billion. We got the demand. So unless we have some real trouble shipping it or customers change their mind, I think we’re good with both those targets for the year.
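The two figures in the exchange above are tied together by the 1:1 front-end:back-end ratio management described earlier — a quick sketch of the arithmetic (the ratio is management’s rough rule of thumb, not a precise split):

```python
# How the $750M back-end goal and the $1.5B total AI target relate,
# under management's rough 1:1 front-end:back-end spending ratio.
back_end_target = 750e6          # 2025 back-end AI revenue goal
front_to_back_ratio = 1.0        # rule of thumb, varies by customer

total_ai_target = back_end_target * (1 + front_to_back_ratio)
print(f"${total_ai_target / 1e9:.1f}B")  # → $1.5B, the total AI target cited
```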
Arista Networks is progressing well with its 4 major AI customers; 1 of the 4 customers has been on NVIDIA’s Infiniband solution for a long time, so they’ll be small for Arista Networks; 2 of the 4 are heading towards 50,000 GPU deployments by end-2025, maybe even 100,000 GPUs; 3 of the 4 customers are already in production with the 4th progressing well towards production; management has a lot of visibility from the 4 major AI customers for 2025 and 2026 and it’s looking good; the 4 major AI customers are mostly deploying Arista Networks’ 800-gig switches
We are progressing well in all 4 customers and continue to add smaller ones as well…
…Let me start with the 4 customers. All of them are progressing well. One of them is still new to us. They’ve been in Infiniband for a long time, so they’ll be small. I would say 2 of them are heading towards 50,000 GPU deployments by end of the year, maybe they’ll be at 100 but I can be most certainly sure of 50,000, heading to 100,000. And then the other one is also in production. So I had talked about all 4 going into production. Three are already in production, the fourth one is well underway…
…[Question] If I can go back to the 4 Tier 1s that you’re working with on the AI back end and the progress that you updated on that front. Are these customers now giving you more visibility just given the tariff landscape and that you would need to sort of build inventory for some of the finished goods? And can you just update us how they’re handling the situation on that front? And particularly then, as you think about — I think the investor focus is a lot about sort of 2026 and potential sort of changes in the CapEx landscape from these customers at that point. Are you getting any forward visibility from them? Any sort of early signs for 2026 on these customers?
[Answer] We definitely have all the visibility in the world for this year, and we’re feeling good. We’re getting unofficial visibility because they all know our lead times are tied to some fairly long lead times from our partners and suppliers. So I would say 2026 is looking good. And based on our execution of 2025 and plans we’re putting together, we should have a great year in 2026 as well for AI sector specifically…
…[Question] Do you see the general cadence of hyperscalers deploying 800-gig switch ports this year? I ask because I believe your Etherlink family of switches became generally available in late 2024.
[Answer] I alluded to this earlier in 2024, the majority of our AI trials were on 400 gig at that time. So you’re right to observe that with our Etherlink portfolio really getting introduced in the second half of ’24 that a lot of our 800-gig activity has picked up in 2025, some of which will be reflected in shipments and some of it which will be part of our deferred. So it’s a good observation and an accurate one that this is the year of 800, like last year it was the year of 400.
Arista Networks’ management plans for the company to be the premier and preferred network for NVIDIA’s next-generation GPUs; Arista Networks’ Etherlink portfolio makes it easy to identify and localise performance issues in accelerated compute AI clusters
At the GTC event in March of 2025, we heard all about NVIDIA’s planned GPU road map every 12 to 18 months, and Arista intends to be the premier and preferred scale-out network for all of those GPUs and AI accelerators. Traditional GPUs have collective communication libraries, or CCLs as they’re known, that try to discover the underlying network topology using localization techniques. With this accelerated compute approach, the discrepancies between the discovered topology and the one that actually happens can impact AI job completion times. Arista’s Etherlink portfolio highlights the accelerated networking approach, bringing that single point of network control and visibility as a differentiation. This makes it extremely crisp to identify and localize performance issues especially as the size of the AI cluster grows to 50,000 and 100,000 XPUs with the Arista AI Spine and leaf network designs.
Arista Networks’ campus portfolio provides cost-effective access points for agentic AI applications
Arista’s cognitive campus portfolio features our advanced spine with Power over Ethernet wired leaf capabilities, along with a wide range of cost-effective Wi-Fi 7 indoor and outdoor access points for the newer IoT and agentic applications.
The data center ecosystem is still somewhat new to AI and the suppliers are figuring things out together
But everybody is new to AI, they’ve never really put together a network design for 4-rail or 8-rail or how does it connect into the GPUs and what is the NIC [network interface card] attachment? What are the accessories in terms of cables or optics that connect? So this movement from trials to production causes us to bring a whole ecosystem together for the first time.
Arista Networks’ management thinks that when it comes to AI use-cases, Arista Networks’ products will play a far bigger role than whitebox networking manufacturers, even though whiteboxes will always be around and management is even happy to help customers build networking solutions that encompass both Arista Networks’ products and whiteboxes; Arista Networks was able to help a small AI customer build a network for a cluster of a few hundred GPUs very quickly after the customer struggled to do so with whiteboxes
I’ve always said, that white box is not new. It’s been with us since the beginning of time. In fact, when Arista got started, a couple of our customers had already implemented internally various implementations of white box. So there is a class of customers who will make the investments in engineering and operations to build their own network and manage it. And it’s a very different business model. It operates typically at 10% gross margins. I don’t think you want Arista to go there. And it’s very hardware-centric and doesn’t require the rich software foundation and investments that we’ve made. So first, I’ll start by saying we will always and will continue to coexist with white box. There are times that you’ve noticed this, too, that because Arista builds some very superior hardware, that even if they don’t use our EOS, they like to have our blue box, as I often call it, the Arista hardware that’s engineered much better than any others with a more open OS like Sonic or FBOSS or at least the attributes of running both EOS and an open-source networking system. So I think we view this as a natural part of selection in a customer base where if it’s a simple use case, they’re going to use something cost effective. But if it’s a really complex use case, like the AI spine or roles that require and demand more mission-critical features, Arista always plays a far bigger role in premium, highly scalable, highly valued software and hardware combinations than we do in a stand-alone white box. So we’ll remain coexistent peacefully, and we’re not in any way threatened by it. In fact, I would say we work with our customers to make sure as they’re building permutations and combinations of the white box, that we can work with that and build the right complement to that with our Etherlink portfolio…
…We had a customer, again, not material, who said, “I can’t get these boxes. I can’t make them run. I cannot get an AI network.” And one of my most technical sales leaders said, hey, we got a chance to build an AI cluster here for a few hundred GPUs. We jumped on it. Obviously, that customer is small and has been largely using white boxes and is now about to install an AI leaf and an AI spine, and we had to get it to him before the tariff deadline. So as an example of not material, but how quickly these decisions get made when you have the right product, right performance, right quality, right mission-critical nature and you can deal with that traffic pattern better than anyone else can. So it happens. It’s not big because we’ve got so much commitment in a given quarter from a customer, but when it is, we act with a great deal of nimbleness and agility to do that.
Arista Networks’ management is happy to support any kind of advanced packaging technologies – such as co-packaged optics or co-packaged copper – for back-end AI networks in the company’s products; management has yet to see any major adoption of co-packaged optics for back-end AI networks
[Question] I’d love to get your latest views around co-packaged optics. NVIDIA introduced its first CPO switches at GTC for scale-out. And I was wondering whether that had any impact on your views regarding CPO adoption in back-end AI networks in coming years.
[Answer] It’s had no impact. It’s very early days. I think you’ve seen — Arista doesn’t build optics, but Arista enables optics and we’ve always been at the forefront, especially with Andy Bechtolsheim and his team of talented tech individuals that whether it is pluggable optics with LPO or how we define the OSFP connector for MSAs or 100 gig, 400 gig, it’s something we take seriously. And our views on CPOs, it’s not a new idea. It’s been demonstrated in prototype for, I don’t know, 10 to 20 years. The fundamental [reason for the] lack of adoption to date on CPO [is] relatively high failure rates, and it’s mostly been in the labs. So what are some of the advantages of CPO? Well, it has a linear interface. It has lower power than DSP for long-haul optics. It has a higher channel count. And I think if pluggable optics can achieve some of that in the best of both worlds, then you can overcome that with pluggable optics or even co-packaged copper. So Arista has no religion. We will do co-package copper. We’ll do co-package optics. We will do pluggable optics, but it’s too early to call this a real production-ready product that’s still in very early experiments and trials.
Arista Networks’ management is not seeing any material pull-forward in demand for its products because of US tariffs
[Question] We know tariffs are coming later in the year. Whether the strength you’re seeing is the result of early purchases of customers ahead of tariffs in order to save some dollars?
[Answer] Even if our customers try to pull it in and get it all by July, we would be unable to supply it. So that would be the first thing. So I’m not seeing the pull-ins that are really material in any fashion. I am seeing a few customers trying to save $1 here, $1 there to try and ship it before the tariff date but nothing material. Regarding pull-ins for 4 to 6 quarters, again, our best visibility is near term. And if we saw that kind of behavior, we would see a lot of inventory sitting in our customers, which we don’t. In fact, they’re asking us to ship faster and ship more.
2 years ago, Arista Networks’ management saw all its Cloud Titan customers pivot to AI and slow down their cloud spending; management is seeing more balanced spending now, with a more surgical focus on AI
2 years ago, I was very nervous because the entire cloud titans pivoted to AI and slowed down their cloud. Now we see a more balanced spend. And while we can’t measure how much of this cloud and how much of it is AI, if they’re kind of cobbled together, we are seeing less of a pivot, more of a surgical focus on AI and then a continued upgrade of the cloud networks as well. So compared to ’23, I would say the environment is much more balanced between AI and cloud.
Arista Networks’ management sees competitive advantages in the company’s hardware design, development, and operation that are hard to replicate even for its Cloud Titan customers
[Question] What functionality about the blue box actually makes it defensible versus what hyperscalers can kind of self-develop?
[Answer] Let me give you a few attributes of what I call the blue box, and I’m not saying others don’t have it, but Arista has built this as a mission, although we’re known for our software. We’re just as well known for our hardware. When you look at everything from a form factor of a one RU that we build to a chassis, we’ve got a tremendous focus on signal integrity, for example, all the way from layer 1, multilayer PCB boards, a focus on quality, a focus on driving distances, a focus on integrating optics for longer distances, a focus on driving MACsec, et cetera. So that’s a big focus. The second is hardware diagnostics. Internal to the company, we call it Arista boot. We’ve got a dedicated team focused on not just the hardware but the firmware to make it all possible in terms of troubleshooting because when these boards get super complex, you know where the failure is and you’re running at high-speed 200 [indiscernible] 30s. So things are very complex. So the ability to pinpoint and troubleshoot is a big part of what we do. And then there’s additional focus on the mechanical, the power supplies, the cooling, all of which translate to better power characteristics. Along with our partners and chip vendors, there’s a maniacal focus on not just high performance but low power. So some of the best attributes come from our blue boxes, not only for 48 ports, but all the way up to 576 ports of an AI spine or double that if you’re looking for dual capabilities. So well-designed, high-quality hardware is a thing of beauty, but also think of complexity that not everyone can do.
With neo AI cloud customers, Arista Networks’ management is observing that they are very willing to forsake NVIDIA’s GPUs and networking solutions and try other AI accelerators and Ethernet; management thinks that the establishment of the Ultra Ethernet Consortium in 2024 has a role to play in the increasing adoption of Ethernet for AI networking; with the Cloud Titans, management is also observing that they are shifting towards Ethernet; management thinks that the shift from Infiniband to Ethernet is faster than the shift from NVIDIA’s GPUs to other companies’ GPUs
[Question] There’s a general perception that most of them are buying NVIDIA-defined clusters and networking. So I wonder if you could comment on those trends, their interest in moving past InfiniBand? And also are there opportunities developing with some of these folks to kind of multi-source their AI connectivity to different providers?
[Answer] We’re seeing more adventurous spirit in the neo-cloud customers because they want to try alternatives. So some of them are absolutely trying other AI accelerators like Lisa and AMD and my friends there. Some of them are absolutely looking at Ethernet, not InfiniBand as a scale-out. And that momentum has really shifted in the last year with the Ultra Ethernet Consortium and the spec coming out in May. I just want to give a shout-out to that team and what we have done. So I think Ethernet is a given that there’s an awful lot of legacy of InfiniBand that will obviously sort itself out. And a new class of AI accelerators we are seeing more niche players, more internal developments from the cloud titans, all of which is mandating more Ethernet. So I think between your 2 questions, I would say the progress from InfiniBand to Ethernet is faster, the progress from the ones they know and the high-performance GPU from NVIDIA versus the others is still taking time.
ASML (NASDAQ: ASML)
ASML’s management still sees AI (artificial intelligence) as the key growth driver; ASML will hit upper range of guidance for 2025 if AI demand continues to be strong, while ASML will hit the lower range of guidance if there is uncertainty among its customers
Consistent with our view from last quarter, the growth in artificial intelligence remains the key driver for growth in our industry. If AI demand continues to be strong and customers are successful in bringing on additional capacity to support the demand, there is a potential opportunity towards the upper end of our range. On the other hand, there is still quite some uncertainty for a number of our customers that can lead to the lower end of our range.
ASML’s management is still positive on the long-term outlook for ASML, with AI being a driver for growth
Looking longer term, the semiconductor market remains strong with artificial intelligence, creating growth in recent quarters, and we see some of the future demand for AI solidifying, which is encouraging.
ASML’s management thinks inference will become a larger part of AI demand going forward
I think there has been a lot of emphasis in the past quarters on the training side of life. I think more and more, which I think is logical, that you also see more and more emphasis being put on the inferencing side of the equation. So I think you will see the inferencing part becoming a larger component of AI demand on a go-forward basis.
ASML’s management is unable to tell what 2027 will look like for AI demand, but the commitment to AI chips in the next 2 years is very strong
You are looking at major investment, investment that has been committed, investment that a lot of companies believe they have to make in order to basically enter this AI race. I think the threshold to change this behavior is pretty high. And this is why — this is what our customers are telling us. And that’s also why we mentioned that, based on those conversations, we still see ’25, ’26 as growth years. That’s largely driven by AI and by that dynamic. Now ’27 starts to be a bit further away, so you’re asking us too much, I think, to be able to answer basically what AI may look like in ’27. But if you look at the next couple of years, so far, the commitment to the AI investment and, therefore, the commitment also to deliver the chips for AI has been very solid.
Coupang (NYSE: CPNG)
Coupang’s management is investing in automation (such as automated picking, packing and sorting) and machine learning to deploy inventory more precisely to improve the customer experience and reduce costs
This quarter, we saw benefits from advances in our automated picking, packing and sorting systems and machine learning utilization that deploys inventory with more precise prediction of demand. This, coupled with our focus on operational excellence, enables us to continually improve the customer experience while also lowering their cost of service.
Datadog (NASDAQ: DDOG)
Existing customer usage growth in 2025 Q1 was in line with management’s expectations; management is seeing high growth in Datadog’s AI cohort, and stable growth in the other cohorts
Overall, we saw trends for usage growth from existing customers in Q1 that were in line with our expectations. We are seeing high growth in our AI cohort as well as consistent and stable growth in the rest of the business.
Datadog’s management continues to see increasing interest in next-gen AI capabilities and analysis; 4,000 Datadog customers at the end of 2025 Q1 used 1 or more Datadog AI integrations (was 3,500 in 2024 Q4), up 100% year-on-year; companies using end-to-end data observability to manage model performance, security, and quality, has more than doubled in the past 6 months; management has observed that data observability has become a big enabler of building AI workloads; the acquisition of Metaplane helps Datadog build towards a comprehensive data observability suite; management thinks data observability will be a big opportunity for Datadog
We continue to see rising customer interest in next-gen AI capabilities and analysis. At the end of Q1, more than 4,000 customers used one or more Datadog AI integrations, and this number has doubled year-over-year. With end-to-end data observability, we are seeing continued growth in customers and usage as they seek to manage end-to-end model performance, security and quality. I’ll call out the fact that the number of companies using end-to-end data observability has more than doubled in the past 6 months…
…[Question] What the vision is about moving into data observability and how consequential an opportunity it could be for Datadog?
[Answer] The field is evolving into a big enabler or it can be a positive enabler, if you don’t do it right, for building enterprise workloads — for AI workloads, sorry. So in other words, making sure the data is being extracted from the right place, transformed the right way and is being fed into the right AI models on the other hand…
…We only had some building blocks for data observability. We built a data streams monitoring product for streaming data that comes out of queues, such as Kafka, for example. We built a job monitoring product that monitors batch jobs and large transformation jobs. We have a database monitoring product that looks at the way you optimize queries and optimize database performance and cost. And by adding data quality and data pipelines, with Metaplane, we have a full suite basically that allows our customers to manage everything from getting the data from their core data storage into all of the products and AI workloads and reports they need to go populate that data. And so we think it’s a big opportunity for us.
Datadog’s management has improved Bits AI, and is using next-gen AI to help solve customer issues quickly and move towards auto remediation
We are adding to Bits AI, with capabilities for customers to take action with workflow automation and App Builder, using next-gen AI to help our customers remediate issues more quickly and move towards auto remediation in the future.
Datadog has made 2 recent acquisitions; Eppo is a feature management and experimentation platform; management sees automated experimentation as an important part of modern application development because of the use of AI in coding; Metaplane is a data observability platform that works well for new enterprise AI workloads; management is seeing more AI-written code in both its customers and the company itself; management thinks that as AI writes more code, more value will come from being able to observe and understand the AI-written code in production environments, which is Datadog’s expertise; the acquisitions of Eppo and Metaplane are to position Datadog for the transition towards a world of AI-written code
We recently announced a couple of acquisitions.
First, we acquired Eppo, a next-generation feature management and experimentation platform. The Eppo platform helps increase the velocity of releases, while also lowering risk by helping customers to release and validate features in a controlled manner. Eppo augments our efforts in product analytics, helping customers improve the variance and tie feature performance to business outcomes. More broadly, we see automated experimentation as a key part of modern application development, with the rapid adoption of AI agents generating code, as well as more and more of the application logic itself being implemented with nondeterministic AI models.
Second, we also acquired Metaplane, the data observability platform built for modern data teams. Metaplane helps prevent, detect and resolve data availability and quality issues across the company’s data warehouses and data pipelines. We’ve seen for several years now that better freshness and quality were critical for applications and business analytics. And we believe that they are becoming key enablers of the creation of new enterprise AI workloads, which is why we intend to integrate the Metaplane capabilities into our end-to-end data observability offerings…
…There is definitely a big transition that is happening right now, like we see the rise of AI written code. We see it across our customers. We also see it inside of Datadog, where we’ve had very rapid adoption of this technology as well…
…The way we see it is that it means that there’s a lot less value in writing the code itself, like everybody can do it pretty quickly, can do a lot of it. You can have the machine to do a lot of it, and you complement it with a little bit of your own work. But the real difficulty is in validating that code, making sure that it’s safe, making sure it runs well, that it’s performing and that it does what it’s supposed to do for the business. Also making sure that when 15 different people are changing the code at the same time, all of these different changes come together and work the right way, and you understand the way these different pieces interact in the way. So the way we see it is this move out a lot of their value from writing the code to observing it and understanding it in production environments, which is what we do. So a lot of the investments we’re making right now, including some of the acquisitions we’ve announced, build towards that, and making sure that we’re in the right spot.
Datadog signed a 7-figure expansion deal with a leading generative AI company; the generative AI company needs to reduce tool fragmentation; the generative AI company is replacing commercial tools for APM (application performance monitoring) and log management with Datadog, and is expanding to 5 Datadog products
We signed a 7-figure expansion, on an annualized contract basis, with a leading next-gen AI company. This customer needs to reduce tool fragmentation to keep on top of its hyper growth in usage and employee headcount. With this expansion, the customer will use 5 Datadog products and will replace commercial tools for APM and log management.
AI-native customers accounted for 8.5% of Datadog’s ARR in 2025 Q1 (was 6% in 2024 Q4); AI-native customers contributed 6 percentage points to Datadog’s year-on-year growth in 2025 Q1, compared to 2 percentage points in 2024 Q1; management thinks AI-native customers will continue to optimise cloud and observability usage in the future; AI-native contracts that come up for renewal are healthy; Datadog has huge customer concentration with the AI-native cohort; Datadog has more than 10 AI-native customers that are spending $1 million or more with Datadog; the strong performance of the AI-native cohort in 2025 Q1 is fairly broad-based; Datadog is helping the AI-native customers mostly with inference, and not training; when Datadog sees growth among AI-native customers, that’s growth of AI adoption because the AI-native customers’ workloads are mostly customer-facing
We saw a continued rise in contribution from AI-native customers who represented about 8.5% of Q1 ARR, up from about 6% of ARR last quarter and up from about 3.5% of ARR in the year ago quarter. AI-native customers contributed about 6 points of year-over-year revenue growth in Q1 versus about 5 points last quarter and about 2 points in the year ago quarter. We continue to believe that adoption of AI will benefit Datadog in the long term, but we remain mindful that we may see volatility in our revenue growth on the backdrop of long-term volume growth from this cohort as customers renew with us on different terms and as they may choose to optimize cloud and observability usage…
…[Question] Could you talk about what you’re seeing from some of those AI-native contracts that have already come up for renewal and just how those conversations have been trending?
[Answer] All the contracts that come up for renewal are healthy. The trick with the cohort is that it’s growing fast. There’s also revenue concentration there. We now have our largest customer in that cohort, and they’re growing very fast. And on the flip side of that, we also have a larger number of large customers that are also growing. I think we mentioned more than 10 customers now that are spending $1 million or more with us in that AI-native cohort and that are also growing fast…
…On the AI side, we do have, as I mentioned, one large customer that is contributing more of the new revenue than the others. But we see growth in the rest of the cohort as well. So again, it’s fairly typical…
…For the AI natives, actually, what we help them with mostly is not training. It’s running their applications and their inference workloads, which are customer-facing. That’s because training for the AI natives tends to be largely homegrown, one-off and different between each and every one of them. We expect that as and if most other companies and enterprises do significant training, this will not be the case. This will not be one-off and homegrown. But right now, it is still the AI natives that do most of the training, and they still do it in a way that’s largely homegrown. So when we see growth in the AI-native cohort, that’s growth of AI adoption because that’s growth of customer-facing workloads by and large.
Datadog’s management sees the trend of cloud migration as being steady; management sees cloud migration being partly driven by customers’ desires to adopt AI, because migrating to the cloud is a prerequisite for AI
[Question] What are the trend lines on the cloud migration side?
[Answer] It’s consistent with what we’ve seen before. It’s also consistent with what you’ve heard from the hyperscalers over the past couple of weeks. So I would say it’s steady, unremarkable. It’s not really trending up nor trending down right now. But we see the same desire from customers to move more into the cloud and to lay the groundwork so they can also adopt AI, because digital transformation and cloud migrations are prerequisites for that.
Datadog’s management thinks there will be more products for Datadog to build as AI workloads shift towards inferencing; management is seeing its LLM Observability product getting increasing usage as customers move AI workloads into production; management wants to build more products across the stack, from closer to the GPU to AI agents
On the workloads turning more towards inference, there’s definitely more product to build there. We built an LLM Observability product that is getting increasing usage from customers as they move into production. And we think there’s more that we need to build both down the stack closer to the GPUs and up the stack closer to the agents that are being built on top of these models.
Datadog’s management is already seeing returns on Datadog’s internal investments in AI in terms of employee productivity; in the long term, there’s the possibility that Datadog may need a smaller headcount because of AI
[Question] Internally, how do you think about AI from an efficiency perspective?
[Answer] For right now, I think we’re seeing the returns in productivity, whether that be salespeople getting more information or R&D. We’re essentially trying to create an environment where we’re encouraging the various departments to use it and learning from it. Long term, there may well be efficiency gains that can be manifested in headcount.
Mastercard (NYSE: MA)
Mastercard’s management sees contactless payments and tokenised transactions as important parts of agentic AI digital commerce; Mastercard has announced Mastercard Agent Pay, which will facilitate safe, frictionless and programmable transactions across AI platforms; Mastercard is working with important AI companies such as Microsoft and OpenAI to deliver agentic payments
Today, 73% of all in-person switched transactions are contactless and approximately 35% of all our switched transactions are tokenized. These technologies will continue to play an important role as we move forward into the next phase of digital commerce, such as Agentic AI. We announced Mastercard Agent Pay to leverage our Agentic tokens as well as franchise rules, fraud and cybersecurity solutions. Combined, these will help partners like Microsoft to facilitate safe, frictionless and programmable transactions across AI platforms. We will also work with companies like OpenAI to deliver smarter, more secure and more personalized agentic payments. The launch of Agent Pay is an important step in redefining commerce in the AI era.
Mastercard closed the Recorded Future acquisition in 2024 Q4 (Recorded Future provides AI-powered solutions for real-time visibility into potential threats related to fraud); Recorded Future just unveiled the AI-powered Malware Intelligence; Malware Intelligence enables proactive threat prevention
On the cybersecurity front, Recorded Future just unveiled malware intelligence. It’s a new capability enabling proactive threat prevention for any business using real-time AI-powered intelligence insights.
Mastercard’s management sees AI as being deeply ingrained in Mastercard’s business; Mastercard’s access to an enormous amount of data is an advantage for Mastercard in deploying AI; in 2024, a third of Mastercard’s products in its value-added services and solutions segment was powered by AI
AI is deeply ingrained in our business. We have access to an enormous amount of data, and this uniquely positions us to enhance our AI’s performance, resulting in greater accuracy and reliability. And we’re deploying AI to enable many solutions in market today. In fact, in 2024, AI enabled approximately 1 in 3 of our products within value-added services and solutions.
Meta Platforms (NASDAQ: META)
Meta’s management is focused on 5 opportunities within AI, namely, improved advertising, more engaging experiences, business messaging, Meta AI and AI devices; the 5 opportunities are downstream of management’s attempt to build artificial general intelligence and leading AI models and infrastructure in an efficient manner; management thinks the ROI of Meta’s investment in AI will be good even if Meta does not succeed in all the 5 opportunities
As we continue to increase our investments and focus more of our resources on AI, I thought it would be useful today to lay out the 5 major opportunities that we are focused on. Those are improved advertising, more engaging experiences, business messaging, Meta AI and AI devices. And these are each long-term investments that are downstream from us building general intelligence and leading AI models and infrastructure. Even with our significant investments, we don’t need to succeed in all of these areas to have a good ROI. But if we do, then I think that we will be wildly happy with the investments that we are making…
…We are focused on building full general intelligence. All of the opportunities that I’ve discussed today are downstream of delivering general intelligence and doing so efficiently.
Meta’s management’s goal with the company’s advertising business is for businesses to simply tell Meta their objectives and budget, and for Meta to do all the rest with AI; management thinks that Meta can redefine advertising into an AI agent that delivers measurable business results at scale
Our goal is to make it so that any business can basically tell us what objective they’re trying to achieve like selling something or getting a new customer and how much they’re willing to pay for each result and then we just do the rest. Businesses used to have to generate their own ad creative and define what audiences they wanted to reach, but AI has already made us better at targeting and finding the audiences that will be interested in their products than many businesses are themselves, and that keeps improving. And now AI is generating better creative options for many businesses as well. I think that this is really redefining what advertising is into an AI agent that delivers measurable business results at scale.
Meta tested a new advertising recommendation model for Reels in 2025 Q1 called Generative Ads Recommendation Model, or GEM, that has improved conversion rates by 5%; 30% more advertisers are using Meta’s AI creative tools in 2025 Q1; GEM is twice as efficient at improving ad performance for a given amount of data and compute; GEM’s better efficiency helped Meta significantly scale up the amount of compute used for model training; GEM is now being rolled out to additional surfaces across Meta’s apps; the initial test of Advantage+’s streamlined campaign creation flow for sales, app and lead campaigns is encouraging and will be rolled out globally later in 2025; Advantage+ Creative is seeing strong adoption; all eligible advertisers can now automatically adjust the aspect ratio of their existing videos and generate images; management is testing a feature that uses gen AI to place clothing on virtual models; management has seen a 46% lift in incremental conversions in the testing of the incremental attribution feature and will roll out the feature to all advertisers in the coming weeks; improvements in Meta’s advertising ranking and modeling drove conversion growth that outpaced advertising impressions growth in 2025 Q1
In just the last quarter, we are testing a new ads recommendation model for Reels, which has already increased conversion rates by 5%. We’re seeing 30% more advertisers using AI creative tools in the last quarter as well…
…In Q1, we introduced our new Generative Ads Recommendation Model, or GEM, for ads ranking. This model uses a new architecture we developed that is twice as efficient at improving ad performance for a given amount of data and compute. This efficiency gain enabled us to significantly scale up the amount of compute we use for model training with GEM trained on thousands of GPUs, our largest cluster for ads training to date. We began testing the new model for ads recommendations on Facebook Reels earlier this year and have seen up to a 5% increase in ad conversions. We’re now rolling it out to additional surfaces across our apps…
…We’re seeing continued momentum with our Advantage+ suite of AI-powered solutions. We’ve been encouraged by the initial test of our streamlined campaign creation flow for sales, app and lead campaigns, which starts with Advantage+ turned on from the beginning for advertisers. In April, we rolled this out to more advertisers and expect to complete the global rollout later this year. We’re also seeing strong adoption of Advantage+ Creative. This week, we are broadening access of video expansion to Facebook Reels for all eligible advertisers, enabling them to automatically adjust the aspect ratio of their existing videos by generating new pixels in each frame to optimize their ads for full screen surfaces. We also rolled out image generation to all eligible advertisers. And this quarter, we plan to continue testing a new virtual try-on feature that uses gen AI to place clothing on virtual models, helping customers visualize how an item may look and fit…
…We continue to evolve our ads platform to drive results that are optimized for each business’ objectives and the way they measure value. One example of this is our incremental attribution feature, which enables advertisers to optimize for driving incremental conversions or conversions we believe would not have occurred without an ad being shown. We’re seeing strong results in testing so far with advertisers using incremental attribution in tests seeing an average 46% lift in incremental conversions compared to their business-as-usual approach. We expect to make this available to all advertisers in the coming weeks…
…Year-over-year conversion growth remains strong. And in fact, we continue to see conversions grow at a faster rate than ad impressions in Q1, so reflecting increased conversion rates. And ads ranking and modeling improvements are a big driver of overall performance gains.
Improvements in the past 6 months to Meta’s content recommendation systems have driven increases of 7% in time spent on Facebook, 6% on Instagram, and 35% on Threads; video consumption in Facebook and Instagram grew strongly in 2025 Q1 because of improvements to Meta’s content recommendation systems; management sees opportunities for further gains in improving the content recommendation systems in 2025; Meta is making progress on longer-term efforts to improve its content recommendation systems in two areas, (1) develop increasingly efficient recommendation systems by incorporating innovations from LLM model architectures, and (2) integrating LLMs into content recommendation systems to better identify what is interesting to a user; management’s testing of Llama in Threads’ recommendation systems has led to a 4% increase in time spent from launch; management is exploring how Llama can be deployed in recommendation systems for photo and video content, which management expects can improve Meta AI’s personalisation by better understanding users’ interests and preferences through their use of Meta’s apps; management launched a new feed in Instagram in the US in 2025 Q1 of content a user’s friends have left a note on or liked and the new feed is producing good results; management has launched the Blend experience that blends a user’s Reels algorithm in direct messages with friends; the increases of 7% in time spent on Facebook and 6% on Instagram seen in the last 6 months is on top of uplift in time spent on Facebook and Instagram that management had already produced in the first 9 months of 2024
In the last 6 months, improvements to our recommendation systems have led to a 7% increase in time spent on Facebook, 6% increase on Instagram and 35% on Threads…
…In the first quarter, we saw strong growth in video consumption across both Facebook and Instagram, particularly in the U.S., where video time spent grew double digits year-over-year. This growth continues to be driven primarily by ongoing enhancements to our recommendation systems, and we see opportunities to deliver further gains this year.
We’re also progressing on longer-term efforts to develop innovative new approaches to recommendations. A big focus of this work will be on developing increasingly efficient recommendation systems so that we can continue scaling up the complexity and compute used to train our models while avoiding diminishing returns. There are promising techniques we’re working on that will incorporate the innovations from LLM model architectures to achieve this. Another area that is showing early promise is integrating LLM technology into our content recommendation systems. For example, we’re finding that LLM’s ability to understand a piece of content more deeply than traditional recommendation systems can help better identify what is interesting to someone about a piece of content, leading to better recommendations.
We began testing using Llama in Threads recommendation systems at the end of last year given the app’s text-based content and have already seen a 4% lift in time spent from the first launch. It remains early here, but a big focus this year will be on exploring how we can deploy this for other content types, including photos and videos. We also expect this to be complementary to Meta AI as it can provide more relevant responses to people’s queries by better understanding their interests and preferences through their interactions across Facebook, Instagram and Threads…
…In Q1, we launched a new experience on Instagram in the U.S. that consists of a feed of content your friends have left a note on or liked, and we’re seeing good results. We also just launched Blend, which is an opt-in experience in direct messages that enables you to blend your Reels algorithm with your friends to spark conversations over each other’s interest…
…We shared on the Q3 2024 call that improvements to our AI-driven feed and video recommendations drove a roughly 8% lift in time spent on Facebook and a 6% lift on Instagram over the first 9 months of last year. Since then, we’ve been able to deliver similar gains in just 6 months’ time with improvements to our AI recommendations delivering 7% and 6% time spent gains on Facebook and Instagram, respectively.
AI is enabling the creation of better content on Meta’s apps; the better content includes AI generating content directly for users and AI helping users produce better content; management thinks that the content created on Meta’s apps will be increasingly interactive over time; management recently launched the stand-alone Edits app that contains an ultra-high resolution, short-form video camera, and generative AI tools to remove backgrounds of video or animate still images; more features on Edits are coming soon
AI is also enabling the creation of better content as well. Some of this will be helping people produce better content to share themselves. Some of this will be AI generating content directly for people that is personalized for them. Some of this will be in existing formats like photos and videos, and some of it will be increasingly interactive…
…Our feeds started mostly with text and then became mostly photos when we all got mobile phones with cameras and then became mostly video when mobile networks became fast enough to handle that well. We are now in the video era, but I don’t think that this is the end of the line. In the near future, I think that we’re going to have content in our feeds that you can interact with and that it will interact back with you rather than you just watching it…
…Last week, we launched our stand-alone Edits app, which supports the full creative process for video creators from inspiration and creation to performance insights. Edits has an ultra-high resolution, short-form video camera and includes generative AI tools that enable people to remove the background of any video or animate still images with more features coming soon.
Countries like Thailand and Vietnam with low-cost labour actually conduct a lot of business through Meta’s messaging apps, but management thinks this phenomenon is absent in developed economies because of the high cost of labour; management thinks that AI will allow businesses in developed economies to conduct business through Meta’s messaging apps; management thinks that every business in the future will have AI business agents that are easy to set up and can perform customer support and sales; Meta is currently testing AI business agents with small businesses in the USA and a few other countries across Meta’s apps; management has launched a new agent management experience to make it easier for businesses to train their AI; management’s vision is for each business to have one agent that interacts with a consumer regardless of where he/she is engaging with the business AI; feedback from the tests is that the AI business agents are saving businesses a lot of time and helping them determine which conversations to spend more time on
In countries like Thailand and Vietnam, where there is a low cost of labor, we see many businesses conduct commerce through our messaging apps. There’s actually so much business through messaging that those countries are both in our top 10 or 11 by revenue even though they’re ranked in the 30s in global GDP. This phenomenon hasn’t yet spread to developed countries because the cost of labor is too high to make this a profitable model before AI, but AI should solve this. So in the next few years, I expect that just like every business today has an e-mail address, social media account and website, they’ll also have an AI business agent that can do customer support and sales. And they should be able to set that up very easily given all the context that they’ve already put into our business platforms…
…We are currently testing business AIs with a limited set of businesses in the U.S. and a few additional countries on WhatsApp, Messenger and on ads on Facebook and Instagram. We’ve been starting with small businesses and focusing first on helping them sell their goods and services with business AIs…
…We’ve launched a new agent management experience and dashboard that makes it easier for businesses to train their AI based on existing information on their website or WhatsApp profile or their Instagram and Facebook pages. And we’re starting with the ability for businesses to activate AI in their chats with customers. We are also testing business AIs on Facebook and Instagram ads that you can ask about product and return policies or assist you in making a purchase within our in-app browser…
…No matter where you engage with the business AI, it should be one agent that recalls your history and your preferences. And we’re hearing encouraging feedback, particularly that adopting these AIs are saving the business that we’re testing with a lot of time and helping to determine which conversations make sense for them to spend more time on.
Meta AI now has nearly 1 billion monthly actives; management’s focus for Meta AI in 2025 is to establish Meta AI as the leading personal AI for personalization, voice conversations, and entertainment; management thinks people will eventually have an AI to talk to throughout the day on smart-glasses and this AI will be one of the most important and valuable services that has ever been created; management recently released the first Meta AI stand-alone app; the Meta AI stand-alone app is personalised to the user’s behaviour on other Meta apps, and it also has a social feed for discovery on how others are using Meta AI; initial feedback on the Meta AI stand-alone app is good; management expects to focus on scaling and deepening engagement on Meta AI for at least the next year before attempting to monetise; management saw engagement on Meta AI improve when testing Meta AI’s ability to personalize responses by remembering people’s prior queries and their usage of Meta’s apps; management has built personalisation into Meta AI across all of Meta’s apps; the top use cases for Meta AI currently include information gathering, writing assistance, interacting with visual content, and seeking help; WhatsApp has the strongest usage of Meta AI, followed by Facebook; a standalone Meta AI app is important for Meta AI to become the leading personal AI assistant because WhatsApp is currently not the primary messaging app used in the USA; management thinks that people are going to use different AI agents for different things; management thinks having memory of a user will be a differentiator for AI agents
Across our apps, there are now almost 1 billion monthly actives using Meta AI. Our focus for this year is deepening the experience and making Meta AI the leading personal AI with an emphasis on personalization, voice conversations and entertainment. I think that we’re all going to have an AI that we talk to throughout the day, while we’re browsing content on our phones, and eventually, as we’re going through our days with glasses. And I think that this is going to be one of the most important and valuable services that has ever been created.
In addition to building Meta AI into our apps, we just released our first Meta AI stand-alone app. It is personalized. So you can talk to it about interests that you’ve shown while browsing Reels or different content across our apps. And we built a social feed into it. So you can discover entertaining ways that others are using Meta AI. And initial feedback on the app has been good so far.
Over time, I expect the business opportunity for Meta AI to follow our normal product development playbook. First, we build and scale the product. And then once it is at scale, then we focus on revenue. In this case, I think that there will be a large opportunity to show product recommendations or ads as well as a premium service for people who want to unlock more compute for additional functionality or intelligence. But I expect that we’re going to be largely focused on scaling and deepening engagement for at least the next year before we’ll really be ready to start building out the business here…
…Earlier this year, we began testing the ability for Meta AI to better personalize its responses by remembering certain details from people’s prior queries and considering what that person engages with on our apps. We are already seeing this lead to deeper engagement with people we’ve rolled it out to, and it is now built into Meta AI across Facebook, Instagram, Messenger and our new stand-alone Meta AI app in the U.S. and Canada…
…The top use case right now for Meta AI from a query perspective is really around information gathering as people are using it to search for and understand and analyze information followed by social interactions from — ranging from casual chatting to more in-depth discussion or debate. We also see people use it for writing assistance, interacting with visual content, seeking help…
…WhatsApp continues to see the strongest Meta AI usage across our Family of Apps. Most of that WhatsApp engagement is in one-on-one threads, followed by Facebook, which is the second largest driver of Meta AI engagement, where we’re seeing strong engagement from our feed deep dives integration that lets people ask Meta AI questions about the content that’s recommended to them…
…I also think that the stand-alone app is going to be particularly important in the United States because WhatsApp, as Susan said, is the largest surface that people use Meta AI on, which makes sense. If you want to text an AI, having that be closely integrated and a good experience in the messaging app that you use makes a lot of sense. But while we have more than 100 million people using WhatsApp in the United States, we’re clearly not the primary messaging app in the United States at this point. iMessage is. We hope to become the leader over time. But we’re in a different position there than we are in most of the rest of the world on WhatsApp. So I think that the Meta AI app as a stand-alone is going to be particularly important in the United States to establishing leadership as the main personal AI that people use…
…I think that there are going to be a number of different agents that people use, just like people use different apps for different things. I’m not sure that people are going to use multiple agents for the same exact things, but I’d imagine that something that is more focused on kind of enterprise productivity might be different from something that is somewhat more optimized for personal productivity. And that might be somewhat different from something that is optimized for entertainment and social connectivity. So I think there will be different experiences…
…Once an AI starts getting to know you and what you care about in context and can build up memory from the conversations that you’ve had with it over time, I think that will start to become somewhat more of a differentiator.
Meta’s management continues to think of glasses as the ideal form factor for an AI device; management thinks that the 1 billion people in the world today who wear glasses will likely all be wearing smart glasses in the next 5-10 years; management thinks that building the devices people use for Meta’s apps lets the company deliver the best AI and social experiences; sales of the Ray-Ban Meta AI glasses have tripled in the last year and usage of the glasses is high; Meta has new launches of smart glasses lined up for later this year; monthly actives of Ray-Ban Meta AI glasses is up 4x from a year ago, with the number of people using voice commands growing even faster; management has rolled out live translations on Ray-Ban Meta AI glasses to all markets for English, French, Italian and Spanish; management continues to want to scale the Ray-Ban Meta AI glasses to 10 million units or more for its 3rd generation; management intends to run the same monetisation playbook with the Ray-Ban Meta AI glasses as Meta’s other products
Glasses are the ideal form factor for both AI and the metaverse. They enable you to let an AI see what you see, hear what you hear and talk to you throughout the day. And they let you blend the physical and digital worlds together with holograms. More than 1 billion people worldwide wear glasses today, and it seems highly likely that these will become AI glasses over the next 5 to 10 years. Building the devices that people use to experience our services lets us deliver the highest-quality AI and social experiences…
…Ray-Ban Meta AI glasses have tripled in sales in the last year. The people who have them are using them a lot. We’ve got some exciting new launches with our partner, EssilorLuxottica, later this year as well that should expand that category and add some new technological capabilities to the glasses…
…We’re seeing very strong traction with Ray-Ban Meta AI glasses with over 4x as many monthly actives as a year ago. And the number of people using voice commands is growing even faster as people use it to answer questions and control their glasses. This month, we fully rolled out live translations on Ray-Ban Meta AI glasses to all markets for English, French, Italian and Spanish. Now when you are speaking to someone in one of these languages, you’ll hear what they say in your preferred language through the glasses in real time…
…If you look at some of the leading consumer electronics products of other categories, by the time they get to their third generation, they’re often selling 10 million units and scaling from there. And I’m not sure if we’re going to do exactly that, but I think that that’s like the ballpark of the opportunity that we have…
…As a bunch of the products start to hit and start to grow even bigger than the number that I just said is just sort of like the sort of a near-term milestone, then I think we’ll continue scaling in terms of distribution. And then at some point, just like the other products that we build out, we will feel like we’re at a sufficient scale that we’re going to primarily focus on making sure that we’re monetizing and building an efficient business around it.
Meta released the first few Llama 4 models in April 2025 and more Llama 4 models are on the way, including the massive Llama 4 Behemoth model; management thinks leading-edge AI models are critical for Meta’s business, so they want the company to control its own destiny; by developing its own models, Meta is also able to optimise the model to its infrastructure and use-cases; an example of the optimisation is the Llama 4 17-billion model that comes with low latency to suit voice interactions; another example of the optimisation is the models’ industry-leading context window length which helps Meta AI’s personalisation efforts; Llama 4 Behemoth is important for Meta because all the models the company is using internally, and some of the models the company will develop in the future, are distilled from Behemoth
We released the first Llama 4 models earlier this month. They are some of the most intelligent, best multimodal, lowest latency and most efficient models that anyone has built. We have more models on the way, including the massive Llama 4 Behemoth model…
…On the LLM, yes, there’s a lot of progress being made in a lot of different dimensions. And the reason why we want to build this out is — one is that we think it’s important that for kind of how critical this is for our business that we sort of have control of our own destiny and are not depending on another company for something so critical. But two, we want to make sure that we can shape the development to be optimized for our infrastructure and the use cases that we want.
So to that end, Llama 4, the shape of the model with 17 billion parameters per expert was designed specifically for the infrastructure that we have in order to provide the low latency experience to be voice optimized. One of the key things, if you’re having a voice conversation with AI, is it needs to be low latency. So that way, when you’re having a conversation with it, there isn’t a large gap between when you stop speaking and it starts. So everything from the shape of the model to the research that we’re doing to techniques that go into it are kind of fit into that.
Similarly, another thing that we focused on was context window length. And in some of our models, we have really — we’re industry-leading on context window length. And part of the reason why we think that that’s important is because we’re very focused on providing a personalized experience. And there are different ways that you can put personalization context into an LLM, but one of the ways to do it is to include some of that context in the context window. And having a long context window that can incorporate a lot of the background that the person has shared across our apps is one way to do that…
…I think it’s also very important to deliver big models like Behemoth, not because we’re going to end up serving them in production, but because of the technique of distilling from larger models, right? The Llama 4 models that we’ve published so far and the ones that we’re using internally and some of the ones that we’ll build in the future are basically distilled from the Behemoth model in order to get the 90%, 95% of the intelligence of the large model in a form factor that is much lower latency and much more efficient.
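For readers unfamiliar with the distillation technique Zuckerberg references, the standard approach from the research literature is to train the smaller student model to match the temperature-softened output distribution of the large teacher, rather than just hard labels. A minimal sketch in plain Python (illustrative only; Meta's actual training recipe for distilling from Behemoth is not public):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution, softened by temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    Minimizing this pushes the student toward the teacher's full output
    distribution, not just its top-1 answer -- which is how a much smaller,
    lower-latency model can recover most of the teacher's quality."""
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)   # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that mimics the teacher's whole distribution incurs a smaller
# loss than one that only matches the teacher's top answer.
teacher = [4.0, 1.0, 0.5]
close_student = [3.8, 1.1, 0.4]
argmax_only_student = [9.0, -3.0, -3.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, argmax_only_student)
```

In a real training loop this loss would be computed over every token position and combined with the usual next-token objective; the sketch only shows the core idea behind getting "90%, 95% of the intelligence" into a smaller form factor.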
Meta’s management is accelerating the buildout of Meta’s AI capacity, leading to higher planned investment for 2025; Meta’s capex growth in 2025 is for both generative AI and core business needs with the majority of overall capex supporting Meta’s core business; management continues to build infrastructure in a flexible way where the company can react to how the AI ecosystem develops in the coming years; management is increasing the efficiency of Meta’s workloads and this has helped the company to achieve strong returns from its core AI initiatives
We are accelerating some of our efforts to bring capacity online more quickly this year as well as some longer-term projects that will give us the flexibility to add capacity in the coming years as well. And that has increased our planned investment for this year…
…Our primary focus remains investing capital back into the business with infrastructure and talent being our top priorities…
…Our CapEx growth this year is going toward both generative AI and core business needs with the majority of overall CapEx supporting the core. We expect the significant infrastructure footprint we are building will not only help us meet the demands of our business in the near term but also provide us an advantage in the quality and scale of AI services we can deliver. We continue to build this capacity in a way that grants us maximum flexibility in how and when we deploy it to ensure we have the agility to react to how the technology and industry develop in the coming years…
…The second way we’re meeting our compute needs is by increasing the efficiency of our workloads. In fact, many of the innovations coming out of our ranking work are focused on increasing the efficiency of our systems. This emphasis on efficiency is helping us deliver consistently strong returns from our core AI initiatives.
Meta’s management sees a number of long-term tailwinds that AI can provide for Meta’s business, including making advertising a larger share of global GDP, and freeing up more time for people to engage in entertainment
Over the coming years, I think that the increased productivity from AI will make advertising a meaningfully larger share of global GDP than it is today…
…Over the long term, as AI unlocks more productivity in the economy, I also expect that people will spend more of their time on entertainment and culture, which will create an even larger opportunity to create more engaging experiences across all of these apps.
Meta’s management still expects to develop an AI coding agent sometime in 2025 that can operate as a mid-level engineer; management expects this AI coding agent to do a substantial part of Meta’s AI research and development in 2026 H2; management is focused on building AI that can run experiments to improve Meta’s recommendation systems
I’d say it’s basically still on track for something around a mid-level engineer kind of starting to become possible sometime this year, scaling into next year. So I’d expect that by the middle to end of next year, AI coding agents are going to be doing a substantial part of AI research and development. So we’re focused on that. Internally, we’re also very focused on building AI agents or systems that can help run different experiments to increase recommendations across our other AI products like the ones that do recommendations across our feeds and things like that.
Microsoft (NASDAQ: MSFT)
Microsoft’s management is seeing accelerating demand across industries for cloud migrations; there are 4 things happening to drive cloud migrations, (1) classic migration, (2) data growth, (3) growth in cloud-native companies’ consumption, and (4) growth in AI consumption, which also requires non-AI consumption
When it comes to cloud migrations, we saw accelerating demand with customers in every industry, from Abercrombie & Fitch to Coca-Cola and ServiceNow expanding their footprints on Azure…
…[Question] On your comment about accelerating demand for cloud migrations. I’m curious if you could dig in and extrapolate a little more what you’re seeing there.
[Answer] One is, I’ll just say, the classic migration of whether it’s SQL, Windows Server. And so that sort of again got good steady-state progress because the reality is, I think everyone is now, perhaps there’s another sort of kick in the data center migrations just because of the efficiency the cloud provides. So that’s sort of one part.
The second piece is good data growth. You saw some — like Postgres on Azure — I mean, forgetting even SQL server, Postgres on Azure is growing. Cosmos is growing. The analytics stuff I talked about with Fabric. It’s even the others, whether it is Databricks or even Snowflake on Azure are growing. So we feel very good about Fabric growth and our data growth.
Then the cloud-native growth. So this is again before we even get to AI, some of the core compute consumption of cloud-native players is also pretty healthy. It was healthy throughout the quarter. We project it to grow moving forward as well.
Then the thing to notice is the ratio, and I think we mentioned this multiple times before, if you look underneath even ChatGPT, in fact, that team does a fantastic job of thinking about not only their growth in terms of AI accelerators they need, they use Cosmos DB, they use Postgres. They use core compute and storage. And so there’s even a ratio between any AI workload in terms of AI accelerator to others.
So those are the 4 pockets, I’d say, or 4 different trend lines, which all have a relationship with each other.
Foundry is now used by developers in over 70,000 companies, from enterprises to startups, to design, customize and manage their AI apps and agents; Foundry processed more than 100 trillion tokens in 2025 Q1, up 5x from a year ago; Foundry now has industry-leading model fine tuning tools; the latest models from AI heavyweights including OpenAI and Meta are available on Foundry; Microsoft’s Phi family of SLMs (small language model) now has over 38 million downloads (20 million downloads in 2024 Q4); Foundry will soon introduce an LLM (large language model) with 1 billion parameters that can run on just CPUs
Foundry is the agent and AI app factory. It’s now used by developers at over 70,000 enterprises and digital natives from Atomicwork to Epic, Fujitsu and Gainsight to H&R Block and LG Electronics to design, customize and manage their AI apps and agents. We processed over 100 trillion tokens this quarter, up 5x year-over-year, including a record 50 trillion tokens last month alone. And 4 months in, over 10,000 organizations have used our new agent service to build, deploy and scale their agents.
This quarter, we also made a new suite of fine-tuning tools available to customers with industry-leading reliability, and we brought the latest models from OpenAI along with new models from Cohere, DeepSeek, Meta, Mistral, Stability to Foundry. And we’ve expanded our Phi family of SLMs with new multimodal and mini models. All-up, Phi has been downloaded 38 million times. And our research teams are taking it one step further with BitNet b1.58, a billion parameter, large language model that can run on just CPUs coming to the Foundry.
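The BitNet b1.58 model mentioned above is worth unpacking: it constrains every weight to one of three values {-1, 0, +1} (about log2(3) ≈ 1.58 bits of information per weight, hence the name), which turns the dominant matrix multiplications into additions and subtractions and is what makes CPU-only inference plausible. A toy sketch of the absmean ternary quantisation scheme described in the BitNet b1.58 research (simplified here; the real model applies this per weight matrix during training):

```python
def ternarize(weights):
    """Absmean ternary quantization: scale by the mean absolute weight,
    then round each weight to -1, 0, or +1, keeping one float scale."""
    gamma = sum(abs(w) for w in weights) / len(weights) or 1.0
    q = [max(-1, min(1, round(w / gamma))) for w in weights]
    return q, gamma

def matvec_ternary(rows, x, scales):
    """With ternary weights, a matrix-vector product needs no per-weight
    multiplications: each output is a sum of added and subtracted inputs,
    rescaled once -- cheap enough to run on ordinary CPUs."""
    out = []
    for row, gamma in zip(rows, scales):
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi        # add instead of multiply
            elif w == -1:
                acc -= xi        # subtract instead of multiply
        out.append(acc * gamma)  # one float multiply per output element
    return out

row = [0.9, -0.05, -1.1, 0.4]
q, gamma = ternarize(row)
assert q == [1, 0, -1, 1]   # small weights snap to 0, large ones to +/-1
```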
With agent mode in VS Code, GitHub Copilot can now iterate on code, recognize errors, and fix them automatically; there are other GitHub agent modes that provide coding support to developers; Microsoft is previewing a first-of-its-kind SWE (software engineering) agent that can execute developer tasks; GitHub Copilot now has 15 million users, up 4x from a year ago; GitHub Copilot is used by a wide range of companies; VS Code has more than 50 million monthly active users
We’re evolving GitHub Copilot from paired to peer programmer with agent mode in VS Code, Copilot can now iterate on code, recognize errors and fix them automatically. This adds to other Copilot agents like Autofix, which helps developers remediate vulnerabilities as well as code review agent, which has already reviewed over 8 million pull requests. And we are previewing a first-of-its-kind SWE-agent capable of asynchronously executing developer tasks. All-up, we now have over 15 million GitHub Copilot users, up over 4x year-over-year. And both digital natives like Twilio and enterprises like Cisco, HPE, Skyscanner and Target continue to choose GitHub Copilot to equip their developers with AI throughout the entire dev life cycle. With Visual Studio and VS Code, we have the world’s most popular editor with over 50 million monthly active users.
Microsoft 365 Copilot is now used by hundreds of thousands of customers, up 3x from a year ago; deal sizes for Microsoft 365 Copilot continue to grow; a record number of customers in 2025 Q1 returned to buy more seats for Microsoft 365 Copilot; new researcher and analyst deep reasoning agents can analyze vast amounts of web and enterprise data on-demand directly within Microsoft 365 Copilot; Microsoft is introducing agents for every role and business process; customers can build their own AI agents with no/low code with Copilot Studio and these agents can handle complex tasks, including taking action across desktop and web apps; 230,000 organisations, including 90% of the Fortune 500, have already used Copilot Studio; customers created more than 1 million custom agents across SharePoint and Copilot Studio, up 130% sequentially
Microsoft 365 Copilot is built to facilitate human-agent collaboration; hundreds of thousands of customers across geographies and industries now use Copilot, up 3x year-over-year. Our overall deal size continues to grow. In this quarter, we saw a record number of customers returning to buy more seats. And we’re going further. Just last week, we announced a major update, bringing together agents, notebooks, search and create into a new scaffolding for work. Our new researcher and analyst deep reasoning agents analyze vast amounts of web and enterprise data to deliver highly skilled expertise on demand directly within Copilot…
…We are introducing agents for every role and business process. Our sales agent turns contacts into qualified leads and with sales chat reps can quickly get up to speed on new accounts. And our customer service agent is deflecting customer inquiries and helping service reps resolve issues faster.
With Copilot Studio, customers can extend Copilot and build their own agents with no code, low code. More than 230,000 organizations, including 90% of the Fortune 500 have already used Copilot Studio. With deep reasoning and agent flows in Copilot Studio, customers can build agents that perform more complex tasks and also handle deterministic scenarios like document processing and financial approvals. And they can now build Computer Use Agents that take action on the UI across desktop and web apps. And with just a click, they can turn any SharePoint site into an agent, too. This quarter alone, customers created over 1 million custom agents across SharePoint and Copilot Studio, up 130% quarter-over-quarter.
Azure grew revenue by 33% in 2025 Q1 (was 31% in 2024 Q4), with 16 points of growth from AI services (was 13 points in 2024 Q4); management brought capacity online for Azure AI services faster than expected; Azure’s non-AI business saw accelerated growth in its Enterprise customer segment as well as some improvement in its scale motions; management thinks the real outperformer within Azure in 2025 Q1 is the non-AI business; the strength in the AI business in 2025 Q1 came because Microsoft was able to match supply and demand somewhat, and also deliver supply early to some customers; management thinks it’s getting harder to separate an AI workload from a non-AI workload
In Azure and other cloud services, revenue grew 33% and 35% in constant currency, including 16 points from AI services. Focused execution drove non-AI services results, where we saw accelerated growth in our Enterprise customer segment as well as some improvement in our scale motions. And in Azure AI services, we brought capacity online faster than expected…
…The real outperformance in Azure this quarter was in our non-AI business. So then to talk about the AI business, really, what was better was precisely what we said. We talked about this. We knew in Q3 that we had matched supply and demand pretty carefully and so didn’t expect to do much better than we had guided to on the AI side. We’ve been quite consistent on that. So the only real upside we saw on the AI side of the business was that we were able to deliver supply early to a number of customers…
…[Question] You mentioned that the upside on Azure came from the non-AI services this time around. I was wondering if you could just talk a little bit more about that.
[Answer] In general, we saw better-than-expected performance across our segments, but we saw acceleration in our largest customers. We call that the Enterprise segment in general. And then in what we talked about of our scale motions, where we had some challenges in Q2, things were a little better. And we still have some work to do in our scale motions, and we’re encouraged by our progress. We’re excited to stay focused on that as, of course, we work through the final quarter of our fiscal year…
…It’s getting harder and harder to separate what an AI workload is from a non-AI workload.
Around half of Microsoft’s cloud and AI-related capex in 2025 Q1 (FY2025 Q3) are for long-lived assets that will support monetisation over the next 15 years and more, while the other half are for CPUs and GPUs; management expects Microsoft’s capex in 2025 Q2 (FY2025 Q4) to increase sequentially, but the guidance for total capex for FY2025 H2 is unchanged from previous guidance (previously, expectation was for capex for 2025 Q1 and 2025 Q2 to be at similar levels as 2024 Q4 (FY2025 Q2)); FY2026’s capex is still expected to grow at a lower rate than in FY2025; the mix of spend in FY2026 will shift to short-lived assets in FY2026; demand for Azure’s AI services is growing faster than capacity is being brought online and management expects to have some AI capacity constraints beyond June 2025 (or FY2025 Q4); management’s goal with Microsoft’s data center investments is to be positioned for the workload growth of the future; management thinks pretraining plus test-time compute is a big change in terms of model-training workloads; Microsoft is short of power in fulfilling its data center growth plans; Microsoft’s data center builds have very long lead-times; in Microsoft’s 2024 Q4 (FY 2025 Q1) earnings call, management expected Azure to no longer be capacity-constrained by the end of 2025 Q2 (FY2025 Q4) but demand was stronger than expected in 2025 Q1 (FY2025 Q3); management still thinks they can get better and better capital efficiency from the cloud and AI capex; Azure’s margin on the AI business now is far better than what the margin was when the cloud transition was at a similar stage
Roughly half of our cloud and AI-related spend was on long-lived assets that will support monetization over the next 15 years and beyond. The remaining cloud and AI spend was primarily for servers, both CPUs and GPUs, to serve customers based on demand signals, including our customer contracted backlog of $315 billion…
…We expect Q4 capital expenditures to increase on a sequential basis. H2 CapEx in total remains unchanged from our January H2 guidance. As a reminder, there can be quarterly spend variability from cloud infrastructure build-outs and the timing of delivery of finance leases…
…Our earlier comments on FY ’26 capital expenditures remain unchanged. We expect CapEx to grow. It will grow at a lower rate than FY ’25 and will include a greater mix of short-lived assets, which are more directly correlated to revenue than long-lived assets…
… In our AI services, while we continue to bring data center capacity online as planned, demand is growing a bit faster. Therefore, we now expect to have some AI capacity constraints beyond June…
…the key thing for us is to have our builds and lease be positioned for what is the workload growth of the future, right? So that’s what you have to goal-seek to. So there’s a demand part to it, there is the shape of the workload part to it, and there is a location part to it. So you don’t want to be upside down on having one big data center in one region when you have a global demand footprint. You don’t want to be upside down when the shape of demand changes because, after all, with essentially pretraining plus test-time compute, that’s a big change in terms of how you think about even what is training, right, forget inferencing…
…We will be short power. And so therefore — but it’s not a blanket statement. I need power in specific places so that we can either lease or build at the pace at which we want…
…From land to build to build-outs can be lead times of 5 to 7 years, 2 to 3 years. So we’re constantly in a balancing position as we watch demand curves…
…I did talk about in my comments, we had hoped to be in balance by the end of Q4. We did see some increased demand as you saw through the quarter. So we are going to be a little short still, say, a little tight as we exit the year…
…[Question] You’ve said in the past that you can attain better and better capital efficiency with the cloud business and probably cloud and AI business. Where do you stand today?
[Answer] The way, of course, you’ve seen that historically is right when we went through the prior cloud transitions, you see CapEx accelerate, you build out data center footprint. You slowly fill GPU capacity. And over time, you see software efficiencies and hardware efficiencies build on themselves. And you saw that process for us for goodness now quite a long time. And what Satya’s talking about is how quickly that’s happening on the AI side of the business and you add to that model diversity. So think about the same levers plus model efficiency, those compound. Now the one thing that’s a little different this time is just the pace. And so when you’re seeing that happen, pace in terms of efficiency side, but also pace in terms of the build-out. So it can mask some of the progress… Our margins on the AI side of the business are better than they were at this point by far than when we went through the same transition in the server to cloud transition…
…I think the way to think about this is you can ask the question, what’s the difference between a hosting business and a hyperscale business? It’s software. That’s, I think, the gist of it. Yes, for sure, it’s a capital-intensive business, but capital efficiency comes from that system-wide software optimization. And that’s what makes the hyperscale business attractive and that’s what we want to just keep executing super well on.
Microsoft’s management sees Azure as Microsoft’s largest business; management thinks that the next platform shift in technology, which is AI, is built on the last major platform, which was for cloud computing, so this benefits Microsoft
There’s nothing certain for sure in the future, except for one thing, which is our largest business is our infrastructure business. And the good news here is the next big platform shift builds on that. So it’s not a complete rebuild, having gone through all these platform shifts where you have to come out on the other side with a full rebuild. If there is good news here is that we have a good business in Azure that continues to grow and the new platform depends on that.
It’s possible that software optimizations with AI model development and deployment could lead to even longer useful lives for GPUs, but management wants to observe this for longer
[Question] Could we start to consider the possibility that software enhancements might extend the useful life assumption that you’re using for GPUs?
[Answer] In terms of thinking about the depreciable life of an asset, we like to have a long history before we make any of those changes. So we’re focused on getting every bit of useful life we can, of course, out of assets. But to Satya’s point, that tends to be a software question more than a hardware one.
Netflix (NASDAQ: NFLX)
Netflix’s content talent are already using AI tools to improve the content production process; management thinks AI tools can enable lower-budget projects to access top-grade VFX; Rodrigo Prieto is directing his first feature film with Netflix in 2025, Pedro Páramo, and he’s able to use AI tools for de-aging VFX at a much lower cost than The Irishman film that Prieto worked on 5 years ago; the entire budget for Pedro Páramo is similar to the cost of VFX alone for The Irishman; management’s focus with AI is to find ways for AI to improve the member and creator experience
So our talent today is using AI tools to do set references or previs, VFX sequence prep, shot planning, all kinds of things today that kind of make the process better. Traditionally, only big budget projects would have access to things like advanced visual effects such as de-aging. So today, you can use these AI-powered tools to enable smaller budget projects to have access to big VFX on screen.
A recent example, I think, is really exciting. Rodrigo Prieto was the DP on The Irishman just 5 years ago. And if you remember that movie, we were using very cutting edge, very expensive de-aging technology that still had massive limitations, still creating a bunch of complexity on set for the actors. It was a giant leap forward for sure, but nowhere near what we needed for that film. So this year, just 5 years later, Rodrigo is directing his first feature film for us, Pedro Páramo in Mexico. Using AI-powered tools he was able to deliver this de-aging VFX to the screen for a fraction of what it cost on The Irishman. In fact, the entire budget of the film was about the VFX cost on The Irishman…
…So our focus is simple, find ways for AI to improve the member and the creator experience.
Netflix’s management is building interactive search into Netflix which is based on generative AI
We’re also building out like new capabilities, an example would be interactive search. That’s based on generative technologies. We expect that will improve that aspect of discovery for members.
Paycom Software (NYSE: PAYC)
Paycom’s GONE is the industry’s first fully automated time-off solution, utilising AI, that automates all time-off requests; prior to GONE, 10% of an organisation’s labour cost was unmanaged; GONE can generate ROI of up to 800%, according to Forrester; GONE helped Paycom be named by Fast Company as one of the world’s most innovative companies
Our award-winning solution, GONE, is a perfect example of how Paycom simplifies tasks through automation and AI. GONE is the industry’s first fully automated time-off solution that decisions all time-off requests based on customizable guidelines set by the company’s time-off rules. Before GONE, 10% of an organization’s labor cost went substantially unmanaged, creating scheduling errors, increased cost from overpayments, staffing shortages and employee uncertainty over pending time-off requests. According to a Forrester study, GONE’s automation delivers an ROI of up to 800% for clients. GONE continues to receive recognition. Most recently, Fast Company magazine named Paycom one of the world’s most innovative companies for a second time. This honor specifically recognized GONE and is a testament to how Paycom is shaping our industry by setting new standards for automation across the globe.
PayPal (NASDAQ: PYPL)
PayPal’s management is leaning into agentic commerce; PayPal recently launched the payments industry’s first remote MCP (Model Context Protocol) server to enable AI agent frameworks to integrate with PayPal APIs; the introduction of the MCP allows any business to create an agentic commerce experience; all major AI players are involved with PayPal’s annual Developer Days to engage PayPal’s developer community
At Investor Day, I told you we were leaning into agentic commerce…
…Just a few weeks ago, we launched the industry’s first remote MCP server and enabled the leading AI agent frameworks to seamlessly integrate with PayPal APIs. Now any business can create agentic experiences that allow customers to pay, track shipments, manage invoices and more, all powered by PayPal and all within an AI client. As we speak, developers are gathering in our San Jose headquarters for our annual Developer Days. Every major player in AI is represented, providing demos and engaging with our developer community.
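For context on what an MCP server actually is: the Model Context Protocol is an open standard built on JSON-RPC 2.0, in which an AI agent discovers a server’s tools via a `tools/list` request and invokes them via `tools/call`. The sketch below shows the wire-level shape of such an exchange; the `create_invoice` tool name and its arguments are hypothetical, since PayPal’s actual tool schema isn’t described in the call:

```python
import json

# An AI agent invokes a tool on an MCP server with a JSON-RPC 2.0 request.
# Tool name and arguments below are invented for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_invoice",  # hypothetical tool name
        "arguments": {"recipient": "buyer@example.com",
                      "amount": "25.00", "currency": "USD"},
    },
}

def handle(req):
    """Toy dispatcher standing in for a remote MCP server's tool handler."""
    if req["method"] == "tools/call" and req["params"]["name"] == "create_invoice":
        args = req["params"]["arguments"]
        result = {"invoice_id": "INV-001", "status": "DRAFT", "amount": args["amount"]}
        # MCP tool results are returned as typed content blocks.
        return {"jsonrpc": "2.0", "id": req["id"],
                "result": {"content": [{"type": "text", "text": json.dumps(result)}]}}
    return {"jsonrpc": "2.0", "id": req["id"],
            "error": {"code": -32601, "message": "Method not found"}}

response = handle(request)
assert response["result"]["content"][0]["type"] == "text"
```

The point of PayPal exposing this as a *remote* server is that any agent framework speaking MCP can call these tools over the network without a bespoke integration.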
Shopify (NASDAQ: SHOP)
Shopify’s management recently launched TariffGuide.ai, an AI-powered tool that provides duty rates based on just a product description and the country of origin, helping merchants source the right products in minutes
And just this past week, we launched TariffGuide.ai. This AI driven tool provides duty rates based on just a product description and the country of origin. Sourcing the right products from the right country can mean the difference between a 0% and a 15% duty rate or higher. And TariffGuide.ai allows merchants to do this in minutes, not days.
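To make concrete why that 0% versus 15% spread matters, here is a toy illustration of the kind of calculation such a tool performs once a product has been classified. The rates and categories below are invented, and TariffGuide.ai’s actual pipeline (mapping a free-text description to a tariff classification, then to a rate) is not public:

```python
# Illustrative only: invented ad valorem duty rates keyed by
# (product category, country of origin).
DUTY_RATES = {
    ("cotton t-shirt", "CountryA"): 0.00,
    ("cotton t-shirt", "CountryB"): 0.15,
}

def landed_duty(category, origin, declared_value):
    """Duty owed on a shipment: declared value times the origin-specific rate."""
    rate = DUTY_RATES.get((category, origin))
    if rate is None:
        raise KeyError(f"no rate on file for {category!r} from {origin!r}")
    return declared_value * rate

# Same $10,000 shipment, two origins: $0 versus $1,500 in duty.
assert landed_duty("cotton t-shirt", "CountryA", 10_000) == 0.0
assert landed_duty("cotton t-shirt", "CountryB", 10_000) == 1500.0
```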
Shopify CEO Tobi Lutke penned a memo recently on his vision of how Shopify should be working with AI; AI is becoming second nature to how Shopify’s employees work, where employees use AI reflexively; before any team requests additional headcount, they need to first assess if AI can meet their goals; Shopify has built a dozen MCP (model context protocol) servers in the last few weeks to enable anyone in Shopify to ask questions and find resources more easily; management sees AI as a cornerstone of how Shopify delivers value; management is investing more in AI, but the increased investment is not a driver for the lower gross margin in Shopify’s Subscription Solutions segment in 2025 Q1; management does not expect the Subscription Solutions segment’s gross margin to change much in the near term; Shopify has shown strong operating leverage partly because of its growing internal use of AI
AI is at the core of how we operate and is transforming our work processes. For those who have not seen it, I encourage you to check out Tobi’s recent company-wide email on AI that has now been shared publicly. At Shopify, we take AI seriously. In fact, it’s becoming second nature to how we work. By fostering a culture of reflexive AI usage, our teams default to using AI first, reflexive being the key term here. This also means that before requesting additional headcount or resources, teams are required to start with assessing how they can meet their goals using AI first. This approach is sparking some really fascinating explorations and discussions around the company, challenging the way we think, the way we operate, and pushing us to look ahead as we redefine our decision making processes. In the past couple of weeks, we built a dozen MCP servers that make Shopify’s work legible and accessible. And now anyone within Shopify can ask questions, find resources, and leverage those tools for greater efficiency. This reflexive use of AI goes well beyond internal improvements. It supercharges our team’s capabilities and drives operational efficiencies, keeping us agile. And as we continue to innovate, AI will remain a cornerstone of how we deliver value across the board…
…Gross profit for Subscription Solutions grew 19%, slightly less than the 21% revenue growth for Subscription Solutions. The lower rate was driven primarily by higher cloud and infrastructure hosting costs needed to support higher volumes and geographic expansion. Although we are investing more in AI, it is not a significant factor in this increase. Over the past 5 years, the gross margin for Subscription Solutions has centered around 80%, plus or minus a couple of hundred basis points in any given quarter, and we do not anticipate that trend changing in the near term…
…Our continued discipline on head count across all 3 of R&D, sales and marketing and G&A continued to yield strong operating leverage, all while helping us move even faster on product development aided by our increasing use of AI.
Shopify’s management rearchitected the AI engine of Sidekick, Shopify’s AI merchant assistant, in 2025 Q1; monthly average users of Sidekick has more than doubled since the start of 2025; early results of Sidekick are really strong for both large and small merchants
In Q1, key developments for Sidekick included a complete rearchitecture of the AI engine for deeper reasoning capabilities, enhancing processing of larger business datasets and accessibility in all supported languages, allowing every Shopify merchant to use Sidekick in their preferred language. And these changes, well, they’re working. In fact, our monthly average users of Sidekick continue to climb, more than doubling since the start of 2025. Now this is still really early days, but the progress we are making is already yielding some really strong results for merchants, both large and small.
Shopify acquired Vantage Discovery in 2025 Q1; Vantage Discovery works on AI-powered, multi-vector search; management thinks the acquisition will improve the overall consumer search experience delivered by Shopify’s merchants
In March, we closed the acquisition of Vantage Discovery, which helps accelerate the development of AI-powered, multi-vector search across our search, APIs, shop and storefront search offerings. This acquisition is one piece of a broader strategy to ensure that our merchants are able to continue meeting buyers regardless of where they’re shopping or discovering great products…
…The Vantage team coming in who are rock stars in AI are going to help take our search abilities to the next level.
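To make the idea concrete, multi-vector search scores each product across several embeddings (say, one for the title and one for the style or image) rather than a single vector. Here is a minimal sketch; the catalog, vectors, and weights below are invented for illustration and are not Vantage Discovery’s actual system:

```python
from math import sqrt

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Hypothetical catalog: each product carries multiple embedding vectors.
catalog = {
    "white sneaker": {"title": [0.9, 0.1], "style": [0.8, 0.3]},
    "oak desk":      {"title": [0.1, 0.9], "style": [0.2, 0.7]},
}

def multi_vector_score(query_vec, product_vecs, weights):
    # Blend similarities across all of a product's vectors.
    return sum(weights[k] * cosine(query_vec, v) for k, v in product_vecs.items())

def search(query_vec, weights={"title": 0.6, "style": 0.4}):
    ranked = sorted(catalog.items(),
                    key=lambda kv: multi_vector_score(query_vec, kv[1], weights),
                    reverse=True)
    return [name for name, _ in ranked]
```

The appeal for storefront search is that a query can match a product on any of its facets, not just its text description.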
Shopify’s management is seeing more and more commerce searches starting away from a search engine; Shopify is already working with AI chatbot providers on AI shopping; management thinks that AI shopping is a huge opportunity; management thinks AI agents will be a great opportunity for Shopify too
One of the things we think about is that wherever commerce is taking place, Shopify will be there. And obviously, one of the things we are seeing is that more and more searches are starting on places beyond just somebody’s search engine. That’s a huge opportunity whereby more consumers are going to be searching for great products…
…We’ve talked about some of the partnerships in the past. You’ve seen what we’ve done with Perplexity and OpenAI. We will continue doing that. We’re not going to front run our product road map when it comes to anything, frankly. But we do think though that AI shopping, in particular, is a huge opportunity…
…[Question] How does Shopify view the emergence of AI agents in terms of do you guys see this as an opportunity or more of a threat because, on one hand, they could facilitate direct checkout with their own platforms. On the other hand, this may also unlock some new sales channel for Shopify merchants, very similar to sort of what happened with social media commerce
[Answer] We think it’s a great opportunity. Look, the more channels that exist in the world, the more complexity it is for merchants and brands, that’s where the value of Shopify really shines. So if there’s a new surface area, whether it’s through AI agents or through just simply LLMs and AI wrappers, that consumer goes to, to look for a new pair of sneakers or a new cosmetic or a piece of furniture, they want to have access to the most interesting products for the most important brands, and those are all on Shopify. So for us, we think that all of these new areas where commerce is happening is a great thing. It allows Shopify to increase its value.
Taiwan Semiconductor Manufacturing Company (NYSE: TSM)
TSMC’s management continues to expect AI accelerator revenue to double in 2025; management has factored the US ban on AI chip sales to China into TSMC’s 2025 outlook; AI-related demand outside of China appears to have become even stronger over the last 3 months
We reaffirm that our revenue from AI accelerators will double in 2025. The AI accelerators we define as AI GPU, AI ASIC and HBM controllers for AI training and inference in the data center. Based on our customers’ strong demand, we are also working hard to double our CoWoS capacity in 2025 to support their needs…
…[Question] The geopolitical risk, micro concerns is one of the major uncertainty nowadays. Last 2 days, we have like H20 being banned in China, blah, blah, blah. So how does that impact to TSMC’s focus and production planning, right? Do we have enough other customers and demand to keep our advanced node capacity fully utilized? Or how does that change our long-term production planning moving forward?
[Answer] Of course, we do not comment on specific customers or product, but let me assure you that we have taken this into consideration when providing our full year’s growth outlook. Did I answer the question?…
…[Question] AI is still expected to double this year despite the U.S. ban on AI GPUs into China. And I guess, China was a meaningful portion of accelerated shipments well over 10% of volumes. So factoring this in, it would imply your AI outlook this year, still doubling would mean that the AI orders have improved meaningfully outside of China in the last sort of 3 months. Is that how we should interpret your comment about you still expect the business to double?
[Answer] 3 months ago, we are — we just cannot supply enough wafer to our customer. And now it’s a little bit balanced, but still, the demand is very strong. And you are right, other than China, the demand is still very strong, especially in U.S.
TSMC’s management has a disciplined approach when building capacity and management recognises how important the discipline is given the high forecasted demand for AI-related chips
At TSMC, higher level of capital expenditures is always correlated with higher growth opportunities in the following years. We reiterate our 2025 capital budget is expected to be between USD 38 billion and USD 42 billion as we continue to invest to support customers’ growth. About 70% of the capital budget will be allocated for advanced process technologies. About 10% to 20% will be spent for specialty technologies and about 10% to 20% will be spent for advanced packaging, testing, mass-making and others. Our 2025 CapEx also includes a small amount related to our recently announced additional $100 billion investment plan to expand our capacity in Arizona…
…To address the structural increase in the long-term market demand profile, TSMC employed a disciplined and robust capacity planning system. This is especially important when we have such high forecasted demand from AI-related business. Externally, we work closely with our customers and our customers’ customers to plan our capacity. Internally, our planning system involves multiple teams across several functions to assess and evaluate the market demand from both a top-down and bottom-up approach to determine the appropriate capacity build.
TSMC’s management expects the Foundry 2.0 industry to grow 10% year-on-year in 2025, driven by AI-related demand and mild recovery in other end markets; management expects TSMC to outperform the Foundry 2.0 industry in 2025
Looking at the full year of 2025, we expect Foundry 2.0 industry growth to be supported by robust AI-related demand and a mild recovery in other end market segments. In January, we had forecasted the Foundry 2.0 industry to grow 10% year-over-year in 2025, which is consistent with IDC’s forecast of 11% year-over-year growth for Foundry 2.0…
…We are confident TSMC can continue to outperform the Foundry 2.0 industry growth in 2025.
TSMC’s management thinks impact from recent AI models, including DeepSeek, will lower the barrier to future long-term AI development; TSMC’s management continues to expect mid-40% revenue CAGR from AI accelerators in the 5-years starting from 2024
Recent developments are also positive to AI’s long-term demand outlook. In our assessment, the impact from recent AI models, including DeepSeek, will drive greater efficiency and help lower the barrier to future AI development. This will lead to wider usage and greater adoption of AI models, which all require use of leading-edge silicon. These developments only serve to strengthen our conviction in the long-term growth opportunities from the industry megatrend of 5G, AI and HPC…
…Based on our planning framework, we are confident that our revenue growth from AI accelerators will approach a mid-40s percentage CAGR for the next 5 years period starting from 2024.
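A quick back-of-the-envelope check of what that guidance implies, assuming the midpoint of the mid-40s range (45% is my assumption, not TSMC’s stated figure):

```python
def cagr_multiple(cagr, years):
    # Cumulative revenue multiple implied by a constant compound annual growth rate.
    return (1 + cagr) ** years

# Mid-40s% CAGR (assume 45%) over the 5 years starting from 2024
# implies AI accelerator revenue of roughly 6.4x the 2024 base by 2029.
multiple = cagr_multiple(0.45, 5)
```

In other words, the guidance, if met, would see AI accelerator revenue grow to more than six times its 2024 level within five years.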
TSMC’s 2nd fab in Arizona will utilise N3 process technology and its construction is already complete; management wants to speed up the volume production schedule to meet AI-related demand
Our first fab in Arizona has already successfully entered high-volume production in 4Q ’24, utilizing N4 process technology with a yield comparable to our fab in Taiwan. The construction of our second fab, which will utilize the 3-nanometer process technology, is already complete and we are working on speeding up the volume production schedule based on the strong AI-related demand from our customers. Our third and fourth fab will utilize N2 and A16 process technologies and with the expectation of receiving all the necessary permits are scheduled to begin construction later this year. Our fifth and sixth fab will use even more advanced technologies. The construction and ramp schedule for this fab will be based on our customers’ demand.
TSMC’s management believes its A16 technology has a best-in-class backside power delivery solution that is also the first in the industry; A16 is best suited for specific HPC (high-performance computing) products, which means it is best suited for AI-related workloads; A16 is scheduled for volume production in 2026 H2
We also introduced A16, featuring Super Power Rail, or SPR, as a separate offering. Compared with the N2P, A16 provides a further 8% to 10% speed improvement at the same power or 15% to 20% power improvement at the same speed and additional 7% to 10% chip density gain. A16 is best suited for specific HPC products with complex signal route and dense power delivery network. Volume production is scheduled for second half 2026.
Tesla (NASDAQ: TSLA)
Tesla’s management continues to expect fully autonomous Tesla rides in Austin, Texas in June 2025; management will sell full autonomy software for Model Y in Austin; management now demarcates CyberCab as a separate product, and all of the other models (S, 3, X, Y) that are compatible with autonomous software as being robotaxis; management reiterates that once Tesla can solve for autonomy in 1 city, it can very quickly scale because Tesla’s autonomous solution is a general solution, not a city-specific solution; Tesla’s autonomous solution involves AI and a specific Tesla-designed AI chip, as opposed to expensive sensors and high-precision maps; the fully autonomous Teslas in June 2025 in Austin will be Model Ys; management expects full autonomy in Tesla’s fleet to ramp up very quickly; management is confident that Tesla will have large-scale autonomy by 2026 H2, meaning, millions of fully autonomous Tesla vehicles by 2026 H2; even with the introduction of full autonomy, management thinks there will be some localised parameters – effectively a mixture of experts model – set for safety; management thinks Tesla’s autonomous solution can scale well because when the FSD (Full Self Driving) software was deployed in China, it used very minimal China-specific data and yet could work well in China; validation of Tesla’s autonomous solution will be important in determining its rate of acceptance; there are now convoys of Teslas in Austin running autonomously in testing in order to compress Tesla’s AI’s learning curve; a consumer in China used FSD on a narrow mountain dirt road; management expects FSD unsupervised to be available for personal use by end of 2025; Musk thinks the first Model Y to drive itself from factory to customer will happen later in 2025; newly-manufactured Model Ys are already driving themselves around in Tesla factories
We expect to have — be selling fully autonomous rides in June in Austin as we’ve been saying for now several months. So that’s continued…
…Unsupervised autonomy will first be sold for the Model Y in Austin, and then actually, should parse out the term for robotic taxi or robotaxi and just generally like what’s the Cybercab because we’ve got a product called the Cybercab. And then any Tesla, which could be an S, 3, X or Y that is autonomous is a robotic taxi or a robotaxi. It’s very confusing. So the vast majority of the Tesla fleet that we’ve made is capable of being a robotaxi or a robotic taxi…
…Once we can make the system work where you can have paid rides, fully autonomously with no one in the car in 1 city, that is a very scalable thing for us to go broadly within whatever jurisdiction allows us to operate. So because what we’re solving for is a general solution to autonomy, not a city-specific solution for autonomy, once we make it work in a few cities, we can basically make it work in all cities in that legal jurisdiction. So if it’s — once we can make the pace to work in a few cities in America, we can make it work anywhere in America. Once we can make it work in a few cities in China, we can make it work anywhere in China, likewise in Europe, limited only by regulatory approvals. So this is the advantage of having a generalized solution using artificial intelligence and an AI chip that Tesla designed specifically for this purpose, as opposed to very expensive sensors and high-precision maps on a particular neighborhood where that neighborhood may change or often changes and then the car stops working. So we have a general solution instead of a specific solution…
…The Teslas that will be fully autonomous in June in Austin are fully Model Ys. So that is — it’s currently on track to be able to do paid rides fully autonomously in Austin in June, and then to be in many other cities in the U.S. by the end of this year.
It’s difficult to predict the exact ramp sort of week by week and month by month, except that it will ramp up very quickly. So it’s going to be like some — basically an S-curve where it’s very difficult to predict the intermediate slope of the S-curve, but you kind of know where the S-curve is going to end up, which is the vast majority of the Tesla fleet being autonomous. So that’s why I feel confident in predicting large-scale autonomy around the middle of next year, certainly the second half next year, meaning I bet that there will be millions of Teslas operating autonomously, fully autonomously in the second half of next year, yes…
…It does seem increasingly likely that there will be a localized parameter set sort of — especially for places that have, say, a very snowy weather, like I say, if you’re in the Northeast or something like this — you can think of — it’s kind of like a human. Like you can be a very good driver in California but are you going to be also a good driver in a blizzard in Manhattan? You’re not going to be as good. So there is actually some value in — you can still drive but your probability of an accident is higher. So the — it’s increasingly obvious that there’s some value to having a localized set of parameters for different regions and localities…
…You can see that from our deployment of FSD supervised in China with this very minimal data that’s China-specific, the model is generalized quite well to completely different driving styles. That just like shows that the AI-based solution that we have is the right one because if you had gone down the previous rule-based solutions, sort of like more hard-coded HD map-based solutions, it would have taken like many, many years to get China to work. You can see those in the videos that people post online themselves. So the generalized solution that we are pursuing is the right one that’s going to scale well…
…You can think of this like location-specific parameters that Elon alluded to as a mixture of experts. And if you are sort of familiar with the AI models, Grok and others, they all use this mixture of experts to sort of specialize the parameters to specific tasks while still being general…
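The mixture-of-experts idea described above can be illustrated with a toy sketch: a gate looks at the driving context and weights a set of specialised parameter sets. The experts, gate, and numbers here are all invented for illustration; they are not Tesla’s actual system:

```python
# Hypothetical "experts": each tunes a driving policy to a condition.
def expert_baseline(speed):
    return speed          # permissive default behaviour

def expert_snow(speed):
    return speed * 0.6    # far more conservative in snow

EXPERTS = {"baseline": expert_baseline, "snow": expert_snow}

def gate(context):
    # A trivial hand-written gate; a real system would learn these weights.
    if context.get("snow"):
        return {"baseline": 0.2, "snow": 0.8}
    return {"baseline": 1.0, "snow": 0.0}

def target_speed(base_speed, context):
    # Blend the experts' outputs according to the gate's weights.
    weights = gate(context)
    return sum(w * EXPERTS[name](base_speed) for name, w in weights.items())
```

The point is that the model stays general while region- or weather-specific parameters only dominate when the context calls for them.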
…What are the critical things that need to get right, one thing I would like to note is validation. Self-driving is a long-tail problem where there can be a lot of edge cases that only happen very, very rarely. Currently, we are driving around in Austin using our QA fleet, but then super [ rare ] to get interventions that are critical for robotaxi operation. And so you can go many days without getting a single intervention. So you can’t easily know whether you are improving or regressing in your capacity. And we need to build out sophisticated simulations, including neural network-based video generation…
…There’s just always a convoy of Teslas going — just going all over to Austin in circles. But yes, I just can’t emphasize this enough. In order to get a figure on the long-tail things, it’s 1 in 10,000, that says 1 in 20,000 miles or 1 in 30,000. The average person drives 10,000 miles in a year. So not trying to compress that test cycle into a matter of a few months. It means you need a lot of cars doing a lot of driving in order to compress that to do in a matter of a month what would normally take someone a year…
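The fleet-size arithmetic Musk is gesturing at is simple to sketch. Suppose a rare failure occurs roughly once in 30,000 miles and you want on the order of 100 observations within a month; the per-car daily mileage below is my assumption, not a figure Tesla disclosed:

```python
import math

def cars_needed(target_miles, days, miles_per_car_per_day):
    # Fleet size needed to accumulate target_miles within `days`.
    return math.ceil(target_miles / (days * miles_per_car_per_day))

# ~100 events at a 1-in-30,000-mile rarity needs ~3,000,000 miles;
# at an assumed 300 miles per car per day over 30 days:
fleet = cars_needed(3_000_000, 30, 300)   # a few hundred cars
```

A single driver logging 10,000 miles a year would take centuries to see the same evidence, which is why a convoy of test vehicles compresses the learning curve so dramatically.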
…I saw one guy take a Tesla on — autonomously on a narrow dirt road across like a mountain. And I’m like, still a very brave person. And I said this driving along the road with no barriers where he makes a mistake, he’s going to plunge to his doom. But it worked…
…[Question] when will FSD unsupervised be available for personal use on personally-owned cars?
[Answer] Before the end of this year… the acid test being you should — can you go to sleep in your car and wait until your destination? And I’m confident that will be available in many cities in the U.S. by the end of this year…
…I’m confident also that later this year, the first Model Y will drive itself all the way to the customer. So from our — probably from a factory in Austin and our one in here in Fremont, California, I’m confident that from both factories, we’ll be able to drive directly to a customer from the factory…
We have — it has been put to use — it’s doing useful work fully autonomously at the factories, as Ashok was mentioning, the cars drive themselves from end of line to where they’re supposed to be picked up by a truck to be taken to the customer… It’s important to note in the factories, we don’t have dedicated lanes or anything. People are coming out every day, trucks delivering supplies, parts, construction.
Tesla’s management expects thousands of Optimus robots to be working in Tesla factories by end-2025; management expects Optimus to be the fastest product to get to millions of units per year; management thinks Tesla can get to 1 million units annually in 4-5 years; management expects to make thousands of Optimus robots at the end of this year; there’s no existing supply chain for all of Optimus’s components, so Tesla has to build a supply chain from scratch; the speed of manufacturing of a product is governed by the speed of the slowest item in the supply chain, but in Optimus’s case, there are many, many such items since it’s so new; Optimus production is currently rate-limited by restrictions on rare-earth magnets from China but management is working on it; management still has no idea what Optimus’s supply chain will look like at maturity
Making good progress in Optimus. We expect to have thousands of Optimus robots working in Tesla factories by the end of this year beginning this fall. And we expect to see Optimus faster than any product, I think, in history to get to millions of units per year as soon as possible. I think we feel confident in getting to 1 million units per year in less than 5 years, maybe 4 years. So by 2030, I feel confident in predicting 1 million Optimus units per year. It might be 2029…
…This year, we’ll make a few — we do expect to make thousands of Optimus robots, but most of that production is going to be at the end of the year…
…Almost everything in Optimus is new. There’s not like an existing supply chain for the motors, gearboxes, electronics, actuators, really anything in the Optimus apart from the AI for Tesla, the Tesla AI computer, which is the same as the one in the car. So when you have a new complex manufactured product, it will move as fast as the slowest and the least lucky component in the entire thing. And as a first order approximation, there’s like 10,000 unique things. So that’s why anyone who tells you they can predict with precision, the production ramp of the truly new product is — doesn’t know what they’re talking about. It is literally impossible…
…Now Optimus was affected by the magnet issue from China because the Optimus actuators in the arm to use permanent magnet. Now Tesla, as a whole, does not need to use permanent magnets. But when something is volume constrained like an arm of the robot, then you want to try to make the motor as small as possible. And then — so we did design in permanent magnets for those motors and those were affected by the supply chain by basically China requiring an export license to send out any rare earth magnets. So we’re working through that with China. Hopefully, we’ll get a license to use the rare earth magnets. China wants some assurances that these are not used for military purposes, which obviously they’re not. They’re just going into a humanoid robot. So — and it’s a nonweapon system…
…[Question] Wanted to ask about the Optimus supply chain going forward. You mentioned a very fast ramp-up. What do you envision that supply chain looking like? Is it going to require many more suppliers to be in the U.S. now because of the tariffs?
[Answer] We’ll have to see how things settle out. I don’t know yet. I mean some things we’re doing, as we’ve already talked about, which is that we’ve already taken tremendous steps to localize our supply chain. We’re more localized than any other manufacturer. And we have a lot of things kind of underway that to increase the localization to reduce supply chain risk associated with geopolitical uncertainty.
Tesla’s supervised FSD (full self-driving) software is safer than a human driver; management has been using social media (X, or Twitter) to encourage people to try out Tesla’s FSD software; management did not directly answer a question on FSD pricing once the vehicle can be totally unsupervised
Not only is FSD supervised safer than a human driver, but it is also improving the lifestyle of individuals who experience it. And again, this is something you have to experience and anybody who has experienced just knows it. And we’ve been doing a lot lately to try and get those stories out, at least on X, so that people can see how other people have benefited from this…
…[Question] Can we envision when you launch unsupervised FSD that there could be sort of a multitiered pricing approach to unsupervised versus supervised similar to what you did with autopilot versus FSD in the past?
[Answer] I mean this is something which we’ve been thinking about. I mean just so now for people who have been trying FSD and who’ve been using FSD, they think given the current pricing is too cheap because for $99, basically getting a personal shop… I mean we do need to give people more time to — if they want to look at — like a key breakpoint is, can you read your text messages or not? Can you write a text message or not? Because obviously, people are doing this, by the way, with unautonomous cars all the time. And if you just go over and drive down the highway and you’ll see people texting while driving doing 80-mile an hour… So that value — it will really be profound when you can basically do whatever you want, including sleep. And then that $99 is going to seem like the best $99 you ever spent in your life.
Tesla’s management thinks Waymo vehicles are too expensive compared to Teslas; Waymo has expensive sensor suites; management thinks Tesla will have lion’s share of the robotaxi market; a big difference between Tesla and Waymo is that Tesla is also manufacturing the cars whereas Waymo is retrofitting cars from other parties; management thinks Tesla’s vision-only approach will not have issues with cameras becoming blinded by glare and stuff because the system uses direct photon counting and bypasses image signal processing
The issue with Waymo’s cars is it costs way more money, but that is the issue. The car is very expensive, made in low volume. Teslas probably cost 1/4, 20% of what a Waymo costs and are made in very high volume. Ironically, like we’re the ones who made the bet that a pure AI solution with cameras and [ already ] what the car actually will listen for sirens and that kind of thing. It’s the right move. And Waymo decided that an expensive sensor suite is the way to go, even though Google is very good at AI. So I’m wondering…
….As far as I’m aware, Tesla will have, I don’t know, 99% market share or something ridiculous…
…The other thing which people forget is that we’re not just developing the software solution, we are also manufacturing the cars. And like you know what like Waymo has, they’re taking cars and then trying to…
…[Question] You’re still sticking with the vision-only approach. A lot of autonomous people still have a lot of concerns about sun glare, fog and dust. Any color on how you anticipate on getting around those issues? Because my understanding, it kind of blinds the camera when you get glare and stuff.
[Answer] Actually, it does not blind the camera. We use an approach which is a direct photon count. So when you see a processed image, so the image that goes from the — with sort of photon counter, the silicon photon counter, that they get — goes through a digital signal processor or image signal processor. That’s normally what happens. And then the image that you see looks all washed out because if it’s — you pointed a camera at the sun, the post-processing of the photon counting washes things out. It actually adds noise. So quite a big breakthrough that we made some time ago was to go with direct photon counting and bypass the image signal processor. And then you can drive pretty much straight at the sun, and you can also see in what appears to be the blackest of night. And then here in fog, we can see as well as people can, probably better, but in fact probably slightly better than people than the average person anyway.
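A toy illustration of the dynamic-range point being made here: a display-oriented image pipeline clips bright pixels to 8 bits, so a glare pixel and an object next to the sun become indistinguishable, while the raw photon counts still separate them. This is not Tesla’s actual pipeline; the counts below are made up:

```python
def isp_tonemap(counts, max_display=255):
    # A typical display pipeline clips bright photon counts to 8 bits,
    # destroying the differences between very bright pixels.
    return [min(c, max_display) for c in counts]

raw = [100_000, 98_000, 200]   # sun glare, object near the sun, dark road
displayed = isp_tonemap(raw)
# After tone-mapping, the first two pixels are identical (both clipped),
# even though the raw counts differ by 2,000 — information a driving
# model can keep only if it reads the counts directly.
```

The quote’s claim is essentially that by skipping this lossy post-processing step, the network sees the full range of the sensor.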
Tesla’s AI software team and chip-design team was built from scratch with no acquisitions; management thinks Tesla’s team is the best
It is worth noting that Tesla has built an incredible AI software team and AI hardware chip design team from scratch, didn’t acquire anyone. We just built it. So yes, it’s really — I mean I don’t see anyone being able to compete with Tesla at present.
Tesla’s management thinks China is ahead of the USA in physical AI with respect to autonomous drones because China has the ability to manufacture autonomous drones, but the USA does not; management thinks Tesla is ahead of any company in the world, even Chinese companies, in terms of humanoid autonomous robots
[Question] Between China and United States, who, in your opinion, is further ahead on the development of physical AI, specifically on humanoid and also drones?
[Answer] A friend of mine posted on X, I reposted it. I think of a prophetic statement, which is any country that cannot manufacture its own drones is going to be the vassal state of any country that can. And we can’t — America cannot currently manufacture its own drones. Let that sink in, unfortunately. So China, I believe manufactures about 70% of all drones. And if you look at the total supply chain, China is almost 100% of drones are — have a supply chain dependency on China. So China is in a very strong position. And here in America, we need to tip more of our people and resources to manufacturing because this is — and I have a lot of respect for China because I think China is amazing, actually. But the United States does have such a severe dependency on China for drones and be unable to make them unless China gives us the parts, which is currently the situation.
With respect to humanoid robots, I don’t think there’s any company and any country that can match as well. Tesla and SpaceX are #1. And then I’m a little concerned that on the leaderboard, ranks 2 through 10 will be Chinese companies. I’m confident that rank 1 will be Tesla.
The Trade Desk (NASDAQ: TTD)
Trade Desk’s industry-leading Koa AI tools are embedded across Kokai; adoption of Kokai is now ahead of schedule, with 2/3 of clients using it; the bulk of spending on Trade Desk now takes place on Kokai; management continues to expect all Trade Desk clients to be using Kokai by end-2025; management is confident that Kokai will be seen as the most powerful buying platform by the industry by end-2025
The injection of our industry-leading Koa AI tools across every aspect of our platform has been a game changer, and we are just getting started…
…The core of Kokai has been delivered and adoption is now ahead of schedule. Around 2/3 of our clients are now using it and the bulk of the spend in our platform is now running through Kokai. We expect all clients to be using it by the end of the year…
…I’m confident that by the end of this year, we will reflect on Kokai as the most powerful buying platform the industry has ever seen, precisely because it combines client needs with the strong point of view on where value is shifting and how to deliver the most efficient return on ad spend.
…Kokai adoption now represents the majority of our spend, almost 2/3, a significant acceleration from where we ended 2024.
Deutsche Telekom used Kokai’s AI tools and saw an 11x improvement in post-click conversions and an 18x improvement in the cost of conversions; Deutsche Telekom is now planning to use Kokai across more campaigns and transition from Trade Desk’s previous platform, Solimar, like many other Trade Desk clients
Deutsche Telekom. They’re running the streaming TV service called Magenta TV, and they use our platform to try to grow their subscriber base…
…Using seed data from their existing customers, Deutsche Telekom was able to use the advanced AI tools in our Kokai platform to find new customers and define the right ad impressions across display and CTV, most relevant to retain those new customers successfully, and the results were very impressive. They saw an 11x improvement in post-click conversions attributed to advertising and an 18x improvement in the cost of those conversions. Deutsche Telekom is now planning to use Kokai across more campaigns, a transition that is fairly typical as clients move from our previous platform, Solimar, to our newer, more advanced AI-fueled platform, Kokai.
Visa (NASDAQ: V)
Visa recently announced a new version of Authorize.net that features AI capabilities, including an AI agent; Authorize.net enables all different types of payments
In Acceptance Solutions, we recently announced 2 new product offerings. The first is a completely new version of Authorize.net, launching in the U.S. next quarter and additional countries next year. It features a streamlined user interface; AI capabilities with an AI agent, Anet; improved dashboards for day-to-day management and support for in-person card readers and Tap to Phone. It will help businesses analyze data, summarize insights and adapt to rapidly changing customer trends…
……I talked about the Authorize.net platform that we’ve relaunched and we’re relaunching. That’s a great example of enabling all different types of payments. And that’s going to be, we think, a really positive impact in the market specifically focused on growing our share in small business checkout.
Visa has an enhanced holistic fraud protection solution known as the Adaptive Real-time Individual Change identification (ARIC) Risk Hub; the ARIC Risk Hub uses AI to build more accurate risk profiles
We also now provide an enhanced holistic fraud protection solution from Featurespace called the Adaptive Real-time Individual Change identification, or ARIC, Risk Hub. This solution utilizes machine learning and AI solutions to enable clients to build more accurate risk profiles and more confidently detect and block fraudulent transactions, ultimately helping to increase approvals and stop bad actors in real time.
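In spirit, an adaptive per-cardholder risk profile updates a statistical model of normal behaviour with every transaction and flags outliers. The sketch below (Welford’s online mean/variance with a z-score threshold) is a toy illustration of that idea, not Featurespace’s actual method:

```python
class RiskProfile:
    """Toy per-cardholder profile: maintains a running mean and variance
    of transaction amounts (Welford's algorithm) and flags outliers."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, amount):
        # Incorporate a legitimate transaction into the running statistics.
        self.n += 1
        delta = amount - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (amount - self.mean)

    def is_suspicious(self, amount, z_threshold=3.0):
        # Flag amounts far outside this cardholder's normal range.
        if self.n < 2:
            return False  # not enough history to judge yet
        std = (self.m2 / (self.n - 1)) ** 0.5
        if std == 0:
            return amount != self.mean
        return abs(amount - self.mean) / std > z_threshold
```

The appeal of the real-time, per-individual approach is exactly what the quote describes: a $500 charge is routine for one cardholder and a glaring anomaly for another, so accuracy comes from profiles that adapt to each customer rather than one global rule.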
Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, Apple, ASML, Coupang, Datadog, Mastercard, Meta Platforms, Microsoft, Netflix, Paycom Software, PayPal, Shopify, TSMC, Tesla, The Trade Desk and Visa. Holdings are subject to change at any time.