Potential Bargains In A Niche Corner Of The US Stock Market (Part 2)

An optically expensive thrift can look really cheap under the hood.

In February this year, I wrote Potential Bargains In A Niche Corner Of The US Stock Market, where I discussed thrift conversions and why they could be potential bargains. In that article, my focus was mostly on thrifts that had undergone the standard conversion, or the second step of the two-step conversion process. This was because I thought that only such thrifts could be acquired, and most of a thrift’s economic value gets unlocked for shareholders when it is acquired.

Earlier today, courtesy of an article from the experienced US-community-bank investor Phil Timyan, and after more investigation, I learnt that thrifts that have undergone just the first-step conversion process can also be acquired in what’s known as a remutualisation. In this article you’re reading now, I will attempt to explain first-step conversions and remutualisations – and their potential for generating good returns for shareholders – by using Rhinebeck Bancorp (NASDAQ: RBKB) as an example. Rhinebeck Bancorp, which from here on will be referred to as RBKB, was also the subject of the Phil Timyan article I mentioned. 

How a first-step conversion works:

  • RBKB is a publicly listed company that owns 100% of Rhinebeck Bank. Rhinebeck Bank is the operating bank that was established in 1860.
  • 57.1% of RBKB is owned by Rhinebeck Bancorp MHC. Rhinebeck Bancorp MHC is a non-stock corporation, so it has no shareholders. 42.9% of RBKB is owned by public shareholders.
  • In January 2019, Rhinebeck Bank completed its first-step conversion process. During the conversion process, 4,787,315 shares of RBKB were sold. Crucially, 6,345,975 shares were also issued to Rhinebeck Bancorp MHC but these shares were never sold, and Rhinebeck Bancorp MHC has no shareholders, as mentioned earlier.
  • Effectively, the 6,345,975 shares of RBKB held by Rhinebeck Bancorp MHC are not trading and can’t claim the economics of Rhinebeck Bank until Rhinebeck Bancorp MHC chooses to convert from its mutual ownership structure to one where it also has stockholders; this is known as the second-step conversion.

How a remutualisation works:

  • A remutualisation occurs when RBKB is acquired by another mutual bank. At the point of acquisition, the shares of RBKB owned by Rhinebeck Bancorp MHC get cancelled, so 100% of the economics of Rhinebeck Bank then belongs to shareholders of RBKB, instead of the initial 42.9%.
  • As of 31 March 2025, RBKB has total shares outstanding of 11,094,828. After deducting the 6,345,975 shares of RBKB owned by Rhinebeck Bancorp MHC and 302,784 unearned ESOP (employee stock ownership plan) shares, the number of RBKB shares that would be left in a remutualisation is 4,446,069.
  • As of 31 March 2025, RBKB’s stockholders’ equity is US$125.975 million. RBKB’s stock price is US$12.12 as of 12 June 2025. If the acquiring mutual bank decides to pay, say, US$20 per share for RBKB, it only has to cough up US$88.921 million (US$20 multiplied by 4,446,069 shares) for US$125.975 million in stockholders’ equity. So both the acquiring mutual bank and existing shareholders of RBKB win big.
  • On the surface, RBKB has a book value per share of US$11.35 (US$125.975 million divided by 11,094,828 shares), which at the 12 June 2025 share price gives it a PB ratio of 1.07. But if the remutualisation math is used, RBKB’s true book value per share is US$28.33 (US$125.975 million divided by 4,446,069 shares), which gives it a PB ratio of just 0.43. The short sketch after this list works through the arithmetic.
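
For anyone who wants to verify the figures, here is a minimal Python sketch of the remutualisation math, using only the numbers quoted above; the US$20 per share acquisition price is the same hypothetical used earlier, not a forecast.

```python
# Remutualisation math for RBKB, using figures as of 31 March 2025.

total_shares = 11_094_828        # RBKB total shares outstanding
mhc_shares = 6_345_975           # held by Rhinebeck Bancorp MHC; cancelled in a remutualisation
unearned_esop_shares = 302_784   # unearned ESOP shares, also deducted

equity = 125_975_000             # RBKB stockholders' equity, in US$
price = 12.12                    # RBKB stock price as of 12 June 2025

# Shares left once the MHC and unearned ESOP shares fall away
remaining_shares = total_shares - mhc_shares - unearned_esop_shares
print(f"Remaining shares: {remaining_shares:,}")                   # 4,446,069

# A hypothetical acquirer paying US$20 per share buys ~US$126m of equity
print(f"Acquisition cost: US${20 * remaining_shares / 1e6:.3f}m")  # US$88.921m

# Book value per share and PB ratio: surface math vs remutualisation math
for shares in (total_shares, remaining_shares):
    bvps = equity / shares
    print(f"BVPS: US${bvps:.2f}, PB: {price / bvps:.2f}")          # 11.35 / 1.07, then 28.33 / 0.43
```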

Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI (2025 Q1)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q1 earnings season.

The way I see it, artificial intelligence (or AI) really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

We’re thick in the action of the latest earnings season for the US stock market – for the first quarter of 2025 – and I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series; the older commentary can be found in the earlier articles in this series.

With that, here is the latest commentary, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb’s management thinks that designing end-to-end travel is very difficult, and that travelers find planning travel very complicated yet do it only infrequently; management thinks that a great user interface is the key to designing a great end-to-end travel experience for Airbnb users, and AI will be an important way to do it

I think a lot of companies have tried to like design an end-to-end travel. I think designing end-to-end travel is very, very hard. It’s funny — there’s this funny thing. One of the most common start-up ideas for entrepreneurs is to do a travel planning app. And yet travel planning apps almost always fail. So it’s almost like a riddle, why do travel planning apps fail, and everyone really tries to do it? And the reason why is because to plan travel is very complicated. In fact, it’s so complicated many people have assistants and a big part of their job is to plan travel for them. And yet you use it infrequently. So it’s a very difficult thing to do and you do it infrequently. And so therefore, a lot of companies have failed to design like a so-called connected trip. So I think to do this, a lot of it is to design a really good user experience. And I think that’s one of the things that we’re going to try to do to really design a great end-to-end experience, to able to book your entire trip, and much more. I think the user interface will be important. I think AI will be an important way to do this as well…

…We’re focused on making everything instant book and easy to use. We’re trying to make sure that the end-to-end travel experience is really, really wonderful with great Airbnb design, and we’re going to bring more AI into the application so that Airbnb, you can really solve your own problems with great self-solve through AI customer service agents.

Airbnb’s management recently rolled out an AI customer service agent; 50% of Airbnb’s US users are already using the customer service agent and it will soon be rolled out to 100% of Airbnb’s US users; management thinks Airbnb’s AI customer service agent is the best of its kind in travel, having already led to a 15% reduction in users needing to contact human agents; the AI customer service agent will be more personalised and agentic in the years ahead

We just rolled out our AI customer service agent this past month. 50% U.S. users are now using the agent, and we’ll roll it out to 100% of U.S. users this month. We believe this is the best AI-supported customer service agent in travel. It’s already led to a 15% reduction in people needing to contact live human agents and it’s going to get significantly more personalized and agentic over the years to come.

Alphabet (NASDAQ: GOOG)

AI Overviews in Search now has more than 1.5 billion monthly users; AI Mode has received early positive reaction; usage growth of AI Overviews continues to increase nearly a year after its launch; management is leaning heavily into AI Overviews; management released AI Mode in March as an experiment; AI Mode searches are twice as long as traditional search queries; AI Mode is getting really positive feedback from early users; the volume of commercial queries on Google Search has increased with the launch of AI Overviews; AI Overviews is now available in more than 15 languages across 140 countries; AI Overviews continues to monetise at a similar rate to traditional Search; reminder that ads within AI Overviews were launched on mobile in the USA in late-2024; an example of longer search queries in AI Mode is product comparisons; management sees AI Overviews in Search and Gemini as 2 distinct consumer experiences; management thinks of AI Mode as a way to discover how the most advanced users are using AI-powered search

AI Overviews is going very well with over 1.5 billion users per month, and we are excited by the early positive reaction to AI Mode…

…Nearly a year after we launched AI Overviews in the U.S., we continue to see that usage growth is increasing as people learn that Search is more useful for more of their queries. So we are leaning in heavily here, continuing to roll the feature out in new countries to more users and to more queries. Building on the positive feedback for AI Overviews, in March, we released AI Mode, an experiment in labs. It expands what AI Overviews can do with more advanced reasoning, thinking and multimodal capabilities to help with questions that need further exploration and comparisons. On average, AI Mode queries are twice as long as traditional search queries. We’re getting really positive feedback from early users about its design, fast response time and ability to understand complex, nuanced questions…

…As we’ve mentioned before, with the launch of AI Overviews, the volume of commercial queries has increased. Q1 marked our largest expansion to date for AI Overviews, both in terms of launching to new users and providing responses for more questions. The feature is now available in more than 15 languages across 140 countries. For AI Overviews, overall, we continue to see monetization at approximately the same rate, which gives us a strong base in which we can innovate even more…

…On the ads of — in AI Overviews, last — late last year, actually, we launched them within the AI Overviews on mobile in the U.S. And this builds on our previous rollout of ads above and below. So this was a change that we have…

…I mentioned people typing in longer queries. There’s a lot more complex, nuanced questions. People are following through more. People are appreciating the clean design, the fast response time and the fact that they can kind of be much more open-ended, can undertake more complicated tasks. Product comparisons, for example, has been a positive one, exploring how tos, planning a trip…

…On AI-powered search and how do we see our consumer experience. Look, I do think Search and Gemini, obviously, will be 2 distinct efforts, right? I think there are obviously some areas of overlap, but they’re also — like expose very, very different use cases. And so for example, in Gemini, we see people iteratively coding and going much deeper on a coding workflow, as an example. So I think both will be around…

…AI Mode is the tip of the tree for us pushing forward on an AI-forward experience. There will be things which we discover there, which will make sense in the context of AI Overviews, so I think will flow through to our user base. But you almost want to think of what are the most advanced 1 million people using Search for, the most advanced 10 million people and then how do 1.5 billion people use Search for.

Alphabet’s management rolled out the company’s latest foundation model, Gemini 2.5, in 2025 Q1; Gemini 2.5 is widely recognised as the best model in the industry; Gemini 2.5 Pro debuted at No.1 on the Chatbot Arena in 2025 Q1 by a significant margin; active users in AI Studio and the Gemini API are up more than 200% since the start of 2025; Alphabet introduced Gemini 2.5 Flash in April 2025; Gemini models are now found in all of Alphabet’s 15 products with at least 0.5 billion users each; Alphabet is upgrading Google Assistant on mobile devices to Gemini, and will also upgrade tablets, cars, and devices that connect to phones later this year; the Pixel 9a phone with Gemini integration was launched to strong reviews; the Gemini Live camera feature, among others, will soon be rolled out to all Android devices

This quarter was super exciting as we rolled out Gemini 2.5, our most intelligent AI model, which is achieving breakthroughs in performance, and it’s widely recognized as the best model in the industry…

…We released Gemini 2.5 Pro last month, receiving extremely positive feedback from both developers and consumers. 2.5 Pro is state-of-the-art on a wide range of benchmarks and debuted at #1 on the Chatbot Arena by a significant margin. 2.5 Pro achieved big leaps in reasoning, coding, science and math capabilities, opening up new possibilities for developers and customers. Active users in AI Studio and Gemini API have grown over 200% since the beginning of the year…

…Last week, we introduced 2.5 Flash, which enables developers to optimize quality and cost…

…All 15 of our products with 0.5 billion users now use Gemini models…

We are upgrading Google Assistant on mobile devices to Gemini. And later this year, we’ll upgrade tablets, cars and devices that connect to your phones such as headphones and watches. The Pixel 9a launched to very strong reviews, providing the best of Google’s AI offerings like Gemini Live and AI-powered camera features. And Gemini Live camera and screen sharing is now rolling out to all Android devices, including Pixel and Samsung S25.

Google Cloud is offering the industry’s widest range of TPUs and GPUs; Alphabet’s 7th generation TPU, Ironwood, has 10x better compute power and 2x better power efficiency than the previous generation TPU; Google Cloud is the first cloud provider to offer NVIDIA’s Blackwell family of GPUs; Google Cloud will be offering NVIDIA’s upcoming Rubin family of GPUs

Complementing this, we offer the industry’s widest range of TPUs and GPUs and continue to invest in next-generation capabilities. Ironwood, our seventh-generation TPU and most powerful to date, is the first designed specifically for inference at scale. It delivers more than 10x improvement in compute power over our recent high-performance TPU while being nearly twice as power efficient. Our strong relationship with NVIDIA continues to be a key advantage for us and our customers. We were the first cloud provider to offer NVIDIA’s groundbreaking B200 and GB200 Blackwell GPUs, and we’ll be offering their next-generation Vera Rubin GPUs.

Alphabet’s management is rolling out the company’s latest image and video generation models; Alphabet launched its open-source Gemma 3 model in March 2025; Gemma models have been downloaded more than 140 million times; Alphabet is developing robotics AI models; Alphabet has launched a multi-agent AI research system called AI Co-Scientist; the AlphaFold model has been used by more than 2.5 million researchers

Our latest image and video generation models, Imagen 3 and Veo 2, are rolling out broadly and are powering incredible creativity. Turning to open models. We launched Gemma 3 last month, delivering state-of-the-art performance for its size. Gemma models have been downloaded more than 140 million times. Lastly, we are developing AI models in new areas where there’s enormous opportunity, for example, our new Gemini Robotics models. And in health, we launched AI Co-Scientist, a multi-agent AI research system, while AlphaFold has now been used by over 2.5 million researchers.

Google Cloud’s AI developer platform, Vertex AI, now has more than 200 foundation models available, including Alphabet’s in-house models and third-party models

Our Vertex AI platform makes over 200 foundation models available, helping customers like Lowe’s integrate AI. We offer industry-leading models, including Gemini 2.5 Pro, 2.5 Flash, Imagen 3, Veo 2, Chirp and Lyria, plus open-source and third-party models like Llama 4 and Anthropic.

Google Cloud is the leading cloud platform for building AI agents; Google Cloud has an open-source framework for building AI agents and multi-agent systems called Agent Development Kit; Google Cloud has a low-code agent-building tool called Agent Designer; KPMG is using Google Cloud to deploy AI agents to employees; Google Cloud has the Google Agentspace product that helps employees in organisations use AI agents widely; Google Cloud offers pre-packaged AI agents across various functions including coding and customer engagement; Alphabet is working on agentic experiences internally and deploying them across the company; Alphabet’s customer service teams have deployed AI agents to dramatically enhance the user experience and are teaching Google Cloud customers how to do so

We are the leading cloud solution for companies looking to the new era of AI agents, a big opportunity. Our Agent Development Kit is a new open-source framework to simplify the process of building sophisticated AI agents and multi-agent systems. And Agent Designer is a low-code tool to build AI agents and automate tasks in over 100 enterprise applications and systems.

We are putting AI agents in the hands of employees at major global companies like KPMG. With Google Agentspace, employees can find and synthesize information from within their organization, converse with AI agents and take action with their enterprise applications. It combines enterprise search, conversational AI or chat and access to Gemini and third-party agents. We also offer pre-packaged agents across customer engagement, coding, creativity and more that are helping to provide conversational customer experiences, accelerate software development, and improve decision-making…

…Particularly with the newer models, I think we are working on early agentic workflows and how we can get those coding experiences to be much deeper. We are deploying it across all parts of the company. Our customer service teams are deeply leading the way there. We’ve both dramatically enhanced our user experience as well as made it much more efficient to do so. And we are actually bringing all our learnings and expertise in our solutions through cloud to our other customers. But beyond that, all the way from the finance team preparing for this earnings call to everything, it’s deeply embedded in everything we do.

Waymo is now serving 250,000 trips per week (was 150,000 in 2024 Q4), up 5x from a year ago; Waymo launched its paid service in Silicon Valley in 2025 Q1; Waymo has expanded in Austin, Texas, and will launch in Atlanta later this year; Waymo will launch in Washington DC and Miami in 2026; Waymo continues to make progress in airport access and freeway driving; management thinks Alphabet will not be able to scale Waymo on its own, so partners are needed

Waymo is now safely serving over 0.25 million paid passenger trips each week. That’s up 5x from a year ago. This past quarter, Waymo opened up paid service in Silicon Valley. Through our partnership with Uber, we expanded in Austin and are preparing for our public launch in Atlanta later this summer. We recently announced Washington, D.C. as a future ride-hailing city, going live in 2026 alongside Miami. Waymo continues progressing on 2 important capabilities for riders, airport access and freeway driving…

More businesses are adopting Alphabet’s AI-powered campaigns; Alphabet’s recent work with AI is helping advertisers reach customers and searches where their ads would previously not have been shown; Alphabet is infusing AI at every step of the marketing process for advertisers, for example, (1) advertisers can now generate a broader variety of lifestyle imagery customized to their business, (2) in PMax, advertisers can automatically source images from their landing pages and crop them, (3) on media buying, AI-powered campaigns continue to help advertisers find new customers, (4) in Demand Gen, advertisers can more precisely manage ad placements and understand which assets work best at a channel level; users of Demand Gen now see an average 26% year-on-year increase in conversions per dollar spend; when Demand Gen is paired with Product Feed, advertisers see double the conversions per dollar spend year-over-year on average; Royal Canin used Demand Gen and PMax campaigns and achieved a 2.7x higher conversion rate, a 70% lower cost per acquisition for purchases, and an 8% higher value per user

More businesses, big and small, are adopting AI-powered campaigns, and the deployment of AI across our Ads business is driving results for our customers and for our business. Throughout 2024, we launched several features that leverage LLMs to enhance advertiser value, and we’re seeing this work pay off. The combination of these launches now allows us to match ads to more relevant search queries. And this helps advertisers reach customers and searches where we would not previously have shown their ads.

Focusing on our customers, we continue to solve advertisers’ pain points and find opportunities to help them create, distribute and measure more performant ads, infusing AI at every step of the marketing process. On Audience Insights, we released new asset audience recommendations, which tell businesses the themes that resonate most with their top audiences. On creatives, advertisers can now generate a broader variety of lifestyle imagery customized to their business to better engage their customers and use them across PMax, demand gen, display and app campaigns. Additionally, in PMax, advertisers can automatically source images from their landing pages and crop them, increasing the variety of their assets. On media buying, advertisers continue to see how AI-powered campaigns help them find new customers. In Demand Gen, advertisers can more precisely manage ad placements across YouTube, Gmail, Discover and Google Display Network globally and understand which assets work best at a channel level. Thanks to dozens of AI-powered improvements launched in 2024, businesses using Demand Gen now see an average 26% year-on-year increase in conversions per dollar spend for goals like purchases and leads. And when using Demand Gen with Product Feed, on average, they see more than double the conversion per dollar spend year-over-year…

…Royal Canin combined Demand Gen and PMax campaigns to find more customers for its cat and dog food products. The integration resulted in a 2.7x higher conversion rate, a 70% lower cost per acquisition for purchases and increased the value per user by 8%.

Google Cloud still has more AI demand than capacity in 2025 Q1 (as it did in 2024 Q4) 

Recall I’ve stated on the Q4 call that we exited the year in Cloud specifically with more customer demand than we had capacity. And that was the case this quarter as well.

Well over 30% of new code at Alphabet is now generated by AI (the figure was 25% in 2024 Q3)

We’re continuing to make a lot of progress there in terms of people using coding suggestions. I think the last time I had said, the number was like 25% of code that’s checked in. It involves people accepting AI-suggested solutions. That number is well over 30% now. But more importantly, we have deployed more deeper flows.

Amazon (NASDAQ: AMZN)

AWS grew 17% year-on-year in 2025 Q1, and is now at a US$117 billion annualised revenue run rate (it was US$115 billion in 2024 Q4); management used to think AWS could be a multi-hundred billion dollar revenue run rate business even without AI, and now that there’s AI, they think AWS could be even bigger; AWS’s AI business is now at a multi-billion annual revenue run rate and is growing triple-digits year-on-year; the shift from on-premise to the cloud is still a huge tailwind for AWS, and now even more so as companies that want to realize the full potential of AI will need to shift to the cloud; AWS is currently still supply constrained, and a lot more new chips will land in the coming months; management thinks that the supply chain issues with chips will get better as the year progresses

AWS grew 17% year-over-year in Q1 and now sits at a $117 billion annualized revenue run rate…

…Before this generation of AI, we thought AWS had the chance to ultimately be a multi-hundred billion dollar revenue run rate business. We now think it could be even larger…

…Our AI business has a multibillion-dollar annual revenue run rate, continues to grow triple-digit year-over-year percentages and is still in its very early days…

…Infrastructure modernization is much less sexy to talk about than AI, but fundamental to any company’s technology and invention capabilities, developer productivity, speed and cost structure. And for companies to realize the full potential of AI, they’re going to need their infrastructure and data in the cloud…

…During the first quarter, we continued to see growth in both generative AI business and non-generative AI offerings as companies turn their attention to newer initiatives, bring more workloads to the cloud, restart or accelerate existing migrations from on-premises to the cloud and tap into the power of Generative AI…

…We — as fast as we actually put the capacity in, it’s being consumed. So I think we could be driving — we could be helping more customers driving more revenue for the business if we had more capacity. We have a lot more Trainium2 instances and the next generation of NVIDIA’s instances landing in the coming months…

…I do believe that the supply chain issues and the capacity issues will continue to get better as the year proceeds.

Management is directing Amazon to invest aggressively in AI; Amazon is building 1000-plus AI applications across the company; the next generation of Alexa is Alexa+; Amazon is using AI in its fulfilment network, robotics, shopping, and more

If you believe your mission is to make customers’ lives easier and better every day, and you believe that every customer experience will be reinvented with AI, you’re going to invest very aggressively in AI, and that’s what we’re doing. You can see that in the 1,000-plus AI applications we’re building across Amazon. You can see that with our next generation of Alexa, named Alexa+. You can see that in how we’re using AI in our fulfillment network, robotics, shopping, Prime Video and advertising experiences. And you can see that in the building blocks AWS is constructing for external and internal builders to build their own AI solutions.

AWS’s in-house AI chip, Trainium 2, is starting to lay in capacity in larger quantities with significant appeal and demand; AWS will always be offering AI chips from multiple providers, but Trainium 2 offers a compelling option with 30%-40% better price performance; management believes that the price of inference needs to be much lower for AI to be successful, and they think the price of inference will go down; Anthropic is still building its next few models with Trainium 2

Our new custom AI chip Trainium2 is starting to lay in capacity in larger quantities with significant appeal and demand. While we offer customers the ability to do AI in multiple chip providers and will for as long as I can foresee, customers doing AI at any significant scale realize that it can get expensive quickly. So the 30% to 40% better price performance that Trainium2 offers versus other GPU-based instances is compelling. For AI to be as successful as we believe it can be, the price of inference needs to come down significantly…

…I would say that we’ve been bringing on a lot of P5, which is a form of NVIDIA chip instances, as well as landing more and more Trainium2 instances as fast as we can…

…Anthropic is running — building the next few training models on top of our Trainium2 chip on AWS…

…As they’re waiting to see the cost of inference continue to go down, which it will.

The latest premier Amazon Nova model was launched yesterday and it delivers frontier intelligence and industry-leading price performance; thousands of customers are already using Amazon Nova models; Amazon Nova Sonic, a speech-to-speech foundation model, was recently released and it enables developers to build voice-based AI applications; Amazon Nova Sonic has lower word error rates and higher win rates over other comparable models; AWS recently released a research preview of Amazon Nova Act, a new AI model that can perform actions within a web browser; Amazon Nova Act aims to move the current state-of-the-art accuracy of multi-step agentic actions from 30%-60% to 90%-plus

We offer our own Amazon Nova state-of-the-art foundation models in Bedrock with the latest premier model launching yesterday. They deliver frontier intelligence and industry-leading price performance, and we have thousands of customers already using them, including Slack, Siemens, Sumo Logic, Coinbase, FanDuel, Glean and Blue Origin. A few weeks ago, we released Amazon Nova Sonic, a new speech-to-speech foundation model that enables developers to build voice-based AI applications that are highly accurate, expressive and human-like. Nova Sonic has lower word error rates and higher win rates over other comparable models for speech interactions…

…We’ve just released a research preview of Amazon Nova Act, a new AI model trained to perform actions within a web browser. It enables developers to break down complex workflows into reliable atomic commands like search or checkout or answer questions about the screen. It also enables them to add more detailed instructions to these commands where needed, like don’t accept the insurance upsell. Nova Act aims to move the current state-of-the-art accuracy of multistep agentic actions from 30% to 60% to 90-plus percent with the right set of building blocks to build these action-oriented agents.
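
One way to appreciate why Nova Act targets 90%-plus accuracy on atomic commands is that the end-to-end success of a multi-step agent workflow compounds per-step reliability. The sketch below is illustrative only; the step counts and probabilities are my own assumptions, not Amazon’s figures.

```python
# Illustrative only: end-to-end success rate of an agentic workflow if each
# atomic step (search, checkout, answer a question...) succeeds independently
# with probability p_step. Step counts and probabilities are assumptions.

def workflow_success(p_step: float, n_steps: int) -> float:
    """Probability that every step of an n-step workflow succeeds."""
    return p_step ** n_steps

for p in (0.60, 0.90, 0.99):
    print(f"p_step={p:.2f}: " + ", ".join(
        f"{n} steps -> {workflow_success(p, n):.0%}" for n in (5, 10, 20)
    ))
# p_step=0.60: 5 steps -> 8%, 10 steps -> 1%, 20 steps -> 0%
# p_step=0.90: 5 steps -> 59%, 10 steps -> 35%, 20 steps -> 12%
# p_step=0.99: 5 steps -> 95%, 10 steps -> 90%, 20 steps -> 82%
```

In other words, an agent whose atomic commands succeed 60% of the time almost never completes a long workflow, which is why the jump toward 90%-plus per-step accuracy matters so much.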

Amazon’s management sees question-and-answer as virtually the only current use case for AI agents, but they want AI agents to be capable of performing a wide variety of complex tasks and they have built Alexa+ to be such an agent; management launched a new lightning-fast AI agent coding experience in Amazon Q in 2025 Q1 and customers are loving it; management has made generally available GitLab Duo with Amazon Q, which enables AI agents to assist with multi-step tasks; Alexa+ is meaningfully smarter and more capable than the previous Alexa; Alexa+ is free with Prime and available to non-Prime customers at $19.99 per month; Alexa+ is just starting to be rolled out in the USA and will be introduced to other countries later in 2025; users really like Alexa+ thus far; Alexa+ is now with more than 100,000 users; Amazon already has 0.5 billion devices in people’s homes, offices, and cars that can easily distribute Alexa+; management thinks users will have to relearn a little on how to communicate with Alexa+, but the communication experience is now much better; management asked Alexa+ about good Italian restaurants in New York and Alexa+ helped to make a reservation

To date, virtually all of the agentic use cases have been of the question-answer variety. Our intention is for agents to perform wide-ranging complex multistep tasks by organizing a trip or setting the lighting, temperature and music ambience in your house for dinner guests or handling complex IT tasks to increase business productivity. There haven’t been action-oriented agents like this until Alexa+…

…This past quarter, Amazon Q, the most capable generative AI-powered assistant for accelerating software development and leveraging your own data, launched a lightning fast new agent coding experience within the command line interface that can execute complex workflows autonomously. Customers are loving this. We also made generally available GitLab Duo with Amazon Q, enabling AI agents to assist multi-step tasks such as new feature development, code-based upgrades for Java 8 and 11, while also offering code review and unit testing, all within the same familiar GitLab platform…

…We introduced Alexa+, our next-generation Alexa personal assistant, who is meaningfully smarter and more capable than our prior self, can both answer virtually any question and take actions, and is free with Prime or available to non-Prime customers for $19.99 a month. We’re just starting to roll this out in the U.S., and we’ll be expanding to additional countries later this year. People are really liking Alexa+ thus far…

…So we’ve worked hard on that in Alexa+. We’ve been — we started rolling out over the last several weeks. It’s with now over 100,000 users with more rolling out in the coming months. And so far, the response from our customers has been very, very positive…

…We’re very fortunate in that we have over 0.5 billion devices out there in people’s homes and offices and cars. So we have a lot of distribution already…

…To some degree, there will be a little bit of rewiring for people on what they can do because you get used to patterns. I mean even the simple thing of not having to speak, Alexa speak anymore, we’re all used to saying, Alexa, before we want every action to happen. And what you find is you really don’t have to do it the first time, and then really the conversation is ongoing where you don’t have to say Alexa anymore. And I’ve been lucky enough to have the alpha and the beta that I’ve been playing with for several months, and it took me a little bit of time to realize I didn’t have to keep saying Alexa, it’s very freeing when you don’t have to do that…

…When I was in New York, when we were announcing, I asked her, what were the — we did the event way downtown. I asked her what was great Italian restaurants or pizza restaurants, she gave me a list and she asked me if I wanted her to make a reservation. I said yes. And she made the reservation and confirmed the time, like that. When you get into those types of routines and you have those types of experience, they’re very, very useful.

The majority of Amazon’s capital expenditure (capex) in 2025 Q1 was for AWS’s technology infrastructure, including the Trainium chips

Turning to our cash CapEx, which was $24.3 billion in Q1. The majority of this spend is to support the growing need for technology infrastructure. It primarily relates to AWS as we invest to support demand for our AI services and increasingly in custom silicon like Trainium as well as tech infrastructure to support our North America and International segments. We’re also investing in our fulfillment and transportation network to support future growth and improve delivery speeds and our cost structure. This investment will support growth for many years to come.

The vast majority of successful startups are built on AWS; high-profile startups building AI coding agents are on AWS

If you look at the start-up space, the vast majority of successful start-ups over the last 10 to 15 years have run on top of AWS…

…If you just look at the growth of these coding agents in the last few months, these are companies like Cursor or Vercel, both of them run significantly on AWS.

Amazon’s management thinks that current AI apps have yet to really tackle customer experiences that are going to be reinvented and many other agents that are going to be built

What’s interesting in AI is that we still haven’t gotten to all the other customer experiences that are going to be reinvented and all the other agents that are going to be built. They’re going to take the role of a lot of different functions today. And those are — they’re — even though we have a lot of combined inference in those areas, I would say we’re not even at the second strike of the first batter in the first inning. It is so early right now.

AWS operating margin improved from 37.6% in 2024 Q1 to 39.5% in 2025 Q1, but margins will fluctuate from time to time; AWS’s margin strength is from the business’s strong growth, the impact of some continued investments, and AWS’s custom chips; the investments include software optimisations for server capacity, low-cost custom networking equipment, and power usage in data centers

AWS operating income was $11.5 billion and reflects our continued growth coupled with our focus on driving efficiencies across the business. As we said before, we expect AWS operating margins to fluctuate over time, driven in part by the level of investments we’re making at any point in time…

…We had a strong quarter in AWS, as you mentioned, the margin performance. I would attribute it to the strong growth that we’re seeing, coupled with the impact of some continued investment we’re making in innovation and technology. I’ll give you some examples. So we invest in software and process improvements and ends up optimizing our server capacity, which helps our infrastructure cost. We’ve been developing more efficient network using our low-cost custom networking gear. We’re working to maximize the power usage in our existing data centers, which both lowers our costs and also reclaims power for other newer workloads. And we’re also seeing the impact of advancing custom silicon like Graviton. It provides lower cost not only for us, but also for our customers, better price performance for them.
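
As a rough cross-check, AWS’s margin can be backed out from figures already quoted in this section: the US$117 billion annualised run rate implies roughly US$29 billion of quarterly revenue, and US$11.5 billion of operating income on that base is a high-30s margin. A quick sketch of the arithmetic:

```python
# Back-of-the-envelope check using only figures quoted in this section:
# the annualised run rate is roughly the latest quarterly revenue times 4.

run_rate = 117e9                  # AWS annualised revenue run rate, US$
quarterly_revenue = run_rate / 4  # ~US$29.3 billion implied for 2025 Q1
operating_income = 11.5e9         # AWS operating income in 2025 Q1, US$

margin = operating_income / quarterly_revenue
print(f"Implied AWS operating margin: {margin:.1%}")  # ~39.3%, close to the 39.5% cited above
```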

Apple (NASDAQ: AAPL)

Apple is currently shipping an LLM (large language model) on the iPhone 16 where some of the queries are being handled on the device itself

As you know, we’re shipping an LLM on the iPhone 16 today. And there are — some of the queries that are being used by our customers are on-device, and then others go to the private cloud where we’ve essentially mimicked the security and privacy of the device into the cloud. And then others, for world knowledge, are with the integration with ChatGPT.

The new Mac Studio has Apple’s M4 Max and M3 Ultra chips, and it can run large language models with over 600 billion parameters entirely in memory

The new Mac Studio is the most powerful Mac we’ve ever shipped, equipped with M4 Max and our new M3 Ultra chip. It’s a true AI powerhouse capable of running large language models with over 600 billion parameters entirely in memory.
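
A rough memory estimate shows why a 600-billion-parameter model can plausibly fit entirely in memory on such a machine: weights occupy roughly the parameter count times the bytes per parameter, which depends on quantisation. The quantisation levels below are my own illustrative assumptions (activation memory and KV cache are ignored), and the 512GB figure refers to the Mac Studio’s top unified-memory configuration.

```python
# Rough weights-only memory footprint of a large language model:
# parameters x bytes per parameter. Quantisation levels are assumptions
# for illustration; activation memory and KV cache are ignored.

params = 600e9  # 600 billion parameters

for fmt, bytes_per_param in {"fp16": 2.0, "int8": 1.0, "int4": 0.5}.items():
    gigabytes = params * bytes_per_param / 1e9
    print(f"{fmt}: ~{gigabytes:,.0f} GB")

# fp16: ~1,200 GB -> does not fit
# int8:   ~600 GB -> does not fit in 512 GB
# int4:   ~300 GB -> fits comfortably in a 512 GB unified-memory configuration
```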

Apple has released VisionOS 2.4 which unlocks the first set of Apple Intelligence features for Vision Pro users

VisionOS 2.4 unlocks the first set of Apple Intelligence features for Vision Pro users while inviting them to explore a curated and regularly updated collection of spatial experiences with the Spatial Gallery app.

Apple’s management has released iOS 18.4, which brings Apple Intelligence to more languages (including localised English for Singapore); Apple has built its own foundation models for everyday tasks; new Apple Intelligence features in iOS 18 include Writing Tools, Genmoji, Image Playground, Image Wand, Clean Up, Visual Intelligence, and a seamless connection to ChatGPT

Turning to software. We just released iOS 18.4, which brought Apple Intelligence to more languages, including French, German, Italian, Portuguese, Spanish, Japanese, Korean and simplified Chinese as well as localized English to Singapore and India…

At WWDC24, we announced Apple Intelligence and shared our vision for integrating generative AI across our ecosystem into the apps and features our users rely on every day. To achieve this goal, we built our own highly capable foundation models that are specialized for everyday tasks. We designed helpful features that are right where our users need them and are easy to use. And we went to great lengths to build a system that protects user privacy whether requests are processed on-device or in the cloud with Private Cloud Compute, an extraordinary step forward for privacy and AI.

Since we launched iOS 18, we’ve released a number of Apple Intelligence features from helpful Writing Tools to Genmoji, Image Playground, Image Wand, Clean Up, visual intelligence and a seamless connection to ChatGPT. We made it possible for users to create movies of their memories with a simple prompt and added AI-powered photo search, smart replies, priority notifications, summaries for mail, messages and more. We’ve also expanded these capabilities to more languages and regions.

Apple’s in-house chips are designed with a neural engine that powers AI features across Apple’s products and 3rd-party apps; management thinks the neural engine makes Apple products the best devices for generative AI

AI and machine learning are core to so many profound features we’ve rolled out over the years to help our users live a better day. It’s why we designed Apple silicon with a neural engine that powers so many AI features across our products and third-party apps. It’s also what makes Apple products the best devices for generative AI.

Apple still needs more time to work on the more personalised Siri that was unveiled by management recently

With regard to the more personal Siri features we announced, we need more time to complete our work on these features so they meet our high-quality bar. We are making progress and we look forward to getting these features into customers’ hands.

Apple has low capital expenditures for AI relative to other US technology giants because it uses third-party data centers, so much of the cost shows up as operating expenses; Apple’s new $500 billion investment in the USA could signal more capital expenditures and data center investments

On the data center side, we have a hybrid strategy. And so we utilize third parties in addition to the data center investments that we’re making. And as I’ve mentioned in the $500 billion, there’s a number of states that we’re expanding in. Some of those are data center investments. And so we do plan on making investments in that area

Arista Networks (NYSE: ANET)

Arista Networks’ management remains confident of reaching $750 million in back-end AI revenue in 2025 even with the uncertainty surrounding US tariffs; the 1:1 ratio between front-end and back-end AI spending for Arista Networks’ products still holds, but management thinks it’s increasingly hard to parse between front-end and back-end spending

Our cloud and AI momentum continues as we remain confident of our $750 million front-end AI goal in 2025…

…Just a quick clarification before we go into Q&A. Jayshree meant we were reiterating our back-end goal of $750 million, not front-end AI…

…[Question] Is that 1:1 ratio for the front-end back is still intact in your perspective?

[Answer] On the front-end ratio, yes, we’ve said it’s generally 1:1. It’s getting harder and harder to measure front end and back end. Maybe we’ll look at the full AI cluster differently next year. But I think 1:1 is still a good ratio. It varies. Some of them just build a cluster and don’t worry about the front end and others worry about it entirely holistically. So it does vary, but I think the 1:1 is still a good ratio…

…[Question] You reiterated the $750 million back-end target, but you’ve kind of had this $1.5 billion kind of AI target for 2025. And just wondering, is the capability of that more dependent on kind of the tariffs given kind of some of the front-end spend?

[Answer] Regarding tariffs, I don’t think it will have a material difference on the $750 million number or the $1.5 billion. We got the demand. So unless we have some real trouble shipping it or customers change their mind, I think we’re good with both those targets for the year.
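
For what it’s worth, the two targets in this exchange tie together through the 1:1 ratio mentioned earlier: if front-end AI spend roughly matches the back-end goal, the total lands at the $1.5 billion figure. A trivial sketch, assuming the ratio holds:

```python
# How Arista's two AI targets relate, assuming the roughly 1:1
# front-end:back-end ratio management cites.
back_end = 750e6            # 2025 back-end AI revenue goal, US$
front_end = 1.0 * back_end  # ~1:1 front-end:back-end ratio
print(f"Implied total AI target: ${(back_end + front_end) / 1e9:.1f}B")  # $1.5B
```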

Arista Networks is progressing well with its 4 major AI customers; 1 of the 4 customers has been on NVIDIA’s InfiniBand solution for a long time, so it will be a small customer for Arista Networks; 2 of the 4 are heading towards 50,000 GPU deployments by end-2025, maybe even 100,000 GPUs; 3 of the 4 customers are already in production, with the 4th progressing well towards production; management has a lot of visibility from the 4 major AI customers for 2025 and 2026 and it’s looking good; the 4 major AI customers are mostly deploying Arista Networks’ 800-gig switches

We are progressing well in all 4 customers and continue to add smaller ones as well…

…Let me start with the 4 customers. All of them are progressing well. One of them is still new to us. They’ve been in Infiniband for a long time, so they’ll be small. I would say 2 of them are heading towards 50,000 GPU deployments by end of the year, maybe they’ll be at 100 but I can be most certainly sure of 50,000, heading to 100,000. And then the other one is also in production. So I had talked about all 4 going into production. Three are already in production, the fourth one is well underway…

…[Question] If I can go back to the 4 Tier 1s that you’re working with on the AI back end and the progress that you updated on that front. Are these customers now giving you more visibility just given the tariff landscape and that you would need to sort of build inventory for some of the finished codes? And can you just update us how they’re handling the situation on that front? And particularly then, as you think about — I think the investor focus is a lot about sort of 2026 and potential sort of changes in the CapEx landscape from these customers at that point. Are you getting any forward visibility from them? Any sort of early signs for 2026 on these customers?

[Answer] We definitely have all the visibility in the world for this year, and we’re feeling good. We’re getting unofficial visibility because they all know our lead times are tied to some fairly long lead times from our partners and suppliers. So I would say 2026 is looking good. And based on our execution of 2025 and plans we’re putting together, we should have a great year in 2026 as well for AI sector specifically…

…[Question] Do you see the general cadence of hyperscalers deploying 800-gig switch ports this year? I ask because I believe your Etherlink family of switches became generally available in late 2024.

[Answer] I alluded to this earlier in 2024, the majority of our AI trials were on 400 gig at that time. So you’re right to observe that with our Etherlink portfolio really getting introduced in the second half of ’24 that a lot of our 800-gig activity has picked up in 2025, some of which will be reflected in shipments and some of it which will be part of our deferred. So it’s a good observation and an accurate one that this is the year of 800, like last year it was the year of 400.

Arista Networks’ management plans for the company to be the premier and preferred network for NVIDIA’s next-generation GPUs; Arista Networks’ Etherlink portfolio makes it easy to identify and localise performance issues in accelerated compute AI clusters

At the GTC event in March of 2025, we heard all about NVIDIA’s planned GPU road map every 12 to 18 months, and Arista intends to be the premier and preferred scale-out network for all of those GPUs and AI accelerators. Traditional GPUs have collective communication libraries, or CCLs as they’re known, that try to discover the underlying network topology using localization techniques. With this accelerated compute approach, the discrepancies between the discovered topology and the one that actually happens can impact AI job completion times. Arista’s Etherlink portfolio highlights the accelerated networking approach, bringing that single point of network control and visibility as a differentiation. This makes it extremely crisp to identify and localize performance issues especially as the size of the AI cluster grows to 50,000 and 100,000 XPUs with the Arista AI Spine and leaf network designs.

Arista Networks’ campus portfolio provides cost-effective access points for agentic AI applications

Arista’s cognitive campus portfolio features our advanced spine with Power over Ethernet wired leaf capabilities, along with a wide range of cost-effective wireless Wi-Fi 7 indoor and outdoor access points for the newer IoT and agentic applications.

The data center ecosystem is still somewhat new to AI and the suppliers are figuring things out together

But everybody is new to AI, they’ve never really put together a network design for 4-rail or 8-rail or how does it connect into the GPUs and what is the NIC [network interface card] attachment? What is the accessories in terms of cables or optics that connect? So this movement from trials to production causes us to bring a whole ecosystem together for the first time.

Arista Networks’ management thinks that when it comes to AI use-cases, Arista Networks’ products will play a far bigger role than whitebox networking manufacturers, even though whiteboxes will always be around and management is even happy to help customers build networking solutions that encompass both Arista Networks’ products and whiteboxes; Arista Networks was able to help a small AI customer build a network for a cluster of a few hundred GPUs very quickly after the customer struggled to do so with whiteboxes

I’ve always said, that white box is not new. It’s been with us since the beginning of time. In fact, when Arista got started, a couple of our customers had already implemented internally various implementations of white box. So there is a class of customers who will make the investments in engineering and operations to build their own network and manage it. And it’s a very different business model. It operates typically at 10% gross margins. I don’t think you want Arista to go there. And it’s very hardware-centric and doesn’t require the rich software foundation and investments that we’ve made. So first, I’ll start by saying we will always and will continue to coexist with white box. There are times that you’ve noticed this, too, that because Arista builds some very superior hardware, that even if they don’t use our EOS, they like to have our blue box, as I often call it, the Arista hardware that’s engineered much better than any others with a more open OS like Sonic or FBOSS or at least the attributes of running both EOS and an open-source networking system. So I think we view this as a natural part of selection in a customer base where if it’s a simple use case, they’re going to use something cost effective. But if it’s a really complex use case, like the AI spine or roles that require and demand more mission-critical features, Arista always plays a far bigger role in premium, highly scalable, highly valued software and hardware combinations than we do in a stand-alone white box. So we’ll remain coexistent peacefully, and we’re not in any way threatened by it. In fact, I would say we work with our customers to make sure as they’re building permutations and combinations of the white box, that we can work with that and build the right complement to that with our Etherlink portfolio…

…We had a customer, again, not material. We said, “I can’t get these boxes. I can’t make them run. I cannot get an AI network.” And one of my most technical sales leaders said, hey, we got a chance to build an AI cluster here for a few hundred GPUs. We jumped on it. Obviously, that customer is small and has been largely using white boxes and is now about to install an AI leaf and an AI spine, and we had to get it to him before the tariff deadline. So as an example of not material, but how quickly these decisions get made when you have the right product, right performance, right quality, right mission-critical nature and you can deal with that traffic pattern better than anyone else can. So it happens. It’s not big because we’ve got so much commitment in a given quarter from a customer, but when it is, we act with a great deal of nimbleness and agility to do that.

Arista Networks’ management is happy to support any kind of advanced packaging technologies – such as co-packaged optics or co-packaged copper – for back-end AI networks in the company’s products; management has yet to see any major adoption of co-packaged optics for back-end AI networks

[Question] I’d love to get your latest views around co-packaged optics. NVIDIA introduced its first CPO switches at GTC for scale-out. And I was wondering whether that had any impact on your views regarding CPO adoption in back-end AI networks in coming years.

[Answer] It’s had no impact. It’s very early days. I think you’ve seen — Arista doesn’t build optics, but Arista enables optics and we’ve always been at the forefront, especially with Andy Bechtolsheim and his team of talented tech individuals that whether it is pluggable optics with LPO or how we define the OSFP connector for MSAs or 100 gig, 400 gig, it’s something we take seriously. And our views on CPOs, it’s not a new idea. It’s been demonstrated in prototype for, I don’t know, 10 to 20 years. The fundamental lack of adoption to date on CPO, it’s relatively high failure rates and it’s mostly been in the labs. So what are some of the advantages of CPO? Well, it has a linear interface. It has lower power than DSP for long-haul optics. It has a higher channel count. And I think if pluggable optics can achieve some of that in the best of both worlds, then you can overcome that with pluggable optics or even co-packaged copper. So Arista has no religion. We will do co-package copper. We’ll do co-package optics. We will do pluggable optics, but it’s too early to call this a real production-ready product that’s still in very early experiments and trials.

Arista Networks’ management is not seeing any material pull-forward in demand for its products because of US tariffs

[Question] We know tariffs are coming later in the year. Whether the strength you’re seeing is the result of early purchases of customers ahead of tariffs in order to save some dollars?

[Answer] Even if our customers try to pull it in and get it all by July, we would be unable to supply it. So that would be the first thing. So I’m not seeing the pull-ins that are really material in any fashion. I am seeing a few customers trying to save $1 here, $1 there to try and ship it before the tariff date but nothing material. Regarding pull-ins for 4 to 6 quarters, again, our best visibility is near term. And if we saw that kind of behavior, we would see a lot of inventory sitting in our customers, which we don’t. In fact, that’s long enough to ship faster and ship more.

2 years ago, Arista Networks’ management saw all its Cloud Titan customers pivot to AI and slow down their cloud spending; management is seeing more balanced spending now, with a more surgical focus on AI

2 years ago, I was very nervous because the entire cloud titans pivoted to AI and slowed down their cloud. Now we see a more balanced spend. And while we can’t measure how much of this cloud and how much of it is AI, if they’re kind of cobbled together, we are seeing less of a pivot, more of a surgical focus on AI and then a continued upgrade of the cloud networks as well. So compared to ’23, I would say the environment is much more balanced between AI and cloud.

Arista Networks’ management sees competitive advantages in the company’s hardware design, development, and operation that are hard to replicate even for its Cloud Titan customers

[Question] What functionality about the blue box actually makes it defensible versus what hyperscalers can kind of self-develop?

[Answer] Let me give you a few attributes of what I call the blue box, and I’m not saying others don’t have it, but Arista has built this as a mission, although we’re known for our software. We’re just as well known for our hardware. When you look at everything from a form factor of a one RU that we build to a chassis, we’ve got a tremendous focus on signal integrity, for example, all the way from layer 1, multilayer PCB boards, a focus on quality, a focus on driving distances, a focus on integrating optics for longer distances, a focus on driving MACsec, et cetera. So that’s a big focus. The second is hardware diagnostics. Internal to the company, we call it Arista boot. We’ve got a dedicated team focused on not just the hardware but the firmware to make it all possible in terms of troubleshooting because when these boards get super complex, you know where the failure is and you’re running at high-speed 200 [indiscernible] 30s. So things are very complex. So the ability to pinpoint and troubleshoot is a big part of what we do. And then there’s additional focus on the mechanical, the power supplies, the cooling, all of which translate to better power characteristics. Along with our partners and chip vendors, there’s a maniacal focus on not just high performance but low power. So some of the best attributes come from our blue boxes, not only for 48 ports, but all the way up to 576 ports of an AI spine or double that if you’re looking for dual capabilities. So well-designed, high-quality hardware is a thing of beauty, but also think of complexity that not everyone can do.

With neo AI cloud customers, Arista Networks’ management is observing that they are very willing to forsake NVIDIA’s GPUs and networking solutions and try other AI accelerators and Ethernet; management thinks that the establishment of the Ultra Ethernet Consortium in 2024 has a role to play in the increasing adoption of Ethernet for AI networking; with the Cloud Titans, management is also observing that they are shifting towards Ethernet; management thinks that the shift from Infiniband to Ethernet is faster than the shift from NVIDIA’s GPUs to other companies’ GPUs

[Question] There’s a general perception that most of them are buying NVIDIA-defined clusters and networking. So I wonder if you could comment on those trends, their interest in moving past InfiniBand? And also are there opportunities developing with some of these folks to kind of multi-source their AI connectivity to different providers?

[Answer] We’re seeing more adventurous spirit in the neo-cloud customers because they want to try alternatives. So some of them are absolutely trying other AI accelerators, like Lisa and AMD and my friends there. Some of them are absolutely looking at Ethernet, not InfiniBand, as a scale-out. And that momentum has really shifted in the last year with the Ultra Ethernet Consortium and the spec coming out in May. I just want to give a shout-out to that team and what we have done. So I think Ethernet is a given; there’s an awful lot of legacy InfiniBand that will obviously sort itself out. And on a new class of AI accelerators, we are seeing more niche players and more internal developments from the cloud titans, all of which is mandating more Ethernet. So I think, between your 2 questions, I would say the progress from InfiniBand to Ethernet is faster, while the progress from the one they know, the high-performance GPU from NVIDIA, to the others is still taking time.

ASML (NASDAQ: ASML)

ASML’s management still sees AI (artificial intelligence) as the key growth driver; ASML will hit upper range of guidance for 2025 if AI demand continues to be strong, while ASML will hit the lower range of guidance if there is uncertainty among its customers

Consistent with our view from last quarter, the growth in artificial intelligence remains the key driver for growth in our industry. If AI demand continues to be strong and customers are successful in bringing on additional capacity to support the demand, there is a potential opportunity towards the upper end of our range. On the other hand, there is still quite some uncertainty for a number of our customers that can lead to the lower end of our range. 

ASML’s management is still positive on the long-term outlook for ASML, with AI being a driver for growth

Looking longer term, the semiconductor market remains strong with artificial intelligence, creating growth in recent quarters, and we see some of the future demand for AI solidifying, which is encouraging. 

ASML’s management thinks inference will become a larger part of AI demand going forward

I think there has been a lot of emphasis in the past quarters on the training side of things. More and more, which I think is logical, you also see emphasis being put on the inferencing side of the equation. So I think you will see the inferencing part becoming a larger component of AI demand on a go-forward basis.

ASML’s management is unable to tell what 2027 will look like for AI demand, but the commitment to AI chips in the next 2 years is very strong

You are looking at major investment, investment that has been committed, investment that a lot of companies believe they have to make in order to enter this AI race. I think the threshold to change this behavior is pretty high. And this is what our customers are telling us. And that’s also why we mentioned that, based on those conversations, we still see ’25 and ’26 as growth years. That’s largely driven by AI and by that dynamic. Now ’27 starts to be a bit further away, so you’re asking us too much, I think, to be able to answer what AI may look like in ’27. But if you look at the next couple of years, so far, the commitment to the AI investment and, therefore, the commitment to deliver the chips for AI has been very solid.

Coupang (NYSE: CPNG)

Coupang’s management is investing in automation (such as automated picking, packing and sorting) and machine learning to deploy inventory more precisely to improve the customer experience and reduce costs

This quarter, we saw benefits from advances in our automated picking, packing and sorting systems and machine learning utilization that deploys inventory with more precise prediction of demand. This, coupled with our focus on operational excellence, enables us to continually improve the customer experience while also lowering their cost of service.
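
To make the inventory idea concrete, here is a minimal sketch in Python of how a demand forecast can drive where and how much inventory to place. Everything here (the smoothing model, the fulfilment-centre names, the numbers) is hypothetical and far simpler than what Coupang describes; it only illustrates the mechanics of predicting demand and setting a stock target.

```python
# Hypothetical sketch only: a toy demand forecast driving inventory placement.
from statistics import stdev

def forecast_daily_demand(daily_units, alpha=0.3):
    """Single exponential smoothing over a history of daily unit sales."""
    level = daily_units[0]
    for units in daily_units[1:]:
        level = alpha * units + (1 - alpha) * level
    return level

def stock_target(daily_units, lead_time_days=2, z=1.65):
    """Expected demand over the restocking lead time plus a safety buffer
    (z = 1.65 covers roughly 95% of demand if forecast errors are ~normal)."""
    per_day = forecast_daily_demand(daily_units)
    sigma = stdev(daily_units)  # crude proxy for forecast error
    return per_day * lead_time_days + z * sigma * lead_time_days ** 0.5

# Made-up sales histories per fulfilment centre.
history = {"fc_seoul": [120, 135, 128, 150, 142], "fc_busan": [40, 38, 45, 52, 47]}
for fc, units in history.items():
    print(fc, "stock target:", round(stock_target(units)))
```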

Datadog (NASDAQ: DDOG)

Existing customer usage growth in 2025 Q1 was in line with management’s expectations; management is seeing high growth in Datadog’s AI cohort, and stable growth in the other cohorts

Overall, we saw trends for usage growth from existing customers in Q1 that were in line with our expectations. We are seeing high growth in our AI cohort as well as consistent and stable growth in the rest of the business.

Datadog’s management continues to see increasing interest in next-gen AI capabilities and analysis; 4,000 Datadog customers at the end of 2025 Q1 used 1 or more Datadog AI integrations (was 3,500 in 2024 Q4), up 100% year-on-year; the number of companies using end-to-end data observability to manage model performance, security, and quality has more than doubled in the past 6 months; management has observed that data observability has become a big enabler of building AI workloads; the acquisition of Metaplane helps Datadog build towards a comprehensive data observability suite; management thinks data observability will be a big opportunity for Datadog

We continue to see rising customer interest in next-gen AI capabilities and analysis. At the end of Q1, more than 4,000 customers used one or more Datadog AI integrations, and this number has doubled year-over-year. With end-to-end data observability, we are seeing continued growth in customers and usage as they seek to manage end-to-end model performance, security and quality. I’ll call out the fact that the number of companies using end-to-end data observability has more than doubled in the past 6 months…

…[Question] What the vision is about moving into data observability and how consequential an opportunity it could be for Datadog?

[Answer] The field is evolving into a big enabler, or it can be a problem if you don’t do it right, for building enterprise workloads — for AI workloads, sorry. So in other words, making sure the data is being extracted from the right place, transformed the right way and is being fed into the right AI models on the other end…

…We only had some building blocks for data observability. We built a data streams monitoring product for streaming data that comes out of queues, such as Kafka, for example. We built a job monitoring product that monitors batch jobs and large transformation jobs. We have a database monitoring product that looks at the way you optimize queries and optimize database performance and cost. And by adding data quality and data pipelines with Metaplane, we have a full suite, basically, that allows our customers to manage everything from getting the data from their core data storage into all of the products and AI workloads and reports they need to populate with that data. And so we think it’s a big opportunity for us.
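
For readers unfamiliar with the term, a small illustration of what “data observability” checks look like in practice may help. The sketch below is not Datadog’s or Metaplane’s API; it is a generic example of the three checks such tools typically automate: freshness, volume, and quality.

```python
# Not Datadog's API -- a minimal sketch of the checks a data observability
# tool automates: freshness, row volume, and null rates on a table load.
from datetime import datetime, timedelta, timezone

def check_table(last_loaded_at, row_count, null_fraction,
                max_staleness=timedelta(hours=1),
                min_rows=1000, max_null_fraction=0.01):
    issues = []
    if datetime.now(timezone.utc) - last_loaded_at > max_staleness:
        issues.append("stale: data has not landed within the freshness SLO")
    if row_count < min_rows:
        issues.append(f"volume anomaly: only {row_count} rows loaded")
    if null_fraction > max_null_fraction:
        issues.append(f"quality: {null_fraction:.1%} nulls in a key column")
    return issues  # an observability platform would alert on any of these

# Example: a load that is 3 hours old, undersized, and full of nulls.
print(check_table(datetime.now(timezone.utc) - timedelta(hours=3),
                  row_count=250, null_fraction=0.04))
```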

Datadog’s management has improved Bits AI, and is using next-gen AI to help solve customer issues quickly and move towards auto remediation

We are adding to Bits AI, with capabilities for customers to take action with workflow automation and App Builder, using next-gen AI to help our customers remediate issues more quickly and move towards auto remediation in the future.

Datadog has made 2 recent acquisitions; Eppo is a feature management and experimentation platform; management sees automated experimentation as an important part of modern application development because of the use of AI in coding; Metaplane is a data observability platform that works well for new enterprise AI workloads; management is seeing more AI-written code in both its customers and the company itself; management thinks that as AI writes more code, more value will come from being able to observe and understand the AI-written code in production environments, which is Datadog’s expertise; the acquisitions of Eppo and Metaplane are to position Datadog for the transition towards a world of AI-written code

We recently announced a couple of acquisitions.

First, we acquired Eppo, a next-generation feature management and experimentation platform. The Eppo platform helps increase the velocity of releases, while also lowering risk by helping customers release and validate features in a controlled manner. Eppo augments our efforts in product analytics, helping customers evaluate feature variants and tie feature performance to business outcomes. More broadly, we see automated experimentation as a key part of modern application development, with the rapid adoption of AI-generated code, as well as more and more of the application logic itself being implemented with nondeterministic AI models.

Second, we also acquired Metaplane, the data observability platform built for modern data teams. Metaplane helps prevent, detect and resolve data availability and quality issues across a company’s data warehouses and data pipelines. We’ve seen for several years now that data freshness and quality are critical for applications and business analytics. And we believe that they are becoming key enablers of the creation of new enterprise AI workloads, which is why we intend to integrate the Metaplane capabilities into our end-to-end data observability offerings…

…There is definitely a big transition that is happening right now, like we see the rise of AI written code. We see it across our customers. We also see it inside of Datadog, where we’ve had very rapid adoption of this technology as well…

…The way we see it is that there’s a lot less value in writing the code itself; everybody can do it pretty quickly and can do a lot of it. You can have the machine do a lot of it, and you complement it with a little bit of your own work. But the real difficulty is in validating that code, making sure that it’s safe, making sure it runs well, that it’s performing and that it does what it’s supposed to do for the business. It’s also making sure that when 15 different people are changing the code at the same time, all of these different changes come together and work the right way, and you understand the way these different pieces interact. So the way we see it, this moves a lot of the value from writing the code to observing it and understanding it in production environments, which is what we do. So a lot of the investments we’re making right now, including some of the acquisitions we’ve announced, build towards that, making sure that we’re in the right spot.

Datadog signed a 7-figure expansion deal with a leading generative AI company; the generative AI company needs to reduce tool fragmentation; the generative AI company is replacing commercial tools for APM (application performance monitoring) and log management with Datadog, and is expanding to 5 Datadog products

We signed a 7-figure annualized expansion contract with a leading next-gen AI company. This customer needs to reduce tool fragmentation to keep on top of its hypergrowth in usage and employee headcount. With this expansion, the customer will use 5 Datadog products and will replace commercial tools for APM and log management.

AI-native customers accounted for 8.5% of Datadog’s ARR in 2025 Q1 (was 6% in 2024 Q4); AI-native customers contributed 6 percentage points to Datadog’s year-on-year growth in 2025 Q1, compared to 2 percentage points in 2024 Q1; management thinks AI-native customers will continue to optimise cloud and observability usage in the future; AI-native contracts that come up for renewal are healthy; Datadog has significant customer concentration within the AI-native cohort; Datadog has more than 10 AI-native customers that are spending $1 million or more with Datadog; the strong performance of the AI-native cohort in 2025 Q1 is fairly broad-based; Datadog is helping the AI-native customers mostly with inference, and not training; when Datadog sees growth among AI-native customers, that’s growth of AI adoption because the AI-native customers’ workloads are mostly customer-facing

We saw a continued rise in contribution from AI-native customers who represented about 8.5% of Q1 ARR, up from about 6% of ARR last quarter and up from about 3.5% of ARR in the year ago quarter. AI-native customers contributed about 6 points of year-over-year revenue growth in Q1 versus about 5 points last quarter and about 2 points in the year ago quarter. We continue to believe that adoption of AI will benefit Datadog in the long term, but we remain mindful that we may see volatility in our revenue growth on the backdrop of long-term volume growth from this cohort as customers renew with us on different terms and as they may choose to optimize cloud and observability usage…

…[Question] Could you talk about what you’re seeing from some of those AI-native contracts that have already come up for renewal and just how those conversations have been trending?

[Answer] All the contracts that come up for renewal, they are healthy. The trick with the cohort is that it’s growing fast. There’s also a revenue concentration there. We now have our largest customer in the cohort, and they’re growing very fast. And on the flip side of that, we also have a larger number of large customers that are also growing. So we — I think we mentioned more than 10 customers now that are spending $1 million or more with us in that AI-native cohort and that are also growing fast…

…On the AI side, we do have, as I mentioned, one large customer, and they’re contributing more of the new revenue than the others. But we see growth in the rest of the cohort as well. So again, it’s fairly typical…

…For the AI natives, actually, what we help them with mostly is not training. It’s running their applications and their inference workloads, which are customer-facing. That’s because training for the AI natives tends to be largely homegrown, one-off and different between each and every one of them. We expect that, as and when most other companies and enterprises do significant training, this will no longer be the case; it will not be one-off and homegrown. But right now, it is still the AI natives that do most of the training, and they still do it in a way that’s largely homegrown. So when we see growth in the AI-native cohort, that’s growth of AI adoption, because that’s growth of customer-facing workloads, by and large.
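
As a back-of-the-envelope illustration of how a cohort’s share of ARR translates into “points” of revenue growth, here are the mechanics in Python. The 25% total growth rate is an assumption for illustration only, and ARR shares will not reconcile exactly with recognized-revenue growth, so treat the output as indicative of the arithmetic rather than a precise reconciliation of the ~6 points cited on the call.

```python
# Back-of-the-envelope: how a cohort's share of ARR maps to "points" of
# year-over-year growth. The 25% total growth rate is an assumed figure;
# ARR shares and recognized revenue differ in practice.
total_growth = 0.25                            # hypothetical total YoY growth
ai_share_now, ai_share_prior = 0.085, 0.035    # from the call: 8.5% vs 3.5% of ARR

rev_prior = 100.0                              # index year-ago revenue to 100
rev_now = rev_prior * (1 + total_growth)       # 125.0
ai_now = rev_now * ai_share_now                # ~10.6
ai_prior = rev_prior * ai_share_prior          # 3.5

points_from_ai = (ai_now - ai_prior) / rev_prior * 100
print(f"AI-native cohort contributed ~{points_from_ai:.1f} of the 25 points of growth")
```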

Datadog’s management sees the trend of cloud migration as being steady; management sees cloud migration being partly driven by customers’ desires to adopt AI, because migrating to the cloud is a prerequisite for AI

[Question] What are the trend lines on the cloud migration side?

[Answer] It’s consistent with what we’ve seen before. It’s also consistent with what you’ve heard from the hyperscalers over the past couple of weeks. So I would say it’s steady, unremarkable. It’s not really trending up nor trending down right now. But we see the same desire from customers to move more into the cloud and to lay the groundwork so they can also adopt AI, because digital transformation and cloud migrations are prerequisites for that.

Datadog’s management thinks there will be more products for Datadog to build as AI workloads shift towards inferencing; management is seeing its LLM Observability product getting increasing usage as customers move AI workloads into production; management wants to build more products across the stack, from closer to the GPU to AI agents

On the workloads turning more towards inference, there’s definitely more product to build there. We built an LLM Observability product that is getting increasing usage from customers as they move into production. And we think there’s more that we need to build, both down the stack closer to the GPUs and up the stack closer to the agents that are being built on top of these models.

Datadog’s management is already seeing returns on Datadog’s internal investments in AI in terms of employee productivity; in the long-term, there’s the possibility that Datadog may need lesser headcount because of AI

[Question] Internally, how do you think about AI from an efficiency perspective?

[Answer] For right now, I think we’re seeing the returns in productivity, whether that be salespeople getting more information or R&D. We’re essentially trying to create an environment where we’re encouraging the various departments to use it and learn from it. Long term, there may well be efficiency gains that can be manifested in headcount.

Mastercard (NYSE: MA)

Mastercard’s management sees contactless payments and tokenised transactions as important parts of agentic AI digital commerce; Mastercard has announced Mastercard Agent Pay, which will facilitate safe, frictionless and programmable transactions across AI platforms; Mastercard is working with important AI companies such as Microsoft and OpenAI to deliver agentic payments

Today, 73% of all in-person switched transactions are contactless, and approximately 35% of all our switched transactions are tokenized. These technologies will continue to play an important role as we move into the next phase of digital commerce, such as agentic AI. We announced Mastercard Agent Pay to leverage our agentic tokens as well as franchise rules, fraud and cybersecurity solutions. Combined, these will help partners like Microsoft to facilitate safe, frictionless and programmable transactions across AI platforms. We will also work with companies like OpenAI to deliver smarter, more secure and more personalized agentic payments. The launch of Agent Pay is an important step in redefining commerce in the AI era.
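
Since tokenisation carries a lot of weight in that quote, a toy illustration may help. The sketch below is emphatically not Mastercard’s implementation; it just shows the core idea that a network token stands in for the real card number (PAN) and is only useful within the domain it was issued for, which is what makes it safe to hand to merchants or, in the Agent Pay framing, to AI agents.

```python
# Toy illustration of payment tokenisation (not Mastercard's system):
# the merchant or AI agent only ever sees a token; only the network's
# vault can map it back to the real card number, and only for the
# merchant the token was issued to (domain restriction).
import secrets

class TokenVault:
    def __init__(self):
        self._vault = {}  # token -> (PAN, merchant the token is bound to)

    def tokenize(self, pan, merchant):
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = (pan, merchant)
        return token  # safe to store or pass along; it reveals nothing

    def detokenize(self, token, merchant):
        pan, allowed = self._vault.get(token, (None, None))
        return pan if merchant == allowed else None

vault = TokenVault()
token = vault.tokenize("5555444433331111", merchant="acme-store")
print(vault.detokenize(token, "acme-store"))    # network resolves the PAN
print(vault.detokenize(token, "someone-else"))  # None: useless if stolen
```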

Mastercard closed the Recorded Future acquisition in 2024 Q4 (Recorded Future provides AI-powered solutions for real-time visibility into potential threats related to fraud); Recorded Future just unveiled the AI-powered Malware Intelligence; Malware Intelligence enables proactive threat prevention

On the cybersecurity front, Recorded Future just unveiled malware intelligence. It’s a new capability enabling proactive threat prevention for any business using real-time AI-powered intelligence insights.

Mastercard’s management sees AI as being deeply ingrained in Mastercard’s business; Mastercard’s access to an enormous amount of data is an advantage for Mastercard in deploying AI; in 2024, a third of Mastercard’s products in its value-added services and solutions segment was powered by AI

AI is deeply ingrained in our business. We have access to an enormous amount of data, and this uniquely positions us to enhance our AI’s performance, resulting in greater accuracy and reliability. And we’re deploying AI to enable many solutions in market today. In fact, in 2024, AI enabled approximately 1 in 3 of our products within value-added services and solutions.

Meta Platforms (NASDAQ: META)

Meta’s management is focused on 5 opportunities within AI, namely improved advertising, more engaging experiences, business messaging, Meta AI and AI devices; the 5 opportunities are downstream of management’s attempt to build artificial general intelligence and leading AI models and infrastructure in an efficient manner; management thinks the ROI of Meta’s investment in AI will be good even if Meta does not succeed in all 5 opportunities

As we continue to increase our investments and focus more of our resources on AI, I thought it would be useful today to lay out the 5 major opportunities that we are focused on. Those are improved advertising, more engaging experiences, business messaging, Meta AI and AI devices. And these are each long-term investments that are downstream from us building general intelligence and leading AI models and infrastructure. Even with our significant investments, we don’t need to succeed in all of these areas to have a good ROI. But if we do, then I think that we will be wildly happy with the investments that we are making…

…We are focused on building full general intelligence. All of the opportunities that I’ve discussed today are downstream of delivering general intelligence and doing so efficiently.

Meta’s management’s goal with the company’s advertising business is for businesses to simply tell Meta their objectives and budget, and for Meta to do all the rest with AI; management thinks that Meta can redefine advertising into an AI agent that delivers measurable business results at scale

Our goal is to make it so that any business can basically tell us what objective they’re trying to achieve like selling something or getting a new customer and how much they’re willing to pay for each result and then we just do the rest. Businesses used to have to generate their own ad creative and define what audiences they wanted to reach, but AI has already made us better at targeting and finding the audiences that will be interested in their products than many businesses are themselves, and that keeps improving. And now AI is generating better creative options for many businesses as well. I think that this is really redefining what advertising is into an AI agent that delivers measurable business results at scale.

Meta tested a new advertising recommendation model for Reels in 2025 Q1 called Generative Ads Recommendation Model, or GEM, that has improved conversion rates by 5%; 30% more advertisers are using Meta’s AI creative tools in 2025 Q1; GEM is twice as efficient at improving ad performance for a given amount of data and compute; GEM’s better efficiency helped Meta significantly scale up the amount of compute used for model training; GEM is now being rolled out to additional surfaces across Meta’s apps; the initial test of Advantage+’s streamlined campaign creation flow for sales, app and lead campaigns is encouraging and will be rolled out globally later in 2025; Advantage+ Creative is seeing strong adoption; all eligible advertisers can now automatically adjust the aspect ratio of their existing videos and generate images; management is testing a feature that uses gen AI to place clothing on virtual models; management has seen a 46% lift in incremental conversions in the testing of the incremental attribution feature and will roll out the feature to all advertisers in the coming weeks; improvements in Meta’s advertising ranking and modeling drove conversion growth that outpaced advertising impressions growth in 2025 Q1

In just the last quarter, we are testing a new ads recommendation model for Reels, which has already increased conversion rates by 5%. We’re seeing 30% more advertisers using AI creative tools in the last quarter as well…

…In Q1, we introduced our new Generative Ads Recommendation Model, or GEM, for ads ranking. This model uses a new architecture we developed that is twice as efficient at improving ad performance for a given amount of data and compute. This efficiency gain enabled us to significantly scale up the amount of compute we use for model training with GEM trained on thousands of GPUs, our largest cluster for ads training to date. We began testing the new model for ads recommendations on Facebook Reels earlier this year and have seen up to a 5% increase in ad conversions. We’re now rolling it out to additional surfaces across our apps…

…We’re seeing continued momentum with our Advantage+ suite of AI-powered solutions. We’ve been encouraged by the initial test of our streamlined campaign creation flow for sales, app and lead campaigns, which starts with Advantage+ turned on from the beginning for advertisers. In April, we rolled this out to more advertisers and expect to complete the global rollout later this year. We’re also seeing strong adoption of Advantage+ Creative. This week, we are broadening access of video expansion to Facebook Reels for all eligible advertisers, enabling them to automatically adjust the aspect ratio of their existing videos by generating new pixels in each frame to optimize their ads for full screen surfaces. We also rolled out image generation to all eligible advertisers. And this quarter, we plan to continue testing a new virtual try-on feature that uses gen AI to place clothing on virtual models, helping customers visualize how an item may look and fit…

…We continue to evolve our ads platform to drive results that are optimized for each business’ objectives and the way they measure value. One example of this is our incremental attribution feature, which enables advertisers to optimize for driving incremental conversions or conversions we believe would not have occurred without an ad being shown. We’re seeing strong results in testing so far with advertisers using incremental attribution in tests seeing an average 46% lift in incremental conversions compared to their business-as-usual approach. We expect to make this available to all advertisers in the coming weeks…

…Year-over-year conversion growth remains strong. And in fact, we continue to see conversions grow at a faster rate than ad impressions in Q1, so reflecting increased conversion rates. And ads ranking and modeling improvements are a big driver of overall performance gains.
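
The “incremental attribution” idea in the excerpt above is easy to make concrete. Below is a minimal sketch of the standard incrementality measurement behind it: compare users exposed to an ad with a randomized holdout group that was eligible but never saw it. All numbers are made up for illustration; this shows the general technique, not Meta’s actual methodology.

```python
# Sketch of incrementality measurement: conversions that would not have
# happened without the ad = exposed rate minus holdout (baseline) rate.
exposed_users, exposed_conversions = 100_000, 2_400   # saw the ad
holdout_users, holdout_conversions = 100_000, 1_800   # eligible, never shown

rate_exposed = exposed_conversions / exposed_users    # 2.4%
rate_holdout = holdout_conversions / holdout_users    # 1.8% baseline

incremental = (rate_exposed - rate_holdout) * exposed_users
lift = (rate_exposed - rate_holdout) / rate_holdout
print(f"~{incremental:.0f} conversions attributable to the ad itself")
print(f"incremental lift over baseline: {lift:.0%}")  # ~33% in this example
```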

Improvements in the past 6 months to Meta’s content recommendation systems have driven increases of 7% in time spent on Facebook, 6% on Instagram, and 35% on Threads; video consumption in Facebook and Instagram grew strongly in 2025 Q1 because of improvements to Meta’s content recommendation systems; management sees opportunities for further gains in improving the content recommendation systems in 2025; Meta is making progress on longer-term efforts to improve its content recommendation systems in two areas, (1) develop increasingly efficient recommendation systems by incorporating innovations from LLM model architectures, and (2) integrating LLMs into content recommendation systems to better identify what is interesting to a user; management’s testing of Llama in Threads’ recommendation systems has led to a 4% increase in time spent from launch; management is exploring how Llama can be deployed in recommendation systems for photo and video content, which management expects can improve Meta AI’s personalisation by better understanding users’ interests and preferences through their use of Meta’s apps; management launched a new feed in Instagram in the US in 2025 Q1 of content a user’s friends have left a note on or liked and the new feed is producing good results; management has launched the Blend experience that blends a user’s Reels algorithm in direct messages with friends; the increases of 7% in time spent on Facebook and 6% on Instagram seen in the last 6 months is on top of uplift in time spent on Facebook and Instagram that management had already produced in the first 9 months of 2024

In the last 6 months, improvements to our recommendation systems have led to a 7% increase in time spent on Facebook, 6% increase on Instagram and 35% on Threads…

…In the first quarter, we saw strong growth in video consumption across both Facebook and Instagram, particularly in the U.S., where video time spent grew double digits year-over-year. This growth continues to be driven primarily by ongoing enhancements to our recommendation systems, and we see opportunities to deliver further gains this year.

We’re also progressing on longer-term efforts to develop innovative new approaches to recommendations. A big focus of this work will be on developing increasingly efficient recommendation systems so that we can continue scaling up the complexity and compute used to train our models while avoiding diminishing returns. There are promising techniques we’re working on that will incorporate the innovations from LLM model architectures to achieve this. Another area that is showing early promise is integrating LLM technology into our content recommendation systems. For example, we’re finding that LLM’s ability to understand a piece of content more deeply than traditional recommendation systems can help better identify what is interesting to someone about a piece of content, leading to better recommendations.

We began testing using Llama in Threads recommendation systems at the end of last year given the app’s text-based content and have already seen a 4% lift in time spent from the first launch. It remains early here, but a big focus this year will be on exploring how we can deploy this for other content types, including photos and videos. We also expect this to be complementary to Meta AI as it can provide more relevant responses to people’s queries by better understanding their interests and preferences through their interactions across Facebook, Instagram and Threads…

…In Q1, we launched a new experience on Instagram in the U.S. that consists of a feed of content your friends have left a note on or liked, and we’re seeing good results. We also just launched Blend, which is an opt-in experience in direct messages that enables you to blend your Reels algorithm with your friends to spark conversations over each other’s interest…

…We shared on the Q3 2024 call that improvements to our AI-driven feed and video recommendations drove a roughly 8% lift in time spent on Facebook and a 6% lift on Instagram over the first 9 months of last year. Since then, we’ve been able to deliver similar gains in just 6 months’ time with improvements to our AI recommendations delivering 7% and 6% time spent gains on Facebook and Instagram, respectively.
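
To make the “LLMs in recommendations” idea above concrete, here is a minimal sketch of one common pattern: score candidate content by the similarity of its LLM-derived embedding to a profile built from what the user engaged with. This is a generic illustration with tiny hand-made vectors, not Meta’s system; in practice the embeddings would come from an LLM encoder run over the content itself.

```python
# Generic sketch (not Meta's system): rank candidate posts by cosine
# similarity between their semantic embeddings and a user profile built
# from recently engaged content. Vectors here are tiny and hand-made.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-d "semantic" embeddings of candidate posts.
items = {
    "woodworking reel": np.array([0.9, 0.1, 0.0, 0.2]),
    "sourdough recipe": np.array([0.1, 0.9, 0.1, 0.0]),
    "router table demo": np.array([0.8, 0.0, 0.1, 0.3]),
}

# User profile: mean embedding of recently engaged content.
engaged = [items["woodworking reel"]]
profile = np.mean(engaged, axis=0)

ranked = sorted(items, key=lambda k: cosine(profile, items[k]), reverse=True)
print(ranked)  # woodworking-adjacent content ranks first
```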

AI is enabling the creation of better content on Meta’s apps; the better content includes AI generating content directly for users and AI helping users produce better content; management thinks that the content created on Meta’s apps will become increasingly interactive over time; management recently launched the stand-alone Edits app that contains an ultra-high-resolution, short-form video camera, and generative AI tools to remove backgrounds of videos or animate still images; more features on Edits are coming soon

AI is also enabling the creation of better content as well. Some of this will be helping people produce better content to share themselves. Some of this will be AI generating content directly for people that is personalized for them. Some of this will be in existing formats like photos and videos, and some of it will be increasingly interactive…

…Our feeds started mostly with text and then became mostly photos when we all got mobile phones with cameras and then became mostly video when mobile networks became fast enough to handle that well. We are now in the video era, but I don’t think that this is the end of the line. In the near future, I think that we’re going to have content in our feeds that you can interact with and that it will interact back with you rather than you just watching it…

…Last week, we launched our stand-alone Edits app, which supports the full creative process for video creators from inspiration and creation to performance insights. Edits has an ultra-high resolution, short-form video camera and includes generative AI tools that enable people to remove the background of any video or animate still images with more features coming soon.

Countries like Thailand and Vietnam with low-cost labour actually conduct a lot of business through Meta’s messaging apps, but management thinks this phenomenon is absent in developed economies because of the high cost of labour; management thinks that AI will allow businesses in developed economies to conduct business through Meta’s messaging apps; management thinks that every business in the future will have AI business agents that are easy to set up and can perform customer support and sales; Meta is currently testing AI business agents with small businesses in the USA and a few countries across Meta’s apps; management has launched a new agent management experience to make it easier for businesses to train their AI; management’s vision is for there to be one agent that’s interacting with a consumer regardless of where he/she is engaging with the business AI; feedback from the tests is that the AI business agents are saving businesses a lot of time and helping them determine which conversations to spend more time on

In countries like Thailand and Vietnam, where there is a low cost of labor, we see many businesses conduct commerce through our messaging apps. There’s actually so much business through messaging that those countries are both in our top 10 or 11 by revenue even though they’re ranked in the 30s in global GDP. This phenomenon hasn’t yet spread to developed countries because the cost of labor is too high to make this a profitable model before AI, but AI should solve this. So in the next few years, I expect that just like every business today has an e-mail address, social media account and website, they’ll also have an AI business agent that can do customer support and sales. And they should be able to set that up very easily given all the context that they’ve already put into our business platforms…

…We are currently testing business AIs with a limited set of businesses in the U.S. and a few additional countries on WhatsApp, Messenger and on ads on Facebook and Instagram. We’ve been starting with small business and focusing first on helping them sell their goods and services with business AIs…

…We’ve launched a new agent management experience and dashboard that makes it easier for businesses to train their AI based on existing information on their website or WhatsApp profile or their Instagram and Facebook pages. And we’re starting with the ability for businesses to activate AI in their chats with customers. We are also testing business AIs on Facebook and Instagram ads that you can ask about product and return policies or assist you in making a purchase within our in-app browser…

…No matter where you engage with the business AI, it should be one agent that recalls your history and your preferences. And we’re hearing encouraging feedback, particularly that adopting these AIs is saving the businesses we’re testing with a lot of time and helping them determine which conversations make sense for them to spend more time on.

Meta AI now has nearly 1 billion monthly actives; management’s focus for Meta AI in 2025 is to establish Meta AI as the leading personal AI for personalization, voice conversations, and entertainment; management thinks people will eventually have an AI to talk to throughout the day on smart-glasses and this AI will be one of the most important and valuable services that has ever been created; management recently released the first Meta AI stand-alone app; the Meta AI stand-alone app is personalised to the user’s behaviour on other Meta apps, and it also has a social feed for discovery on how others are using Meta AI; initial feedback on the Meta AI stand-alone app is good; management expects to focus on scaling and deepening engagement on Meta AI for at least the next year before attempting to monetise; management saw engagement on Meta AI improve when testing Meta AI’s ability to personalize responses by remembering people’s prior queries and their usage of Meta’s apps; management has built personalisation into Meta AI across all of Meta’s apps; the top use cases for Meta AI currently include information gathering, writing assistance, interacting with visual content, and seeking help; WhatsApp has the strongest usage of Meta AI, followed by Facebook; a standalone Meta AI app is important for Meta AI to become the leading personal AI assistant because WhatsApp is currently not the primary messaging app used in the USA; management thinks that people are going to use different AI agents for different things; management thinks having memory of a user will be a differentiator for AI agents

Across our apps, there are now almost 1 billion monthly actives using Meta AI. Our focus for this year is deepening the experience and making Meta AI the leading personal AI with an emphasis on personalization, voice conversations and entertainment. I think that we’re all going to have an AI that we talk to throughout the day, while we’re browsing content on our phones, and eventually, as we’re going through our days with glasses. And I think that this is going to be one of the most important and valuable services that has ever been created.

In addition to building Meta AI into our apps, we just released our first Meta AI stand-alone app. It is personalized. So you can talk to it about interests that you’ve shown while browsing Reels or different content across our apps. And we built a social feed into it. So you can discover entertaining ways that others are using Meta AI. And initial feedback on the app has been good so far.

Over time, I expect the business opportunity for Meta AI to follow our normal product development playbook. First, we build and scale the product. And then once it is at scale, then we focus on revenue. In this case, I think that there will be a large opportunity to show product recommendations or ads as well as a premium service for people who want to unlock more compute for additional functionality or intelligence. But I expect that we’re going to be largely focused on scaling and deepening engagement for at least the next year before we’ll really be ready to start building out the business here…

…Earlier this year, we began testing the ability for Meta AI to better personalize its responses by remembering certain details from people’s prior queries and considering what that person engages with on our apps. We are already seeing this lead to deeper engagement with people we’ve rolled it out to, and it is now built into Meta AI across Facebook, Instagram, Messenger and our new stand-alone Meta AI app in the U.S. and Canada…

…The top use case right now for Meta AI, from a query perspective, is really around information gathering, as people use it to search for, understand and analyze information, followed by social interactions ranging from casual chatting to more in-depth discussion or debate. We also see people use it for writing assistance, interacting with visual content, and seeking help…

…WhatsApp continues to see the strongest Meta AI usage across our Family of Apps. Most of that WhatsApp engagement is in one-on-one threads, followed by Facebook, which is the second-largest driver of Meta AI engagement, where we’re seeing strong engagement from our feed deep dives integration that lets people ask Meta AI questions about the content that’s recommended to them…

…I also think that the stand-alone app is going to be particularly important in the United States because WhatsApp, as Susan said, is the largest surface where people use Meta AI, which makes sense. If you want to text an AI, having that be closely integrated and a good experience in the messaging app that you use makes a lot of sense. But while we have more than 100 million people use WhatsApp in the United States, we’re clearly not the primary messaging app in the United States at this point. iMessage is. We hope to become the leader over time. But we’re in a different position there than we are in most of the rest of the world on WhatsApp. So I think that the Meta AI app as a stand-alone is going to be particularly important in the United States to establishing leadership as the main personal AI that people use…

…I think that there are going to be a number of different agents that people use, just like people use different apps for different things. I’m not sure that people are going to use multiple agents for the same exact things, but I’d imagine that something that is more focused on kind of enterprise productivity might be different from something that is somewhat more optimized for personal productivity. And that might be somewhat different from something that is optimized for entertainment and social connectivity. So I think there will be different experiences…

…Once an AI starts getting to know you and what you care about in context and can build up memory from the conversations that you’ve had with it over time, I think that will start to become somewhat more of a differentiator.

Meta’s management continues to think of glasses as the ideal form factor for an AI device; management thinks that the 1 billion people in the world today who wear glasses will likely all be wearing smart glasses in the next 5-10 years; management thinks that building the devices people use for Meta’s apps lets the company deliver the best AI and social experiences; sales of the Ray-Ban Meta AI glasses have tripled in the last year and usage of the glasses is high; Meta has new launches of smart glasses lined up for later this year; monthly actives of Ray-Ban Meta AI glasses is up 4x from a year ago, with the number of people using voice commands growing even faster; management has rolled out live translations on Ray-Ban Meta AI glasses to all markets for English, French, Italian and Spanish; management continues to want to scale the Ray-Ban Meta AI glasses to 10 million units or more for its 3rd generation; management intends to run the same monetisation playbook with the Ray-Ban Meta AI glasses as Meta’s other products

Glasses are the ideal form factor for both AI and the metaverse. They enable you to let an AI see what you see, hear what you hear and talk to you throughout the day. And they let you blend the physical and digital worlds together with holograms. More than 1 billion people worldwide wear glasses today, and it seems highly likely that these will become AI glasses over the next 5 to 10 years. Building the devices that people use to experience our services lets us deliver the highest-quality AI and social experiences…

…Ray-Ban Meta AI glasses have tripled in sales in the last year. The people who have them are using them a lot. We’ve got some exciting new launches with our partner, EssilorLuxottica, later this year as well that should expand that category and add some new technological capabilities to the glasses…

…We’re seeing very strong traction with Ray-Ban Meta AI glasses with over 4x as many monthly actives as a year ago. And the number of people using voice commands is growing even faster as people use it to answer questions and control their glasses. This month, we fully rolled out live translations on Ray-Ban Meta AI glasses to all markets for English, French, Italian and Spanish. Now when you are speaking to someone in one of these languages, you’ll hear what they say in your preferred language through the glasses in real time…

…If you look at some of the leading consumer electronics products of other categories, by the time they get to their third generation, they’re often selling 10 million units and scaling from there. And I’m not sure if we’re going to do exactly that, but I think that that’s like the ballpark of the opportunity that we have…

…As a bunch of the products start to hit and grow even bigger than the number I just said, which is just sort of a near-term milestone, then I think we’ll continue scaling in terms of distribution. And then at some point, just like the other products that we build out, we will feel like we’re at a sufficient scale that we’re going to primarily focus on making sure that we’re monetizing and building an efficient business around it.

Meta released the first few Llama 4 models in April 2025 and more Llama 4 models are on the way, including the massive Llama 4 Behemoth model; management thinks leading-edge AI models are critical for Meta’s business, so they want the company to control its own destiny; by developing its own models, Meta is also able to optimise the model to its infrastructure and use-cases; an example of the optimisation is the Llama 4 17-billion model that comes with low latency to suit voice interactions; another example of the optimisation is the models’ industry-leading context window length which helps Meta AI’s personalisation efforts; Llama 4 Behemoth is important for Meta because all the models the company is using internally, and some of the models the company will develop in the future, are distilled from Behemoth

We released the first Llama 4 models earlier this month. They are some of the most intelligent, best multimodal, lowest latency and most efficient models that anyone has built. We have more models on the way, including the massive Llama 4 Behemoth model…

…On the LLM, yes, there’s a lot of progress being made in a lot of different dimensions. And the reason why we want to build this out is, one, given how critical this is for our business, we think it’s important that we have control of our own destiny and are not depending on another company for something so critical. But two, we want to make sure that we can shape the development to be optimized for our infrastructure and the use cases that we want.

So to that end, Llama 4, the shape of the model with 17 billion parameters per expert, was designed specifically for the infrastructure that we have in order to provide the low-latency experience and be voice-optimized. One of the key things, if you’re having a voice conversation with AI, is that it needs to be low latency. That way, when you’re having a conversation with it, there isn’t a large gap between when you stop speaking and it starts. So everything from the shape of the model to the research that we’re doing to the techniques that go into it fits into that.

Similarly, another thing that we focused on was context window length. And in some of our models, we’re industry-leading on context window length. And part of the reason why we think that that’s important is because we’re very focused on providing a personalized experience. There are different ways that you can put personalization context into an LLM, but one of the ways to do it is to include some of that context in the context window. And having a long context window that can incorporate a lot of the background that the person has shared across our apps is one way to do that…

…I think it’s also very important to deliver big models like Behemoth, not because we’re going to end up serving them in production, but because of the technique of distilling from larger models, right? The Llama 4 models that we’ve published so far and the ones that we’re using internally and some of the ones that we’ll build in the future are basically distilled from the Behemoth model in order to get the 90%, 95% of the intelligence of the large model in a form factor that is much lower latency and much more efficient.
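
The distillation described above is a standard technique, and a minimal sketch may help make it concrete. The loss below is the textbook knowledge-distillation objective (soft targets from a teacher at temperature T, plus the usual hard-label loss), not Meta’s actual training recipe; the tensor shapes and vocabulary size are toy values.

```python
# Textbook knowledge distillation (not Meta's recipe): the student learns
# to match the teacher's softened output distribution plus the hard labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-target term: KL divergence at temperature T (scaled by T^2 so
    # gradients keep a comparable magnitude across temperatures).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-label term: ordinary cross-entropy against the ground truth.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student_logits = torch.randn(8, 32000)  # batch of 8, toy vocab of 32k
teacher_logits = torch.randn(8, 32000)  # in practice: the big model's outputs
labels = torch.randint(0, 32000, (8,))
print(distillation_loss(student_logits, teacher_logits, labels))
```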

Meta’s management is accelerating the buildout of Meta’s AI capacity, leading to higher planned investment for 2025; Meta’s capex growth in 2025 is for both generative AI and core business needs with the majority of overall capex supporting Meta’s core business; management continues to build infrastructure in a flexible way where the company can react to how the AI ecosystem develops in the coming years; management is increasing the efficiency of Meta’s workloads and this has helped the company to achieve strong returns from its core AI initiatives

We are accelerating some of our efforts to bring capacity online more quickly this year as well as some longer-term projects that will give us the flexibility to add capacity in the coming years as well. And that has increased our planned investment for this year…

…Our primary focus remains investing capital back into the business with infrastructure and talent being our top priorities…

…Our CapEx growth this year is going toward both generative AI and core business needs with the majority of overall CapEx supporting the core. We expect the significant infrastructure footprint we are building will not only help us meet the demands of our business in the near term but also provide us an advantage in the quality and scale of AI services we can deliver. We continue to build this capacity in a way that grants us maximum flexibility in how and when we deploy it to ensure we have the agility to react to how the technology and industry develop in the coming years…

…The second way we’re meeting our compute needs is by increasing the efficiency of our workloads. In fact, many of the innovations coming out of our ranking work are focused on increasing the efficiency of our systems. This emphasis on efficiency is helping us deliver consistently strong returns from our core AI initiatives.

Meta’s management sees a number of long-term tailwinds that AI can provide for Meta’s business, including making advertising a larger share of global GDP, and freeing up more time for people to engage in entertainment

Over the coming years, I think that the increased productivity from AI will make advertising a meaningfully larger share of global GDP than it is today…

…Over the long term, as AI unlocks more productivity in the economy, I also expect that people will spend more of their time on entertainment and culture, which will create an even larger opportunity to create more engaging experiences across all of these apps.

Meta’s management still expects to develop an AI coding agent sometime in 2025 that can operate as a mid-level engineer; management expects this AI coding agent to do a substantial part of Meta’s AI research and development in 2026 H2; management is focused on building AI that can run experiments to improve Meta’s recommendation systems

I’d say it’s basically still on track for something around a mid-level engineer kind of starting to become possible sometime this year, scaling into next year. So I’d expect that by the middle to end of next year, AI coding agents are going to be doing a substantial part of AI research and development. So we’re focused on that. Internally, we’re also very focused on building AI agents or systems that can help run different experiments to improve recommendations across our other AI products, like the ones that do recommendations across our feeds and things like that.

Microsoft (NASDAQ: MSFT)

Microsoft’s management is seeing accelerating demand across industries for cloud migrations; there are 4 things happening to drive cloud migrations, (1) classic migration, (2) data growth, (3) growth in cloud-native companies’ consumption, and (4) growth in AI consumption, which also requires non-AI consumption 

When it comes to cloud migrations, we saw accelerating demand with customers in every industry, from Abercrombie & Fitch to Coca-Cola and ServiceNow expanding their footprints on Azure…

…[Question] On your comment about accelerating demand for cloud migrations. I’m curious if you could dig in and extrapolate a little more what you’re seeing there.

[Answer] One is, I’ll just say, the classic migration, whether it’s SQL or Windows Server. And that again made good, steady progress, because the reality is, I think for everyone now there’s perhaps another kick in data center migrations just because of the efficiency the cloud provides. So that’s one part.

The second piece is good data growth. You saw some of that with Postgres on Azure — I mean, forgetting even SQL Server, Postgres on Azure is growing. Cosmos is growing. The analytics stuff I talked about with Fabric. Even the others, whether it is Databricks or Snowflake on Azure, are growing. So we feel very good about Fabric growth and our data growth.

Then the cloud-native growth. Again, before we even get to AI, some of the core compute consumption of cloud-native players is also very healthy. It was healthy throughout the quarter, and we project it to continue moving forward as well.

Then the thing to notice is the ratio, and I think we mentioned this multiple times before. If you look underneath even ChatGPT, that team does a fantastic job of thinking about not only their growth in terms of the AI accelerators they need; they use Cosmos DB, they use Postgres, they use core compute and storage. And so in any AI workload there’s a ratio between the AI accelerators and everything else.

So those are the 4 pockets, I’d say, or 4 different trend lines, which all have a relationship with each other.

Foundry is now used by developers in over 70,000 companies, from enterprises to startups, to design, customize and manage their AI apps and agents; Foundry processed more than 100 trillion tokens in 2025 Q1, up 5x from a year ago; Foundry now has industry-leading model fine-tuning tools; the latest models from AI heavyweights including OpenAI and Meta are available on Foundry; Microsoft’s Phi family of SLMs (small language models) now has over 38 million downloads (20 million downloads in 2024 Q4); Foundry will soon introduce an LLM (large language model) with 1 billion parameters that can run on just CPUs

Foundry is the agent and AI app factory. It’s now used by developers at over 70,000 enterprises and digital natives, from Atomicwork to Epic, Fujitsu and Gainsight to H&R Block and LG Electronics, to design, customize and manage their AI apps and agents. We processed over 100 trillion tokens this quarter, up 5x year-over-year, including a record 50 trillion tokens last month alone. And 4 months in, over 10,000 organizations have used our new agent service to build, deploy and scale their agents.

This quarter, we also made a new suite of fine-tuning tools available to customers with industry-leading reliability, and we brought the latest models from OpenAI, along with new models from Cohere, DeepSeek, Meta, Mistral and Stability, to Foundry. And we’ve expanded our Phi family of SLMs with new multimodal and mini models. All-up, Phi has been downloaded 38 million times. And our research teams are taking it one step further with BitNet b1.58, a 1-billion-parameter large language model that can run on just CPUs, coming to Foundry.
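
The reason a model like BitNet b1.58 can run on just CPUs is its ternary weights (the “1.58 bits” is log2 of 3). The sketch below shows the absmean ternary quantization described in the BitNet b1.58 paper, in simplified form; it illustrates the technique, not Microsoft’s production code. With weights restricted to -1, 0, and +1, matrix multiplies reduce to additions and subtractions, which plain CPUs handle well.

```python
# Simplified sketch of the ternary ("1.58-bit") weight quantization behind
# models like BitNet b1.58: scale weights by their mean absolute value,
# then round and clip to {-1, 0, +1}.
import torch

def quantize_ternary(w, eps=1e-5):
    scale = w.abs().mean().clamp(min=eps)   # absmean scaling factor
    w_q = (w / scale).round().clamp(-1, 1)  # ternary weight matrix
    return w_q, scale

w = torch.randn(4, 4)
w_q, scale = quantize_ternary(w)
x = torch.randn(4)
print(w_q)              # entries are only -1, 0, or +1
print(w_q @ x * scale)  # approximates the full-precision product w @ x
```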

With agent mode in VS Code, GitHub Copilot can now iterate on code, recognize errors, and fix them automatically; there are other GitHub Copilot agents that provide coding support to developers; Microsoft is previewing a first-of-its-kind SWE (software engineering) agent that can execute developer tasks; GitHub Copilot now has 15 million users, up 4x from a year ago; GitHub Copilot is used by a wide range of companies; VS Code has more than 50 million monthly active users

We’re evolving GitHub Copilot from pair to peer programmer; with agent mode in VS Code, Copilot can now iterate on code, recognize errors and fix them automatically. This adds to other Copilot agents like Autofix, which helps developers remediate vulnerabilities, as well as the code review agent, which has already reviewed over 8 million pull requests. And we are previewing a first-of-its-kind SWE agent capable of asynchronously executing developer tasks. All-up, we now have over 15 million GitHub Copilot users, up over 4x year-over-year. And both digital natives like Twilio and enterprises like Cisco, HPE, Skyscanner and Target continue to choose GitHub Copilot to equip their developers with AI throughout the entire dev life cycle. With Visual Studio and VS Code, we have the world’s most popular editors with over 50 million monthly active users.

Microsoft 365 Copilot is now used by hundreds of thousands of customers, up 3x from a year ago; deal sizes for Microsoft 365 Copilot continue to grow; a record number of customers in 2025 Q1 returned to buy more seats for Microsoft 365 Copilot; new researcher and analyst deep-reasoning agents can analyze vast amounts of web and enterprise data on demand directly within Microsoft 365 Copilot; Microsoft is introducing agents for every role and business process; customers can build their own AI agents with no/low code with Copilot Studio, and these agents can handle complex tasks, including taking action across desktop and web apps; 230,000 organisations, including 90% of the Fortune 500, have already used Copilot Studio; customers created more than 1 million custom agents across SharePoint and Copilot Studio in 2025 Q1, up 130% sequentially

Microsoft 365 Copilot is built to facilitate human agent collaboration, hundreds of thousands of customers across geographies and industries now use Copilot, up 3x year-over-year. Our overall deal size continues to grow. In this quarter, we saw a record number of customers returning to buy more seats. And we’re going further. Just last week, we announced a major update, bringing together agents, notebooks, search and create into a new scaffolding for work. Our new researcher and analyst deep reasoning agents analyze vast amounts of web and enterprise data to deliver highly skilled expertise on demand directly within Copilot…

…We are introducing agents for every role and business process. Our sales agent turns contacts into qualified leads, and with sales chat, reps can quickly get up to speed on new accounts. And our customer service agent is deflecting customer inquiries and helping service reps resolve issues faster.

With Copilot Studio, customers can extend Copilot and build their own agents with no code, low code. More than 230,000 organizations, including 90% of the Fortune 500 have already used Copilot Studio. With deep reasoning and agent flows in Copilot Studio, customers can build agents that perform more complex tasks and also handle deterministic scenarios like document processing and financial approvals. And they can now build Computer Use Agents that take action on the UI across desktop and web apps. And with just a click, they can turn any SharePoint site into an agent, too. This quarter alone, customers created over 1 million custom agents across SharePoint and Copilot Studio, up 130% quarter-over-quarter.

Azure grew revenue by 33% in 2025 Q1 (was 31% in 2024 Q4), with 16 points of growth from AI services (was 13 points in 2024 Q4); management brought capacity online for Azure AI services faster than expected; Azure’s non-AI business saw accelerated growth in its Enterprise customer segment as well as some improvement in its scale motions; management thinks the real outperformer within Azure in 2025 Q1 is the non-AI business; the strength in the AI business in 2025 Q1 came because Microsoft was able to match supply and demand somewhat, and also deliver supply early to some customers; management thinks it’s getting harder to separate an AI workload from a non-AI workload

In Azure and other cloud services, revenue grew 33% and 35% in constant currency, including 16 points from AI services. Focused execution drove non-AI services results, where we saw accelerated growth in our Enterprise customer segment as well as some improvement in our scale motions. And in Azure AI services, we brought capacity online faster than expected…

…The real outperformance in Azure this quarter was in our non-AI business. So then to talk about the AI business, really, what was better was precisely what we said. We talked about this. We knew in Q3 that we had matched supply and demand pretty carefully and so didn’t expect to do much better than we had guided to on the AI side. We’ve been quite consistent on that. So the only real upside we saw on the AI side of the business was that we were able to deliver supply early to a number of customers…

…[Question] You mentioned that the upside on Azure came from the non-AI services this time around. I was wondering if you could just talk a little bit more about that.

[Answer] In general, we saw better-than-expected performance across our segments, but we saw acceleration in our largest customers. We call that the Enterprise segment in general. And then in what we talked about of our scale motions, where we had some challenges in Q2, things were a little better. And we still have some work to do in our scale motions, and we’re encouraged by our progress. We’re excited to stay focused on that as, of course, we work through the final quarter of our fiscal year…

…It’s getting harder and harder to separate what an AI workload is from a non-AI workload.

Around half of Microsoft’s cloud and AI-related capex in 2025 Q1 (FY2025 Q3) are for long-lived assets that will support monetisation over the next 15 years and more, while the other half are for CPUs and GPUs; management expects Microsoft’s capex in 2025 Q2 (FY2025 Q4) to increase sequentially, but the guidance for total capex for FY2025 H2 is unchanged from previous guidance (previously, the expectation was for capex for 2025 Q1 and 2025 Q2 to be at similar levels as 2024 Q4 (FY2025 Q2)); FY2026’s capex is still expected to grow at a lower rate than in FY2025; the mix of spend will shift toward short-lived assets in FY2026; demand for Azure’s AI services is growing faster than capacity is being brought online and management expects to have some AI capacity constraints beyond June 2025 (or FY2025 Q4); management’s goal with Microsoft’s data center investments is to be positioned for the workload growth of the future; management thinks pretraining plus test-time compute is a big change in terms of model-training workloads; Microsoft is short of power in fulfilling its data center growth plans; Microsoft’s data center builds have very long lead times; in Microsoft’s 2024 Q4 (FY2025 Q2) earnings call, management expected Azure to no longer be capacity-constrained by the end of 2025 Q2 (FY2025 Q4) but demand was stronger than expected in 2025 Q1 (FY2025 Q3); management still thinks they can get better and better capital efficiency from the cloud and AI capex; Azure’s margin on the AI business now is far better than what the margin was when the cloud transition was at a similar stage

Roughly half of our cloud and AI-related spend was on long-lived assets that will support monetization over the next 15 years and beyond. The remaining cloud and AI spend was primarily for servers, both CPUs and GPUs, to serve customers based on demand signals, including our customer contracted backlog of $315 billion…

…We expect Q4 capital expenditures to increase on a sequential basis. H2 CapEx in total remains unchanged from our January H2 guidance. As a reminder, there can be quarterly spend variability from cloud infrastructure build-outs and the timing of delivery of finance leases…

…Our earlier comments on FY ’26 capital expenditures remain unchanged. We expect CapEx to grow. It will grow at a lower rate than FY ’25 and will include a greater mix of short-lived assets, which are more directly correlated to revenue than long-lived assets…

… In our AI services, while we continue to bring data center capacity online as planned, demand is growing a bit faster. Therefore, we now expect to have some AI capacity constraints beyond June…

…the key thing for us is to have our builds and leases be positioned for what is the workload growth of the future, right? So that’s what you have to goal-seek to. So there’s a demand part to it, there is the shape of the workload part to it, and there is a location part to it. So you don’t want to be upside down on having one big data center in one region when you have a global demand footprint. You don’t want to be upside down when the shape of demand changes because, after all, with essentially pretraining plus test-time compute, that’s a big change in terms of how you think about even what is training, right, forget inferencing…

…We will be short power. And so therefore — but it’s not a blanket statement. I need power in specific places so that we can either lease or build at the pace at which we want…

…From land to build to build-outs, lead times can be 5 to 7 years, 2 to 3 years. So we’re constantly in a balancing position as we watch demand curves…

…I did talk about in my comments, we had hoped to be in balance by the end of Q4. We did see some increased demand as you saw through the quarter. So we are going to be a little short still, say, a little tight as we exit the year…

…[Question] You’ve said in the past that you can attain better and better capital efficiency with the cloud business and probably cloud and AI business. Where do you stand today?

[Answer] The way, of course, you’ve seen that historically is right when we went through the prior cloud transitions, you see CapEx accelerate, you build out data center footprint… You slowly fill GPU capacity. And over time, you see software efficiencies and hardware efficiencies build on themselves. And you saw that process for us for, goodness, now quite a long time. And what Satya’s talking about is how quickly that’s happening on the AI side of the business, and you add to that model diversity. So think about the same levers plus model efficiency; those compound. Now the one thing that’s a little different this time is just the pace. And so when you’re seeing that happen, pace in terms of the efficiency side but also pace in terms of the build-out, it can mask some of the progress… Our margins on the AI side of the business are far better than they were at this point when we went through the same transition from server to cloud…

…I think the way to think about this is you can ask the question, what’s the difference between a hosting business and a hyperscale business? It’s software. That’s, I think, the gist of it. Yes, for sure, it’s a capital-intensive business, but capital efficiency comes from that system-wide software optimization. And that’s what makes the hyperscale business attractive and that’s what we want to just keep executing super well on.

Microsoft’s management sees Azure as Microsoft’s largest business; management thinks that the next platform shift in technology, which is AI, is built on the last major platform, which was cloud computing, so this benefits Microsoft

There’s nothing certain for sure in the future, except for one thing, which is our largest business is our infrastructure business. And the good news here is the next big platform shift builds on that. So it’s not a complete rebuild, having gone through all these platform shifts where you have to come out on the other side with a full rebuild. The good news here is that we have a good business in Azure that continues to grow and the new platform depends on that.

It’s possible that software optimizations with AI model development and deployment could lead to even longer useful lives for GPUs, but management wants to observe this for longer

[Question] Could we start to consider the possibility that software enhancements might extend the useful life assumption that you’re using for GPUs?

[Answer] In terms of thinking about the depreciable life of an asset, we like to have a long history before we make any of those changes. So we’re focused on getting every bit of useful life we can, of course, out of assets. But to Satya’s point, that tends to be a software question more than a hardware one.

Netflix (NASDAQ: NFLX)

Netflix’s content talent are already using AI tools to improve the content production process; management thinks AI tools can enable lower-budget projects to access top-grade VFX; Rodrigo Prieto is directing his first feature film with Netflix in 2025, Pedro Páramo, and he’s able to use AI tools for de-aging VFX at a much lower cost than The Irishman film that Prieto worked on 5 years ago; the entire budget for Pedro Páramo is similar to the cost of VFX alone for The Irishman; management’s focus with AI is to find ways for AI to improve the member and creator experience

So our talent today is using AI tools to do set references or previs, VFX sequence prep, shot planning, all kinds of things today that kind of make the process better. Traditionally, only big budget projects would have access to things like advanced visual effects such as de-aging. So today, you can use these AI-powered tools to enable smaller budget projects to have access to big VFX on screen.

A recent example, I think, is really exciting. Rodrigo Prieto was the DP on The Irishman just 5 years ago. And if you remember that movie, we were using very cutting edge, very expensive de-aging technology that still had massive limitations, still creating a bunch of complexity on set for the actors. It was a giant leap forward for sure, but nowhere near what we needed for that film. So this year, just 5 years later, Rodrigo is directing his first feature film for us, Pedro Páramo in Mexico. Using AI-powered tools he was able to deliver this de-aging VFX to the screen for a fraction of what it cost on The Irishman. In fact, the entire budget of the film was about the VFX cost on The Irishman…

…So our focus is simple, find ways for AI to improve the member and the creator experience.

Netflix’s management is building interactive search into Netflix which is based on generative AI

We’re also building out like new capabilities, an example would be interactive search. That’s based on generative technologies. We expect that will improve that aspect of discovery for members.

Paycom Software (NYSE: PAYC)

Paycom’s GONE is the industry’s first fully automated time-off solution, utilising AI, that automates all time-off requests; prior to GONE, 10% of an organisation’s labour cost was unmanaged; GONE can generate ROI of up to 800%, according to Forrester; GONE helped Paycom be named by Fast Company as one of the world’s most innovative companies

Our award-winning solution, GONE, is a perfect example of how Paycom simplifies tasks through automation and AI. GONE is the industry’s first fully automated time-off solution that decisions all time-off requests based on customizable guidelines set by the company’s time-off rules. Before GONE, 10% of an organization’s labor cost went substantially unmanaged, creating scheduling errors, increased cost from overpayments, staffing shortages and employee uncertainty over pending time-off requests. According to a Forrester study, GONE’s automation delivers an ROI of up to 800% for clients. GONE continues to receive recognition. Most recently, Fast Company magazine named Paycom one of the world’s most innovative companies for a second time. This honor specifically recognized GONE and is a testament to how Paycom is shaping our industry by setting new standards for automation across the globe.

PayPal (NASDAQ: PYPL)

PayPal’s management is leaning into agentic commerce; PayPal recently launched the payments industry’s first remote MCP (Model Context Protocol) server to enable AI agent frameworks to integrate with PayPal APIs; the introduction of the MCP allows any business to create an agentic commerce experience; all major AI players are involved with PayPal’s annual Developer Days to engage PayPal’s developer community

At Investor Day, I told you we were leaning into agentic commerce…

…Just a few weeks ago, we launched the industry’s first remote MCP server and enabled the leading AI agent frameworks to seamlessly integrate with PayPal APIs. Now any business can create agentic experiences that allow customers to pay, track shipments, manage invoices and more, all powered by PayPal and all within an AI client. As we speak, developers are gathering in our San Jose headquarters for our annual Developer Days. Every major player in AI is represented, providing demos and engaging with our developer community.
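For readers curious what “enabling AI agent frameworks to integrate with PayPal APIs” looks like mechanically, here is a minimal sketch of a remote MCP client session using the open-source MCP Python SDK. The server URL and the commented tool name are hypothetical placeholders, not PayPal’s documented endpoint; the point is only the discover-then-call flow an agent performs against any remote MCP server.

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

SERVER_URL = "https://mcp.example.com/sse"  # hypothetical placeholder endpoint

async def main() -> None:
    # Open an SSE transport to the remote MCP server, then an MCP session on top.
    async with sse_client(SERVER_URL) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            # Discover the tools the server exposes; an agent framework hands
            # these schemas to the LLM so it can decide which tool to call.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)
            # The agent would then invoke a tool, e.g. (hypothetical tool name):
            # await session.call_tool("create_invoice", {"amount": "10.00"})

asyncio.run(main())
```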

Shopify (NASDAQ: SHOP)

Shopify’s management recently launched TariffGuide.ai, an AI-powered tool that provides duty rates based on just a product description and the country of origin, helping merchants source the right products in minutes

And just this past week, we launched TariffGuide.ai. This AI-driven tool provides duty rates based on just a product description and the country of origin. Sourcing the right products from the right country can mean the difference between a 0% and a 15% duty rate or higher, and TariffGuide.ai allows merchants to do this in minutes, not days.

Shopify CEO Tobi Lutke penned a memo recently on his vision of how Shopify should be working with AI; AI is becoming 2nd nature to how Shopify’s employees work, where employees use AI reflexively; before any team requests additional headcount, it needs to first assess if AI can meet its goals; Shopify has built a dozen MCP (model context protocol) servers in the last few weeks to enable anyone in Shopify to ask questions and find resources more easily; management sees AI being a cornerstone of how Shopify delivers value; management is investing more in AI, but the increased investment is not a driver for the lower gross margin in Shopify’s Subscription Solutions segment in 2025 Q1; management does not expect the Subscription Solutions segment’s gross margin to change much in the near term; Shopify has shown strong operating leverage partly because of its growing internal use of AI

AI is at the core of how we operate and is transforming our work processes. For those who have not seen it, I encourage you to check out Tobi’s recent company-wide email on AI that has now been shared publicly. At Shopify, we take AI seriously. In fact, it’s becoming second nature to how we work. By fostering a culture of reflexive AI usage, our teams default to using AI first, reflexive being the key term here. This also means that before requesting additional headcount or resources, teams are required to start with assessing how they can meet their goals using AI first. This approach is sparking some really fascinating explorations and discussions around the company, challenging the way we think, the way we operate, and pushing us to look ahead as we redefine our decision making processes. In the past couple of weeks, we built a dozen MCP servers that make Shopify’s work legible and accessible. And now anyone within Shopify can ask questions, find resources, and leverage those tools for greater efficiency. This reflexive use of AI goes well beyond internal improvements. It supercharges our team’s capabilities and drives operational efficiencies, keeping us agile. And as we continue to innovate, AI will remain a cornerstone of how we deliver value across the board…

…Gross profit for Subscription Solutions grew 19%, slightly less than the 21% revenue growth for Subscription Solutions. The lower rate was driven primarily by higher cloud and infrastructure hosting costs needed to support higher volumes and geographic expansion. Although we are investing more in AI, it is not a significant factor in this increase. Over the past 5 years, the gross margin for Subscription Solutions has centered around 80%, plus or minus a couple of hundred basis points in any given quarter, and we do not anticipate that trend changing in the near term…

…Our continued discipline on head count across all 3 of R&D, sales and marketing and G&A continued to yield strong operating leverage, all while helping us move even faster on product development aided by our increasing use of AI.

Shopify’s management rearchitected the AI engine of Sidekick, Shopify’s AI merchant assistant, in 2025 Q1; monthly average users of Sidekick have more than doubled since the start of 2025; early results of Sidekick are really strong for both large and small merchants

In Q1, key developments for Sidekick included a complete rearchitecture of the AI engine for deeper reasoning capabilities, enhancing processing of larger business datasets and accessibility in all supported languages, allowing every Shopify merchant to use Sidekick in their preferred language. And these changes, well, they’re working. In fact, our monthly average users of Sidekick continue to climb more than doubling since the start of 2025. Now this is still really early days, but the progress we are making is already yielding some really strong results for merchants, both large and small. 

Shopify acquired Vantage Discovery in 2025 Q1; Vantage Discovery works on AI-powered, multi-vector search; management thinks the acquisition will improve the overall consumer search experience delivered by Shopify’s merchants

In March, we closed the acquisition of Vantage Discovery, which helps accelerate the development of AI-powered, multi-vector search across our search, APIs, shop and storefront search offerings. This acquisition is one piece of a broader strategy to ensure that our merchants are able to continue meeting buyers regardless of where they’re shopping or discovering great products…

…The Vantage team coming in who are rock stars in AI are going to help take our search abilities to the next level.

Shopify’s management is seeing more and more commerce searches starting away from a search engine; Shopify is already working with AI chatbot providers on AI shopping; management thinks that AI shopping is a huge opportunity; management thinks AI agents will be a great opportunity for Shopify too

One of the things we think about is that wherever commerce is taking place, Shopify will be there. And obviously, one of the things we are seeing is that more and more searches are starting on places beyond just somebody’s search engine. That’s a huge opportunity whereby more consumers are going to be searching for great products…

…We’ve talked about some of the partnerships in the past. You’ve seen what we’ve done with Perplexity and OpenAI. We will continue doing that. We’re not going to front run our product road map when it comes to anything, frankly. But we do think though that AI shopping, in particular, is a huge opportunity…

…[Question] How does Shopify view the emergence of AI agents: do you see this as an opportunity or more of a threat? On one hand, they could facilitate direct checkout with their own platforms. On the other hand, this may also unlock new sales channels for Shopify merchants, very similar to what happened with social media commerce.

[Answer] We think it’s a great opportunity. Look, the more channels that exist in the world, the more complexity it is for merchants and brands, that’s where the value of Shopify really shines. So if there’s a new surface area, whether it’s through AI agents or through just simply LLMs and AI wrappers, that consumer goes to, to look for a new pair of sneakers or a new cosmetic or a piece of furniture, they want to have access to the most interesting products from the most important brands, and those are all on Shopify. So for us, we think that all of these new areas where commerce is happening is a great thing. It allows Shopify to increase its value.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s management continues to expect AI accelerator revenue to double in 2025; management has factored the US bans on AI chip sales to China into TSMC’s 2025 outlook; AI-related demand outside of China appears to have become even stronger over the last 3 months

We reaffirm our revenue from AI accelerators to double in 2025. The AI accelerators we define as AI GPUs, AI ASICs and HBM controllers for AI training and inference in the data center. Based on our customers’ strong demand, we are also working hard to double our CoWoS capacity in 2025 to support their needs…

…[Question] The geopolitical risk and macro concerns are one of the major uncertainties nowadays. In the last 2 days, we have H20 being banned in China, blah, blah, blah. So how does that impact TSMC’s focus and production planning, right? Do we have enough other customers and demand to keep our advanced node capacity fully utilized? Or how does that change our long-term production planning moving forward?

[Answer] Of course, we do not comment on specific customers or product, but let me assure you that we have taken this into consideration when providing our full year’s growth outlook. Did I answer the question?…

…[Question] AI is still expected to double this year despite the U.S. ban on AI GPUs into China. And I guess, China was a meaningful portion of accelerated shipments well over 10% of volumes. So factoring this in, it would imply your AI outlook this year, still doubling would mean that the AI orders have improved meaningfully outside of China in the last sort of 3 months. Is that how we should interpret your comment about you still expect the business to double?

[Answer] 3 months ago, we are — we just cannot supply enough wafer to our customer. And now it’s a little bit balanced, but still, the demand is very strong. And you are right, other than China, the demand is still very strong, especially in U.S.

TSMC’s management has a disciplined approach when building capacity and management recognises how important the discipline is given the high forecasted demand for AI-related chips

At TSMC, higher level of capital expenditures is always correlated with higher growth opportunities in the following years. We reiterate our 2025 capital budget is expected to be between USD 38 billion and USD 42 billion as we continue to invest to support customers’ growth. About 70% of the capital budget will be allocated for advanced process technologies. About 10% to 20% will be spent for specialty technologies and about 10% to 20% will be spent for advanced packaging, testing, mass-making and others. Our 2025 CapEx also includes a small amount related to our recently announced additional $100 billion investment plan to expand our capacity in Arizona…

…To address the structural increase in the long-term market demand profile, TSMC employed a disciplined and robust capacity planning system. This is especially important when we have such high forecasted demand from AI-related business. Externally, we work closely with our customers and our customers’ customers to plan our capacity. Internally, our planning system involves multiple teams across several functions to assess and evaluate the market demand from both a top-down and bottom-up approach to determine the appropriate capacity build.

TSMC’s management expects the Foundry 2.0 industry to grow 10% year-on-year in 2025, driven by AI-related demand and mild recovery in other end markets; management expects TSMC to outperform the Foundry 2.0 industry in 2025

Looking at the full year of 2025, we expect Foundry 2.0 industry growth to be supported by robust AI-related demand and a mild recovery in other end market segments. In January, we had forecast the Foundry 2.0 industry to grow 10% year-over-year in 2025, which is consistent with IDC’s forecast of 11% year-over-year growth for Foundry 2.0…

…We are confident TSMC can continue to outperform the Foundry 2.0 industry growth in 2025.

TSMC’s management thinks the impact from recent AI models, including DeepSeek, will lower the barrier to future long-term AI development; TSMC’s management continues to expect mid-40% revenue CAGR from AI accelerators in the 5 years starting from 2024

Recent developments are also positive to AI’s long-term demand outlook. In our assessment, the impact from recent AI models, including DeepSeek, will drive greater efficiency and help lower the barrier to future AI development. This will lead to wider usage and greater adoption of AI models, which all require use of leading-edge silicon. These developments only serve to strengthen our conviction in the long-term growth opportunities from the industry megatrend of 5G, AI and HPC…

…Based on our planning framework, we are confident that our revenue growth from AI accelerators will approach a mid-40s percentage CAGR for the next 5 years period starting from 2024.
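As a sanity check on what that guidance implies, a mid-40s percentage CAGR sustained for five years compounds to roughly a 6.4x revenue multiple; the one-liner below makes the arithmetic explicit (0.45 is my stand-in for “mid-40s”).

```python
cagr = 0.45                 # assumed mid-40s percentage growth rate
multiple = (1 + cagr) ** 5  # compounding over the 5 years starting from 2024
print(f"{multiple:.1f}x")   # ≈ 6.4x of 2024's AI accelerator revenue
```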

TSMC’s 2nd fab in Arizona will utilise N3 process technology and is already complete; management wants to speed up the volume production schedule to meet AI-related demand

Our first fab in Arizona has already successfully entered high-volume production in 4Q ’24, utilizing N4 process technology with a yield comparable to our fab in Taiwan. The construction of our second fab, which will utilize the 3-nanometer process technology, is already complete and we are working on speeding up the volume production schedule based on the strong AI-related demand from our customers. Our third and fourth fab will utilize N2 and A16 process technologies and with the expectation of receiving all the necessary permits are scheduled to begin construction later this year. Our fifth and sixth fab will use even more advanced technologies. The construction and ramp schedule for this fab will be based on our customers’ demand.

TSMC’s management believes its A16 technology has a best-in-class backside power delivery solution that is also the first in the industry; A16 is best suited for specific HPC (high-performance computing) products, which means it is best suited for AI-related workloads; A16 is scheduled for volume production in 2026 H2

We also introduced A16 featuring Super Power Rail, or SPR, as a separate offering. Compared with N2P, A16 provides a further 8% to 10% speed improvement at the same power, or 15% to 20% power improvement at the same speed, and an additional 7% to 10% chip density gain. A16 is best suited for specific HPC products with complex signal routes and dense power delivery networks. Volume production is scheduled for second half 2026.

Tesla (NASDAQ: TSLA)

Tesla’s management continues to expect fully autonomous Tesla rides in Austin, Texas in June 2025; management will sell full autonomy software for the Model Y in Austin; management now demarcates Cybercab as a separate product, and considers all of the other models (S, 3, X, Y) that are compatible with the autonomous software to be robotaxis; management reiterates that once Tesla can solve for autonomy in 1 city, it can very quickly scale because Tesla’s autonomous solution is a general solution, not a city-specific solution; Tesla’s autonomous solution involves AI and a specific Tesla-designed AI chip, as opposed to expensive sensors and high-precision maps; the fully autonomous Teslas in June 2025 in Austin will be Model Ys; management expects full autonomy in Tesla’s fleet to ramp up very quickly; management is confident that Tesla will have large-scale autonomy by 2026 H2, meaning millions of fully autonomous Tesla vehicles by 2026 H2; even with the introduction of full autonomy, management thinks there will be some localised parameters – effectively a mixture of experts model – set for safety; management thinks Tesla’s autonomous solution can scale well because when the FSD (Full Self Driving) software was deployed in China, it used very minimal China-specific data and yet could work well in China; validation of Tesla’s autonomous solution will be important in determining its rate of acceptance; there are now convoys of Teslas in Austin running autonomously in testing in order to compress Tesla’s AI’s learning curve; a consumer in China used FSD on a narrow mountain dirt road; management expects FSD unsupervised to be available for personal use by end of 2025; Musk thinks the first Model Y to drive itself from factory to customer will happen later in 2025; newly-manufactured Model Ys are already driving themselves around in Tesla factories

We expect to have — be selling fully autonomous rides in June in Austin as we’ve been saying for now several months. So that’s continued…

…Unsupervised autonomy will first be sold for the Model Y in Austin. And then actually, we should parse out the terms robotic taxi or robotaxi and just generally what’s the Cybercab, because we’ve got a product called the Cybercab. And then any Tesla, which could be an S, 3, X or Y, that is autonomous is a robotic taxi or a robotaxi. It’s very confusing. So the vast majority of the Tesla fleet that we’ve made is capable of being a robotaxi or a robotic taxi…

…Once we can make the system work where you can have paid rides, fully autonomously with no one in the car in 1 city, that is a very scalable thing for us to go broadly within whatever jurisdiction allows us to operate. So because what we’re solving for is a general solution to autonomy, not a city-specific solution for autonomy, once we make it work in a few cities, we can basically make it work in all cities in that legal jurisdiction. So once we can make it work in a few cities in America, we can make it work anywhere in America. Once we can make it work in a few cities in China, we can make it work anywhere in China, likewise in Europe, limited only by regulatory approvals. So this is the advantage of having a generalized solution using artificial intelligence and an AI chip that Tesla designed specifically for this purpose, as opposed to very expensive sensors and high-precision maps on a particular neighborhood where that neighborhood may change or often changes and then the car stops working. So we have a general solution instead of a specific solution…

…The Teslas that will be fully autonomous in June in Austin are all Model Ys. So that is — it’s currently on track to be able to do paid rides fully autonomously in Austin in June, and then to be in many other cities in the U.S. by the end of this year.

It’s difficult to predict the exact ramp sort of week by week and month by month, except that it will ramp up very quickly. So it’s going to be like some — basically an S-curve where it’s very difficult to predict the intermediate slope of the S-curve, but you kind of know where the S-curve is going to end up, which is the vast majority of the Tesla fleet being autonomous. So that’s why I feel confident in predicting large-scale autonomy around the middle of next year, certainly the second half next year, meaning I bet that there will be millions of Teslas operating autonomously, fully autonomously in the second half of next year, yes…

…It does seem increasingly likely that there will be a localized parameter set sort of — especially for places that have, say, a very snowy weather, like I say, if you’re in the Northeast or something like this — you can think of — it’s kind of like a human. Like you can be a very good driver in California but are you going to be also a good driver in a blizzard in Manhattan? You’re not going to be as good. So there is actually some value in — you can still drive but your probability of an accident is higher. So the — it’s increasingly obvious that there’s some value to having a localized set of parameters for different regions and localities…

…You can see that from our deployment of FSD supervised in China with this very minimal data that’s China-specific, the model is generalized quite well to completely different driving styles. That just like shows that the AI-based solution that we have is the right one because if you had gone down the previous rule-based solutions, sort of like more hard-coded HD map-based solutions, it would have taken like many, many years to get China to work. You can see those in the videos that people post online themselves. So the generalized solution that we are pursuing is the right one that’s going to scale well…

…You can think of this like location-specific parameters that Elon alluded to as a mixture of experts. And if you are sort of familiar with the AI models, Grok and others, they all use this mixture of experts to sort of specialize the parameters to specific tasks while still being general…
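The “localised parameter set as a mixture of experts” idea maps onto a standard machine-learning pattern: a gating function routes each input to a specialist set of parameters while a shared backbone stays general. The snippet below is a toy illustration of that routing logic under assumed names (EXPERTS, gate and the feature vector are all invented for illustration), not Tesla’s FSD architecture.

```python
import numpy as np

# Hypothetical condition-specific "experts": same interface, different
# parameters, sitting on top of a shared general model.
EXPERTS = {
    "default": lambda features: features @ np.array([1.0, 0.5]),
    "snow":    lambda features: features @ np.array([0.6, 0.9]),  # more cautious
}

def gate(region: str, weather: str) -> str:
    """Pick which expert's parameters to apply; in practice this gate
    would itself be learned rather than hand-written."""
    return "snow" if weather == "blizzard" else "default"

features = np.array([0.8, 0.2])  # stand-in for perception outputs
expert = gate(region="Manhattan", weather="blizzard")
print(expert, EXPERTS[expert](features))
```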

…What are the critical things that need to get right: one thing I would like to note is validation. Self-driving is a long-tail problem where there can be a lot of edge cases that only happen very, very rarely. Currently, we are driving around in Austin using our QA fleet, but it’s super rare to get interventions that are critical for robotaxi operation. And so you can go many days without getting a single intervention. So you can’t easily know whether you are improving or regressing in your capacity. And we need to build out sophisticated simulations, including neural network-based video generation…

…There’s just always a convoy of Teslas going — just going all over Austin in circles. But yes, I just can’t emphasize this enough. In order to get a figure on the long-tail things (whether it’s 1 in 10,000 miles, 1 in 20,000 or 1 in 30,000), consider that the average person drives 10,000 miles in a year. So we’re trying to compress that test cycle into a matter of a few months. It means you need a lot of cars doing a lot of driving in order to do in a matter of a month what would normally take someone a year…
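The fleet arithmetic here is worth making concrete: if a critical intervention surfaces roughly once every 10,000 miles, compressing one driver-year of exposure into a month takes about 12 cars, and observing the rare event many times takes correspondingly more. A rough sketch with illustrative numbers (the observation count and per-car utilisation are my assumptions):

```python
miles_per_event = 10_000     # Musk's illustrative 1-in-10,000-mile edge case
events_wanted = 100          # assumed observations needed for confidence
miles_per_car_per_day = 300  # assumed test-fleet utilisation

miles_needed = miles_per_event * events_wanted
cars_for_one_month = miles_needed / (miles_per_car_per_day * 30)
print(f"{cars_for_one_month:.0f} cars for one month of testing")  # ≈ 111 cars
```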

…I saw one guy take a Tesla autonomously on a narrow dirt road across, like, a mountain. And I’m like, that’s a very brave person, because he’s driving along a road with no barriers where, if he makes a mistake, he’s going to plunge to his doom. But it worked…

…[Question] when will FSD unsupervised be available for personal use on personally-owned cars?

[Answer] Before the end of this year… the acid test being you should — can you go to sleep in your car and wait until your destination? And I’m confident that will be available in many cities in the U.S. by the end of this year…

…I’m confident also that later this year, the first Model Y will drive itself all the way to the customer. So from our — probably from a factory in Austin and our one in here in Fremont, California, I’m confident that from both factories, we’ll be able to drive directly to a customer from the factory…

…We have — it has been put to use — it’s doing useful work fully autonomously at the factories, as Ashok was mentioning, the cars drive themselves from end of line to where they’re supposed to be picked up by a truck to be taken to the customer… It’s important to note in the factories, we don’t have dedicated lanes or anything. People are coming out every day, trucks delivering supplies, parts, construction.

Tesla’s management expects thousands of Optimus robots to be working in Tesla factories by end-2025; management expects Optimus to be the fastest product to get to millions of units per year; management thinks Tesla can get to 1 million units annually in 4-5 years; management expects to make thousands of Optimus robots at the end of this year; there’s no existing supply chain for all of Optimus’s components, so Tesla has to build a supply chain from scratch; the speed of manufacturing of a product is governed by the speed of the slowest item in the supply chain, but in Optimus’s case, there are many, many such items since it’s so new; Optimus production is currently rate-limited by restrictions on rare-earth magnets from China but management is working on it; management still has no idea how Optimus’s supply chain will look at maturity

Making good progress in Optimus. We expect to have thousands of Optimus robots working in Tesla factories by the end of this year beginning this fall. And we expect to see Optimus faster than any product, I think, in history to get to millions of units per year as soon as possible. I think we feel confident in getting to 1 million units per year in less than 5 years, maybe 4 years. So by 2030, I feel confident in predicting 1 million Optimus units per year. It might be 2029…

…This year, we’ll make a few — we do expect to make thousands of Optimus robots, but most of that production is going to be at the end of the year…

…Almost everything in Optimus is new. There’s not like an existing supply chain for the motors, gearboxes, electronics, actuators, really anything in the Optimus apart from the AI for Tesla, the Tesla AI computer, which is the same as the one in the car. So when you have a new complex manufactured product, it will move as fast as the slowest and the least lucky component in the entire thing. And as a first order approximation, there’s like 10,000 unique things. So that’s why anyone who tells you they can predict with precision, the production ramp of the truly new product is — doesn’t know what they’re talking about. It is literally impossible…

…Now Optimus was affected by the magnet issue from China because the Optimus actuators in the arm to use permanent magnet. Now Tesla, as a whole, does not need to use permanent magnets. But when something is volume constrained like an arm of the robot, then you want to try to make the motor as small as possible. And then — so we did design in permanent magnets for those motors and those were affected by the supply chain by basically China requiring an export license to send out any rare earth magnets. So we’re working through that with China. Hopefully, we’ll get a license to use the rare earth magnets. China wants some assurances that these are not used for military purposes, which obviously they’re not. They’re just going into a humanoid robot. So — and it’s a nonweapon system…

…[Question] Wanted to ask about the Optimus supply chain going forward. You mentioned a very fast ramp-up. What do you envision that supply chain looking like? Is it going to require many more suppliers to be in the U.S. now because of the tariffs?

[Answer] We’ll have to see how things settle out. I don’t know yet. I mean some things we’re doing, as we’ve already talked about, which is that we’ve already taken tremendous steps to localize our supply chain. We’re more localized than any other manufacturer. And we have a lot of things kind of underway that to increase the localization to reduce supply chain risk associated with geopolitical uncertainty.

Tesla’s supervised FSD (full-self driving) software is safer than a human driver; management has been using social media (X, or Twitter) to encourage people to try out Tesla’s FSD software; management did not directly answer a question on FSD pricing once the vehicle can be totally unsupervised

Not only is FSD supervised safer than a human driver, but it is also improving the lifestyle of individuals who experience it. And again, this is something you have to experience and anybody who has experienced just knows it. And we’ve been doing a lot lately to try and get those stories out, at least on X, so that people can see how other people have benefited from this…

…[Question] Can we envision when you launch unsupervised FSD that there could be sort of a multitiered pricing approach to unsupervised versus supervised similar to what you did with autopilot versus FSD in the past?

[Answer] I mean this is something which we’ve been thinking about. I mean just so now for people who have been trying FSD and who’ve been using FSD, they think the current pricing is too cheap because for $99, basically getting a personal shop… I mean we do need to give people more time to — if they want to look at — like a key breakpoint is, can you read your text messages or not? Can you write a text message or not? Because obviously, people are doing this, by the way, with unautonomous cars all the time. And if you just go over and drive down the highway, you’ll see people texting while driving doing 80 miles an hour… So that value — it will really be profound when you can basically do whatever you want, including sleep. And then that $99 is going to seem like the best $99 you ever spent in your life.

Tesla’s management thinks Waymo vehicles are too expensive compared to Teslas; Waymo has expensive sensor suites; management thinks Tesla will have the lion’s share of the robotaxi market; a big difference between Tesla and Waymo is that Tesla is also manufacturing the cars whereas Waymo is retrofitting cars from other parties; management thinks Tesla’s vision-only approach will not have issues with cameras becoming blinded by glare because the system uses direct photon counting and bypasses image signal processing

The issue with Waymo’s cars is it costs way more money, but that is the issue. The car is very expensive, made in low volume. Teslas probably cost 1/4, 20% of what a Waymo costs and are made in very high volume. Ironically, like we’re the ones who made the bet that a pure AI solution with cameras (and the car actually will listen for sirens and that kind of thing) is the right move. And Waymo decided that an expensive sensor suite is the way to go, even though Google is very good at AI. So I’m wondering…

….As far as I’m aware, Tesla will have, I don’t know, 99% market share or something ridiculous…

…The other thing which people forget is that we’re not just developing the software solution, we are also manufacturing the cars. And like you know what like Waymo has, they’re taking cars and then trying to…

…[Question] You’re still sticking with the vision-only approach. A lot of autonomous people still have a lot of concerns about sun glare, fog and dust. Any color on how you anticipate on getting around those issues? Because my understanding, it kind of blinds the camera when you get glare and stuff.

[Answer] Actually, it does not blind the camera. We use an approach which is a direct photon count. So when you see a processed image, the image that comes from the silicon photon counter normally goes through a digital signal processor or image signal processor. That’s normally what happens. And then the image that you see looks all washed out, because if you pointed a camera at the sun, the post-processing of the photon counting washes things out. It actually adds noise. So quite a big breakthrough that we made some time ago was to go with direct photon counting and bypass the image signal processor. And then you can drive pretty much straight at the sun, and you can also see in what appears to be the blackest of night. And in fog, we can see as well as people can, in fact probably slightly better than the average person anyway.

Tesla’s AI software team and chip-design team was built from scratch with no acquisitions; management thinks Tesla’s team is the best

It is worth noting that Tesla has built an incredible AI software team and AI hardware chip design team from scratch, didn’t acquire anyone. We just built it. So yes, it’s really — I mean I don’t see anyone being able to compete with Tesla at present.

Tesla’s management thinks China is ahead of the USA in physical AI with respect to autonomous drones because China has the ability to manufacture autonomous drones, but the USA does not;  management thinks Tesla is ahead of any company in the world, even Chinese companies, in terms of humanoid autonomous robots 

[Question] Between China and United States, who, in your opinion, is further ahead on the development of physical AI, specifically on humanoid and also drones?

[Answer] A friend of mine posted on X, I reposted it. I think of a prophetic statement, which is any country that cannot manufacture its own drones is going to be the vassal state of any country that can. And we can’t — America cannot currently manufacture its own drones. Let that sink in, unfortunately. So China, I believe manufactures about 70% of all drones. And if you look at the total supply chain, China is almost 100% of drones are — have a supply chain dependency on China. So China is in a very strong position. And here in America, we need to tip more of our people and resources to manufacturing because this is — and I have a lot of respect for China because I think China is amazing, actually. But the United States does have such a severe dependency on China for drones and be unable to make them unless China gives us the parts, which is currently the situation.

With respect to humanoid robots, I don’t think there’s any company or any country that can match us. Tesla and SpaceX are #1. And then I’m a little concerned that on the leaderboard, ranks 2 through 10 will be Chinese companies. I’m confident that rank 1 will be Tesla.

The Trade Desk (NASDAQ: TTD)

Trade Desk’s industry-leading Koa AI tools are embedded across Kokai; adoption of Kokai is now ahead of schedule, with 2/3 of clients using it; the bulk of spending on Trade Desk now takes place on Kokai; management continues to expect all Trade Desk clients to be using Kokai by end-2025; management is confident that Kokai will be seen as the most powerful buying platform by the industry by end-2025

The injection of our industry-leading Koa AI tools across every aspect of our platform has been a game changer, and we are just getting started…

…The core of Kokai has been delivered and adoption is now ahead of schedule. Around 2/3 of our clients are now using it and the bulk of the spend in our platform is now running through Kokai. We expect all clients to be using it by the end of the year…

…I’m confident that by the end of this year, we will reflect on Kokai as the most powerful buying platform the industry has ever seen, precisely because it combines client needs with the strong point of view on where value is shifting and how to deliver the most efficient return on ad spend.

…Kokai adoption now represents the majority of our spend, almost 2/3, a significant acceleration from where we ended 2024.

Deutsche Telekom used Kokai’s AI tools and saw an 11x improvement in post-click conversions and an 18x improvement in the cost of conversions; Deutsche Telekom is now planning to use Kokai across more campaigns and transition from Trade Desk’s previous platform, Solimar, like many other Trade Desk clients

Deutsche Telekom. They’re running the streaming TV service called Magenta TV, and they use our platform to try to grow their subscriber base…

…Using seed data from their existing customers, Deutsche Telekom was able to use the advanced AI tools in our Kokai platform to find new customers and define the right ad impressions across display and CTV most relevant to retain those new customers successfully, and the results were very impressive. They saw an 11x improvement in post-click conversions attributed to advertising and an 18x improvement in the cost of those conversions. Deutsche Telekom is now planning to use Kokai across more campaigns, a transition that is fairly typical as clients move from our previous platform, Solimar, to our newer, more advanced AI-fueled platform, Kokai.

Visa (NASDAQ: V)

Visa recently announced a completely new version of its Authorize.net product that features AI capabilities, including an AI agent; Authorize.net enables all different types of payments.

In Acceptance Solutions, we recently announced 2 new product offerings. The first is a completely new version of Authorize.net, launching in the U.S. next quarter and additional countries next year. It features a streamlined user interface; AI capabilities with an AI agent, Anet; improved dashboards for day-to-day management; and support for in-person card readers and Tap to Phone. It will help businesses analyze data, summarize insights and adapt to rapidly changing customer trends…

…I talked about the Authorize.net platform that we’ve relaunched and we’re relaunching. That’s a great example of enabling all different types of payments. And that’s going to be, we think, a really positive impact in the market, specifically focused on growing our share in small business checkout.

Visa now provides an enhanced holistic fraud protection solution from Featurespace known as the Adaptive Real-time Individual Change identification (ARIC) Risk Hub; ARIC Risk Hub uses machine learning and AI to build more accurate risk profiles.

We also now provide an enhanced holistic fraud protection solution from Featurespace called the Adaptive Real-time Individual Change identification, or ARIC, Risk Hub. This solution utilizes machine learning and AI solutions to enable clients to build more accurate risk profiles and more confidently detect and block fraudulent transactions, ultimately helping to increase approvals and stop bad actors in real time. 
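Mechanically, “building a risk profile and blocking fraud in real time” is a streaming anomaly-scoring pattern: maintain per-cardholder behavioural statistics, score each incoming transaction against them, and decline outliers. The sketch below is a toy z-score version of that flow with invented field names; Featurespace’s actual ARIC models are far richer than this.

```python
from collections import defaultdict
from math import sqrt

# Running per-cardholder stats: count, mean, sum of squared deviations (Welford).
profiles = defaultdict(lambda: [0, 0.0, 0.0])

def score_and_update(card_id: str, amount: float, threshold: float = 4.0) -> bool:
    """Return True to approve; flag amounts far outside the card's history."""
    n, mean, m2 = profiles[card_id]
    if n >= 5:  # only score once the profile has some history
        std = sqrt(m2 / (n - 1)) or 1.0
        if abs(amount - mean) / std > threshold:
            return False  # block as potentially fraudulent
    # Welford's online update of the behavioural profile
    n += 1
    delta = amount - mean
    mean += delta / n
    m2 += delta * (amount - mean)
    profiles[card_id] = [n, mean, m2]
    return True

# Tiny demo: everyday amounts pass; a sudden large outlier gets blocked.
for amt in [25.0, 30.0, 27.0, 24.0, 29.0, 2500.0]:
    print(amt, score_and_update("card-123", amt))
```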


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, Apple, ASML, Coupang, Datadog, Mastercard, Meta Platforms, Microsoft, Netflix, Paycom Software, PayPal, Shopify, TSMC, Tesla, The Trade Desk and Visa. Holdings are subject to change at any time.

Insights From Berkshire Hathaway’s 2025 Annual General Meeting

Warren Buffett and his team shared plenty of wisdom at the recent Berkshire Hathaway AGM.

Warren Buffett is one of my investment heroes. On 3 May 2025, he held court at the 2025 Berkshire Hathaway AGM (annual general meeting).

For many years, I’ve anticipated the AGM to hear his latest thoughts. This year’s session holds special significance because it may well be his last – during the AGM, he announced that he would be stepping down as CEO of Berkshire Hathaway by the end of this year, ending an amazing 60-year run since becoming the company’s leader in 1965. Greg Abel is slated to be Berkshire Hathaway’s next CEO.

The most recent Berkshire meeting contained great insights from Buffett and other senior Berkshire executives that I wish to share and document. Before I get to them, I would like to thank my friend Thomas Chua for performing a great act of public service. Shortly after the AGM ended, Thomas posted a transcript of the session at his excellent investing website Steady Compounding.

Without further ado, the italicised passages between the two horizontal lines below are my favourite takeaways after I went through Thomas’ transcript.


Buffett thinks his idea on import certificates is different from tariffs and that it’s important to have more balanced trade between countries; he also thinks that trade should not be wielded as a weapon, and that the more prosperous the world becomes, the better the USA would be

Becky Quick: Thanks Warren. This first question comes from Bill Mitchell. I received more questions about this than any other question. He writes, “Warren, in a 2003 Fortune article, you argued for import certificates to limit trade deficits and said these import certificates basically amounted to a tariff, but recently you called tariffs an act of economic war. Has your view on trade barriers changed or do you see import certificates as somehow distinct from tariffs?”

Warren Buffett: Well, the import certificates were distinct, but their goal was to balance imports against exports so that the trade deficit would not grow in an enormous way. It had various provisions to help third world countries catch up a little bit. They were designed to balance trade, and I think you can make very good arguments that balanced trade is good for the world. It makes sense for cocoa to be raised in Ghana and coffee in Colombia and a few other things…

…There’s no question that trade can be an act of war, and I think it’s led to bad things like the attitudes it’s brought out in the United States. We should be looking to trade with the rest of the world. We should do what we do best, and they should do what they do best…

…The main thing is that trade should not be a weapon. The United States has become an incredibly important country starting from nothing 250 years ago – there’s never been anything like it. And it’s a big mistake when you have 7.5 billion people who don’t like you very well and you have 300 million people crowing about how well they’ve done. I don’t think it’s right and I don’t think it’s wise. The more prosperous the rest of the world becomes, it won’t be at our expense – the more prosperous we’ll become and the safer we’ll feel and your children will feel someday.

Buffett did not look at macroeconomic factors in Japan when making the huge investments he did in five Japanese trading houses; Berkshire won’t be selling the Japanese investments for a long, long time, if at all; Berkshire would be happy to invest a lot more in Japan if there was capacity to do so; the fact that Berkshire could borrow in Japanese Yen to hedge the Japanese investments’ currency risk is merely a lucky coincidence

Question: Mr. Buffett and Mr. Munger did a very good and successful investment in Japan in the past five or six years. The recent CPI in Japan is currently above 3%, not far away from its 2% target. Bank of Japan seems very determined in raising rates while Fed, ECB, and other central banks are considering cutting them. Do you think BOJ makes sense to proceed with the rate hike? Will its planned rate hike deter you from further investing in the Japanese stock market or even considering realizing your current profits?

Warren Buffett: Well, I’m going to extend the same goodwill to Japan that you’ve just extended to me. I’ll let the people of Japan determine their best course of action in terms of economics. It’s an incredible story. It’s been about six years now since our Japanese investments. I was just going through a little handbook that probably had two or three thousand Japanese companies in it. One problem I have is that I can’t read that handbook anymore – the print’s too small. But there were these five trading companies selling at ridiculously low prices. So I spent about a year acquiring them. And then we got to know the people better, and everything that Greg and I saw, we liked better as we went along…

Greg Abel: When you think of the five companies, there’s definitely a couple meetings a year, Warren. The thing we’re building with the five companies is, one, it’s been a very good investment, but we really envision holding the investment for 50 years or forever…

Warren Buffett: We will not be selling any stock. That will not happen in decades, if then…

…It’s too bad that Berkshire has gotten as big as it is because we love that position and I’d like it to be a lot larger. Even with the five companies being very large in Japan, we’ve got at market in the range of $20 billion invested, but I’d rather have $100 billion than $20 billion…

…The Japanese situation is different because we intend to stay so long with that position and the funding situation is so cheap that we’ve attempted to some degree to match purchases against yen-denominated funding. But that’s not a policy of ours…

Greg Abel: There’s no question we were fundamentally very comfortable with investing in the five Japanese companies and recognizing we’re investing in yen. The fact we could then borrow in yen was almost just a nice incremental opportunity. But we were very comfortable both with the Japanese companies and with the currency we would ultimately realize in yen.

Just the simple act of reading about companies can lead to great investment opportunities

Warren Buffett: It’s been about six years now since our Japanese investments. I was just going through a little handbook that probably had two or three thousand Japanese companies in it…

…I never dreamt of that when I picked up that handbook. It’s amazing what you can find when you just turn the page. We showed a movie last year about “turn every page,” and I would say that turning every page is one important ingredient to bring to the investment field. Very few people do turn every page, and the ones who turn every page aren’t going to tell you what they’re finding. So you’ve got to do a little of it yourself.

Berkshire’s current huge cash position is the result of Buffett not being able to find sufficiently attractive investment opportunities; Buffett thinks that great investment opportunities appear infrequently

Becky Quick: This next question comes from Advate Prasad in New York. He writes, “Today, Berkshire holds over $300 billion in cash and short-term investments, representing about 27% of total assets, a historically high figure compared to the 13% average over the last 25 years. This has also led Berkshire to effectively own nearly 5% of the entire US Treasury market. Beyond the need for liquidity to meet insurance obligations, is the decision to raise cash primarily a de-risking strategy in response to high market valuations?…

Warren Buffett: Well, I wouldn’t do anything nearly so noble as to withhold investing myself just so that Greg could look good later on. If he gets any edge of what I leave behind, I’ll resent it. The amount of cash we have – we would spend $100 billion if something is offered that makes sense to us, that we understand, offers good value, and where we don’t worry about losing money. The problem with the investment business is that things don’t come along in an orderly fashion, and they never will. I’ve had about 16,000 trading days in my career. It would be nice if every day you got four opportunities or something like that with equal attractiveness. If I was running a numbers racket, every day would have the same expectancy that I would keep 40% of whatever the handle was, and the only question would be how much we transacted. But we’re not running that kind of business. We’re running a business which is very opportunistic.

Investing in stocks is a much better investment bet than investing in real estate

Warren Buffett: Well, in respect to real estate, it’s so much harder than stocks in terms of negotiation of deals, time spent, and the involvement of multiple parties in the ownership. Usually when real estate gets in trouble, you find out you’re dealing with more than just the equity holder. There have been times when large amounts of real estate have changed hands at bargain prices, but usually stocks were cheaper and they were a lot easier to do.

Charlie did more real estate. Charlie enjoyed real estate transactions, and he actually did a fair number of them in the last 5 years of his life. But he was playing a game that was interesting to him. I think if you’d asked him to make a choice when he was 21 – either be in stocks exclusively for the rest of his life or real estate for the rest of his life – he would have chosen stocks. There’s just so much more opportunity, at least in the United States, that presents itself in the security market than in real estate…

…When you walk down to the New York Stock Exchange, you can do billions of dollars worth of business, totally anonymous, and you can do it in 5 minutes. The trades are complete when they’re complete. In real estate, when you make a deal with a distressed lender, when you sign the deal, that’s just the beginning. Then people start negotiating more things, and it’s a whole different game with a different type of person who enjoys the game.

Berkshire’s leaders think AI will have a massive impact on the insurance business, but they are not in a hurry to pour money into AI as they think there’s plenty of faddish capital in the space

Ajit Jain: There is no question in my mind that AI is going to be a real game-changer. It’s going to change the way we assess risk, we price risk, we sell risk, and then the way we end up paying claims. Having said that, I certainly also feel that people end up spending enormous amounts of money trying to chase the next fashionable thing…

…Right now the individual insurance operations do dabble in AI and try to figure out the best way to exploit it. But we have not yet made a conscious big-time effort in terms of pouring a lot of money into this opportunity.

Buffett prefers Ajit Jain to any kind of sophisticated AI systems when pricing insurance risks

Warren Buffett: I wouldn’t trade everything that’s developed in AI in the next 10 years for Ajit. If you gave me a choice of having a hundred billion dollars available to participate in the property casualty insurance business for the next 10 years and a choice of getting the top AI product from whoever’s developing it or having Ajit making the decisions, I would take Ajit anytime – and I’m not kidding about that.

Despite the political upheaval happening in the USA right now, Buffett still thinks the long-term future of the country is incredibly bright; in Buffett’s eyes, the USA has been through plenty of tumultuous periods and emerged stronger

Warren Buffett: America has been undergoing significant and revolutionary change ever since it was developed. I mentioned that we started out as an agricultural society with high promises that we didn’t deliver on very well. We said all men were created equal, and then we wrote a constitution that counted blacks as three-fifths of a person. In Article 2, you’ll find male pronouns used 20 times and no female pronouns. So it took until 1920, with the 19th amendment, to finally give women the vote that we had promised back in 1776.

We’re always in the process of change, and we’ll always find all kinds of things to criticize in the country. But the luckiest day in my life is the day I was born, because I was born in the United States. At that time, about 3% of all births in the world were taking place in the United States. I was just lucky, and I was lucky to be born white, among other things…

…We’ve gone through all kinds of things – great recessions, world wars, the development of the atomic bomb that we never dreamt of when I was born. So I would not get discouraged about the fact that we haven’t solved every problem that’s come along. If I were being born today, I would just keep negotiating in the womb until they said I could be in the United States.

It’s important to be patient while waiting for opportunities, but equally important to pounce when the opportunity appears

Warren Buffett: The trick when you get in business with somebody who wants to sell you something for $6 million that’s got $2 million of cash, a couple million of real estate, and is making $2 million a year, is you don’t want to be patient at that moment. You want to be patient in waiting to get the occasional call. My phone will ring sometime with something that wakes me up. You just never know when it’ll happen. That’s what makes it fun. So it’s a combination of patience and a willingness to do something that afternoon if it comes to you.

It does not pay to invest in a way that depends on the appearance of a greater fool

Warren Buffett: If people are making more money because they’re borrowing money or participating in securities that are pieces of junk but they hope to find a bigger sucker later on, you have to forget that.

Buffett does not think it’s important to manage currency risk with Berkshire’s international investments, but he avoids investments denominated in currencies that are at risk of depreciating wildly

Warren Buffett: We’ve owned lots of securities in foreign currencies. We don’t do anything based on its impact on quarterly and annual earnings. There’s never been a board meeting I can remember where I’ve said, “If we do this, our annual earnings will be this, therefore we ought to do it.” The number will turn out to be what it’ll be. What counts is where we are five or 10 or 20 years from now…

…Obviously, we wouldn’t want to own anything in a currency that we thought was really going to hell.

Buffett is worried about the tendency for governments to want to devalue their currencies, the USA included, but there’s nothing much that can be done about it; Buffett thinks the USA is running a fiscal deficit that is unsustainable over a long period of time; Buffett thinks a 3% fiscal deficit appears sustainable

Warren Buffett: That’s the big thing we worry about with the United States currency. The tendency of a government to want to debase its currency over time – there’s no system that beats that. You can pick dictators, you can pick representatives, you can do anything, but there will be a push toward weaker currencies. I mentioned very briefly in the annual report that fiscal policy is what scares me in the United States because of the way it’s made, and all the motivations are toward doing things that can cause trouble with money. But that’s not limited to the United States – it’s all over the world, and in some places, it gets out of control regularly. They devalue at rates that are breathtaking, and that’s continued…

…So currency value is a scary thing, and we don’t have any great system for beating that…

…We’re operating at a fiscal deficit now that is unsustainable over a very long period of time. We don’t know whether that means two years or 20 years because there’s never been a country like the United States. But as Herbert Stein, the famous economist, said, “If something can’t go on forever, it will end.” We are doing something that is unsustainable, and it has the aspect to it that it gets uncontrollable to a certain point….

…I wouldn’t want the job of trying to correct what’s going on in revenue and expenditures of the United States with roughly a 7% gap when probably a 3% gap is sustainable…

…We’ve got a lot of problems always as a country, but this is one we bring on ourselves. We have a revenue stream, a capital-producing stream, a brains-producing machine like the world has never seen. And if you picked a way to screw it up, it would involve the currency. That’s happened a lot of places.

Buffett thinks the key factors for a developing economy to attract investors are having a solid currency and being business-friendly

Audience member: What advice would you give to government and business leaders of emerging markets like Mongolia to attract institutional investors like yourself?

Warren Buffett: If you’re looking for advice to give the government over there, it’s to develop a reputation for having a solid currency over time. We don’t really want to go into any country where we think there’s a significant probability of runaway inflation. That’s too hard to figure…

…If the country develops a reputation for being business-friendly and currency-conscious, that bodes very well for the residents of that country, particularly if it has some natural assets that it can build around.

Private equity firms are flooding the life insurance market, but they are doing so by taking on lots of leverage and credit risk

Ajit Jain: There’s no question the private equity firms have come into the space, and we are no longer competitive in the space. We used to do a fair amount in this space, but in the last 3-4 years, I don’t think we’ve done a single deal.

You should separate this whole segment into two parts: the property casualty end of the business and the life end of the business. The private equity firms you mentioned are all very active in the life end of the business, not the property casualty end.

You are right in identifying the risks these private equity firms are taking on both in terms of leverage and credit risk. While the economy is doing great and credit spreads are low, these firms have taken the assets from very conservative investments to ones where they get a lot more return. As long as the economy is good and credit spreads are low, they will make money – they’ll make a lot of money because of leverage.

However, there is always the danger that at some point the regulators might get cranky and say they’re taking too much risk on behalf of their policyholders, and that could end in tears. We do not like the risk-reward that these situations offer, and therefore we put up the white flag and said we can’t compete in this segment right now.

Buffett thinks Berkshire’s insurance operation is effectively unreplicable

Warren Buffett: I think there are people that want to copy Berkshire’s model, but usually they don’t want to copy it by also copying the model of the CEO having all of his money in the company forever. They have a different equation – they’re interested in something else. That’s capitalism, but they have a whole different situation and probably a somewhat different fiduciary feeling about what they’re doing. Sometimes it works and sometimes it doesn’t work. If it doesn’t work, they go on to other things. If what we do at Berkshire doesn’t work, I spend the end of my life regretting what I’ve created. So it’s just a whole different personal equation.

There is no property casualty company that can basically replicate Berkshire. That wasn’t the case at the start – at the start we just had National Indemnity a few miles from here, and anybody could have duplicated what we had. But that was before Ajit came with us in 1986, and at that point the other fellows should have given up.

Buffett thinks recent market volatility is not noteworthy at all; it’s nearly certain that significant downward moves in stocks will happen sometime in the next 20 years

Warren Buffett: What has happened in the last 30-45 days, 100 days, whatever this period has been, is really nothing. There have been three times since we acquired Berkshire that Berkshire has gone down 50% in a fairly short period of time – three different times. Nothing was fundamentally wrong with the company at any time. This is not a huge move. The Dow Jones average was at 381 in September of 1929 and got down to 42. That’s going from 100 to 11. This has not been a dramatic bear market or anything of the sort. I’ve had about 17,000 or 18,000 trading days. There have been plenty of periods that are dramatically different than this…

…You will see a period in the next 20 years that will be a “hair curler” compared to anything you’ve seen before. That just happens periodically. The world makes big mistakes, and surprises happen in dramatic ways. The more sophisticated the system gets, the more the surprises can come out of left field. That’s part of the stock market, and that’s what makes it a good place to focus your efforts if you’ve got the proper temperament for it and a terrible place to get involved if you get frightened by markets that decline and get excited when stock markets go up.

Berkshire’s leaders think the biggest change autonomous vehicles will bring to the automotive insurance industry is substitution of operator error policies by product liability policies; Berkshire’s leaders also think that the cost per repair in the event of an accident will rise significantly; the total cost of providing insurance for autonomous vehicles is still unclear; from the 1950s to today, cars have gotten 6x safer but auto insurance has become 50x pricier

Ajit Jain: There’s no question that insurance for automobiles is going to change dramatically once self-driving cars become a reality. The big change will be what you identified. Most of the insurance that is sold and bought revolves around operator errors – how often they happen, how severe they are, and therefore what premium we ought to charge. To the extent these new self-driving cars are safer and involved in fewer accidents, that insurance will be less required. Instead, it’ll be substituted by product liability. So we at GEICO and elsewhere are certainly trying to get ready for that switch, where we move from providing insurance for operator errors to being more ready to provide protection for product errors and errors and omissions in the construction of these automobiles…

…We talked about the shift to product liability and protection for accidents that take place because of an error in product design or supply. In addition to that shift, I think what we’ll see is a major shift where the number of accidents will drop dramatically because of automatic driving. But on the other hand, the cost per repair every time there’s an accident will go up very significantly because of the amount of technology in the car. How those two variables interact with each other in terms of the total cost of providing insurance, I think, is still an open issue…

Warren Buffett: When I walked into GEICO’s office in 1951, the average price of a policy was around $40 a year. Now it’s easy to get up to $2,000 depending on location and other factors. During that same time, the number of people killed in auto accidents has fallen from roughly six per 100 million miles driven to a little over one. So the car has become incredibly safer, and it costs 50 times as much to buy an insurance policy.

There’s a tax now when American companies conduct share buybacks

Warren Buffett: I don’t think people generally know that, but there is a tax that was introduced a year or so ago where we pay 1%. That not only hurts us because we pay more for it than you do – it’s a better deal for you than for us – but it actually hurts some of our investee companies quite substantially. Tim Cook has done a wonderful job running Apple, but he spent about $100 billion in a year repurchasing shares, and there’s a 1% charge attached to that now. So that’s a billion dollars a year that he pays when he buys Apple stock compared to what you pay.

Buffett is very careful with the risks that come with derivative contracts on a company’s balance sheet

Greg Abel: I’ll maybe go back to the very first meeting with Warren because it still stands out in my mind. Warren was thinking about acquiring MidAmerican Energy Holdings Company at that time, and we had the opportunity with my partners to go over there on a Saturday morning. We were discussing the business and Warren had the financial statements in front of him. Like anybody, I was sort of expecting a few questions on how the business was performing, but Warren locked in immediately to what was on the balance sheet and the fact we had some derivative contracts, the “weapons of mass destruction.”

In the utility business, we do have derivatives because they’re used to match certain positions. They’re never matched perfectly, but we have them and they’re required in the regulated business. I remember Warren going to it immediately and asking about the composition and what was the underlying risk, wanting to thoroughly understand. It wasn’t that big of a position, but it was absolutely one of the risks he was concerned about as he was acquiring MidAmerican, especially in light of Enron and everything that had gone on.

The followup to that was a year or 18 months later. There was an energy crisis in the US around electricity and natural gas, and various companies were making significant sums of money. Warren’s follow-up question to me was, “How much money are we making during this energy crisis? Are we making a lot? Do we have speculative positions in place?” The answer was we weren’t making any more than we would have been six months ago because all those derivatives were truly to support our business and weren’t speculative. That focus on understanding the business and the risks around it still stands out in my mind.

Buffett spends more time analysing a company’s balance sheet than other financial statements

Warren Buffett: I spend more time looking at balance sheets than I do income statements. Wall Street doesn’t pay much attention to balance sheets, but I like to look at balance sheets over an 8 or 10 year period before I even look at the income account because there are certain things it’s harder to hide or play games with on the balance sheet than with the income statement.

Buffett thinks America’s electric grid needs a massive overhaul and it can only be done via a partnership between the private sector and the government – unfortunately, nobody has figured out the partnership model yet

Warren Buffett: It’s very obvious that the country needs an incredible improvement, rethinking, redirection to some extent in the electric grid. We’ve outgrown what would be the model that America should have. In a sense, it’s a problem somewhat akin to the interstate highway system, where you needed the power of the government really to get things done because it doesn’t work so well when you get 48 or 50 jurisdictions that each have their own way of thinking about things…

…There are certain really major investment situations where we have capital like nobody else has in the private system. We have particular knowhow in the whole generation and transmission arena. The country is going to need it. But we have to figure out a way that makes sense from the standpoint of the government, from the standpoint of the public, and from the standpoint of Berkshire, and we haven’t figured that out yet. It’s a clear and present use of hundreds of billions of dollars. You have people that set up funds and they’re getting paid for just assembling stuff, but that’s not the way to handle it. The way to handle it is to have some kind of government-private industry cooperation similar to what you do in a war.

The risk of wildfires to electric utilities is not going to go away, and in fact, will increase over time

Greg Abel: The reality is that the risk around wildfires – whether wildfires occur – is not going away, and we know that. The risk probably goes up each year.

Berkshire’s leaders think it’s important for utilities to de-energise when wildfires occur to minimise societal damage; Berkshire is the only utility operator so far that’s willing to de-energise; but de-energising also has its drawbacks; Berkshire may not be able to solve the conundrum of de-energising

Greg Abel: The one thing we hadn’t tackled – this is very relevant to the significant event we had back in 2020 in PacifiCorp – is we didn’t de-energize the system as the fire was approaching. Our employees and the whole management team have been trained all their lives to keep the lights on, and the last thing they want to do is turn those lights off and have a system de-energized. After those events and as we looked at how we’re going to move forward in managing the assets and reducing risk, we recognized as a team that we have to de-energize those assets. Now as we get fires encroaching within a certain number of miles, we de-energize because we do not want to contribute to the fire nor harm any of our consumers or contribute to a death. We had to move our team to managing a different risk now. It’s not around keeping the lights on, it’s around protecting the general public and ensuring the fire does not spread further. We’re probably the only utility – across our utilities – that does that today, and we strongly believe in that approach.

Becky Quick: Doesn’t that open you up to other risk if you shut down your system, a hospital gets shut down, somebody dies?

Greg Abel: That’s something we do deal with a lot because we have power outages that occur by accident. When we look at critical infrastructure, that’s an excellent point and we’re constantly re-evaluating it. We do receive a lot of feedback from our customer groups as to how to manage that…

Warren Buffett: There’s some problems that can’t be solved, and we shouldn’t be in the business of taking investors’ money and tackling things that we don’t know the solution for. You can present the arguments, but it’s a political decision when you are dealing with states or the federal government. If you’re in something where you’re going to lose, the big thing to do is quit.

Buffett thinks the value of electric utility companies has fallen a lot over the past two years because of societal trends, and his enthusiasm for investing in electric utilities has waned considerably

Becky Quick: Ricardo Bri, a longtime shareholder based in Panama, says that he was very happy to see Berkshire acquire 100% of BHE. It was done in two steps: one in late 2022 – 1% was purchased from Greg Abel for $870 million implying a valuation of BHE of $87 billion, and then in 2024 the remaining 8% was purchased from the family of Walter Scott Jr. for $3.9 billion implying a valuation of $48.8 billion for the enterprise. That second larger transaction represented a 44% reduction in valuation in just two years. Ricardo writes that PacifiCorp liabilities seem too small to explain this. Therefore, what factors contributed to the difference in value for BHE between those two moments in time?

Warren Buffett: Well, we don’t know how much we’ll lose out of PacifiCorp and decisions that are made, but we also know that certain of the attitudes demonstrated by that particular example have analogues throughout the utility system. There are a lot of states that so far have been very good to operate in, and there are some now that are rat poison, as Charlie would say, to operate in. That knowledge was accentuated when we saw what happened in the Pacific Northwest, and it’s eventuated by what we’ve seen as to how utilities have been treated in certain other situations. So it wasn’t just a direct question of what was involved at PacifiCorp. It was an extrapolation of a societal trend…

…We’re not in the mood to sell any business. But Berkshire Hathaway Energy is worth considerably less money than it was two years ago based on societal factors. And that happens in some of our businesses. It certainly happened to our textile business. The public utility business is not as good a business as it was a couple of years ago. If anybody doesn’t believe that, they can look at Hawaiian Electric and look at Edison in the current wildfires situation in California. There are societal trends that are changing things…

…I would say that our enthusiasm for buying public utility companies is different now than it would have been a couple years ago. That happens in other industries, too, but it’s pretty dramatic in public utilities. And it’s particularly dramatic in public utilities because they are going to need lots of money. So, if you’re going to need lots of money, you probably ought to behave in a way that encourages people to give you lots of money.

Buffett thinks the future capital intensity of the USA’s large technology companies remains to be seen

Warren Buffett: It’ll be interesting to see how much capital intensity there is now with the Magnificent 7 compared to a few years ago. Basically, Apple has not really needed any capital over the years, and it has repurchased shares, reducing its share count dramatically. Whether that world is the same in the future or not is something yet to be seen.

Buffett thinks there’s no better system than capitalism that has been discovered so far

Warren Buffett: Capitalism in the United States has succeeded like nothing you’ve ever seen. But what it is is a combination of this magnificent cathedral which has produced an economy like nothing the world’s ever seen, and then it’s got this massive casino attached…

…In the cathedral, they’re designing things that will be producing goods and services for 300 and some million people like it’s never been done before in history. It’s an interesting system we developed, but it’s worked. It dispenses rewards in what seems like a terribly capricious manner. The idea that people get what they deserve in life – it’s hard to make that argument. But if you argue with it that any other system works better, the answer is we haven’t found one.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have a vested interest in Alphabet, Amazon, Apple, Meta Platforms, Microsoft, and Tesla (they are all part of the Magnificent 7). Holdings are subject to change at any time.

This Book Explains The Economic Problems Facing The USA and China Today (Including Tariffs!)

The book mentioned in the title of this article is The Other Half of Macroeconomics and the Fate of Globalization written by economist Richard C. Koo (Gu Chao Ming) and published in 2018.

I first came across Koo’s book in March 2020 when I chanced upon a review of it in Mandarin, written by investor Li Lu. I can read Mandarin and I found myself agreeing with the ideas from the book that Li shared, so much so that I made a self-directed attempt at translating the review into English. But I only began reading the actual book near the start of this year and finished it about a month ago. There was even more richness in the book’s ideas about how economies should operate than what was shared in Li’s already-wonderful review.

Earlier this month, the US government, under the Trump administration, made sweeping changes to the global trading system by introducing the Reciprocal Tariff Policy, which raised tariffs, sometimes significantly so, for many of the US’s trading partners. Major driving forces behind the Reciprocal Tariff Policy ostensibly include the US’s sustained trade deficits (particularly with China) and a desire by the Trump administration to bring manufacturing jobs back to the country.

As I contemplated the Trump administration’s actions, and the Chinese government’s reactions, I realised Koo’s book explained why all these issues happened. So I’m writing my own notes and takeaways from the book for easy reference in the future, and I would like to share them in this article in the hopes that they could be useful for you. I will be borrowing from my translation of Li’s Mandarin review in this article. Below the horizontal line, all content in grey font is excerpted from my translation, while all italicised content is excerpted from the book.


The three stages of economic development a country goes through

There are six important ideas that Koo discussed in his book. One of them is the concept that a country goes through three distinct stages of economic development over time. 

The first stage of development would be a country that is industrialising and has yet to reach the Lewis Turning Point (LTP). The LTP is the “point at which urban factories have finally absorbed all the surplus rural labour.” When a country starts industrialising, people are mostly living in rural areas and there is only a small educated elite with the knowhow to kickstart industrialisation. There is also a surplus of labour. As a result, the educated elite – the industrialists – hold the power and “most of the gains during the initial stage of industrialisation therefore go to the educated few.” The first stage of economic development is also when income inequality widens – the gains from industrialisation continue to accumulate in the hands of the elite as they reinvest profits into their businesses because there continues to be a surplus of labour.

The second stage of development happens when an industrialising economy reaches the LTP. At this point, labour “gains the bargaining power to demand higher wages for the first time in history, which reduces the share of output accruing to business owners.” But business owners are happy to continue reinvesting their profits as they are still “achieving good returns, leading to further tightness in the labour market.” This dynamic leads to an economy’s “golden era”:

“As labor’s share increases, consumption’s share of GDP will increase at the expense of investment. At the same time, the explosive increase in the purchasing power of ordinary citizens means that most businesses are able to increase profits simply by expanding existing productive capacity. Consequently, both consumption and investment will increase rapidly…

…Inequality also diminishes as workers’ share of output increases relative to that of capital… 

…With incomes rising and inequality falling, this post-LTP maturing phase may be called the golden era of economic growth…

…Higher wages force businesses to look harder for profitable investment opportunities. On the other hand, the explosive increase in the purchasing power of ordinary workers who are paid ever-higher wages creates major investment opportunities. This prompts businesses to invest for two reasons. First, they seek to increase worker productivity so that they can pay ever-higher wages. Second, they want to expand capacity to address workers’ increasing purchasing power. Both productivity- and capacity-enhancing investments increase demand for labor and capital that add to economic growth. In this phase, business investment increases workers’ productivity even if their skill level remains unchanged…

…With rapid improvements in the living standards of most workers, the post-LTP maturing phase is characterised by broadly distributed benefits from economic growth.”

The golden era has its problems too. This is because this period is when “workers begin to utilise their newfound bargaining power” such as by organising strikes. But business owners and labour tend to be able to work out their differences.

The third stage of development is what Koo calls a “post-LTP pursued economy.” When a country is in the golden era, at some point ever-growing wages create inroads for foreign competitors – and the country starts being chased by foreign competitors that have lower wages. This is when businesses in the country find it very challenging to “find attractive investment opportunities at home because it often makes more sense for them to buy directly from the “chaser” or to invest in that country themselves.” This is also when “the return on capital is higher abroad than at home.” During the pursued stage, “real wage growth will be minimal” and “economic growth also slows.” Although a pursued country can continue to grow economically, a major problem is that inequality once again rears its ugly head:

“Japan’s emergence in the 1970s shook the U.S. and European industrial establishments. As manufacturing workers lost their jobs, ugly trade frictions ensued between Japan and the West. This marked the first time that Western countries that had already passed their LTPs had been chased by a country with much lower wages…

…While Western companies at the forefront of technology continued to do well, the disappearance of many well-paying manufacturing jobs led to worsening income inequality in these countries…

…Some of the pain Western workers felt was naturally offset by the fact that, as consumers, they benefited from cheaper imports from Asia, which is one characteristic of import-led globalisation. Businesses with advanced technology continued to do well, but it was no longer the case that everyone in society was benefiting from economic growth. Those whose jobs could be transferred to lower-cost locations abroad saw their living standards stagnate or even fall.”

Koo wrote that Western economies – the USA and Europe – entered their golden eras around the 1950s and became pursued starting in the 1970s by Japan. During the golden era of the West, “it was in an export-led globalisation phase as it exported consumer and capital goods to the world.” But as the West started getting pursued, they entered “an import-led globalisation phase as capital seeks higher returns abroad and imports flood the domestic market.”

The four states of an economy

Another important idea from Koo’s book is the concept that an economy has four distinct states, which are summarised in Table 1 below:

An economy is always in one of four possible states depending on the presence or absence of lenders (savers) and borrowers (investors). They are as follows: (1) both lenders and borrowers are present in sufficient numbers, (2) there are borrowers but not enough lenders even at high interest rates, (3) there are lenders but not enough borrowers even at low interest rates, and (4) both lenders and borrowers are absent.

Table 1:

                      Borrowers present    Borrowers absent
Lenders present       Case 1               Case 3
Lenders absent        Case 2               Case 4

Koo’s idea that an economy has four distinct states is important because mainstream economic thought does not cater for the disappearance of borrowers:

“Of the four, only Cases 1 and 2 are discussed in traditional economics, which implicitly assumes there are always enough borrowers as long as real interest rates are low enough.”
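For readers who find it easier to see the taxonomy laid out explicitly, here is a minimal sketch in Python – my own illustration, not something from Koo’s book – that maps the presence or absence of lenders and borrowers to the four cases:

```python
# My own illustration of Koo's taxonomy (not from the book): an economy's
# state depends on whether lenders (savers) and borrowers (investors) are
# present in sufficient numbers.

def koo_case(lenders_present: bool, borrowers_present: bool) -> int:
    """Return Koo's case number (1 to 4) for an economy."""
    if lenders_present and borrowers_present:
        return 1  # textbook economy: minor interest-rate tweaks suffice
    if borrowers_present:
        return 2  # lenders missing, e.g. a credit crunch or financial crisis
    if lenders_present:
        return 3  # borrowers missing, e.g. a balance sheet recession
    return 4      # both missing, e.g. right after a nationwide bubble bursts

assert koo_case(True, True) == 1
assert koo_case(False, True) == 2
assert koo_case(True, False) == 3
assert koo_case(False, False) == 4
```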

There are two key reasons why an economy would be in Cases 3 and 4, i.e. when borrowers disappear. The first is when private-sector businesses are unable to find attractive investment opportunities (this is related to economies in the third stage of development discussed earlier in this article, when attractive domestic investment opportunities become scarce):

“The first is one in which private-sector businesses cannot find investment opportunities that will pay for themselves. The private sector will only borrow money if it believes it can pay back the debt with interest. And there is no guarantee that such opportunities will always be available. Indeed, the emergence of such opportunities depends very much on scientific discoveries and technological innovations, both of which are highly irregular and difficult to predict.

In open economies, businesses may also find that overseas investment opportunities are more attractive than those available at home. If the return on capital is higher in emerging markets, for example, pressure from shareholders will force businesses to invest more abroad while reducing borrowings and investments at home. In modern globalized economies, this pressure from shareholders to invest where the return on capital is highest may play a greater role than any technological breakthroughs, or lack thereof, in the decision as to whether to borrow and invest at home.”

The second reason for the disappearance of borrowers is what Koo calls a “balance sheet recession,” which he describes as follows:

“In the second set of circumstances, private‐sector borrowers have sustained huge losses and are forced to rebuild savings or pay down debt to restore their financial health. Such a situation may arise following the collapse of a nationwide asset price bubble in which a substantial part of the private sector participated with borrowed money. The collapse of the bubble leaves borrowers with huge liabilities but no assets to show for the debt. Facing a huge debt overhang, these borrowers have no choice but to pay down debt or increase savings in order to restore their balance sheets, regardless of the level of interest rates.

Even when the economy is doing well, there will always be businesses that experience financial difficulties or go bankrupt because of poor business decisions. But the number of such businesses explodes after a nationwide asset bubble bursts.

For businesses, negative equity or insolvency implies the potential loss of access to all forms of financing, including trade credit. In the worst case, all transactions must be settled in cash, since no supplier or creditor wants to extend credit to an entity that may seek bankruptcy protection at any time. Many banks and other depository institutions are also prohibited by government regulations from extending or rolling over loans to insolvent borrowers in order to safeguard depositors’ money. For households, negative equity means savings they thought they had for retirement or a rainy day are no longer there. Both businesses and households will respond to these life-threatening conditions by focusing on restoring their financial health—regardless of the level of interest rates—until their survival is no longer at stake.”

A balance sheet recession can be a huge problem for a country’s economy if left unresolved, as it can lead to a rapidly shrinking economy through a manifestation of the “fallacy of composition” problem:

“One person’s expenditure is another person’s income…

…The interaction between thinking and reacting households and businesses create a situation where one plus one does not necessarily equal two. For example, if A decides to buy less from B in order to set aside more savings for an uncertain future, B will have less income to buy things from A. That will lower A’s income, which in turn will reduce the amount A can save.

This interaction between expenditure and income also means that, at the national level, if one group is saving money, another group must be doing the opposite – “dis-saving” – to keep the economy running. In most cases, this dis-saving takes the form of borrowing by businesses that seek to expand their operations. If everyone is saving and no one is dis-saving or borrowing, all of those savings will leak out of the economy’s income stream, resulting in less income for all.

For example, if a person with an income of $1,000 decides to spend $900 and save $100, the $900 that is spent becomes someone else’s income and continues circulating in the economy. The $100 that is saved is typically deposited with a financial institution such as a bank, which then lends it to someone else who can make use of it. When that person borrows and spends the $100, total expenditures in the economy amount to $900 plus $100, which is equal to the original $1,000, and the economy moves forward…

…If there are no borrowers for $100 in savings in the above example, even at zero interest rates, total expenditures in the economy will drop to $900, while the saved $100 remains unborrowed in financial institutions or under mattresses. The economy has effectively shrunk by 10 percent, from $1,000 to $900. That $900 now becomes someone else’s income. If that person decides to save 10 percent, and there are still no borrowers, only $810 will be spent, causing the economy to contract to $810. This cycle will repeat, and the economy will shrink to $730, if borrowers remain on the sidelines. This process of contraction is called a “deflationary spiral.”…

…Keynes had a name for this state of affairs, in which everyone wants to save but is unable to do so because no one is borrowing. He called it the paradox of thrift. It is a paradox because if everyone tries to save, the net result is that no one can save.

The phenomenon of right behaviour at the individual level leading to a bad result collectively is known as the “fallacy of composition.””
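The arithmetic of the deflationary spiral is simple enough to simulate. Here is a minimal sketch in Python – my own illustration, not from the book – of the $1,000 economy above, where everyone saves 10% of their income and no one borrows the savings:

```python
# My own illustration (not from the book) of Koo's deflationary spiral:
# each round, 10% of income is saved and left unborrowed, so only the
# remaining 90% circulates and becomes the next round's income.

def deflationary_spiral(income: float, savings_rate: float, rounds: int) -> list:
    path = [income]
    for _ in range(rounds):
        income *= (1 - savings_rate)  # only spending becomes someone's income
        path.append(income)
    return path

print(deflationary_spiral(1000, 0.10, 3))
# [1000, 900.0, 810.0, 729.0] -- the book rounds the last figure to $730
```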

Japan was the “first advanced country to experience a private-sector shift to debt minimization for balance sheet reasons since the Great Depression.” Japan’s real estate bubble burst in 1990, where real estate prices fell by 87%, “devastating the balance sheets of businesses and financial institutions across the country” and leading to a disappearance of borrowers:

“Demand for funds shrank rapidly when the bubble finally burst in 1990. Noting that the economy was also slowing sharply, the BOJ took interest rates down from 8 percent at the height of the bubble to almost zero by 1995. But demand for funds not only failed to recover but actually turned negative that year. Negative demand for funds means that Japan’s entire corporate sector was paying down debt at a time of zero interest rates, a world that no economics department in university or business school had ever envisioned. The borrowers not only stopped borrowing but began moving in the opposite direction by paying down debt and continued doing so for a full ten years, until around 2005…

…While in a textbook economy the household sector saves and the corporate sector borrows, both sectors became net-savers in post-1999 Japan, with the corporate sector becoming the largest saver in the country from 2002 onward in spite of zero interest rates.”

The Western economies experienced their own balance sheet recessions starting in 2008 with the bursting of housing bubbles. When the “bubbles collapsed on both sides of the Atlantic in 2008, the balance sheets of millions of households and many financial institutions were devastated.” Borrowers also disappeared; the “private sectors in virtually all major advanced nations have been increasing savings or paying down debt since 2008 in spite of record low interest rates.” For example, “the U.S. private sector saved 4.1 percent of GDP at near-zero interest rates in the four quarters through Q1 2017” and the “Eurozone’s overall private sector is saving 4.6 percent of GDP in spite of negative interest rates.”

Prior to the Japanese episode, the most recent example was the Great Depression:

“Until 2008, the economics profession considered a contractionary equilibrium (the $500 economy) brought about by a lack of borrowers to be an exceptionally rare occurrence – the only recent example was the Great Depression, which was triggered by the stock market crash in October 1929 and during which the U.S. lost 46 percent of nominal GNP. Although Japan fell into a similar predicament when its asset price bubble burst in 1990, its lessons were almost completely ignored by the economics profession until the Lehman shock of 2008.”

The appropriate macroeconomic policies

The third important idea from Koo’s book is that depending on which stage (pre-LTP, golden era, or pursued) and which state (Case 1, Case 2, Case 3, or Case 4) a country’s economy is in, there are different macroeconomic policies that would be appropriate.

First, it’s important to differentiate the two policies governments can wield, namely, monetary policy and fiscal policy:

“The government also has two types of policy, known as monetary and fiscal policy, that it can use to help stabilise the economy by matching private-sector savings and borrowings. The more frequently used is monetary policy, which involves raising or lowering interest rates to assist the matching process. Since an excess of borrowers is usually associated with a strong economy, a higher policy rate might be appropriate to prevent overheating and inflation. Similarly, a shortage of borrowers is usually associated with a weak economy, in which case a lower policy rate might be needed to avert a recession or deflation.

With fiscal policy, the government itself borrows and spends money on such projects as highways, airports, and other social infrastructure. While monetary policy decisions can be made very quickly by the central bank governor and his or her associates, fiscal policy tends to be very cumbersome in a peacetime democracy because elected representatives must come to an agreement on how much to borrow and where to spend the money. Because of the political nature of these decisions and the time it takes to implement them, most recent economic fluctuations were dealt with by central banks using monetary policy.”

Fiscal policy is more important than monetary policy when a country’s economy is in the pre-LTP stage, but the relative importance of the two types of policies switches once the economy enters the golden era; fiscal policy once again becomes the more important type of policy when the economy is in the pursued stage: 

“In the early phases of industrialisation, economic growth will rely heavily on manufacturing, exports, and the formation of capital etc. At this juncture, the government’s fiscal policies can play a huge role. Through fiscal policies, the government can gather scarce resources and invest them into basic infrastructure, resources, and export-related services etc. These help emerging countries to industrialise rapidly. Nearly every country that was in this stage of development saw their governments implement policies that promote active governmental support.

In the second stage of development, the twin engines of economic growth are rising wages and consumer spending. The economy is already in a state of full employment, so an increase in wages in any sector or field will inevitably lead to higher wages in other areas. Rising wages lead to higher spending and savings, and companies will use these savings to invest in productivity to improve output. In turn, profits will grow, leading to companies having an even stronger ability to raise wages to attract labour. All these combine to create a positive feedback loop of economic growth. Such growth comes mainly from internal sources in the domestic economy. Entrepreneurs, personal and household investing behaviour, and consumer spending patterns are the decisive players in promoting economic growth, since they are able to nimbly grasp business opportunities in the shifting economic landscape. Monetary policies are the most effective tool in this phase, compared to fiscal policies, for a few reasons. First, fiscal policies and private-sector investing both tap on a finite pool of savings. Second, conflicts could arise between the private sector’s investing activities and the government’s if poorly thought-out fiscal policies are implemented, leading to unnecessary competition for resources and opportunities. 

When an economy reaches the third stage of development (the stage where it’s being chased), fiscal policy regains its importance. At this stage, domestic savings are high, but the private sector is unwilling to invest domestically because the investing environment has deteriorated – domestic opportunities have dwindled, and investors can get better returns from investing overseas. The government should step in at this juncture, like what Japan did, and invest heavily in infrastructure, education, basic research and more. The returns are not high. But the government-led investments can make up for the lack of private-sector investments and the lack of consumer-spending because of excessive savings. In this way, the government can protect employment in society and prevent the formation of a vicious cycle of a decline in GDP. In contrast, monetary policy is largely ineffective in the third stage.”

It’s worth noting that an economy in the pre-LTP stage is likely to be in Case 4, where borrowers and lenders are both absent. Meanwhile, an economy in the golden era is likely to be in Case 1 (where both borrowers and lenders are present in abundance), or in Case 2 (where borrowers are present, but lenders are absent) during a run-of-the-mill recession, although a Case 1 golden-era economy can also quickly slip into Case 3 or Case 4. Once an economy is in the pursued stage, it is likely to be in Case 3 (where borrowers are absent but lenders are present) because of a lack of domestic investment opportunities or a balance sheet recession, or in Case 4 (where borrowers and lenders are both absent) because of a balance sheet recession.

When a country’s economy is in Case 1 or Case 2, monetary policy is more important:

“Case 1 requires a minimum of policy intervention – such as slight adjustments to interest rates – to match savers and borrowers and keep the economy going. Case 1, therefore, is associated with ordinary interest rates and can be considered the ideal textbook case.

The causes of Case 2 (insufficient lenders) can be traced to both macro and financial factors. The most common macro factor is when the central bank tightens monetary policy to rein in inflation. The tighter credit conditions that result certainly leave lenders less willing to lend. Once inflation is under control, however, the central bank typically eases monetary policy, and the economy returns to Case 1. A country may also be too poor or underdeveloped to save. If the paradox of thrift leaves a country too poor to save, the situation would be classified as Case 3 or 4 because it is actually attributable to a lack of borrowers.

Financial factors weighing on lenders may also push the economy into Case 2. One such factor is an excess of non-performing loans (NPLs) in the banking system, which depresses banks’ capital ratios and prevents them from lending. This is what is typically called a “credit crunch.” Over-regulation of financial institutions by the authorities can also lead to a credit crunch. When many banks encounter NPL problems at the same time, mutual distrust may lead not only to a credit crunch, but also to a dysfunctional interbank market, a state of affairs typically referred to as a “financial crisis.”…

…Non-developmental causes of a shortage of lenders all have well-known remedies… For example, the government can inject capital into the banks to restore their ability to lend, or it can relax regulations preventing financial institutions from serving as financial intermediaries.

In the case of a dysfunctional interbank market, the central bank can act as lender of last resort to ensure the clearing system continues to operate. It can also relax monetary policy. The conventional emphasis on monetary policy and concerns over the crowding-out effect of fiscal policy are justified in Cases 1 and 2 where there are borrowers but (for a variety of reasons in Case 2) not enough lenders.”

When a country’s economy is in Case 3 or Case 4, fiscal policy is more important because monetary policy does not work when borrowers disappear, although the appropriate type of fiscal policy can also differ:

“It should be noted that in the immediate aftermath of a bubble collapse, the economy is usually in Case 4, characterized by a disappearance of both lenders and borrowers. The lenders stop lending because they provided money to borrowers who participated in the bubble and are now facing technical or real insolvency. Banks themselves may be facing severe solvency problems when many of their borrowers are unable to service their debts…

…In a financial crisis, therefore, the central bank must act as lender of last resort to ensure that the settlement system continues to function…

Once the bubble bursts and households and businesses are left facing debt overhangs, no amount of monetary easing by the central bank will persuade them to resume borrowing until their balance sheets are fully repaired…

…When private-sector borrowers disappear and monetary policy stops working, the correct way to prevent a deflationary spiral is for the government to borrow and spend the excess savings of the private sector… 

…In other words, the government should mobilize fiscal policy and serve as borrower of last resort when the economy is in Case 3 or 4. 

If the government borrows and spends the $100 left unborrowed by the private sector, total expenditures will amount to $900 plus $100, or $1,000, and the economy will move on. This way, the private sector will have the income it needs to pay down debt or rebuild savings…

…It has been argued that the fiscal stimulus is essential when the economy is in Case 3 or 4. But there are two kinds of fiscal stimulus: government spending and tax cuts. If the economy is in a balance sheet recession, the correct form of fiscal stimulus is government spending. If the economy is suffering from a lack of domestic investment opportunities, the proper response would be a combination of tax cuts and deregulation to encourage innovation and risk taking… augmented by government spending…

…The close relationship observed prior to 2008 between central-bank-supplied liquidity, known as the monetary base, and growth in money supply and private-sector credit broke down completely after the bubbles burst and the private sector began minimizing debt. Here money supply refers to the sum of all bank accounts plus bills and coins circulating in the economy, and credit means the amount of money lent to the private sector by financial institutions…

…In this textbook world, a 10 percent increase in central bank liquidity would increase both the money supply and credit by 10 percent. This means there were enough borrowers in the private sector to borrow all the funds supplied by the central bank, and the economies were in Case 1…

…But after the bubble burst, which forced the private sector to minimize debt in order to repair its balance sheet, no amount of central bank accommodation was able to increase private-sector borrowings. The U.S. Federal Reserve, for example, expanded the monetary base by 349 percent after Lehman Brothers went under. But the money supply grew by only 76 percent and credit by only 27 percent. A 27 percent increase in private-sector credit over a period of nearly nine years represents an average annual increase of only 2.75 percent, which is next to nothing.”
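To make the arithmetic behind Koo’s cases concrete, here is a minimal sketch in Python of the $1,000-income example quoted above, assuming a hypothetical 10% saving rate: if no one borrows the savings, incomes spiral down ($1,000 → $900 → $810 → $729, which Koo rounds to $730); if the government borrows and spends them, total expenditure is preserved.

```python
# A toy model of Koo's example: households earn income, save 10%, and
# spend the rest. If the saved 10% is not borrowed and spent by anyone,
# next period's income equals this period's spending and shrinks by 10%.
# If the government acts as borrower of last resort, expenditure is whole.
def income_path(initial_income, saving_rate, periods, government_borrows):
    incomes = [initial_income]
    for _ in range(periods):
        spending = incomes[-1] * (1 - saving_rate)
        if government_borrows:
            # The government borrows and spends the unborrowed savings.
            spending += incomes[-1] * saving_rate
        incomes.append(spending)
    return incomes

print(income_path(1000, 0.10, 3, government_borrows=False))  # [1000, 900.0, 810.0, 729.0]
print(income_path(1000, 0.10, 3, government_borrows=True))   # [1000, 1000.0, 1000.0, 1000.0]
```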

Fiscal stimulus largely equates to government spending, which increases public debt. Koo suggests that (1) when an economy is in Case 3 or Case 4, rising and/or high public debt is not necessarily a problem, and (2) the limits of public debt should be determined by the bond market:

“Debt is simply the flip side of savings. Somebody has to be saving for debt to grow, and it is bound to increase as long as someone in the economy continues to save. Moreover, if someone is saving but debt levels fail to grow (i.e., if no one borrows and spends the saved funds), the economy will fall into the $1000 – $900 – $810 – $730 deflationary spiral….

…Growth in debt (excluding debt financed by the central bank) is merely a reflection of the fact that the private sector has continued to save. 

If debt is growing faster than actual savings, it simply means there is double counting somewhere, i.e., somebody has borrowed the money but instead of using it himself, he lent it to someone else, possibly with a different maturity structure (maturity transfer) or interest rates (fixed to floating or vice versa). With the prevalence of carry trades and structured financial products involving multiple counterparties, debt numbers may grow rapidly on the surface, but the actual debt can never be greater than the actual savings. 

Furthermore, the level of debt anyone can carry also depends on the level of interest rates and the quality of projects financed with the debt. If the projects earn enough to pay back both borrowing costs and principal, then no one should care about the debt load, no matter how large, because it does not represent a future burden on anyone. Similarly, no matter how great the national debt, if the funds are invested in public works projects capable of generating returns high enough to pay back both interest and principal, the projects will be self-financing and will not increase the burden on future taxpayers…

…Whether or not fiscal policy has reached its limits should be decided by the bond market, not by some economist using arbitrarily chosen criteria. 

During the golden era, when the private sector has strong demand for funds to finance productivity- and capacity-enhancing investments, fiscal stimulus will have a minimal if not negative impact on the economy because of the crowding-out effect. The bond market during this era correctly assigns very low prices (high yields) to government bonds, indicating that such stimulus is not welcome.

During the pursued era or during balance sheet recessions, however, private-sector demand for funds is minimal if not negative. At such times, fiscal stimulus is not only essential, but it has maximum positive impact on the economy because there is no danger of crowding out. During this period, the bond market correctly sets very high prices (low yields) for government bonds, indicating they are welcome…

…Ultra-low bond yields in economies in Cases 3 and 4 are also a signal to the government to look for public works projects capable of producing a social rate of return in excess of those rates. If such projects can be found, fiscal stimulus centered on them will ultimately place no added burden on future taxpayers.” 

The experience of the US, the UK, Japan, and Europe in the aftermath of the housing bubble bursting in 2008, which thrust their economies into balance sheet recessions, is instructive on the importance of fiscal policy in combating such recessions:

“In November 2008, just two months after Lehman Brothers went under, the G20 countries agreed at an emergency meeting in Washington to implement fiscal stimulus. That decision kept the world economy from falling into a deflationary spiral. But in 2010, the fiscal orthodoxy of those who did not understand balance sheet recessions reasserted itself at the Toronto G20 meeting, where members agreed to cut deficits in half even though private-sector balance sheets were nowhere near a healthy state. The result was a sudden loss of forward momentum for the global economy that prolonged the recession unnecessarily in many parts of the world. After 2010, those countries that understood the danger of balance sheet recessions did well, while those that did not fell by the wayside…

…Bernanke and Yellen both understood this, and they used the expression “fiscal cliff” to warn Congress about the danger posed by fiscal consolidation, which the Republicans and many orthodox economists supported. The extent of Bernanke’s concerns about fiscal consolidation can be gleaned from a press conference on April 25, 2012, when he was asked what the Fed would do if Congress pushed the U.S. economy off the fiscal cliff. He responded, “There is . . . absolutely no chance that the Federal Reserve could or would have any ability whatsoever to offset that effect on the economy.” Bernanke clearly understood that the Fed’s monetary policy not only cannot offset the negative impact of fiscal consolidation, but would also lose its effectiveness if the government refused to act as borrower of last resort.

Even though the U.S. came frighteningly close to falling off the fiscal cliff on a number of occasions, including government shutdowns, sequesters, and debt‐ceiling debates, it ultimately managed to avoid that outcome thanks to the efforts of officials at the Fed and the Obama administration. And that is why the U.S. economy is doing so much better than Europe, where virtually every country did fall off the fiscal cliff…

…The warnings about the fiscal cliff set the Fed apart from its counterparts in Japan, the UK, and Europe. In the UK, then-BOE Governor Mervyn King publicly supported David Cameron’s rather draconian austerity measures, arguing that his bank’s QE policy would provide necessary support for the British economy. At the time, the UK private sector was saving a full 9 percent of GDP when interest rates were at their lowest levels in 300 years. That judgement led to the disastrous performance of the UK economy during the first two years of the Cameron administration…

…BOJ Governor Haruhiko Kuroda also argued strongly in favor of hiking the consumption tax rate, believing a Japanese economy supported by his quantitative easing regime would be strong enough to withstand the shock of fiscal consolidation. This was in spite of the fact that the Japanese private sector was saving 6.2 percent of GDP at a time of zero interest rates. The tax hike, which was carried out in April 2014, threw the Japanese economy back into recession…

…ECB President Mario Draghi has admonished member governments to meet the austerity target imposed by the Stability and Growth Pact at every press conference, even though his own inflation forecasts have been revised downwards almost every time they are updated. He seems to be completely oblivious to the danger posed by fiscal austerity when the Eurozone private sector has been saving an average of 5 percent of GDP since 2008 despite zero or even negative interest rates.” 

Koo also noted that when Japan’s real estate bubble burst in 1990, the government was “quick to administer fiscal stimulus to stop the implosion” and that “the economy responded positively each time fiscal stimulus was implemented, but lost momentum each time the stimulus was removed.” The Japanese government was under enormous pressure to cut fiscal stimulus in the aftermath of the bubble, but it did not completely cave, and the Japanese economy fared better than it otherwise would have:

“The orthodox fiscal hawks who dominated the press and academia also tried to stop fiscal stimulus at every step of the way, arguing that large deficits would soon lead to skyrocketing interest rates and a fiscal crisis. These hawks forced politicians to cut stimulus as soon as the economy showed signs of life, prompting another downturn. The resulting on-again, off-again fiscal stimulus did not imbue the public with confidence in the government’s handling of the economy. Fortunately, the LDP [Liberal Democratic Party] had enough pork-barrel politicians to keep a minimum level of stimulus needed in place, and as a result, Japanese GDP never once fell below its bubble peak. Nor did the Japanese unemployment rate ever exceed 5.5 percent.

That was a fantastic achievement in view of the fact that the Japanese private sector was saving an average of 8 percent of GDP from 1995 to 2005, and the Japanese lost three times as much wealth (as a share of GDP) as Americans did during the Great Depression, when nominal GNP fell 46 percent.”

The reasons for the US backlash against globalisation & the conflict between free trade and free capital

The fourth and fifth important ideas from Koo’s book are connected; they are, respectively, (a) the possible reasons behind the backlash against globalisation seen from the current US government under the Trump administration, and (b) the possible conflict between free trade and the free movement of capital. Again, Koo’s book was published in 2018, so it was discussing Donald Trump’s first term as President. But the ideas appear to me to be very applicable to today’s context.

Koo advanced that the Western economies’ entrance into the third stage of economic development – the pursued stage – is a reason for the backlash against globalisation:

“One reason for the frustration and social backlash witnessed in the advanced countries is that these countries are experiencing the post-Lewis Turning Point (LTP) pursued phase for the first time in history… 

…Many were caught off guard, having assumed that the golden era that they enjoyed into the 1970s would last forever. It comes as no surprise that those who have seen no improvement in their living standards for many years but still remember the golden age, when everyone was hopeful and living standards were steadily improving, would long for the “good old days.”…

…In the U.S. too, the Trump phenomenon, which has depended largely on the support of blue-collar white males, suggests that people are longing for the life they enjoyed during the golden era, when U.S. manufacturing was the undisputed leader of the world.

Participants in this social backlash in many of the pursued economies view globalization as the source of all evil and are trying to slow down the free movement of both goods and people. Donald Trump and others like him are openly hostile toward immigration while arguing in favour of protectionism and the scuttling of agreements such as the TPP that seek even freer trade.”

Koo described the mainstream view that free trade creates overall gains for trading partners, but cautioned that the view rests on a flawed assumption – that imports and exports will remain largely balanced as free trade grows – and that this faulty assumption also contributed to the backlash against globalisation:

“Economists have traditionally argued that while free trade creates both winners and losers within the same country, it offers significant overall welfare gains for both trading partners because the gains of the winners are greater than the losses of the losers. In other words, there should be more winners than losers from free trade…

…This conclusion, however, is based on one key assumption: that imports and exports will be largely balanced as free trade expands. When – as in the U.S. during the past 30 years – that assumption does not hold and a nation continues to run massive trade deficits, free trade may produce far more losers than theory would suggest. With the U.S. running a trade deficit of almost [US]$740bn a year, or about four percent of GDP, there were apparently enough losers from free trade to put the protectionist Donald Trump into the White House. The fact that Hillary Clinton was also nominated to be the Democratic Party’s candidate for president in the arena full of banners saying “No to TPP” indicates that the social backlash has grown very large indeed.”

Koo clarified that free trade is important and has its benefits, but the way free trade has taken place since World War II is hugely problematic because of (1) the way free trade is structured, and (2) the free movement of capital that is happening in parallel:

“Outright protectionism is likely to benefit the working class in the short term only. In the long run, history has repeatedly shown that protected industries always fall behind on competitiveness and technological advances, which means the economy will stagnate and be overtaken by more dynamic competitors…

…This does not mean that free trade as practiced since 1945 and globalism in general have no problems. They both have major issues, but these can be addressed if properly understood. A correct understanding is important here because even though increasing imports is the most visible feature of an economy in a pursued phase, trade deficits and the plight of workers displaced by imports have been made far worse by the free movement of capital since 1980…

…Once the U.S. opened up its massive markets to the world after 1945 and the GATT-based [General Agreement on Tariffs and Trade] system of free trade was adopted, nations belonging to this system found that it was possible to achieve economic growth without territorial expansion as long as they could produce competitive products. The first countries to recognize this were the vanquished nations of Japan and West Germany, which then decided to devote their best people to developing globally competitive products…

…By the end of the 1970s, however, the West began losing its ability to compete with Japanese firms as the latter overtook the U.S. and European rivals in many sectors, including home appliances, shipbuilding, steel, and automobiles. This led to stagnant income growth and disappearing job opportunities for Western workers.

When Japan joined the GATT in 1963, it still had many tariff and non-tariff trade barriers. In other words, while Western nations had been steadily reducing their own trade barriers, they were suddenly confronted with an upstart from Asia that still had many barriers in place. But as long as Japan’s maximum tariff rates were falling as negotiated and the remaining barriers applied to all GATT members equally, GATT members who had opened their markets earlier could do little under the agreement’s framework to force Japan to open its market (the same problem resurfaced when China joined the WTO 38 years later)…

…When U.S.-Japan trade frictions began to flare up in the 1970s, however, exchange rates still responded correctly to trade imbalances. In other words, when Japanese exports to the U.S. outstripped U.S. exports to Japan, there were more Japanese exporters selling dollars and buying yen to pay employees and suppliers in Japan than there were U.S. exporters selling yen and buying dollars to pay employees and suppliers in the U.S.

Since foreign exchange market participants in those days consisted mostly of exporters and importers, excess demand for yen versus the dollar caused the yen to strengthen against the dollar. That, in turn, made Japanese products less competitive in the U.S. As a result, trade frictions between the U.S. and Japan were prevented from growing any worse than they did because the dollar fell from ¥360 in mid-1971 to less than ¥200 in 1978 in response to widening Japanese trade surpluses with the U.S.

But this arrangement, in which the foreign exchange market acted as a trade equalizer, broke down with financial liberalization, which began in the U.S. with the Monetary Control Act of 1980…

…These changes prompted huge capital outflows from Japan as local investors sought higher-yielding U.S. Treasury securities. Since Japanese investors needed dollars to buy Treasuries, their demand for dollars in the currency market outstripped the supply of dollars from Japanese exporters and pushed the yen back to ¥280 against the dollar. This rekindled the two countries’ trade problems, because few U.S. manufacturers were competitive vis-a-vis the Japanese at that exchange rate.

When calls for protectionism engulfed Washington, President Ronald Reagan, a strong supporter of free trade, responded with the September 1985 Plaza Accord, which took the dollar from ¥240 in 1985 down to ¥120 just two years later. The dollar then rose to ¥160 in 1990 but subsequently fell as low as ¥79.75 in April 1995, largely ending the trade-related hostilities that had plagued the two nations’ relationship for nearly two decades…

…Capital transactions made possible by the liberalization of cross-border capital flows also began to dominate the currency market. Consequently, capital inflows to the U.S. have led to continued strength of the dollar – and stagnant or declining incomes for U.S. workers – even as U.S. trade deficits continue to mount. In other words, the foreign exchange market lost its traditional function as an automatic stabilizer for trade balances, and the resulting demands for protectionism in deficit countries are now at least as great as they were before the Plaza Accord in 1985.”

Specifically, with regard to the belligerent relationship the US has with China today, Koo suggested that it arose from flaws in the free trade framework of the World Trade Organisation (WTO):

“…a key contradiction in the WTO framework: the fact that China levies high tariffs on imports from all WTO nations is no reason why the U.S.—which runs a huge trade deficit with China—should have to settle for lower tariffs on imports from China.

This problem arose because the developed‐world members of the WTO had already lowered tariffs among themselves before developing countries such as China, with their significantly lower wages and higher tariffs, were allowed to join. When they joined, developing countries could argue that they were still underdeveloped and needed higher tariffs to allow infant domestic industries to grow and to keep their trade deficits under control. Although that was a valid argument for developing countries at the time and their maximum tariff rates have come down as negotiated, the effective rates remained higher than those of advanced countries long after those countries became competitive enough to run trade surpluses with the developed world…

…Because the WTO system is based on the principle of multilateralism, with rules applied equally to all member nations, this framework provides no way of addressing bilateral imbalances between the U.S. and China. It is therefore not surprising that the Trump administration has decided to pursue bilateral, not multilateral, trade negotiations.

In retrospect, what the WTO should have done is to impose a macroeconomic condition stating that new members must lower their tariff and non‐tariff barriers to advanced‐country norms after they start to run significant trade surpluses with the latter. Here the term “significant” might be defined to mean running a trade surplus averaging more than, say, two percent of GDP for three years. If a country fails to reduce its tariffs to the advanced‐nation norm within say five years after reaching that threshold, the rest of the WTO community should then be allowed to raise tariffs on products from that country to the same level that country charges on its imports. The point is that if the country is competitive enough to run trade surpluses vis‐à‐vis advanced countries, then it should be treated as one.

If this requirement had existed when Japan joined the GATT in 1963 or when China joined the WTO in 2001, subsequent trade frictions would have been far more manageable. Under the above rules, Japan would have had to lower its tariffs starting in 1976, and China would have had to lower its tariffs from the day it joined the WTO in 2001! Such a requirement would also have enhanced the WTO’s reputation as an organization that supports not only free trade but also fair trade.”
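Koo’s proposed condition is mechanical enough to be written down as a simple rule. Below is a minimal sketch that uses his illustrative thresholds (a surplus averaging more than 2% of GDP over three years, then a five-year grace period); the function name and data layout are my own, and the sample numbers are hypothetical:

```python
# A sketch of Koo's proposed WTO condition: once a member's trade surplus
# with advanced economies averages more than 2% of GDP over three years,
# it has five years to cut tariffs to advanced-country norms; after that,
# other members may mirror its tariff rates. Thresholds per Koo's example.
def tariff_deadline(surpluses_pct_gdp, threshold=2.0, window=3, grace_years=5):
    """surpluses_pct_gdp: list of (year, surplus as % of GDP), in year order."""
    for i in range(window - 1, len(surpluses_pct_gdp)):
        recent = [s for _, s in surpluses_pct_gdp[i - window + 1 : i + 1]]
        if sum(recent) / window > threshold:
            trigger_year = surpluses_pct_gdp[i][0]
            return trigger_year + grace_years  # must reach the norm by this year
    return None  # condition never triggered

# Hypothetical data: surpluses of 1%, 2.5%, 3%, and 3.5% of GDP
print(tariff_deadline([(1971, 1.0), (1972, 2.5), (1973, 3.0), (1974, 3.5)]))  # 1978
```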

Koo also noted that the term globalisation actually has two components, namely, free trade and free movement of capital, and that the former is important for countries to continue maintaining because of the benefits it brings, while the latter system needs improvement:

“The term “globalization” as used today actually has two components: free trade and the free movement of capital. 

Of the two, it was argued in previous chapters that the system of free trade introduced by the U.S. after 1947 led to unprecedented global peace and prosperity. Although free trade produces winners and losers and providing a helping hand to the losers is a major issue in the pursued economies, the degree of improvement in real living standards since 1945 has been nothing short of spectacular in both pursued and pursuing countries…

…The same cannot be said for the free movement of capital, the second component of globalization. Manufacturing workers and executives in the pursued economies feel so insecure not only because imports are surging but also because exchange rates driven by portfolio capital flows of questionable value are no longer acting to equilibrate trade.

To better understand this problem, let us take a step back and consider a world in which only two countries – the U.S. and Japan – are engaged in trade, and each country buys $100 in goods from the other. The next year, both countries will have the $100 earned from exporting to its trading partner, enabling it to buy another $100 in goods from that country. The two nations’ trade accounts are in balance, and the trade relationship is sustainable. 

But if the U.S. buys $100 from Japan, and Japan only buys $50 from the U.S., Japan will have $100 to use the next year, but the U.S. will have only $50, and Japanese exports to the U.S. will fall to $50 as a result. Earning only $50 from the U.S., the Japanese may have to reduce their purchases from the U.S. the following year. This sort of negative feedback loop may push trade into a “contractionary equilibrium.”

When exchange rates are added to the equation, the Japanese manufacturer that exported $100 in goods to the U.S. must sell those dollars on the currency market to buy the yen it needs to pay domestic suppliers and employees. However, the only entity that will sell it those yen is the U.S. manufacturer that exported $50 in goods to Japan.

With $100 of dollar selling and only $50 worth of yen selling, the dollar’s value versus the yen will be cut in half. This is how a surplus country’s exchange rate is pushed higher to equilibrate trade…

…If Japanese life insurers, pension funds, or other investors who need dollars to invest in the U.S. Treasury bonds sold yen and bought the remaining $50 the Japanese exporters wanted to sell, there would then be a total of $100 in dollar-buying demand for the $100 the Japanese exporter seeks to sell, and exchange rates would not change. If Japanese investors continued buying $50-worth of dollar investments each year, exchange rates would not change, in spite of the sustained $50 trade imbalances. 

Although the above arrangement may continue for a long time, the Japanese investors would effectively be lending money to the U.S. This means that at some point the money would have to be paid back. 

Unless the U.S. sells goods to Japan, there will be no U.S. exporters to provide the Japanese investors with the yen they need when they sell their U.S. Treasury bonds to pay yen obligations to Japanese pensioners and life insurance policyholders. Unless Japan is willing to continue lending to the U.S. in perpetuity, therefore, the underlying 100:50 trade imbalance will manifest itself when the lending stops.

At that point, the value of the yen will increase, resulting in large foreign exchange losses for Japanese pensioners and life insurance policyholders. Hence this scenario is also unsustainable in the long run. The U.S., too, would prefer a healthy relationship in which it sells goods to Japan and uses the proceeds to purchase goods from Japan to an unhealthy one in which it funds its purchases via constant borrowings…

…When financial markets are liberalized, capital moves to equalize the expected return in all markets. To the extent that countries with strong domestic demand tend to have higher interest rates than those with weak demand, money will flow from the latter to the former. Such flows will strengthen the currency of the former and weaken the currency of the latter. They may also add to already strong investment activity in the former by keeping interest rates lower than they would be otherwise, while depressing already weak investment activity in the latter by pushing interest rates higher than they would be otherwise.

To the extent that countries with strong domestic demand tend to run trade deficits and those with weak domestic demand run trade surpluses, these capital flows will exacerbate trade imbalances between the two by pushing the deficit country’s currency higher and pushing the surplus country’s currency lower. In other words, these flows are not only not in the best interests of individual countries, but are also detrimental to the attainment of balanced trade between countries. The widening imbalances then increase calls for protectionism in deficit countries.”
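Koo’s 100:50 example reduces to a simple supply-and-demand calculation in the currency market. Here is a minimal sketch, assuming trade and portfolio flows are the only sources of dollar supply and demand (the function and figures are illustrative):

```python
# Koo's 100:50 example: the yen/dollar rate clears where dollar selling
# (by Japanese exporters converting their receipts into yen) meets dollar
# buying (by US exporters converting yen, plus Japanese investors buying
# dollar assets such as Treasuries).
def dollar_move(japan_exports_usd, us_exports_usd, investor_dollar_buying_usd):
    dollar_supply = japan_exports_usd                 # exporters selling dollars for yen
    dollar_demand = us_exports_usd + investor_dollar_buying_usd
    return dollar_demand / dollar_supply              # multiplier on the dollar's yen value

print(dollar_move(100, 50, 0))   # 0.5 -> the dollar halves, re-equilibrating trade
print(dollar_move(100, 50, 50))  # 1.0 -> capital flows freeze the imbalance in place
```

With no investor flows the dollar halves and restores the deficit country’s competitiveness; with $50 of annual portfolio buying the imbalance is frozen in place, which is exactly the broken-stabilizer problem Koo describes.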

Prior to the liberalization of capital flows in the 1980s, “trade was free, but capital flows were regulated, so the foreign exchange market was driven largely by trade-related transactions.” This also meant that currency transactions could play the role they were meant to in terms of driving balanced trade:

“The currencies of trade surplus nations therefore tended to strengthen, and those of trade deficit nations to weaken. That encouraged surplus countries to import more and deficit countries to export more. In other words, the currency market acted as a natural stabilizer of trade between nations.”

As a sign of how free movement of capital has distorted the currency market, Koo noted that when the book was published “only about five percent of foreign exchange transactions involve trade, while the remaining 95 percent are attributable to capital flows.”

The problems with China’s economy

The sixth and last important idea from Koo’s book is a discussion of the factors that affect China’s economic growth, and why the country’s growth rate has slowed in recent years from the scorching pace seen in the 1990s and 2000s. One issue, described by Koo, is that China no longer has a demographic tailwind to drive rapid economic growth, and is now facing a “middle-income trap” after passing the LTP around 2012:

“China actually passed the LTP around 2012 and is now experiencing sharp increases in wages. This means the country is now in its golden era, or post-LTP maturing phase. However, because the Chinese government is wary of strikes, labor disputes, or other public disturbances of any kind, it is trying to pre-empt such conflict by administering significant wage increases each year, with businesses required to raise wages under directives issued by local governments. In some regions, wages had risen at double-digit rates in a bid to prevent labor disputes. It remains to be seen whether such top-down actions can substitute for a process in which employers and employees learn through confrontation what can reasonably be expected from the other party.

Just as China was passing the LTP, its working-age population—defined as those aged 15 to 59—started shrinking in 2012. From a demographic perspective, it is highly unusual for the entire labor supply curve to begin shifting to the left just as a country reaches the LTP. Japan, Taiwan, and South Korea all enjoyed about 30 years of workforce growth after reaching their LTPs. The huge demographic bonus China enjoyed until 2012 is not only exhausted, but has now reversed… That means China will not be able to maintain the rapid pace of economic growth seen in the past, and in fact growth has already slowed sharply. 

Higher wages in China are now leading both Chinese and foreign businesses to move factories to lower-wage countries such as Vietnam and Bangladesh, prompting fears that China will become stuck in the so-called “middle-income trap”. This trap arises from the fact that once a country loses its distinction as the lowest-cost producer, many factories may leave for lower-cost destinations, resulting in less investment and less growth. In effect, the laws of globalization and free trade that benefited China when it was the lowest-cost producer are now posing real challenges for the country.”

Koo proposed ideas to reinvigorate China’s growth, such as investing in productivity-enhancing measures for domestic workers. 

Another important factor affecting China’s economic growth is the type of policy the government uses to steer the economy. Since the country passed the LTP more than a decade ago and is in its golden era, fiscal policy – the government directing the economy – is no longer the most effective tool, but it is unclear whether the government is willing to relinquish control. To complicate matters, there are early signs that China may already be in the pursued stage, in which case fiscal policy would become important again. It remains to be seen what the most appropriate way for the government to lead China’s economy will be. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

What The USA’s Largest Bank Thinks About The State Of The Country’s Economy In Q1 2025

Insights from JPMorgan Chase’s management on the health of American consumers and businesses in the first quarter of 2025.

JPMorgan Chase (NYSE: JPM) is currently the largest bank in the USA by total assets. Because of this status, JPMorgan is naturally able to feel the pulse of the country’s economy. The bank’s latest earnings conference call – for the first quarter of 2025 – was held last week and contained useful insights on the state of American consumers and businesses. The bottom line is this: the US economy is facing turbulence, with a multitude of problems, but consumers and businesses still remain financially healthy.

What’s shown between the two horizontal lines below are quotes from JPMorgan’s management team that I picked up from the call.


1. The US economy is facing turbulence, with problems including tariffs, trade wars, inflation, and high asset prices

The economy is facing considerable turbulence (including geopolitics), with the potential positives of tax reform and deregulation and the potential negatives of tariffs and “trade wars,” ongoing sticky inflation, high fiscal deficits and still rather high asset prices and volatility. As always, we hope for the best but prepare the Firm for a wide range of scenarios.

2. Net charge-offs for the whole bank (effectively bad loans that JPMorgan can’t recover) rose to US$2.3 billion from US$1.9 billion a year ago; management increased the probability weightings for downside scenarios in its CECL (current expected credit losses) framework for credit allowances in 2025 Q1 because of higher risks and uncertainties from the environment seen in the last few weeks; the increase in allowance is not driven by deterioration in credit performance; Consumer & Community Banking’s net charge-offs of US$2.2 billion rose US$275 million from a year ago

Credit costs were $3.3 billion with net charge-offs of $2.3 billion and a net reserve build of $973 million…

…With this quarter’s reserve build, the firm’s total allowance for credit losses is $27.6 billion. Let’s take a second to add a little bit of context to our thinking surrounding this number in light of the unique environment of the last several weeks. Our first quarter allowance is anchored on the relatively benign central case economic outlook, which was in effect at the end of the quarter. But in light of the significantly elevated risks and uncertainties at the time, we increased the probability weightings associated with the downside scenarios in our CECL framework. As a result, the weighted average unemployment rate embedded in our allowance is 5.8%, up from 5.5% last quarter, driving the $973 million increase in the allowance. So with that in mind, the consumer build of $441 million was driven by changes in the weighted average macroeconomic outlook. The wholesale build of $549 million was predominantly driven by credit quality changes on certain exposures and net lending activity, as well as changes in the outlook…

…The increase in the allowance is not to any meaningful degree driven by deterioration in the actual credit performance in the portfolio which remains largely in line with expectations…

…Credit costs were $2.6 billion, reflecting net charge-offs of $2.2 billion, up $275 million year on year, predominantly driven by the seasoning of recent vintages in card, with delinquencies and losses in line with expectations.
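The scenario-weighting mechanics management describes can be sketched as follows. The scenario names, probabilities, and unemployment rates below are hypothetical, not JPMorgan’s actual inputs; the point is that shifting weight toward downside scenarios raises the weighted-average unemployment rate embedded in the allowance even if no individual scenario changes:

```python
# CECL allowances are anchored on a probability-weighted average of macro
# scenarios. Increasing the weights on downside scenarios raises the
# weighted-average unemployment rate -- and hence the allowance -- even
# when every individual scenario is unchanged. All numbers are hypothetical.
scenarios = {
    "central":         {"unemployment": 4.8, "weight": 0.50},
    "mild_downside":   {"unemployment": 6.0, "weight": 0.30},
    "severe_downside": {"unemployment": 8.0, "weight": 0.20},
}

weighted_unemployment = sum(s["unemployment"] * s["weight"] for s in scenarios.values())
print(f"{weighted_unemployment:.1f}%")  # 5.8% with these illustrative weights
```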

3. Management is seeing recent downtrends in consumer and small business sentiment, but consumers and small businesses remain financially healthy; management is seeing consumers front-load spending ahead of tariffs; management is seeing small businesses face more challenges than large businesses because of tariffs-related uncertainty; management is seeing a drop in travel-spending among consumers, but it’s not indicative of broader patterns; management is seeing relatively weaker spending from lower-income consumers, but they are not in distress 

Consumers and small businesses remain financially healthy despite the recent downtrends in consumer and small business sentiment. Based on our data, spend, cash buffers, payment to income ratios, and credit utilization are all in line with our expectations…

…On the consumer side, the thing to check is the spending. And to be honest, the main thing that we see there would appear to be a certain amount of front-loading of spending ahead of people expecting price increases from tariffs…

…In terms of our corporate clients, obviously, they’ve been reacting to the changes in tariff policy… Across the size of the clients, I think smaller clients, small business, and smaller corporates are probably a little bit more challenged. I think the larger corporates have a bit more experience dealing with these things and more resources to manage…

…We obviously saw the airlines discuss what they are seeing as headwinds for them, specifically in airline travel. And we’re seeing that too through the card spend. It’s not obvious to us that that’s necessarily an indicator for broader patterns…

…When we look at our card data and also our cash buffers and people’s checking accounts, of course, it is true that it is relatively weaker in the lower income segment. But when you take a step back and you ask, are we seeing signs of distress in the lower income segment? The answer is no. So sure, at the margin, cash buffers are lower, and you see some rotation of spend and spending is a little bit weaker than it was in the peak spending moments. But actually, some of the increases in spending that we’re seeing in April are actually coming from the lower income segment. So no evidence of distress, I would say.

4. JPMorgan’s credit card outstanding loans were up double-digits year-on-year

Card outstandings were up 10% due to strong account acquisition.

5. Auto originations were up year-on-year

In auto, originations were $10.7 billion, up 20%, driven by higher lease volume.

6. JPMorgan’s investment banking fees had good growth in 2025 Q1, with growth in debt underwriting fees but a decline in equity underwriting fees, signalling higher appetite for refinancing activity from companies; management is seeing companies adopting a wait-and-see attitude when it comes to capital markets activities because of tariffs-related uncertainty in the current environment

IB fees were up 12% year on year, and we ranked number one with wallet share of 9%. In advisory, fees were up 16%, benefiting from the closing of deals announced in 2024. Debt underwriting fees were up 16%, primarily driven by elevated refinancing activity, particularly in leveraged finance. And equity underwriting fees were down 9% year on year, reflecting challenging market conditions. In light of market conditions, we are adopting a cautious stance on the investment banking outlook. While client engagement and dialogue is quite elevated, both the conversion of the existing pipeline and origination of new activity will require a reduction in the current levels of uncertainty…

…In terms of our corporate clients, obviously, they’ve been reacting to the changes in tariff policy. And at the margin, that shifts their focus away from more strategic priorities with obvious implications for the investment banking pipeline outlook towards more short-term work, optimizing supply chains, and trying to figure out how they’re going to respond to the current environment. So as a result, I think we would characterize what we’re hearing from our corporate clients as a little bit of a wait-and-see attitude.

7. Management expects credit card net charge-offs for 2025 to be in line with previous guidance because of the mechanical way credit card charge-offs work, and not because management thinks credit card net charge-offs will really be healthy as the year progresses

On credit, we expect the card net charge-off rate to be in line with our previous guidance of approximately 3.6%…

…[Question] No change to the full year credit card net charge-off forecast. How do we square that with the rising recession risk?

[Answer] We should not have given you that forecast. We don’t know what the number is going to be. I would say that’s a short-term number. And based on what’s happening today is there’s a wide range of potential outcomes… There are some mechanical elements to the way card charge-off works. That means that it’s pretty baked, pretty far out in time, a couple of quarters… It just doesn’t necessarily tell you that much about what might actually happen through the end of the year, even if unemployment were to increase significantly, it probably wouldn’t flow through the charge-offs until later.

8. Management is now incorporating 3 interest rate cuts for 2025, up from the previous expectation of 1 cut

If you remember last quarter we said that we had one cut in the curve. I think, latest curve has something like three cuts.

9. JPMorgan’s economists think there’s a 50% chance of a recession

What I would say is our excellent economist, Michael Feroli – I called him this morning, specifically to ask him how they’re looking at their forecast today. And they think it’s about 50-50 for a recession. So I’ll just refer to that.

10. Management thinks inflation in the US will be sticky

We have sticky inflation. We had that before. I personally have told you I don’t think that’s going to go away, and that relates to that.

11. Management thinks the US dollar will remain the reserve currency globally

Obviously, the US dollar still is the reserve currency, and that isn’t going to change though some people may feel slightly differently about it.

12. Management thinks that the current situation is different from past cycles

[Question] You’ve been through many cycles. And I think we’re all interested in understanding how you think this next cycle is likely to progress. And I’m wondering, is there anything that you’ve seen in the past that looks like this or that you would suggest if any slowdown coming forward, is it more likely to be similar to what kind of prior cycle you’ve seen?

[Answer] This is different, okay? This is different. This is the global economy. And please read my chairman’s letter. The most important thing to me is the Western world stays together economically when we get through all this and militarily to keep the world safe and free for democracy. That is the most important thing… We obviously have to follow the law of the land, but it’s a significant change we’ve never seen in our lives.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

Lessons From Tariff History

Arguably the biggest event in the financial markets this year so far has been the Reciprocal Tariff Policy introduced by the US government, under the Trump administration, last week. The policy is based on calculations that appear haphazardly made up. Regardless of the intellectual legitimacy of the Reciprocal Tariff Policy, if the new tariff rates hold, they will represent the highest weighted average tariff rate implemented by the US in more than 100 years, according to investment bank Evercore.

In uncertain times like these, it is useful to learn from history. There’s a presentation from the FDRA, a US-based trade organisation representing the footwear industry, that looks back at past episodes of major increases in US tariff rates going back nearly 250 years. The presentation is fantastic – I encourage anyone reading this article to look at the whole deck – and I want to document my takeaways for easy reference in the future. My notes:

  • There have been five instances since 1776 – before the recent Reciprocal Tariff Policy – where the US raised tariffs significantly; they happened in 1828, 1890, 1922, 1930, and 2018. The 1930 episode is commonly known as the Smoot-Hawley tariff era.
  • In past episodes of higher tariffs, US consumers had to pay higher prices each time.
  • When the US raised tariffs in the past, its trading partners always introduced retaliatory trade-related actions against the US.
  • Each time, the political party that pushed for the higher tariffs was voted out in the next election cycle.
  • The 1930s Smoot-Hawley tariffs occurred when the US was running a major trade surplus, unlike today, when the US has a substantial trade deficit. In fact, the current trade deficit is a major driving force for the Trump administration introducing the Reciprocal Tariff Policy.

Although the lessons from history are useful, it’s also important to note that they can at best be used to form expectations and not predictions. It’s anybody’s guess as to what happens next with the US’s trade policies and thus the US economy.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

What Warren Buffett Thinks About Tariffs

More than 20 years ago, Warren Buffett shared his thoughts on tariffs and their effects on the US economy.

Earlier this week, the US government, under the Trump administration, announced a Reciprocal Tariff policy. The policy imposes a minimum tariff of 10% on all of the US’s trading partners, with higher rates – some significantly so – for many countries. For example, China’s rate is 34%, Taiwan’s is 32%, India’s is 27%, and the European Union’s is 20%. Officially, the reciprocal tariff rates are half of what the Trump administration says are “tariffs charged to the U.S.A, including currency manipulation and trade barriers.” In reality, the formula used was laughably simple and has nothing to do with trade barriers or tariffs charged to the US:

A country’s reciprocal tariff rate = (US’s trade deficit with the country) divided by (US’s imports from the country) divided by (2)

If the formula spits out a number below 10%, a reciprocal tariff rate of 10% is applied instead. 
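Expressed in code, the formula (with its 10% floor) really is that simple. The sketch below is mine; the US-China figures are approximate 2024 goods-trade numbers, used only for illustration:

```python
# Reciprocal tariff rate = max(deficit / imports / 2, 10%).
def reciprocal_tariff(trade_deficit, imports, floor=0.10):
    return max(trade_deficit / imports / 2, floor)

# Approximate 2024 US goods-trade figures with China (illustrative):
# a deficit of ~US$295bn on imports of ~US$439bn.
print(f"{reciprocal_tariff(295, 439):.0%}")  # ~34%, matching the announced China rate
# Any country where the formula yields less than 10% gets the 10% floor:
print(f"{reciprocal_tariff(2, 100):.0%}")    # 10%
```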

The sweeping tariffs have created widespread fear in the financial markets. So this is an appropriate time to revisit Warren Buffett‘s November 2003 article titled “America’s Growing Trade Deficit Is Selling the Nation Out From Under Us. Here’s a Way to Fix the Problem—And We Need to Do It Now” where he laid out his thoughts on tariffs. From this point on, all content in italics is quoted directly from Buffett’s article.

The danger of sustained trade deficits

The first part of the article discusses the reasons why Buffett even thought about tariffs: he saw risks to the American economy from sustained trade deficits. To illustrate his point, he used a hypothetical example of two islands – Thriftville and Squanderville – that trade only with each other. 

At the beginning, the populations of both Thriftville and Squanderville worked eight hours a day to produce enough food for their own sustenance. After some time, the population of Thriftville decided to work 16 hours a day, which left them with surplus food to export to Squanderville. The population of Squanderville was delighted – its people could now exchange Squanderbonds (denominated in Squanderbucks) for Thriftville’s surplus food. But this exchange, when carried out for a long time, becomes a massive problem for Squanderville. Buffett explained:

“Over time Thriftville accumulates an enormous amount of these bonds, which at their core represent claim checks on the future output of Squanderville. A few pundits in Squanderville smell trouble coming. They foresee that for the Squanders both to eat and to pay off—or simply service—the debt they’re piling up will eventually require them to work more than eight hours a day. But the residents of Squanderville are in no mood to listen to such doomsaying.

Meanwhile, the citizens of Thriftville begin to get nervous. Just how good, they ask, are the IOUs of a shiftless island? So the Thrifts change strategy: Though they continue to hold some bonds, they sell most of them to Squanderville residents for Squanderbucks and use the proceeds to buy Squanderville land. And eventually the Thrifts own all of Squanderville.

At that point, the Squanders are forced to deal with an ugly equation: They must now not only return to working eight hours a day in order to eat—they have nothing left to trade—but must also work additional hours to service their debt and pay Thriftville rent on the land so imprudently sold. In effect, Squanderville has been colonized by purchase rather than conquest.”

To ground the hypothetical example in reality, Buffett then discussed the US’s actual trade deficits back then and their economic costs:

“Our annual trade deficit now exceeds 4% of GDP. Equally ominous, the rest of the world owns a staggering [US]$2.5 trillion more of the U.S. than we own of other countries. Some of this [US]$2.5 trillion is invested in claim checks—U.S. bonds, both governmental and private— and some in such assets as property and equity securities.

In effect, our country has been behaving like an extraordinarily rich family that possesses an immense farm. In order to consume 4% more than we produce—that’s the trade deficit—we have, day by day, been both selling pieces of the farm and increasing the mortgage on what we still own.

To put the [US]$2.5 trillion of net foreign ownership in perspective, contrast it with the [US]$12 trillion value of publicly owned U.S. stocks or the equal amount of U.S. residential real estate or what I would estimate as a grand total of [US]$50 trillion in national wealth. Those comparisons show that what’s already been transferred abroad is meaningful—in the area, for example, of 5% of our national wealth.

More important, however, is that foreign ownership of our assets will grow at about [US]$500 billion per year at the present trade-deficit level, which means that the deficit will be adding about one percentage point annually to foreigners’ net ownership of our national wealth. As that ownership grows, so will the annual net investment income flowing out of this country. That will leave us paying ever-increasing dividends and interest to the world rather than being a net receiver of them, as in the past. We have entered the world of negative compounding— goodbye pleasure, hello pain.”
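The arithmetic in that passage is simple enough to verify directly; the figures below are Buffett’s 2003 estimates as quoted above:

```python
# Buffett's 2003 back-of-envelope arithmetic on net foreign ownership.
net_foreign_ownership = 2.5e12   # US$2.5 trillion of the US owned by the rest of the world, net
national_wealth = 50e12          # Buffett's ~US$50 trillion estimate of US national wealth
annual_transfer = 0.5e12         # ~US$500 billion added per year at the then trade deficit

print(f"{net_foreign_ownership / national_wealth:.0%}")  # 5% of national wealth already transferred
print(f"{annual_transfer / national_wealth:.0%}")        # ~1 percentage point added each year
```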

The solution to sustained trade deficits

In the next part of his article, Buffett shared the solution he has for the US’s problem with trade deficits: Import Certificates, or ICs. Each exporter in the US will be issued ICs in an amount equal to the value of its exports, meaning $100 of exports will come with 100 ICs. Each importer in the US will then need to buy ICs when importing products into the US – to import $100 worth of products, an importer will need to purchase ICs that were issued with $100 of exports.

Buffett thought that the ICs would (1) have an “exceptionally liquid market” given the volume of the US’s exports, (2) likely trade for $0.10 per dollar of exports, and (3) be viewed by US exporters as a reduction in cost, in this case, of 10%, given the likely trading price of the ICs. The reduction in cost from the ICs would allow US exporters to sell their products internationally at a lower cost while maintaining profit margins, leading to US exports becoming more competitive. 
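As a rough sketch of the mechanics, assuming Buffett’s estimated IC price of US$0.10 per dollar of exports (the function names are mine, for illustration):

```python
# Import Certificates: exporters are issued ICs equal to the dollar value
# of their exports and can sell them; importers must buy ICs to cover the
# dollar value of their imports. At Buffett's assumed price of $0.10 per
# dollar, ICs act as a ~10% export subsidy funded by a ~10% import levy.
IC_PRICE = 0.10  # Buffett's estimated market price per $1 of export value

def exporter_effective_revenue(export_value):
    # Export revenue plus proceeds from selling the ICs received.
    return export_value * (1 + IC_PRICE)

def importer_effective_cost(import_value):
    # Import cost plus the ICs that must be purchased to cover it.
    return import_value * (1 + IC_PRICE)

print(exporter_effective_revenue(100))  # 110.0 -> equivalent to a ~10% cost reduction
print(importer_effective_cost(100))     # 110.0 -> a ~10% tax-like surcharge
```

The same arithmetic is what later turns the $20,000 imported car in Buffett’s example into a $22,000 car.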

But there are costs that the American society has to pay for the IC plan. Buffett explained:

“It would have certain serious negative consequences for U.S. citizens. Prices of most imported products would increase, and so would the prices of certain competitive products manufactured domestically. The cost of the ICs, either in whole or in part, would therefore typically act as a tax on consumers.”

Those costs, however, are necessary when compared to the alternatives, as Buffett illustrated:

“That is a serious drawback. But there would be drawbacks also to the dollar continuing to lose value or to our increasing tariffs on specific products or instituting quotas on them—courses of action that in my opinion offer a smaller chance of success. Above all, the pain of higher prices on goods imported today dims beside the pain we will eventually suffer if we drift along and trade away ever larger portions of our country’s net worth.” 

Tariff in nature, ICs in name

So now we understand Buffett’s view on the US’s sustained trade deficits and his solution for the problem. But where do tariffs come into play? Buffett actually recognised that his IC solution “is a tariff called by another name.” In other words, Buffett thought that a good solution for the US’s trade deficits is to implement a tariff, which he named ICs. But crucially, the IC plan “does not penalize any specific industry or product” and “the free market would determine what would be sold in the U.S. and who would sell it.”

Buffett also discussed the implications of ICs on global trade and geopolitics in his article. In short, he thought the risks were minor and manageable, that foreign manufacturers would absorb the extra costs from the ICs, and that the eventual outcome would be the US exporting more products around the world:

“Foreigners selling to us, of course, would face tougher economics. But that’s a problem they’re up against no matter what trade “solution” is adopted—and make no mistake, a solution must come…

…To see what would happen to imports, let’s look at a car now entering the U.S. at a cost to the importer of $20,000. Under the new plan and the assumption that ICs sell for 10%, the importer’s cost would rise to $22,000. If demand for the car was exceptionally strong, the importer might manage to pass all of this on to the American consumer. In the usual case, however, competitive forces would take hold, requiring the foreign manufacturer to absorb some, if not all, of the $2,000 IC cost…

…This plan would not be copied by nations that are net exporters, because their ICs would be valueless. Would major exporting countries retaliate in other ways? Would this start another Smoot-Hawley tariff war? Hardly. At the time of Smoot-Hawley we ran an unreasonable trade surplus that we wished to maintain. We now run a damaging deficit that the whole world knows we must correct.

For decades the world has struggled with a shifting maze of punitive tariffs, export subsidies, quotas, dollar-locked currencies, and the like. Many of these import-inhibiting and export-encouraging devices have long been employed by major exporting countries trying to amass ever larger surpluses—yet significant trade wars have not erupted. Surely one will not be precipitated by a proposal that simply aims at balancing the books of the world’s largest trade debtor…

…The likely outcome of an IC plan is that the exporting nations—after some initial posturing—will turn their ingenuity to encouraging imports from us.”

Buffett also pointed out that the value of ICs is designed to approach zero if the plan works: the supply of ICs grows with US exports while the demand for them comes from imports, so as exports rise to match imports, the excess of IC demand over IC supply – and hence the ICs’ price – would shrink toward zero.

An unknown future

It’s clear that Buffett thought intelligently designed tariffs are a good solution for the US’s trade deficit problem. The US is still running a trade deficit today (interestingly, the trade deficit in 2024 was 3.1% of the US’s GDP, which is a lower percentage than when Buffett published his article on his IC plan) and this dynamic is a driving force behind the Trump administration’s Reciprocal Tariff policy. Unfortunately, the policy is poorly designed, as evidenced by how haphazardly the calculations were made. Moreover, the policy comes in the form of increased tariffs (according to investment bank Evercore, the Reciprocal Tariff policy “pushes the overall U.S. weighted average tariff rate to 24%, the highest in over 100 years”), which Buffett pointed out in his article had a low chance of success. So although some form of well-designed tariffs may be a good idea for the US economy – following Buffett’s logic** – the way they are currently implemented by the Trump administration is questionable at best.

All these said, anyone who thinks they have a firm idea on what would happen to the US economy because of the Reciprocal Tariff policy is likely lying (to others and/or to themselves). These things have second and third-order consequences that could be surprising. And as the late Charlie Munger once said, “If you’re not a little confused about what’s going on, you don’t understand it.”

** It’s worth noting that even Buffett’s logic that sustained trade deficits have negative consequences may not be correct. In Buffett’s article, he noted that he had been worried about the US’s trade deficits since 1987 and had been wrong from then up to the point the article was published. It has been more than 20 years since the article’s publication, and the US’s GDP has grown to be around 2.5 times larger today. So sustained trade deficits may not even be a bad thing for the US economy.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

More Of The Latest Thoughts From American Technology Companies On AI (2024 Q4)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2024 Q4 earnings season.

Earlier this month, I published the two-part article, The Latest Thoughts From American Technology Companies On AI (2024 Q4) (see here and here). In them, I shared commentary from the fourth-quarter 2024 earnings conference calls of the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large.

A few more technology companies I’m watching hosted earnings conference calls for 2024’s fourth quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series; the older commentary can be found in the earlier articles in the series.

Here they are, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management will be offering new Firefly web app subscriptions that will support both Adobe’s Firefly AI models and 3rd-party models; management envisions the Firefly app as the umbrella destination for ideation; management recently introduced Adobe’s new Firefly video model into the Firefly app offering; management will be introducing Creative Cloud offerings with Firefly tiering; the Firefly video model has been very well-received by brands and creative professionals; users of the Firefly video model can generate video clips from a text prompt or image; the Firefly web app allows users to generate videos from key frames, use 3D designs to precisely direct generations, and translate audio and video into multiple languages; the Firefly web app subscription plans include Firefly Standard, Firefly Pro, and Firefly Premium; more than 90% of paid users of the Firefly web app have been generating videos; Firefly has powered 20 billion generations (16 billion in 2024 Q3) since its launch in March 2023, and is now doing more than 1 billion generations a month; management thinks the commercially-safe aspect of Firefly models is very important to users; management thinks the high level of creative control users get with Firefly models is very important to them; the adoption rates of the Firefly paid plans signal to management that Firefly is adding value to creative professionals

In addition to Creative Cloud, we will offer new Firefly web app subscriptions that integrate and are an on-ramp for our web and mobile products. While Adobe’s commercially safe Firefly models will be integral to this offering, we will support additional third-party models to be part of this creative process. The Firefly app will be the umbrella destination for new creative categories like ideation. We recently introduced and incorporated our new Firefly video model into this offering, adding to the already supported image, vector and design models. In addition to monetizing stand-alone subscriptions for Firefly, we will introduce multiple Creative Cloud offerings that include Firefly tiering…

…The release of the Adobe Firefly Video model in February, a commercially-safe generative AI video model, has been very positively received by brands and creative professionals who have already started using it to create production-ready content. Users can generate video clips from a text prompt or image, use camera angles to control shots, create distinct scenes with 3D sketches, craft atmospheric elements and develop custom motion design elements. We’re thrilled to see creative professionals and enterprises and agencies, including Dentsu, PepsiCo and Stagwell finding success with the video model….

…In addition to generating images, videos and designs from text, the app lets you generate videos from key frames, use 3D designs to precisely direct generations, and translate audio and video into multiple languages. We also launched 2 new plans as part of this release, Firefly Standard and Firefly Pro and began the rollout of our third plan, Firefly Premium, yesterday. User engagement has been strong with over 90% of paid users generating videos…

…Users have generated over 20 billion assets with Firefly…

…We’re doing more than 1 billion generations now a month and 90% of people using Firefly the app also saw — are generating video as well as part of that…

…For Firefly, we have imaging, vector, design, video, voice, video and voice coming out just a couple of weeks ago, off to a good start. I know there have been some questions about how important is commercially safety of the models. They’re very important. A lot of enterprises are turning to them for the quality, the breadth but also the commercial safety, the creative control that we give them around being able to really match structure, style, set key frames for precise video generation, 3D to image, image to video…

…If we look at the early adoption rates of the Firefly paid plan, it really tells us both of these stories. We have a high degree of conviction that it’s adding value and being used by Creative Professionals,

Adobe’s management thinks that marketing professionals will need to create and deliver an unprecedented volume of personalised content and that marketing professionals will need custom, commercially safe AI models and AI agents to achieve this, and this is where Adobe GenStudio and Firefly Services can play important roles; management is seeing customers turn to Firefly Services and Custom Models for scaling on-brand marketing content production; there are over 1,400 custom models created since launch of Firefly Services and Custom Models; Adobe GenStudio for Performance Marketing has won leading brands recently as customers; Adobe GenStudio for Performance Marketing has partnerships with leading digital advertising companies

Marketing professionals need to create an unprecedented volume of compelling content and optimize it to deliver personalized digital experiences across channels, including mobile applications, e-mail, social media and advertising platforms. They’re looking for agility and self-service as well as integrated workflows with their creative teams and agencies. To achieve this, enterprises require custom, commercially safe models and agents tailored to address the inefficiencies of the content supply chain. With Adobe GenStudio and Firefly Services, Adobe is transforming how brands and their agency partners collaborate on marketing campaigns, unlocking new levels of creativity, personalization and efficiency. The combination of the Adobe Experience Platform and apps and Adobe GenStudio is the most comprehensive marketing platform to deliver on this vision…

…We had another great quarter in the enterprise with more customers turning to Firefly Services and Custom Models to scale on-brand content production for marketing use cases, including leading brands such as Deloitte Digital, IBM, IPG Health, Mattel and Tapestry. Tapestry, for example, has implemented a new and highly productive digital twin workflow using Custom Models and Firefly…

…Strong demand for Firefly Services and Custom Models as part of the GenStudio solution with over 1,400 custom models since launch.

GenStudio for Performance Marketing wins at leading brands including AT&T, Lennar, Lenovo, Lumen, Nebraska Furniture Mart, Red Hat, Thai Airways, and University of Phoenix.

Strong partnership momentum with GenStudio for Performance Marketing supporting ad creation and activation for Google, Meta, Microsoft Ads, Snap, and TikTok and several partners including Accenture, EY, IPG, Merkle and PWC offering vertical extension apps.

Adobe’s generative AI solutions are infused across the company’s products and management sees the generative AI solutions as a factor driving billions in annualised recurring revenue (ARR) for the company from customer acquisition to customer retention and upselling; Adobe has AI-first stand-alone and add-on products such as Acrobat AI Assistant, the Firefly App and Services, and GenStudio for Performance Marketing; the AI-first stand-alone and add-on products already accounted for $125 million in book of business for Adobe in 2024 Q4 (FY2025 Q1), and management expects this book of business to double by the end of FY2025; management thinks that the monetisation of Adobe’s AI services goes beyond the $125 million in book of business and also incorporates customers who subscribe to Adobe’s services and use the AI features

Our generative AI innovation is infused across the breadth of our products, and its impact is influencing billions of ARR across acquisition, retention and value expansion as customers benefit from these new capabilities. This strength is also reflected in our AI-first stand-alone and add-on products such as Acrobat AI Assistant, Firefly App and Services and GenStudio for Performance Marketing, which have already contributed greater than $125 million book of business exiting Q1 fiscal ’25. And we expect this AI book of business to double by the end of fiscal ’25…

…A significant amount of the AI monetization is also happening in terms of attracting people to our subscription, making sure they are retained and having them drive higher-value price SKUs. So when somebody buys Creative Cloud or when somebody buys Document Cloud, in effect, they are actually monetizing AI. But in addition to that, Brent, what we wanted to do was give you a flavor for the new stand-alone products that we have when we’ve talked about introducing Acrobat AI Assistant and rolling that out in different languages, Firefly, and making sure that we have a new subscription model associated with that on the web, Firefly Services for the enterprise and GenStudio. So the $125 million book of business that we talked about exiting Q1 only relates to that new book of business.

Adobe’s management is seeing every CMO (Chief Marketing Officer) being very interested in using generative AI in their content supply chain

Every CMO that we talk to, every agency that we work with, they’re all very interested in how generative AI can be used to transform how the content supply chain works.

Adobe’s management sees AI as bringing an even larger opportunity for Adobe

I am more excited about the larger opportunity without a doubt as a result of AI. And we’ve talked about this, Kash. If you don’t take advantage of AI, it’s a disruption. In our particular case, the intent is clearly to show how it’s a tailwind.

Adobe’s management is happy to support 3rd-party models within the Firefly web app or within other Adobe products so long as the models deliver value to users

We’ll support all of the creative third-party models that people want to support, whether it’s a custom model we create for them or whether it’s any other third-party model within Firefly as an app and within Photoshop, you’re going to see support for that as well. And so think of it as we are the way in which those models actually deliver value to a user. And so it’s actually just like we did with Photoshop plug-ins in the past, you’re going to see those models supported within our flagship applications.

Adobe’s management is seeing very strong attach rate and adoption of generative AI features in Adobe’s products with creative professionals

This cohort of Creative Professionals, we see very strong attach and adoption of the generative AI features we put in the product partially because they’re well integrated and very discoverable and because they just work and people get a lot of value out of that. So what you will see is you’ll start to see us integrating these new capabilities, these premium capabilities that are in the Firefly Standard, Pro and Premium plans more deeply into the creative workflow so more people have the opportunity to discover them.

Meituan (OTC: MPNGY)

Meituan’s autonomous vehicles and drones had fulfilled a cumulative 4.9 million and 1.45 million commercial orders, respectively, by end-2024; Meituan’s drones recently started operating in Dubai

By year end of 2024, the accumulated number of commercial orders fulfilled by our autonomous vehicles and drones have reached 4.9 million and 1.45 million, respectively. Our drone business also started commercial operation in Dubai recently.

Meituan’s management wants to expand Meituan’s investments in AI, and is fully committed to integrating AI into Meituan’s platform; management’s AI strategy for Meituan has 3 layers, which are (1) integrating AI into employees’ work, (2) infusing AI into Meituan’s products, and (3) building Meituan’s own large language model

We will actively embrace and expand investment in cutting-edge technologies, such as AI or unmanned aerial delivery or autonomous delivery service vehicles, and accelerate the application of these technologies. And we are committed to fully integrating AI into consumers’ daily lives and help people eat better, live better…

…Our AI strategy builds upon 3 layers. The first one is AI at work. We are integrating AI in our employees’ day-to-day work and our daily business operations and to significantly enhance the productivity and work experience for our over 400,000 employees. And then second layer is AI in products. So we will use AI to upgrade our existing products and services, both 2B and 2C. And we will also launch brand-new AI-native products to better serve our consumers, merchants, couriers and business partners…

…The third layer is building our own in-house large language model, and we plan to continue to invest and enhance our in-house large language model with increased CapEx.

Meituan’s management has developed Meituan’s in-house large language model named Longcat; management has rolled out Longcat alongside 3rd-party models to improve employees’ productivity; Longcat has been useful for AI coding, conducting smart meetings, short-form video generation, AI sales assistance, and more; Longcat has been used to develop an in-house AI customer service agent, which has driven a 20% improvement in efficiency and a 7.5 percentage-point improvement in customer satisfaction; the AI sales assistant reduced the workload of Meituan’s business development (BD) team by 44% during the Spring Festival holidays; 27% of new code at Meituan is currently generated by its AI coding tools

On the first layer, AI at work, on the employee productivity front, we have our — we have developed our in-house large language model. It’s called longcat. By putting longcat side by side with external models, we have rolled out our very highly efficient tools for our employees, including AI coding, smart meeting and document assistant, and also, it’s quite useful in graphic design and short-form video generation and also AI sales assistance. These tools have substantially boost employee productivity and working experience…

…We have developed an intelligent AI customer service agent using our in-house large language model. So after the pilot operation, the results show more than 20% enhanced efficiency. And moreover, the customer satisfaction rate has improved over 7.5 percentage points…

…During this year’s Spring Festival holidays, we gathered an updated business information of our 1.2 million merchants on our platform with AI sales assistant. So it very effectively reduced the workload of our BD team, yes, by 44% and further enhanced the accuracy of the listed merchant information on our platform…

…Right now, in our company, about 27% of new code is generated by AI coding tools.

Meituan’s management is using AI to help merchants with online store design, information enhancement, and display and operation management; management is testing an AI assistant to improve the consumer experience in search and transactions; management will launch a brand-new advanced AI assistant later this year that will give everyone a free personal assistant; the upcoming advanced AI assistant will be able to satisfy a lot of consumer needs in the physical world, because bringing AI to the physical world requires physical infrastructure, which Meituan has

We use AI across multiple categories by providing various tools such as smart online store design and smart merchant information enhancement and display and operation management…

…On the consumer side, we have already started testing AI assistant in some categories to enhance customer — consumer experience for their search and transaction on our platform. And for example, we have rolled out a restaurant assistant and travel assistant — reservation assistant. They can chat with the users, either by text or voice, making things more convenient and easier to use for users. And right now, we are already working on a brand-new AI native product. We expect to launch this more advanced AI assistant later this year and to cover all Meituan services so that everyone can have a free personal assistant. So based on our rich off-line service offerings and efficient on-demand delivery network, I think we will be able to handle many personalized needs in local services. And whether it’s ordering food delivery or making a restaurant reservation or purchasing group deals or ordering groceries or planning trips or booking hotels, I think we have got it covered with a one-stop, and we are going to deliver it to you on time…

…Our AI assistant will not only offer consumer services in the digital world, not just a chatbot, but it’s going to be able to satisfy a lot of their needs in the physical world because in order to bring AI to the physical world, you need more than just very smart algorithms or models. You need infrastructure in the physical world, and that’s our advantage…

…We have built a big infrastructure in the physical world with digital connections. We believe that, that kind of infrastructure is going to be very valuable when we are moving to the era of physical AI.

Meituan’s management expects to incur a lot of capex to improve Meituan’s in-house large language model, Longcat; to develop Longcat, management made the procurement of GPUs a top priority in 2024, and expects to further scale GPU-related capital expenditure in 2025; Longcat’s evaluation results are comparable to those of top-tier models in China; Longcat’s share of internal API call volume has increased from 10% at the beginning of 2024 to 68% currently

On the algorithm model and compute side, it’s going to need a lot of CapEx and a very good foundation model. So in the past year, to ensure adequate supply of GPU resources has been a top priority for us. And even as we allocate meaningful resources in shareholder return and new initiatives, we keep investing billions in GPU resources. So our capital — CapEx this year has been substantial. And this year, we plan to further scale our investment in this very critical area. And thanks to our infrastructure and large language model team, we have made significant optimization, both in efficiency and effectiveness. And as a result, our in-house large language model, longcat, has achieved quite good evaluation results comparable to the top-tier models in China…

…The API core volume for Longcat has increased from 10% at the beginning of last year to 68% currently, so — which further validates the effectiveness of our in-house foundation model.

Meituan’s management believes that AI is going to give a massive push to the robotics industry; Meituan has been researching autonomous vehicles since 2016 and drones since 2017; management has made several investments in leading robotics and autonomous driving start-ups; management expects Meituan’s efforts in robotics and AI to be even more tightly integrated in the future

I think AI is going to give a massive push to the development of robotics. So we have been a very early mover when it comes to autonomous delivery vehicles and drones. So actually, we started our R&D in autonomous vehicles in late ’26 (sic) [ late ’16 ]. And we started our R&D in drones in 2017. So we have been working on this for many years, and we are making very good progress. So right now, we are looking to ways to apply AI in the on-demand delivery field. So apart from our in-house research — in-house R&D, we have also made quite several investments in leading start-ups in the robotics and autonomous driving sector to support their growth…

…In future, our robotics and AI will be even more tightly integrated, and we will keep improving in the areas such as autonomous delivery and logistics and automations because right now, apart — besides the last-mile delivery of on-demand delivery, we also operate a lot of rather big warehouses, and that will be very good use cases for automation technologies.

MongoDB (NASDAQ: MDB)

MongoDB’s management expects customers to start building AI prototypes and AI apps in production in 2025 (FY2026), but management expects the progress to be gradual, and so MongoDB’s business will only benefit modestly from AI in 2025 (FY2026); there are high-profile AI companies building on top of MongoDB Atlas, but in general, customers’ journeys with building AI applications will be gradual; management thinks that customers are slow in building AI applications because they lack AI skills and because there are still questions on the trustworthiness of AI applications; management sees the AI applications of today as being fairly simplistic, but thinks that AI applications will become more sophisticated as people become more comfortable with the technology

In fiscal ’26, we expect our customers will continue on their AI journey from experimenting with new technology stacks to building prototypes to deploying apps in production. We expect the progress to remain gradual as most enterprise customers are still developing in-house skills to leverage AI effectively. Consequently, we expect the benefits of AI to be only modestly incremental to revenue growth in fiscal ’26…

…We have some high-profile AI companies who are building on top of Atlas. I’m not at liberty to name who they are, but in general, I would say that the journey for customers is going to be gradual. I would say one is a lack of AI skills in their organizations. They really don’t have a lot of experience and it’s compounded by the rapid evolution of AI technology that they feel like it’s very hard for them to kind of think about like what’s stack to use and so on and so forth. The second, as I mentioned earlier, on the Voyage question, there’s also a real worry about the trustworthiness of a lot of these applications. So I would say the use cases you’re seeing are fairly simplistic — customer chat bots, maybe document summarization, maybe some very simple [indiscernible] workflows. But I do think that, that is we are in the early innings, and I expect a sophistication to increase as people get more and more comfortable,

In 2024 (FY2025), MongoDB’s pilots demonstrated that the cycle time of modernising the technology stack of legacy applications (via MongoDB’s Relational Migrator service, combined with AI tools) can be reduced; management will expand customer engagements for modernisation so that it can contribute meaningfully to MongoDB’s business in 2026 (FY2027) and beyond; management will start with Java apps that run on Oracle; management sees a significant revenue opportunity in the modernisation of apps; MongoDB has successfully modernised a financial application for one of Europe’s largest ISVs (independent software vendors); management is even more confident of Relational Migrator now than in the past; Relational Migrator is tackling a very tough problem because it involves massive legacy code, and the use of AI in deciphering the code is very helpful; management is seeing a lot of interest from customers for Relational Migrator because the customers are in pain from their technical debt, and their legacy technology stacks cannot handle AI applications

In fiscal ’25, our pilots demonstrated that AI tooling combined with services can reduce the cycle time of modernization. This year, we’ll expand our customer engagements so that app modernization can meaningfully contribute to our new business growth in fiscal ’27 and beyond. To start with, and based on customer demand, we are specifically targeting Java apps running on Oracle, which often have thousands of complex store procedures that need to be understood, converted and tested to successfully modernize the application. We addressed this through a combination of AI tools and agents along with inspection verification by delivery teams. Though the complexity of this work is high, the revenue opportunity for modernizing those applications is significant. For example, we successfully modernized a financial application for one of the largest ISVs in Europe, and we’re now in talks to modernize the majority of the legacy estate…

…[Question] What sort of momentum have you seen with relational migrator. And maybe how should we be thinking about that as a growth driver going forward?

[Answer] Our confidence and bullish on the space is even higher today than it was before…

…When you’re looking at a legacy app that’s got hundreds — tens of thousands, if not thousands, not tens of thousands of store procedures being able to reason about that code, being able to decipher that code and then ultimately to convert that code takes — is a lot of effort. And — but the good news is that we are seeing a lot of progress in that area. We see a lot of interest from our customers in this area because they are in so much pain with all the technical debt they’ve assumed. Second is that when they think about the future and how they enable AI in these applications, there’s no way they can do this on their legacy platforms. And so they’re motivated to try and modernize as quickly as possible.

MongoDB’s management sees AI transforming software from a static tool into a decision-making partner, but the rate of change is governed by the quality of the software’s data infrastructure; legacy databases cannot keep up with the requirements of AI and this is where MongoDB’s document-model database is advantageous; MongoDB’s database simplifies AI development by providing an all-in-one solution incorporating all the necessary pieces, including an operational data store, a vector database, and embedding and reranking models; MongoDB’s database provides developers with a structured approach when they are building AI applications; management sees AI applications being much better than traditional software in scenarios that require nuanced understanding, sophisticated reasoning, and interaction in natural language

AI is transforming software from a static tool into a dynamic decision-making partner. No longer limited to predefined tasks, AI-powered applications will continuously learn from real-time data, but this software can only adapt as fast as the data infrastructure is built on and legacy systems simply cannot keep up. Legacy technology stacks were not designed for continuous adaptation. Complex architectures, batch processing and rigid data models create friction at every step, slowing development, limiting organization’s ability to act quickly and making even small updates time consuming and risky. AI will only magnify these challenges. MongoDB was built for change. MongoDB was designed from the outset to remove the constraints of legacy databases, enabling businesses to scale, adapt and innovate at AI speed. Our flexible document model handles all types of data while seamless scalability ensures high performance for unpredictable workloads…

…We also simplify AI development by natively including vector and tech search directly in the database providing a seamless developer experience that reduces cognitive load, system complexity, risk and operational overhead, all with the transactional, operational and security benefits intrinsic to MongoDB. But technology alone isn’t enough. MongoDB provides a structured solution-oriented approach that addresses the challenges customers have with the rapid evolution of AI technology, high complexity and a lack of in-house skills. We are focused on helping customers move from AI experimentation to production faster with best practices that reduce risk and maximize impact…

…AI-powered applications excel where traditional software often falls short, particularly in scenarios that require nuanced understanding, sophisticated reasoning and interaction and natural language…

…MongoDB democratizes the process of building trustworthy AI applications right out of the box. Instead of cobbling together all the necessary piece parts (an operational data store, a vector database, and embedding and reranking models), MongoDB delivers all of it with a compelling developer experience…

…We think architecturally, we have a huge advantage of the competition. One, the document model really supports different types of data structured, semi-structured and unstructured. We embed a search and Vector Search onto a platform. No one else does that. Then we’ve now with the Voyage AI, we have the most accurate embedding and reranking models to really address the quality and trust issue. And all this is going to be put together in a very elegant developer experience that reduces friction and enables them to move fast.
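For readers who want to see what this “all-in-one” pattern looks like in practice, below is a minimal Python sketch using PyMongo and Atlas Vector Search. The cluster URI, database, collection, field names, and index name are all placeholders I chose for illustration, and the sketch assumes an Atlas vector index has already been created on the embedding field; it is not MongoDB’s official example.

```python
# Minimal sketch: documents and their vector embeddings live in one MongoDB
# collection, and retrieval uses the Atlas Vector Search aggregation stage.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster.example.mongodb.net")
collection = client["demo"]["articles"]

def vector_search(query_embedding, k=5):
    """Return the k documents whose stored embeddings are closest to the query."""
    pipeline = [
        {
            "$vectorSearch": {
                "index": "embedding_index",   # name of the Atlas vector index (assumption)
                "path": "embedding",          # field holding each document's embedding
                "queryVector": query_embedding,
                "numCandidates": 100,         # breadth of the approximate search
                "limit": k,
            }
        },
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]
    return list(collection.aggregate(pipeline))
```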

MongoDB acquired Voyage AI for $220 million, $200 million of which was paid in MongoDB shares; Voyage AI helps MongoDB’s database solve the hallucination issue – a big problem with AI applications – and make AI applications more trustworthy; management thinks the best way to ensure accurate results with AI applications is through high-quality data retrieval, and high-quality data retrieval is enabled by vector embedding and reranking models; Voyage AI’s vector embedding and reranking models have excellent ratings in the Hugging Face community and are used by important AI companies; Voyage AI has an excellent AI team; through Voyage AI, MongoDB can offer best-in-class embedding and reranking models; ISVs (independent software vendors) have gotten better performance when they switched from other embedding models to Voyage AI’s models; Voyage AI’s models increase the trustworthiness of the most demanding and mission-critical AI applications; Voyage AI’s models will only be available on Atlas

With the Voyage AI acquisition, MongoDB makes AI applications more trustworthy by pairing real-time data with sophisticated embedding and retrieval models that ensure accurate and relevant results…

…Our decision to acquire Voyage AI addresses one of the biggest problems customers have when building and deploying AI applications, the risk of hallucinations…

…The best way to ensure accurate results is through high-quality data retrieval, which ensures that only the most relevant information is extracted from an organization’s data with precision; high-quality retrieval is enabled by vector embedding and reranking models. Voyage AI’s embedding and reranking models are among the highest rated in the Hugging Face community for retrieval, classification, clustering and reranking, and are used by AI leaders like Anthropic, LangChain, Harvey and Replit. Voyage AI is led by Stanford professor Tengyu Ma, who has assembled a world-class AI research team from AI labs at Stanford, MIT, Berkeley and Princeton. With this acquisition, MongoDB will offer best-in-class embedding and reranking models to power native AI retrieval…

…Let me address how the acquisition of Voyage AI will impact our financials. We disclosed last week that the total consideration was $220 million. Most Voyage shareholders received a consideration in MongoDB stock with only $20 million being paid out in cash…

…We know a lot of ISVs have already reached out to us since the acquisition saying they switched to Voyage from other model providers and they got far better performance. So the value of Voyage is being able to increase the quality and hence the trustworthiness of these AI applications that people are building in order to serve the most demanding and mission-critical use cases…

…Some of these new capabilities like Voyage now that will be available only on Atlas.

Swisscom was able to deploy a generative AI application in just 12 weeks using MongoDB Atlas

Swisscom, Switzerland’s leading provider of mobile, Internet and TV services, deployed a new GenAI app in just 12 weeks using Atlas. Swisscom implemented Atlas to power a RAG application for the East Foresight library, transforming unstructured data such as reports, recordings and graphics into vector embeddings that large language models can interpret. This enables Vector Search to find any relevant content, resulting in more accurate and tailored responses for users.

If an LLM (large language model) is a brain, a database is memory, and embedding models are a way to find the right information for the right question; embedding models provide significant performance gains when used with LLMs

So think about the LLM as the brain. Think about the database is about your memory and the state of where how things are. And so — and then think about embedding as an ability to find the right information for the right question. So imagine you have a very smart person, say, like Albert Einstein on your staff and you’re asking him, in this case, the LLM, a particular question. While Einstein still needs to go do some homework based on what the question is about finding some information before he can formulate an answer. Rather than reading every book in a library, what the embedding models do is essentially act like a library and pointing Einstein to the right section, the right aisle, the right shelf, the right book and the right chapter on the right page, to get the exact information to formulate an accurate and high-quality response. So the performance gains you get a leveraging embedding models is significant.
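The analogy maps neatly to a few lines of code. Below is a toy sketch with made-up 3-dimensional vectors (real embedding models, such as Voyage AI’s, produce vectors with hundreds or thousands of dimensions): retrieval is simply “embed the question, then find the nearest stored vectors.”

```python
# Toy illustration of embedding-based retrieval: the "library" is a dict of
# document embeddings, and finding the right information is a nearest-vector
# search. All vectors here are invented for illustration.
import numpy as np

documents = {
    "refund policy": np.array([0.9, 0.1, 0.0]),
    "shipping times": np.array([0.1, 0.8, 0.1]),
    "warranty terms": np.array([0.7, 0.2, 0.3]),
}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_embedding, top_k=2):
    """Rank documents by similarity to the query embedding."""
    scored = sorted(
        documents.items(),
        key=lambda kv: cosine_similarity(query_embedding, kv[1]),
        reverse=True,
    )
    return scored[:top_k]

# A query about refunds lands nearest the "refund policy" document.
print(retrieve(np.array([0.85, 0.15, 0.05])))
```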

Okta (NASDAQ: OKTA)

The emergence of AI agents has contributed to the growing importance to secure identity; management will provide access to Auth For GenAI on the Auth0 platform in March 2025; 200-plus startups and large enterprises are on the waitlist for Auth For GenAI; Auth For GenAI allows AI agents to securely call APIs; management is seeing that companies are trying to build agentic systems, only to run into problems with giving these agents access to systems securely; within AI, management sees agentic AI as the most applicable for Okta’s business in the medium term

With the steady rise of cloud adoption, machine identities and now AI agents, there has never been a more critical time to secure identity…

…On the Auth0 platform, we announced Auth For GenAI. We’ll begin early access this month. We already have a wait list of eager customers ranging from early startups to Fortune 100 organizations. Auth for GenAI is developed to help customers securely build and scale their Gen AI applications. This suite of features allows AI agents to securely call APIs on behalf of users while enforcing the right level of access to sensitive information…

…People are trying to stitch together agentic platforms and write their own agentic systems and what they run smack into is, wait a minute. How am I going to get these agents access all these systems if I don’t even know what’s in these systems and I don’t even know the access permissions that are there and how to securely authenticate them, so that’s driving the business…

…I’ll focus on the agentic part of AI. That’s probably the most, in the medium term, that’s probably the most applicable to our business…

…On the agent side, the equivalent of a lot of these deployments have like passwords hardcoded in the agent. So if that agent gets compromised, it’s the equivalent of your monitor having a bunch of sticky notes on it with your passwords before single sign-on. So Auth for GenAI gives you a protocol in a way to do that securely. So you can store these tokens and have these tokens that are secured. And then if that agent needs to pop out and get some approval from the user, Auth for GenAI supports that. So you can get a step-up biometric authentication from the user and say, “Hey, I want to check Jonathan’s fingerprint to make sure before I book this trip or I spend this money, it’s really Jonathan.” So those 3 parts are what Auth for GenAI is, and we’re super, super excited about it. We have a waitlist. Over 200-plus Fortune 100s and startups are on that thing.
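The pattern described above – no hard-coded credentials, short-lived scoped tokens, and step-up approval from the human before sensitive actions – can be sketched in a few dozen lines. The sketch below is a generic, self-contained illustration of that pattern; it is not Auth0’s actual Auth for GenAI API, and every class and function name here is my own invention.

```python
# Generic sketch of agent credential handling: a token vault issues
# short-lived, scoped tokens, and sensitive actions need user approval.
import secrets
import time

class TokenVault:
    """Issues short-lived, scoped access tokens to authenticated agents."""
    def __init__(self):
        self._issued = {}

    def issue_token(self, agent_id: str, scope: str, ttl_seconds: int = 300) -> str:
        token = secrets.token_urlsafe(16)
        self._issued[token] = {"agent": agent_id, "scope": scope,
                               "expires": time.time() + ttl_seconds}
        return token

    def validate(self, token: str, required_scope: str) -> bool:
        meta = self._issued.get(token)
        return bool(meta and meta["scope"] == required_scope
                    and meta["expires"] > time.time())

def step_up_approval(user: str, action: str) -> bool:
    # Placeholder for a biometric / MFA challenge sent to the user.
    print(f"Asking {user} to approve: {action}")
    return True  # assume the user approves in this toy example

vault = TokenVault()
token = vault.issue_token("trip-planner-agent", scope="bookings:write")
if vault.validate(token, "bookings:write") and step_up_approval("Jonathan", "book flight"):
    print("Agent calls the booking API with its short-lived token.")
```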

Okta’s management thinks agentic AI is a real phenomenon that will turbocharge machine identity for Okta by 2 orders of magnitude; already today, a good part of Okta’s business is providing machine identity; management is most excited about the customer identity part of Okta’s business when it comes to agentic AI because companies will start having AI agents as customers too; management thinks Okta will be able to monetise agentic AI from both people building agents and people using agents

The agentic revolution is real, and the power of AI and the power of these language models, the interaction modalities that you can have with these systems, these machines doing things on your behalf and what they can do and how they can infer next actions, et cetera, et cetera. You all know it’s really real. But the way to think about it from an Okta perspective, it is like machine identity on steroids, turbocharged to like 2 orders of magnitude higher. So that’s like really exciting for us because what do we do. A good part of our business is actually logging in machines right now. Auth0 has the machine-to-machine tokens where people, if they build some kind of web app that services other machines, they can use Auth0 for the login for that. Okta has similar capabilities. And now you have not only that basic authentication challenge but you have the — all of these applications as you get 2 orders of magnitude, more things logging in, you have to really worry about the fine grain authorization into your services…

…[Question] Which side of the business are you more excited about from an agentic AI perspective?

[Answer] I think the customer identity side is more exciting. I think it’s a little bit of a — my answer is a little bit of a — I’m kind of like having both ways because a lot of the — when you talk about developers building agentic AI, they’re doing it inside of enterprises. So like the pattern I was talking about earlier, there’s these teams and these companies that have been tasked with we hear about this [ agent ] and make it work. And the first thing they have to do is I’ve had many conversations with customers where they’ve been in these discussions and we want — we did a POC and now we’re worried about doing it broadly, but the task was basically hook everything up to our existing — hook these agents up to all of our existing systems. And before we could do that inside of enterprise, we had to get a good identity foundation in front of all these things. And so it’s kind of like similar to your building something and you’re a developer, you’re exposing APIs, you’re doing fine grain authorization. You’re taking another — you’re using another platform or you’re building your own agentic AI platform, and you’re having to talk to those systems and those APIs to do things on user’s behalf, so you’re a developer, but it’s kind of like a workforce use case, but I think people building these systems and getting the benefit from that is really exciting…

…We can monetize it on “both side”, meaning people building the agents and people using the agents. The agents have to log in and they have to log into something. So I think it’s potential to monetize it on both sides.

Okta’s management thinks the software industry does not yet know how to account for AI agents in software deals; management thinks that companies will eventually be buying software licenses for both people and AI agents

One of the things that we don’t have today is the industry doesn’t have a way to like identify an agent. I don’t mean in the sense of like authenticating or validated agent. I mean to actually a universal vernacular for how to record an agent, how to track it and how to account for it. And so I think that’s something you’ll see coming. You’ll see there will be actually a type of account, an Okta that’s an agent account. You’ll see companies starting to — when they buy software, they say, hey, I buy these many people and these many agentic licenses. And that’s not quite there yet. Of course, platforms that are coming out with agent versions have this to some degree, but there isn’t a common cross-company, cross enterprise definition of an agent, which is an interesting opportunity for us actually.

Sea Ltd (NYSE: SE)

Sea’s management is using AI in Shopee to understand shoppers’ queries better and to help sellers enhance product listings, and these AI initiatives have improved purchase conversion rates and sellers’ willingness to spend on advertising; management has upgraded Shopee’s chatbots with AI and this led to meaningful improvement in customer service satisfaction score and customer service cost-per-contact; management is using AI to improve the shopper return-refund process and has seen a 40% year-on-year decrease in resolution times in Asia markets; management thinks Shopee is still early in the AI adoption curve

We continue to adopt AI to improve service quality in a practical and effective manner. By using large language models to understand queries, we have made search and discovery more accurate, helping users find relevant products faster. We provide our sellers with AI tools to enhance product listings by improving descriptions, images, and videos. These initiatives have improved purchase conversion rates while also making sellers more willing to spend on ads, boosting our ad revenue…

… After upgrading our chatbots with AI, we saw a meaningful increase in our customer service satisfaction score over the past year, and a reduction in our customer service cost-per-contact by nearly 30% year-on-year. We also used large language model capabilities to enhance our buyer return-refund process, addressing a key e-commerce pain-point. In the fourth quarter, we improved resolution times in our Asia markets by more than 40% year-on-year, with nearly six in ten cases resolved within one day. We believe we are still early in the AI adoption curve and remain committed to exploring AI-driven innovations to improve efficiency and deliver better experiences for our users.

Sea’s management thinks the use of AI is helping Sea both monetise its services better, and save costs

[Question] I just wanted to get some color with regard to the benefit from AI. Are we actually seeing cost efficiency, i.e., the use of AI actually save a lot of the manual labor cost? So that helps to achieve a lot of cost savings? Or are we actually seeing the monetization is getting better coming from AI?

[Answer] We are seeing both, in fact. For example, in our search and recommendations, we actually use the large language model to better understand user queries, making search and discovery a lot more accurate and helping users find relevant products faster… We are also using the AI to understand the product a lot better; historically, it was text matching, but now we can use existing pictures and the descriptions and the reviews to generate a much richer understanding of the product. And all those help us essentially match our products with users’ intentions a lot better.

We are also having a lot of AIGC, AI-generated content in our platform. We provide that as a tool to our sellers to be able to produce image, a description of the product or the videos, especially a lot better compared to what they had before.

And both of this increased our conversions meaningfully in our platform.

On the other side, on the cost savings side, I think in Forrest’s opening, we talked about the chatbot, the — if you look at our queries, about 80% of the queries are answered by the chatbot already, which is a meaningful cost savings for the — for our operations. I think that’s also why you can see that our cost management for e-commerce is doing quite well. Even for the 20% answered by the agent, we have an AI tool for the agent to be able to understand the context a lot better, so can help them to respond a lot faster to the customers,
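As a rough illustration of why chatbot deflection pulls the cost line down, here is a back-of-envelope calculation. Only the 80% deflection figure comes from the call; the unit costs are invented placeholders.

```python
# Back-of-envelope economics of chatbot deflection. Only the 80% share is
# from Sea's call; the unit costs below are assumptions for illustration.
bot_share = 0.80    # share of queries resolved by the chatbot (per the call)
agent_cost = 1.00   # normalised cost of a human-handled contact (assumption)
bot_cost = 0.10     # assumed cost of a bot-handled contact (assumption)

blended = bot_share * bot_cost + (1 - bot_share) * agent_cost
print(f"Blended cost per contact: {blended:.2f} (vs 1.00 if all human-handled)")
# With these made-up inputs the blended cost is 0.28; the size of the actual
# ~30% year-on-year reduction Sea reported depends on what the prior year's
# deflection rate and unit costs were.
```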

Tencent (OTC: TCEHY)

Tencent’s AI initiatives can be traced to 2016; management has been investing in Tencent’s proprietary foundation model, HunYuan, since early 2023; management sees HunYuan as the foundation for Tencent’s consumer and enterprise businesses

Our AI initiatives really trace back to 2016 when we first established our AI lab. Since 2023, early part of that, we have been investing heavily in our proprietary HunYuan foundation model, which forms an important technology foundation for our consumer and enterprise-facing businesses and will serve as a growth driver for us in the long run. Our investments in HunYuan enable us to develop end-to-end foundation model capabilities in terms of infrastructure, algorithm, training, alignment and data management and also to tailor solutions for the different needs of internal and external use cases.

Tencent’s management has released multimodal HunYuan foundation models across image, video, and 3D generation; the multimodal HunYuan foundation models have received excellent scores in AI benchmarking

In addition to LLMs, we have released multimodal HunYuan foundation models with capabilities that span across image, video and 3D generation. HunYuan’s image generation models achieved the highest score from FlagEval in December of last year. In video generation, our model excels in video output quality and ranked first on Hugging Face in December of last year. 

Tencent’s management has been actively releasing Tencent’s AI models to the open source community

Our 3D generation model was the industry’s first open source model supporting text and image to 3D generation. In addition to that, we also contribute to the open source community actively and have open sourced a series of advanced models in the HunYuan family for 3D generation, video generation, large language and image generation. Several of these models have gained great popularity among developers worldwide.

For Tencent’s consumer-facing AI products, management has been utilising different AI models because they believe that a combination of models can handle complex tasks better than a single model; Tencent’s native AI application, Yuanbao, provides access to multiple models; Yuanbao’s DAU (daily active users) increased 20-fold from February 2025 to March 2025; management has been testing AI features in Weixin to improve the user experience and will be adding more AI features over time; management will be introducing a lot more consumer-facing AI applications in the future; management thinks consumer AI is in a very early stage, but they can see Yuanbao becoming a strong AI native assistant helping with deep research, and the Ema Copilot being a personal and collaborative library; management is looking to infuse AI into each of Tencent’s existing consumer products

Going to our consumer-facing AI products. We adopt a multimodal strategy to provide the best AI experience to our users, so we can leverage all available models to serve different user needs. We need this because different AI models are optimized for different capabilities, performance metrics and use cases and a combination of various models can handle complex tasks better than a single model…

…On the product front, our AI native application, Yuanbao, provides access to multiple models, including Chain of Thought reasoning models such as HunYuan T1 and DeepSeek R1 and fast-thinking model HunYuan Turbo S with the option of integrating web search results. Yuanbao search results can directly access high-quality proprietary content from Tencent ecosystem, such as official accounts and video accounts. By leveraging HunYuan’s multimodal capabilities, Yuanbao can process prompts in images, voice and documents in addition to text. Our cloud infrastructure supports stable and uncapped access to leading models. From February to March, Yuanbao’s DAU increased 20-fold to become the third highest AI native mobile application in China by DAU…

…We have also started testing AI features in Weixin to enhance user experience, such as for search, language input and content generation and we will be adding more AI features in Weixin going forward…

…We actually have a whole host of different consumer-facing applications and you should expect more to come. I think AI is actually in a very early stage. So it’s really hard to talk about what the eventual state would look like. But I would say, one, each product will continue to evolve into very useful and even more powerful products for users. So Yuanbao can be sort of a very strong AI native assistant and the Ema copilot could be your personal library and also a collaborative library for team collaborations. And Weixin can have many, many different features to come, right? And in addition to these products, I think our other products would have AI experiences, including QQ, including browser and other products. So I think we would see more and more AI — consumer AI-facing products. And at the same time, each one of the products will continue to evolve…

…Each one of our products would actually try to look for unique use cases in which they can leverage AI to provide a great user experience to their users…

…Yuanbao, well, right now, it is a chatbot and search. But over time, I think it would actually proliferate into a all-capable AI assistant with many different functionalities serving different types of people. So if — it would range from sort of students who want to learn and it would include all kinds of different people who, actually knowledge workers who want to complete their work and would sort of cover deep research, which allows people to very deep research into different topics.
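A rough sketch of what such multi-model routing can look like is below. The classifier heuristic and model labels are invented stand-ins (Tencent has not disclosed its routing logic); the fast/reasoning split simply mirrors the HunYuan Turbo S versus HunYuan T1/DeepSeek R1 distinction mentioned on the call.

```python
# Sketch of a multi-model strategy: default to a fast-thinking model, and
# escalate to a slower chain-of-thought reasoning model for complex tasks.
# The heuristic and model names here are invented for illustration.
def looks_complex(prompt: str) -> bool:
    """Crude stand-in for a real task classifier."""
    keywords = ("prove", "plan", "compare", "step by step", "research")
    return len(prompt) > 200 or any(k in prompt.lower() for k in keywords)

def call_model(model_name: str, prompt: str) -> str:
    # Placeholder for a real model API call.
    return f"[{model_name}] answer to: {prompt[:40]}..."

def route(prompt: str) -> str:
    model = "reasoning-model" if looks_complex(prompt) else "fast-model"
    return call_model(model, prompt)

print(route("What's the weather like?"))
print(route("Plan a three-city research trip and compare rail vs air, step by step."))
```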

Tencent’s management thinks that there are advantages to both developing Tencent’s own foundation models and using 3rd-party models

By investing in our own foundation models, we are able to fully leverage our proprietary data to tailor solutions to meet customized internal and customer needs, while at the same time, making use of external models allowed us to benefit from innovations across the industry.

Tencent’s management has been accelerating AI integration into Tencent’s cloud businesses, including its infrastructure as a service business, its platform as a service business, and its software as a service business; the AI-powered transcription and meeting summarisation functions in Tencent Meeting saw a year-on-year doubling in monthly active users to 15 million

We have been accelerating AI integration into our cloud business across our infrastructure, platform and Software as a Service solutions.

Through our Infrastructure as a Service solutions, enterprise customers can achieve high-performance AI training and inference capabilities at scale and developers can access and deploy mainstream foundation models.

For Platform as a Service, PaaS, our TI platform supports model fine-tuning and inference demands with flexibility, will provide powerful solutions supporting enterprise customers in customizing AI assistants using their own proprietary data and developers in generating mini programs and mobile applications through natural language prompts.

Our SaaS products increasingly benefit from AI-powered tools. Real-time transcription and meeting summarization functions in Tencent Meeting gained significant popularity resulting in monthly active users for these AI functions doubling year-on-year to 15 million. Tencent Docs also enhanced the user productivity and content generation and processing.

Tencent’s AI cloud revenue doubled in 2024, despite management having limited the availability of GPUs for cloud services in preference for internal use-cases in ad tech, foundation model training, and inference for Yuanbao and Weixin; management stepped up the purchase of GPUs in 2024 Q4 and expects the revenue growth of cloud services to accelerate as the new GPUs are deployed for external use cases; Tencent’s annual capital expenditure in 2024 increased more than 3x from a year ago to US$10.7 billion (roughly 12% of revenue) because of higher purchases of GPUs, with a notable uplift in 2024 Q4; management believes the step-up in capex in 2024 Q4 is to a new, higher steady state

In 2024, our AI cloud revenue approximately doubled year-on-year. Increased allocation of GPUs for internal use cases initially for ad tech and foundation model training and more recently on AI inference for Yuanbao and Weixin has limited our provision of GPUs to external clients and thus constrained our cloud services revenue growth. For external workloads, we have prioritized available GPUs towards high-value use cases and clients. Since the fourth quarter of 2024, we have stepped up our purchase of GPUs. And as we deploy these GPUs, we expect to accelerate the revenue growth of our overall cloud services…

…As the capabilities and benefits of AI become clearer, we have stepped up our AI investments to meet our internal business needs, train foundation models and support surging demand for inference we’re experiencing from our users. To consolidate our resources around this all important AI effort, we have reorganized our AI teams to sharpen focus on both fast product innovation and deep model research. Matching our stepped-up execution momentum and decision-making velocity, we increased annual CapEx more than threefold to USD 10.7 billion in 2024, equivalent to approximately 12% of our revenue with a notable uplift in fourth quarter of the year as we bought more GPUs for both inference needs as well as for our cloud services…

…We did step up CapEx to a new sort of higher steady state in the fourth quarter of last year…

…Part of the reason why you see such a big step up in terms of the CapEx in the fourth quarter is because we have a bunch of rush orders for GPUs for both inference as well as for our cloud service. And we would only be able to capture the large increase in terms of IaaS service demand when we actually install these GPUs into the data center, which would take some time. So I would say we probably have not really captured a lot of that during the first quarter. But over time, we will capture quite a bit of it with the arrival and installation of the GPUs.

Tencent’s management already sees positive returns for Tencent from their investment in AI; the positive returns come in 3 areas, namely, in advertising, in games, and in video and music services; in advertising, Tencent has been using AI to approve ad content more efficiently, improve ad targeting, streamline the ad creative process for advertisers, and deliver higher return on investment for advertisers; Tencent’s marketing services experienced revenue growth of 20% in 2024 because of AI integration, despite a challenging macro environment; in games, Tencent is using AI to improve content production efficiency and build in-game chat bots, among other uses; in video and music services, Tencent is using AI to improve productivity in content creation and effectively boost content discovery

We believe our investment in AI has already been generating positive returns for us…

…For advertising, we enhanced our advertising system with neural network AI capabilities since 2015. We rebuilt ad tech platform using large model capabilities since 2020, enabling long sequence user behavior analysis across multiple properties which resulted in increased user engagement and higher click-through rates. Since 2023, we have been adding large language model capabilities to facilitate more efficient approvals of ad content, to better understand merchandise categories and users commercial intent for more precise ad targeting and to provide generative AI tools for advertisers to streamline the ad creative process, leveraging AI-powered ad targeting capabilities and generative AI ad creative solutions. Our marketing services business is already a clear beneficiary of AI integration with revenue growth of 20% in 2024 amid challenging macro environment.

In games, we adopted machine learning technology in our PvP games since 2017. We leveraged AI in games to optimize matching experience, improve game balance and facilitate AI coaching for new players, empowering our evergreen games strategy. Our games business is now integrating large language model capabilities, enhanced 3D content production efficiency and to empower in-game chatbots.

For our video and music services, we’re leveraging AI to improve productivity in animation, live action video and music content creation. Our content recommendation algorithms are powered by AI and are proven effective in boosting content discovery. These initiatives enables us to better unlock the potential of our great content platforms…

…Across pretty much every industry we monitor, the AI enhancements we’re deploying are delivering superior return on investment for advertisers versus what they previously enjoyed and versus what’s available elsewhere.

Tencent’s management expects to further increase capital expenditure in 2025 and for capital expenditure to be a low-teens percentage of revenue for the year; while capital expenditure in 2025 is expected to increase, the rate of growth has slowed down significantly

We intend to further increase our capital expenditures in 2025 and expect our CapEx to account for low teens percentage of our revenue…

…[Question] You guided a CapEx to revenue ratio of low-teens for 2025, which is a similar ratio as for ’24. So basically, this guidance implies a significant slowdown of CapEx growth.

Tencent’s management sees several nuances on the impact to Tencent’s profit margins from the higher AI capital expenditures expected, but they are optimistic that Tencent will be able to protect its margins; the AI capital expenditures go into 4 main buckets, namely, (1) ad tech and games, (2) large language model training, (3) renting out GPUs in the cloud business, and (4) consumer-facing inference; management sees good margins in the 1st bucket, decent margins in the 3rd bucket, and potentially some margin pressure in the 4th bucket; but in the 4th bucket, management sees (1) the potential to monetise consumer-facing inference through a combination of advertising revenue and value-added services, and (2) avenues to reduce unit costs through software and better algorithms

[Question] As we step up the CapEx on AI, our margin will be inevitably dragged by additional depreciation and R&D expenses. Over the past few years, we have seen a meaningful increase in margin as we focus on high-quality growth. So going forward, how should we balance between growth and profitability improvement?

[Answer] It’s worth digging into exactly where that CapEx is going to understand whether the depreciation becomes a margin pressure or not. So the most immediate use of the CapEx is GPUs to support our ad tech and, to a lesser extent, our games businesses. And you can see from our results, and hear from what Martin talked about, that that CapEx actually generates good margins, high returns.

A second use of CapEx was GPUs for large language model training…

…Third, there’s CapEx related to our cloud business, where we buy these GPU servers, rent them out to customers, and generate a return. It may not be the highest-return business in our portfolio but, nonetheless, it’s a positive return. It covers the cost of the GPUs and, therefore, the attendant depreciation.

And then finally, where I think there is potentially short-term pressure is the CapEx for 2C [to-consumer] inference. That is an additional cost pressure, but we believe it’s a manageable cost pressure because that CapEx is a subset of the total CapEx. And we’re also optimistic that over time, the 2C inference activity that we’re generating, just like previous activity within different Tencent platforms, will be monetized through a combination of advertising revenue and value-added services. So overall, while we understand that you have questions around the step-up in CapEx and how that translates into profitability over time, we’re actually quite optimistic that we can continue to grow the business while protecting margins…

…In inference for consumer-facing products, there are actually a lot of avenues through which we can reduce the unit cost by technical means, by software and by better algorithms. So I think that’s also a factor to keep in mind.

Tencent’s management believes that the AI industry is now getting much higher productivity on large language model training from existing GPUs without needing to add additional GPUs at the pace previously expected, as a result of DeepSeek’s breakthroughs; previously, the belief was that each new generation of large language models would require an order of magnitude more GPUs; Tencent’s AI-related capital expenditure is the largest amongst Chinese technology companies; management thinks that Chinese technology companies are spending less on capital expenditure as a percentage of revenue than Western peers because Chinese companies have been prioritizing efficient utilization of GPUs without impairing the ultimate effectiveness of the AI technology developed

There was a period of time last year when there was a belief that every new generation of large language model required an order of magnitude more GPUs. That period of time ended with the breakthroughs that DeepSeek demonstrated. And now the industry, and we within the industry, are getting much higher productivity on large language model training from existing GPUs without needing to add additional GPUs at the pace previously expected…

…There was a period last year when people asked us if our CapEx was big enough relative to our China peers, relative to our global peers. And now, out of the listed companies, I think we had the largest CapEx of any China tech company in the fourth quarter. So we’re at the forefront among our China peers. In general, the China tech companies are spending less on CapEx as a percentage of revenue than some of their Western peers. But we have believed for some time that’s because the Chinese companies are generally prioritizing efficient utilization of the GPU servers. And that doesn’t necessarily impair the ultimate effectiveness of the technology that’s being developed. And I think DeepSeek’s success really symbolized, solidified and demonstrated that reality.

Tencent’s management thinks AI can benefit Tencent’s games business in 3 ways, namely, (1) a direct, more short-term benefit in helping game developers be more productive, (2) an indirect, more long-term benefit in terms of games becoming an important element of human expression in an AI-dominated world, and (3) an ability to make evergreen games even more evergreen

We do believe that games benefit in a direct and potentially a less direct way from AI technology enhancements. The direct way is the game developers using AI to assist them in creating more content more quickly and serving more users more effectively. And then the indirect way, which may be more of a multi-decade rather than the second half of this year story is that as humanity uses AI more broadly, then we think there’ll be more time and also more desire for high agency activities among people who are now empowered by AI. And so one of the best ways for them to express themselves in a high agency way rather than a passive way is through interactive entertainment, which is games…

…We actually felt AI would allow evergreen games to be more evergreen. And we are already seeing how AI can help us to execute and magnify our evergreen strategy. Part of it is within production, right, you can actually produce great content now within a shorter period of time so that you can keep updating the games with a higher frequency of high-quality content. And with the PvE experience, when you have smarter bots, right, you actually make the game more exciting and more like PvP. And within PvP, a lot of the matching and balancing and coaching of new users can actually be done in a much better way when you apply AI.

Tencent’s management sees strong competitive advantages that Tencent has when it comes to AI agents because of the large user base of Tencent’s products and the huge variety of activities that happen within Tencent’s products

We would be able to build stand-alone AI agents by leveraging models that are of great quality and, at the same time, by leveraging the fact that we have a lot of consumers on our different software platforms like our browser and, over time, Yuanbao. But at the same time, even within Weixin and within QQ, we can have AI agents. And the AI agents can actually leverage the ecosystem within the apps and provide really great service to our users by completing complex tasks, right? If you look at Weixin, for example, Weixin has got a lot of users, a very long user time per day, as well as a high frequency of users opening up the app; that’s one advantage. The second advantage is that the activities within Weixin are actually very, very diversified, right? It’s not just entertainment, it’s not just transactions, it’s actually social communication and content, and a lot of people conduct their work within Weixin, a lot of people conduct their learning within Weixin, and there are a lot of transactions that go through Weixin. And there’s a multitude of Mini Programs, which actually allow all sorts of different activities to be carried out, right? So if you look at the Mini Program ecosystem, we can easily build an agent based on a model that can connect to a lot of the different Mini Programs and have activities and complex tasks completed for our users. So I think those are all very distinctive advantages that we have.

Tencent’s management believes that AI search will eventually replace traditional search

At a high level, if we look at the history of web search subsuming web directory, and if we look at our own behavior with AI prompts vis-a-vis traditional search, I think it’s possible that AI search will subsume traditional search because, ultimately, web directory, traditional search and AI prompt all represent mechanisms for accessing the Internet’s knowledge graph.

Tencent’s management believes that in China, AI chatbots will be monetised first through performance advertising followed by value-added services, as opposed to in the West, where AI chatbots have been monetised first through subscription models followed by performance advertising

In terms of how the AI prompt will be monetized, time will tell but I think that we can already see in the Western world, the first monetization is through subscription models and then over time, performance advertising will follow. I think in China, it will start with performance advertising and then value-added services will follow.

Veeva Systems (NYSE: VEEV)

Veeva’s management’s AI strategy for Veeva is to have its Commercial Cloud, Development Cloud, and Quality Cloud be the life sciences industry’s standard core systems of record; management is making Veeva’s data readily available for the development of AI applications by Veeva and 3rd parties through the Direct Data API released in 2024; management is seeing good uptake on the Direct Data API; the Direct Data API will be free to all of Veeva’s customers because management wants people to be building on the API; management found a way to offer the Direct Data API with fewer compute resources than originally planned for; Veeva is already using the Direct Data API internally, and more than 10 customers are already using it; it takes time for developers to get used to the Direct Data API, because it’s a fundamentally new type of API, but it’s a great API; management believes that Direct Data API will enable the life sciences industry to leverage their core data through AI faster than any other industry

We also executed well on our AI strategy. Commercial Cloud, Development Cloud, and Quality Cloud are becoming the industry’s standard core systems of record. With significant technology innovation including the Direct Data API released this year, we are making the data from our applications readily available for the development of relevant, timely AI solutions built by Veeva, our customers, and partners…

…We are seeing good uptake of the Direct Data API. And, as you mentioned, we recently announced that that’s going to be free to all of our customers. The reason there is we want everybody building on that API. It’s just a much better, faster API for many use cases, and we found a way to do it where it was not going to consume as many compute resources as we thought it was…

…We are using it internally, for example, for connecting different parts of our clinical suite and different parts of our safety suite together, and our partners are starting to do it. We have more than 10 customers that are already doing it. Some of them are large customers. It takes some time because it’s a different paradigm for integration. People have been using a hammer for a long time. And now you’re giving them a jackhammer and they have to learn how to use it. But we are super enthused. It’s a fundamentally new type of API where you can get all of the data out of your Vault super quickly…

…I’m really pleased about what we’re doing for the life sciences industry because many of our core systems are Veeva, and now their core systems are going to be enabled with this fundamentally new API that’s going to allow them to leverage their core data faster than any other industry.

Reminder from management that Veeva recently announced 3 new AI solutions, namely, Vault CRM Bot, CRM Voice Control, and MLR Bot; management has more AI solutions in the pipeline, but the timing for release is still unclear; management wants to invest more in AI solutions and they think the company has strong leadership in that area

We announced new Veeva AI Solutions including Vault CRM Bot, CRM Voice Control, and MLR Bot…

…Now with CRM Voice Control that we’ll be bringing out this year, and also CRM Bot and the MLR Bot, medical legal regulatory review, and we have quite a few others in the plan, too. We don’t know exactly which ones we’ll bring out when, but we’re putting more investment in AI solutions. We centralized the group around that so we can develop more core competency around AI, and I have a strong leader there.

Veeva’s management was initially a little skeptical of AI because of the amount of money flowing in, and the amount of hype surrounding it

[Question] I want to start with the AI offerings that you’ve built out Peter, maybe if you were on the tape back a year, there was a little bit of a perception from the investment community. That you were coming off as maybe a little bit skeptical on AI, but now you’ve come out with a lot of these products. Maybe can you walk us through kind of what’s driven the desire or the momentum to push out these products kind of quickly?

[Answer] AI is certainly captivating technology, right? So much money going into it, so much progress, and so much hype.

Veeva’s management thinks AI is shaking out the way they expected it to, with the existence of multiple large language models; management also thinks the development of AI has become more stable

If we just stay at that level, I’m really pleased that things are starting to shake out roughly how we thought they were going to shake out. There’s not going to be one large language model, there are going to be multiple. There’s not going to be 50, but there’s going to be a good handful and they’re going to specialize in different areas. And it’s not so unstable anymore, where you wake up and everything changes, right? DeepSeek came out. Yes, well, guess what? The world keeps turning. NVIDIA is going to have their own model? That’s okay, and the world keeps turning. So I think it’s starting to settle out.

Veeva’s management sees the infrastructure layer of AI as being really valuable, but they also see a lot of value in building specific use cases on top of the infrastructure layer, and that is where they want Veeva to play

So it’s settling out that these core large language models are going to be at the platform level and that’s super valuable, right? That’s not where companies like Veeva play in, that core infrastructure level. It’s very valuable. But there’s a lot of great value on specific use cases on top that can be used in the workflow. So that’s what we’re doing now, focusing on our AI solutions.

Veeva’s management is using AI internally but it’s still early days and it has yet to contribute improvements to Veeva’s margin; Veeva’s expected margin improvements for 2025 (FY2026) are not related to AI usage

[Question] Going back to the topic of AI… how you’re leaning into kind of internal utilization too, if we think about kind of some of the margin strength you’re delivering throughout the business?

[Answer] Around the internal use of AI and the extent to which that was contributing to margins, I think. And I think the short answer there is it’s an area that we’re really excited about internally as well. We’re building strategies around, but it’s not a major contributor to the margin expansion that we saw in Q4 or in the coming year. So it’s something we’re looking into. We’re building strategies around. It’s not something we’re counting on, though, to deliver on this year’s guidance.

In 2023 and 2024, Veeva’s management was seeing customers get distracted from core technology spending because the customers were chasing AI; management is no longer seeing the AI distraction at play

I believe we called it AI disruption before; maybe that was 18 months or a year ago. I think that’s largely behind us. Our customers have settled into what AI is and what it does. They’re still doing some innovation projects, but it’s not consuming them or distracting from the core work. So I think we’re largely through that period of AI distraction now.

Veeva’s management thinks that Veeva is the fastest path to AI for a life sciences industry CRM because any AI features will have to be embedded in the workflow of a life sciences company

It turns out Veeva is the fastest path to AI that you can use in CRM because it has to be done in the workflow of what you’re doing. This is not some generic AI. This is AI for pre-call planning for compliance, for how to — for the things that a pharmaceutical rep does in a compliant way based on the data sources that are needed in CRM. So Veeva is the fastest path to AI.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Meituan, MongoDB, Okta, Tencent, and Veeva Systems. Holdings are subject to change at any time.

Is There Something Wrong With Pinduoduo’s Asset-Light Business Model?

The company’s gross property, plant, and equipment is barely sufficient to buy a laptop for each employee

Chinese e-commerce company Pinduoduo has experienced explosive growth in the past few years, with its revenue growing at a breathtaking pace of 142% per year, from RMB 505 million in 2016 to RMB 247 billion in 2023. Profitability has also increased markedly over the same period, rising from a loss of RMB 322 million to a profit of RMB 60 billion. What’s even more impressive is that Pinduoduo has achieved this while being asset-light. The company ended 2023 with total assets of merely RMB 348 billion, which equates to a remarkable return on assets (profit as a percentage of total assets) of 17%.
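For readers who want to check the arithmetic, here is the calculation behind the 142% growth rate and 17% return on assets, using only the numbers quoted in the paragraph above:

```python
# Recomputing the growth and return-on-assets figures cited above from the
# revenue, profit, and total-asset numbers quoted in the paragraph.

rev_2016_rmb_mn = 505
rev_2023_rmb_mn = 247_000
profit_2023_rmb_mn = 60_000
assets_2023_rmb_mn = 348_000

years = 2023 - 2016
cagr = (rev_2023_rmb_mn / rev_2016_rmb_mn) ** (1 / years) - 1
roa = profit_2023_rmb_mn / assets_2023_rmb_mn

print(f"Revenue CAGR, 2016-2023: {cagr:.0%}")   # ~142%
print(f"2023 return on assets: {roa:.0%}")      # ~17%
```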

But I noticed two odd things about Pinduoduo’s asset-light nature as I dug deeper into the numbers. Firstly, Pinduoduo’s gross property, plant, and equipment per employee in 2023 was miles ahead of other large Chinese technology companies such as Alibaba, Meituan, and Tencent. This is shown in Table 1.

Table 1; Source: Company annual reports

Secondly – and the more important oddity here – was that Pinduoduo’s gross property, plant, and equipment per employee in 2017 was merely RMB 10,591, or RMB 0.01 million (gross property, plant, and equipment of RMB 12.3 million, and total employees of 1,159). According to ChatGPT, a professional laptop cost at least RMB 8,000 in China back in 2017, meaning to say, Pinduoduo’s gross property, plant, and equipment in that year was barely sufficient to purchase just a professional laptop for each employee. 
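The per-employee arithmetic is simple enough to lay out explicitly. Note that the RMB 8,000 laptop price is the ChatGPT estimate cited above, not a verified market figure, and the small gap versus the RMB 10,591 quoted earlier comes from rounding in the RMB 12.3 million PP&E figure:

```python
# Pinduoduo's 2017 gross PP&E per employee, from the numbers quoted above.
# The laptop price is the ChatGPT estimate cited in the text.

gross_ppe_2017_rmb = 12_300_000   # gross property, plant, and equipment
employees_2017 = 1_159
laptop_price_rmb = 8_000          # estimated professional laptop, 2017

ppe_per_employee = gross_ppe_2017_rmb / employees_2017
print(f"Gross PP&E per employee: ~RMB {ppe_per_employee:,.0f}")                        # ~RMB 10,600
print(f"One laptop would consume {laptop_price_rmb / ppe_per_employee:.0%} of that")   # ~75%
```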

I’m not saying that something nefarious is definitely happening at Pinduoduo. But with the numbers above, I wonder if there’s something wrong with Pinduoduo’s purportedly asset-light business model. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Meituan and Tencent. Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI (2024 Q4) – Part 2

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2024 Q4 earnings season.

The way I see it, artificial intelligence (or AI), really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

With the latest earnings season for the US stock market – for the fourth quarter of 2024 – coming to its tail-end, I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series, and the older commentary can be found in my earlier articles.

I’ve split the latest commentary into two parts for the sake of brevity. This is Part 2, and you can find Part 1 here. With that, I’ll let the management teams take the stand…

Microsoft (NASDAQ: MSFT)

Microsoft’s management is seeing enterprises move to enterprise-wide AI deployments 

Enterprises are beginning to move from proof of concepts to enterprise-wide deployments to unlock the full ROI of AI. 

Microsoft’s AI business has surpassed an annual revenue run rate of $13 billion, up 175% year-on-year; Microsoft’s AI business did better than expected because of Azure and Microsoft Copilot (within Copilot, price per seat was quite good and management still sees a good signal for value)

Our AI business has now surpassed an annual revenue run rate of $13 billion, up 175% year-over-year…

…[Question] Can you give more color on what drove the far larger-than-expected Microsoft AI revenue? We talked a bit about the Azure AI component of it. But can you give more color on that? And our estimates are that the Copilot was much bigger than we had expected and growing much faster. Any more details on the breakdown of what that Microsoft AI beat would be great.

[Answer] A couple of pieces to that, which you correctly identified, number one is the Azure component we just talked about. And the second piece, you’re right, Microsoft Copilot was better. And what was important about that, we saw strength both in seats, both new seats and expansion seats, as Satya talked about. And usage doesn’t directly impact revenue, but of course, indirectly does as people get more and more value added. And also price per seat was actually quite good. We still have a good signal for value.

Microsoft’s management is seeing AI scaling laws continue to show up in both pre-training and inference-time compute, and both phenomena have been observed internally at Microsoft for years; management has seen gains of 2x in price performance for each new hardware generation, and 10x for each new model generation

AI scaling laws continue to compound across both pretraining and inference time compute. We ourselves have been seeing significant efficiency gains in both training and inference for years now. On inference, we have typically seen more than 2x price performance gain for every hardware generation and more than 10x for every model generation due to software optimizations. 
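To make the compounding concrete, here is a minimal sketch of how those per-generation gains stack. The two-generations-each assumption is mine, purely for illustration; only the 2x and 10x multipliers come from the quote above:

```python
# How the quoted per-generation gains compound. The generation counts are
# illustrative assumptions; the 2x and 10x multipliers are the quoted floors.

hw_gain = 2       # ">2x price performance" per hardware generation
model_gain = 10   # ">10x" per model generation (software optimizations)

hw_gens = 2       # assumed number of hardware generations
model_gens = 2    # assumed number of model generations

total = hw_gain ** hw_gens * model_gain ** model_gens
print(f"Implied price-performance gain: >{total}x")                        # >400x
print(f"Implied cost per unit of inference: <{1 / total:.4f} of the start")
```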

Microsoft’s management is balancing across training and inference in the buildout of Microsoft’s AI capacity; the buildout going forward will be governed by revenue growth and capability growth; Microsoft’s Azure data center capacity is expanding in line with both near-term and long-term demand signals; Azure has more than doubled its capacity in the last 3 years, and added a record amount of capacity in 2024; Microsoft’s data centres uses both in-house as well as 3rd-party chips

Much as we have done with the commercial cloud, we are focused on continuously scaling our fleet globally and maintaining the right balance across training and inference as well as geo distribution. From now on, it’s a more continuous cycle governed by both revenue growth and capability growth thanks to the compounding effects of software-driven AI scaling laws and Moore’s Law…

…Azure is the infrastructure layer for AI. We continue to expand our data center capacity in line with both near-term and long-term demand signals. We have more than doubled our overall data center capacity in the last 3 years, and we have added more capacity last year than any other year in our history. Our data centers, networks, racks and silicon are all coming together as a complete system to drive new efficiencies to power both the cloud workloads of today and the next-generation AI workloads. We continue to take advantage of Moore’s Law and refresh our fleet as evidenced by our support of the latest from AMD, Intel, NVIDIA, as well as our first-party silicon innovation from Maia, Cobalt, Boost and HSM.

Microsoft’s management is seeing growth in raw storage, database services, and app platform services as AI apps scale, with an example being Azure OpenAI apps that run on Azure databases and Azure App Services

We are seeing new AI-driven data patterns emerge. If you look underneath ChatGPT or Copilot or enterprise AI apps, you see the growth of raw storage, database services and app platform services as these workloads scale. The number of Azure OpenAI apps running on Azure databases and Azure App Services more than doubled year-over-year, driving significant growth in adoption across SQL, Hyperscale and Cosmos DB.

OpenAI has made a new large Azure commitment; OpenAI’s APIs run exclusively on Azure; management is still very happy with the OpenAI partnership; Microsoft has ROFR (right of first refusal) on OpenAI’s Stargate project

As we shared last week, we are thrilled OpenAI has made a new large Azure commitment…

… And with OpenAI’s APIs exclusively running on Azure, customers can count on us to get access to the world’s leading models…

…[Question] I wanted to ask you about the Stargate news and the announced changes in the OpenAI relationship last week. It seems that most of your investors have interpreted this as Microsoft, for sure, remaining very committed to OpenAI’s success, but electing to take more of a backseat in terms of funding OpenAI’s future training CapEx needs. I was hoping you might frame your strategic decision here around Stargate.

[Answer] We remain very happy with the partnership with OpenAI. And as you saw, they have committed in a big way to Azure. And even in the bookings, what we recognized is just the first tranche of it. And so you’ll see, given the ROFR we have, more benefits of that even into the future. 

Microsoft’s management thinks Azure AI Foundry has best-in-class tooling and runtimes for users to build AI agents and access thousands of AI models; Azure AI Foundry already has 200,000 monthly active users after just 2 months; the models available on Azure AI Foundry include DeepSeek’s R1 model, and more than 30 industry-specific models from partners; Microsoft’s Phi family of SLMs (small language models) has over 20 million downloads

Azure AI Foundry features best-in-class tooling and runtimes to build agents, multi-agent apps, AIOps, and API access to thousands of models. Two months in, we already have more than 200,000 monthly active users, and we are well positioned with our support of both OpenAI’s leading models and the best selection of open source models and SLMs. DeepSeek’s R1 launched today via the model catalog on Foundry and GitHub with automated red teaming, content safety integration and security scanning. Our Phi family of SLMs has now been downloaded over 20 million times. And we also have more than 30 models from partners like Bayer, PAYG AI, Rockwell Automation, Siemens to address industry-specific use cases.

Microsoft’s management thinks Microsoft 365 Copilot is the UI (user interface) for AI; management is seeing accelerated adoption of Microsoft 365 Copilot across all deal sizes; the majority of Microsoft 365 Copilot customers purchase more seats over time; daily users of Copilot more than doubled sequentially in 2024 Q4, while usage intensity grew 60% sequentially; more than 160,000 organisations have used Copilot Studio, creating more than 400,000 custom agents in 2024 Q4, up 2x sequentially; Microsoft’s data cloud drives Copilot as the UI for AI; management is seeing Copilot plus AI agents disrupting business applications; the initial seats for Copilot were for departments that could see immediate productivity benefits, but the use of Copilot then spreads across the enterprise

Microsoft 365 Copilot is the UI for AI. It helps supercharge employee productivity and provides access to a swarm of intelligent agents to streamline employee workflow. We are seeing accelerated customer adoption across all deal sizes as we win new Microsoft 365 Copilot customers and see the majority of existing enterprise customers come back to purchase more seats. When you look at customers who purchased Copilot during the first quarter of availability, they have expanded their seats collectively by more than 10x over the past 18 months. To share just one example, Novartis has added thousands of seats each quarter over the past year and now has 40,000 seats. Barclays, Carrier Group, Pearson and University of Miami all purchased 10,000 or more seats this quarter. And overall, the number of people who use Copilot daily, again, more than doubled quarter-over-quarter. Employees are also engaging with Copilot more than ever. Usage intensity increased more than 60% quarter-over-quarter and we are expanding our TAM with Copilot Chat, which was announced earlier this month. Copilot Chat, along with Copilot Studio, is now available to every employee to start using agents right in the flow of work…

…More than 160,000 organizations have already used Copilot Studio, and they collectively created more than 400,000 custom agents in the last 3 months alone, up over 2x quarter-over-quarter…

…What is driving Copilot as the UI for AI as well as our momentum with agents is our rich data cloud, which is the world’s largest source of organizational knowledge. Billions of e-mails, documents and chats, hundreds of millions of Teams meetings and millions of SharePoint sites are added each day. This is the enterprise knowledge cloud. It is growing fast, up over 25% year-over-year…

…What we are seeing is Copilot plus agents disrupting business applications, and we are leaning into this. With Dynamics 365, we took share as organizations like Ecolab, Lenovo, RTX, TotalEnergies and Wyzant switched to our AI-powered apps from legacy providers…

…[Question] Great to hear about the strength you’re seeing in Copilot… Would love to get some color on just the common use cases that you’re seeing that give you that confidence that, that will ramp into monetization later.

[Answer] I think the initial set of seats were for places where there’s more belief in immediate productivity, such as a sales team, finance or supply chain, where there is a lot of, for example, SharePoint-grounded data that you want to be able to use in conjunction with web data and have it produce results that are beneficial. But then what’s happening, very much like what we have seen in previous generations of productivity tools, is that people collaborate across functions, across roles, right? For example, even in my own daily habit: I go to chat, I use the Work tab and get results, and then I immediately share using Pages with colleagues. I call it think with AI and work with people. And that pattern then requires you to make it more of a standard issue across the enterprise. And so that’s what we’re seeing.
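The “10x over 18 months” seat expansion quoted above compounds to a steep quarterly growth rate. A quick sketch, treating the 10x as exact and the expansion as smooth (both simplifying assumptions of mine):

```python
# Implied quarterly seat growth from "expanded their seats collectively by
# more than 10x over the past 18 months". Assumes smooth compounding and
# treats 10x as exact, so this is illustrative only.

expansion = 10          # ">10x", so a floor
quarters = 18 / 3       # 18 months = 6 quarters

qoq_growth = expansion ** (1 / quarters) - 1
print(f"Implied seat growth: ~{qoq_growth:.0%} per quarter")   # ~47%
```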

Azure grew revenue by 31% in 2024 Q4 (was 33% in 2024 Q3), with 13 points of growth from AI services (was 12 points in 2024 Q3); Azure AI services revenue was up 157% year-on-year, with demand continuing to be higher than capacity; Azure’s non-AI business had weaker-than-expected growth because of go-to-market execution challenges

Azure and other cloud services revenue grew 31%. Azure growth included 13 points from AI services, which grew 157% year-over-year, and was ahead of expectations even as demand continued to be higher than our available capacity. Growth in our non-AI services was slightly lower than expected due to go-to-market execution challenges, particularly with our customers that we primarily reach through our scale motions as we balance driving near-term non-AI consumption with AI growth.
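The two disclosed figures, 13 points of AI contribution and 157% AI growth, are enough to back out an implied split of the Azure business. This is inferred arithmetic on my part, not a disclosed breakdown:

```python
# Backing out the implied Azure split from the quote: 31% total growth,
# 13 points contributed by AI services growing 157%. Inferred, not disclosed.

total_growth = 0.31
ai_contribution = 0.13   # percentage points of the 31%
ai_growth = 1.57

ai_share_of_base = ai_contribution / ai_growth                          # year-ago AI share
non_ai_growth = (total_growth - ai_contribution) / (1 - ai_share_of_base)

print(f"AI share of year-ago Azure revenue: ~{ai_share_of_base:.1%}")   # ~8.3%
print(f"Implied non-AI Azure growth: ~{non_ai_growth:.1%}")             # ~19.6%
```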

For Azure’s expected growth of 31%-32% in 2025 Q1 (FY2025 Q3), management expects  contribution from AI services to grow from increased AI capacity coming online; management expects Azure’s non-AI services to still post healthy growth, but there are still impacts from execution challenges; management expects Azure to no longer be capacity-constrained by the end of FY2025 (2025 Q2); Azure’s capacity constraint has been in power and space

In Azure, we expect Q3 revenue growth to be between 31% and 32% in constant currency driven by strong demand for our portfolio of services. As we shared in October, the contribution from our AI services will grow from increased AI capacity coming online. In non-AI services, healthy growth continues, although we expect ongoing impact through H2 as we work to address the execution challenges noted earlier. And while we expect to be AI capacity constrained in Q3, by the end of FY ’25, we should be roughly in line with near-term demand given our significant capital investments…

…When I talk about being capacity constrained, it takes two things. You have to have space, which I generally call long-lived assets, right? That’s the infrastructure and the land and then you have to have kits. We’re continuing, and you’ve seen that’s why our spend has pivoted this way, to be in the long-lived investment. We have been short power and space. And so as you see those investments land that we’ve made over the past 3 years, we get closer to that balance by the end of this year.

More than half of Microsoft’s cloud and AI-related capex in 2024 Q4 (FY2025 Q2) was for long-lived assets that will support monetisation over the next 15 years and more, while the other half was for CPUs and GPUs; management expects Microsoft’s capex in 2025 Q1 (FY2025 Q3) and 2025 Q2 (FY2025 Q4) to be at similar levels as 2024 Q4 (FY2025 Q2); FY2026’s capex will grow at a lower rate than in FY2025; the mix of spend will begin to shift back to short-lived assets in FY2026; Microsoft’s long-lived infrastructure investments are fungible; the long-lived assets are infrastructure and land; the presence of Moore’s Law means that management does not want to invest too much capex in any one year because the hardware and software will become much better in just 1 year; management thinks Microsoft’s AI infrastructure should be continuously upgraded to take advantage of Moore’s Law; Microsoft’s AI capex growth going forward will be tagged to customer contract delivery; the fungibility of Microsoft’s AI infrastructure investments relates to not just inference (which is the primary use case), but also training, post-training, and running the commercial cloud business

More than half of our cloud and AI-related spend was on long-lived assets that will support monetization over the next 15 years and beyond. The remaining cloud and AI spend was primarily for servers, both CPUs and GPUs, to serve customers based on demand signals, including our customer contracted backlog…

…Next, capital expenditures. We expect quarterly spend in Q3 and Q4 to remain at similar levels as our Q2 spend. In FY ’26, we expect to continue investing against strong demand signals, including customer contracted backlog we need to deliver against across the entirety of our Microsoft Cloud. However, the growth rate will be lower than FY ’25 and the mix of spend will begin to shift back to short-lived assets, which are more correlated to revenue growth. As a reminder, our long-lived infrastructure investments are fungible, enabling us to remain agile as we meet customer demand globally across our Microsoft Cloud, including AI workloads…

…When I talk about being capacity constrained, it takes two things. You have to have space, which I generally call long-lived assets, right? That’s the infrastructure and the land and then you have to have kits. We’re continuing, and you’ve seen that’s why our spend has pivoted this way, to be in the long-lived investment. We have been short power and space. And so as you see those investments land that we’ve made over the past 3 years, we get closer to that balance by the end of this year…

…You don’t want to buy too much of anything at one time because, in Moore’s Law, every year is going to give you 2x, your optimization is going to give you 10x. You want to continuously upgrade the fleet, modernize the fleet, age the fleet and, at the end of the day, have the right ratio of monetization and demand-driven monetization to what you think of as the training expense…

…I do think the way I want everyone to internalize it is that the CapEx growth is going through that cycle pivot, which is far more correlated to customer contract delivery, no matter who the end customer is…

…The other thing that’s sometimes missing is when we say fungible, we mean not just the primary use, which we’ve always talked about, which is inference. But there is some training, post-training, which is a key component. And then there’s just running the commercial cloud, which at every layer and every modern AI app that’s going to be built will be required. It will be required to be distributed, and it will be required to be global. And all of those things are really important because it then means you’re the most efficient. And so the investment you see us make in CapEx, you’re right, the front end has been this sort of infrastructure build that lets us really catch up not just on the AI infrastructure we needed, but think about that as the building itself, data centers, but also some of the catch-up we need to do on the commercial cloud side. And then you’ll see the pivot to more CPU and GPU.
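The long-lived versus short-lived distinction matters because of how depreciation flows into margins. Here is a rough sketch under assumed inputs: the 50% split and 15-year life follow the quotes above, while the USD 80 billion annual spend and 6-year server life are my own illustrative assumptions (the latter roughly in line with Microsoft’s disclosed server useful life):

```python
# Why the asset mix matters: straight-line depreciation from one year of
# spend under assumed inputs. The 50% split and 15-year life follow the
# quotes; the USD 80B spend and 6-year server life are assumptions.

annual_capex_bn = 80.0
long_lived_share = 0.5          # "more than half", so treat 50% as a floor
long_lived_life_years = 15      # "next 15 years and beyond"
server_life_years = 6           # assumed useful life for CPUs/GPUs

long_lived = annual_capex_bn * long_lived_share
servers = annual_capex_bn - long_lived

annual_dep_bn = long_lived / long_lived_life_years + servers / server_life_years
print(f"Annual depreciation from that year of spend: ~USD {annual_dep_bn:.1f}B")  # ~9.3
```

The point of the sketch: shifting a dollar of capex from servers to long-lived assets cuts its annual depreciation charge by more than half, which is why the mix shift management describes matters for margins.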

Microsoft’s management thinks DeepSeek had real innovations, but those are going to be commoditized and become broadly used; management thinks that innovations in AI that reduce the cost of inference will drive more consumption and more apps being developed, and make AI more ubiquitous, which are all positive forces for Microsoft

I think DeepSeek has had some real innovations. And that is some of the things that even OpenAI found in o1. And so now that all gets commoditized and it’s going to get broadly used. And the big beneficiaries of any software cycle like that are the customers, right? Because at the end of the day, if you think about it, what was the big lesson learned from client-server to cloud? More people bought servers, except it was called cloud. And so when token prices fall, inference computing prices fall, that means people can consume more, and there will be more apps written. And it’s interesting to see that when I reference these models that are pretty powerful, it’s unimaginable to think that here we are at sort of the beginning of ’25, where on the PC you can run a model that required pretty massive cloud infrastructure. So that type of optimization means AI will be much more ubiquitous. And so therefore, for a hyperscaler like us, a PC platform provider like us, this is all good news as far as I’m concerned.

Microsoft has been reducing prices of GPT models over the years through inference optimizations

We are working super hard on all the software optimizations, right? I mean, just not the software optimizations that come because of what DeepSeek has done, but all the work we have done to, for example, reduce the prices of GPT models over the years in partnership with OpenAI. In fact, we did a lot of the work on the inference optimizations on it, and that’s been key to driving, right?

Microsoft’s management is aware that launching a frontier AI model that is too expensive to serve is useless

One of the key things to note in AI is you just don’t launch the frontier model, but if it’s too expensive to serve, it’s no good, right? It won’t generate any demand.

Microsoft’s management is seeing many different AI models being used for any one application; management thinks that there will always be a combination of different models used in any one application

What you’re seeing is effectively lots of models that get used in any application, right? When you look underneath even a Copilot or a GitHub Copilot or what have you, you already see lots of many different models. You build models. You fine-tune models. You distill models. Some of them are models that you distill into an open source model. So there’s going to be a combination…

…There’s a temporality to it, right? What you start with as a given COGS profile doesn’t need to be the end because you continuously optimize for latency and COGS and putting in different models.

NVIDIA (NASDAQ: NVDA)

NVIDIA’s Data Center revenue again had incredibly strong growth in 2024 Q4, driven by demand for the Hopper GPU computing platform and the ramping of the Blackwell GPU platform 

In the fourth quarter, Data Center revenue of $35.6 billion was a record, up 16% sequentially and 93% year-on-year, as the Blackwell ramp commenced and Hopper 200 continued sequential growth. 
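Those growth rates are enough to back out rough base figures. A quick check, treating the quoted percentages as exact:

```python
# Backing out prior-period Data Center revenue from the quoted growth rates.
# Treats "16%" and "93%" as exact, so outputs are approximate.

q4_revenue_bn = 35.6
seq_growth = 0.16
yoy_growth = 0.93

prior_quarter_bn = q4_revenue_bn / (1 + seq_growth)
year_ago_bn = q4_revenue_bn / (1 + yoy_growth)

print(f"Implied prior-quarter Data Center revenue: ~USD {prior_quarter_bn:.1f}B")  # ~30.7
print(f"Implied year-ago Data Center revenue: ~USD {year_ago_bn:.1f}B")            # ~18.4
```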

Blackwell’s sales exceeded management’s expectations and is the fastest product ramp in NVIDIA’s history; it is common for Blackwell clusters to start with 100,000 GPUs or more and NVIDIA has started shipping for multiple such clusters; management architected Blackwell for inference; Blackwell has 25x higher token throughput and 20x lower cost for AI reasoning models compared to the Hopper 100; Blackwell has a NVLink domain that handles the growing complexity of inference at scale; management is seeing great demand for Blackwell for inference, with many of the early GB200 (GB200 is based on the Blackwell family of GPUs) deployments earmarked for inference; management expects NVIDIA’s gross margin to decline slightly initially as the Blackwell family ramps, before rebounding; management expects a significant ramp of Blackwell in 2025 Q1; the Blackwell Ultra, the next generation of GPUs within the Blackwell family, is slated for introduction in 2025 H2; the system architecture between Blackwell and Blackwell Ultra is exactly the same

In Q4, Blackwell sales exceeded our expectations. We delivered $11 billion of Blackwell revenue to meet strong demand. This is the fastest product ramp in our company’s history, unprecedented in its speed and scale…

…With Blackwell, it will be common for these clusters to start with 100,000 GPUs or more. Shipments have already started for multiple infrastructures of this size…

…Blackwell was architected for reasoning AI inference. Blackwell supercharges reasoning AI models with up to 25x higher token throughput and 20x lower cost versus Hopper 100. Its revolutionary transformer engine is built for LLM and mixture of experts inference. And its NVLink domain delivers 14x the throughput of PCIe Gen 5, ensuring the response time, throughput and cost efficiency needed to tackle the growing complexity of inference at scale…

…Blackwell has great demand for inference. Many of the early GB200 deployments are earmarked for inference, a first for a new architecture…

…As Blackwell ramps, we expect gross margins to be in the low 70s. Initially, we are focused on expediting the manufacturing of Blackwell systems to meet strong customer demand as they race to build out Blackwell infrastructure. When fully ramped, we have many opportunities to improve the cost and gross margin will improve and return to the mid-70s, late this fiscal year…

…Continuing with its strong demand, we expect a significant ramp of Blackwell in Q1…

…Blackwell Ultra is second half…

…The next train is on an annual rhythm and Blackwell Ultra with new networking, new memories and of course, new processors, and all of that is coming online…

…This time between Blackwell and Blackwell Ultra, the system architecture is exactly the same. It’s a lot harder going from Hopper to Blackwell because we went from an NVLink 8 system to a NVLink 72-based system. So the chassis, the architecture of the system, the hardware, the power delivery, all of that had to change. This was quite a challenging transition. But the next transition will slot right in. Blackwell Ultra will slot right in.

NVIDIA’s management sees post-training and model customisation has demanding orders of magnitude more compute than pre-training

The scale of post-training and model customization is massive and can collectively demand orders of magnitude more compute than pretraining.

NVIDIA’s management is seeing accelerating demand for NVIDIA GPUs for inference, driven by test-time scaling and new reasoning models; management thinks reasoning models require 100x more compute per task than one-shot inference models; management is hopeful that future generations of reasoning models will require millions of times more compute; management is seeing that the vast majority of NVIDIA’s compute today is inference

Our inference demand is accelerating, driven by test-time scaling and new reasoning models like OpenAI o3, DeepSeek-R1 and Grok 3. Long thinking reasoning AI can require 100x more compute per task compared to one-shot inferences…

…The amount of tokens generated, the amount of inference compute needed, is already 100x more than the one-shot examples and the one-shot capabilities of large language models in the beginning. And that’s just the beginning. The idea that the next generation could require thousands of times more, and even, hopefully, extremely thoughtful and simulation-based and search-based models that could require hundreds of thousands or millions of times more compute than today, is in our future…

…The vast majority of our compute today is actually inference and Blackwell takes all of that to a new level.

Companies such as ServiceNow, Perplexity, Microsoft, and Meta are using NVIDIA’s software and GPUs to achieve lower costs and/or better performance with their inference workloads

ServiceNow tripled inference throughput and cut costs by 66% using NVIDIA TensorRT for its screenshot feature. Perplexity sees 435 million monthly queries and reduced its inference costs 3x with NVIDIA Triton Inference Server and TensorRT-LLM. Microsoft Bing achieved a 5x speed up at major TCO savings for Visual Search across billions of images with NVIDIA TensorRT and acceleration libraries…

…Meta’s cutting-edge Andromeda advertising engine runs on NVIDIA’s Grace Hopper Superchip serving vast quantities of ads across Instagram, Facebook applications. Andromeda harnesses Grace Hopper’s fast interconnect and large memory to boost inference throughput by 3x, enhanced ad personalization and deliver meaningful jumps in monetization and ROI.

NVIDIA has driven a 200x reduction in inference costs in the last 2 years

We’ve driven a 200x reduction in inference costs in just the last 2 years.
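Pairing this 200x figure with the earlier claim that reasoning models need roughly 100x more compute per task gives a useful sanity check: on those two quoted numbers alone, a reasoning task today would cost about half of what a one-shot answer cost two years ago. Purely illustrative:

```python
# Combining two quoted figures: ~100x more compute per reasoning task, and
# a ~200x reduction in inference cost over two years. Illustrative only.

compute_multiple = 100   # reasoning vs one-shot, per task
cost_reduction = 200     # inference cost reduction over 2 years

relative_cost = compute_multiple / cost_reduction
print(f"Reasoning task today vs one-shot two years ago: ~{relative_cost:.1f}x the cost")  # ~0.5x
```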

Large cloud service providers (CSPs) were half of NVIDIA’s Data Centre revenue in 2024 Q4, and up nearly 2x year-on-year; large CSPs were the first to stand up Blackwell systems

In Q4, large CSPs represented about half of our data center revenue, and these sales increased nearly 2x year-on-year. Large CSPs were some of the first to stand up Blackwell with Azure, GCP, AWS and OCI bringing GB200 systems to cloud regions around the world to meet surging customer demand for AI. 

Regional clouds increased as a percentage of NVIDIA’s Data Center revenue in 2024 Q4, driven by AI data center build outs globally; management is seeing countries across the world building AI ecosystems

Regional cloud hosting of NVIDIA GPUs increased as a percentage of data center revenue, reflecting continued AI factory build-outs globally and rapidly rising demand for AI reasoning models and agents, where we’ve launched a 100,000 GB200 cluster-based instance with NVLink Switch and Quantum-2 InfiniBand…

…Countries across the globe are building their AI ecosystems and demand for compute infrastructure is surging. France’s EUR 200 billion AI investment and the EU’s EUR 200 billion InvestAI initiatives offer a glimpse into the build-out set to redefine global AI infrastructure in the coming years.

NVIDIA’s revenue from consumer internet companies tripled year-on-year in 2024 Q4

Consumer Internet revenue grew 3x year-on-year, driven by an expanding set of generative AI and deep learning use cases. These include recommender systems, vision-language understanding, synthetic data generation, search and agentic AI.

NVIDIA’s revenue from enterprises nearly doubled year-on-year in 2024 Q4, partly with the help of agentic AI demand

Enterprise revenue increased nearly 2x year-on-year on accelerating demand for model fine-tuning, RAG and agentic AI workflows and GPU-accelerated data processing.

NVIDIA’s management has introduced NIMs (NVIDIA Inference Microservices) focused on AI agents and leading AI agent platform providers are using these tools

We introduced NVIDIA Llama Nemotron model family NIMs to help developers create and deploy AI agents across a range of applications, including customer support, fraud detection and product supply chain and inventory management. Leading AI agent platform providers, including SAP and ServiceNow are among the first to use new models.

Healthcare companies are using NVIDIA’s AI products to power healthcare innovation

Health care leaders, IQVIA, Illumina and Mayo Clinic as well as ARC Institute are using NVIDIA AI to speed drug discovery, enhance genomic research and pioneer advanced health care services with generative and agentic AI.

Hyundai will be using NVIDIA’s technologies for the development of AVs (autonomous vehicles); NVIDIA’s automotive revenue had strong growth year-on-year and sequentially in 2024 Q4, driven by ramp in AVs; automotive companies such as Toyota, Aurora, and Continental are working with NVIDIA to deploy AV technologies; NVIDIA’s AV platform has passed 2 of the automotive industry’s foremost authorities for safety and cybersecurity

 At CES, Hyundai Motor Group announced it is adopting NVIDIA technologies to accelerate AV and robotics development and smart factory initiatives…

…Now moving to Automotive. Revenue was a record $570 million, up 27% sequentially and up 103% year-on-year…

…Strong growth was driven by the continued ramp in autonomous vehicles, including cars and robotaxis. At CES, we announced Toyota, the world’s largest automaker, will build its next-generation vehicles on NVIDIA Orin running the safety-certified NVIDIA DriveOS. We announced Aurora and Continental will deploy driverless trucks at scale powered by NVIDIA DRIVE Thor. Finally, our end-to-end autonomous vehicle platform NVIDIA DRIVE Hyperion has passed industry safety assessments from TÜV SÜD and TÜV Rheinland, 2 of the industry’s foremost authorities for automotive-grade safety and cybersecurity. NVIDIA is the first AV platform to receive a comprehensive set of third-party assessments.

NVIDIA’s management has introduced the NVIDIA Cosmos World Foundation Model platform for the continued development of autonomous robots; Uber is one of the first major technology companies to adopt the NVIDIA Cosmos World Foundation Model platform

At CES, we announced the NVIDIA Cosmos World Foundation Model Platform. Just as language, foundation models have revolutionized language AI, Cosmos is a physical AI to revolutionize robotics. Leading robotics and automotive companies, including ridesharing giant Uber, are among the first to adopt the platform.

As a percentage of total Data Center revenue, NVIDIA’s Data Center revenue in China is well below levels seen prior to the US government’s export controls; management expects the Chinese market to be very competitive

Now as a percentage of total data center revenue, data center sales in China remained well below levels seen at the onset of export controls. Absent any change in regulations, we believe that China shipments will remain roughly at the current percentage. The market in China for data center solutions remains very competitive.

NVIDIA’s networking revenue declined sequentially in 2024 Q4, but the networking-attach-rate to GPUs remains robust at 75%; NVIDIA is transitioning to NVLink 72 with Spectrum-X (Spectrum-X is NVIDIA’s Ethernet networking solution); management expects networking revenue to resume growing in 2025 Q1; management sees AI requiring a new class of networking, for which the company’s NVLink, Quantum Infiniband, and Spectrum-X networking solutions are able to provide; large AI data centers, including OpenAI’s Stargate project, will be using Spectrum X

Networking revenue declined 3% sequentially. Our networking attach rate to GPU compute systems is robust at over 75%. We are transitioning from small NVLink 8 with InfiniBand to large NVLink 72 with Spectrum-X. Spectrum-X and NVLink Switch revenue increased and represents a major new growth vector. We expect networking to return to growth in Q1. AI requires a new class of networking. NVIDIA offers NVLink Switch systems for scale-up compute. For scale-out, we offer Quantum InfiniBand for HPC supercomputers and Spectrum-X for Ethernet environments. Spectrum-X enhances Ethernet for AI computing and has been a huge success. Microsoft Azure, OCI, CoreWeave and others are building large AI factories with Spectrum-X. The first Stargate data centers will use Spectrum-X. Yesterday, Cisco announced integrating Spectrum-X into their networking portfolio to help enterprises build AI infrastructure. With its large enterprise footprint and global reach, Cisco will bring NVIDIA Ethernet to every industry.

NVIDIA’s management is seeing 3 scaling laws at play in the development of AI models, namely pre-training scaling, post-training scaling, and test-time compute scaling

There are now multiple scaling laws. There’s the pre-training scaling law, and that’s going to continue to scale because we have multimodality, and we have data that came from reasoning that is now used to do pretraining. The second is the post-training scaling law, using reinforcement learning from human feedback, reinforcement learning from AI feedback and reinforcement learning with verifiable rewards. The amount of computation you use for post-training is actually higher than for pretraining. And it’s kind of sensible in the sense that while you’re using reinforcement learning, you can generate an enormous amount of synthetic data or synthetically generated tokens. AI models are basically generating tokens to train AI models. And that’s post-training. And the third part, this is the part that you mentioned, is test-time compute or reasoning, long thinking, inference scaling. They’re all basically the same ideas. And there you have chain of thought, you have search.

NVIDIA’s management thinks the popularity of NVIDIA’s GPUs stems from its fungibility across all kinds of AI model architectures and use cases; NVIDIA’s management thinks that NVIDIA GPUs have an advantage over the ASIC (application-specific integrated circuit) AI chips developed by others because of (1) the general-purpose nature of NVIDIA GPUs, (2) NVIDIA’s rapid product development roadmap, (3) the software stack developed for NVIDIA GPUs that is incredibly hard to replicate

The question is how do you design such an architecture? Some of the models are autoregressive. Some of the models are diffusion-based. Sometimes you want your data center to have disaggregated inference, and sometimes it is compacted. And so it’s hard to figure out what is the best configuration of a data center, which is the reason why NVIDIA’s architecture is so popular. We run every model. We are great at training…

…When you have a data center that allows you to configure and use your data center based on are you doing more pretraining now, post training now or scaling out your inference, our architecture is fungible and easy to use in all of those different ways. And so we’re seeing, in fact, much, much more concentration of a unified architecture than ever before…

…[Question] We heard a lot about custom ASICs. Can you kind of speak to the balance between custom ASIC and merchant GPU?

[Answer] We build very different things than ASICs, in some ways, completely different in some areas we intercept. We’re different in several ways. One, NVIDIA’s architecture is general whether you’re — you’ve optimized for auto regressive models or diffusion-based models or vision-based models or multimodal models or text models. We’re great in all of it. We’re great at all of it because our software stack is so — our architecture is flexible, our software stack ecosystem is so rich that we’re the initial target of most exciting innovations and algorithms. And so by definition, we’re much, much more general than narrow…

…The third thing I would say is that our performance and our rhythm is so incredibly fast. Remember that these data centers are always fixed in size. They’re fixed in size or they’re fixed in power. And if our performance per watt is anywhere from 2x to 4x to 8x, which is not unusual, it translates directly to revenues. And so if you have a 100-megawatt data center, if the performance or the throughput in that 100-megawatt or the gigawatt data center is 4x or 8x higher, your revenues for that gigawatt data center is 8x higher. And the reason that is so different than data centers of the past is because AI factories are directly monetizable through its tokens generated. And so the token throughput of our architecture being so incredibly fast is just incredibly valuable to all of the companies that are building these things for revenue generation reasons and capturing the fast ROI…

…The last thing that I would say is the software stack is incredibly hard. Building an ASIC is no different than what we do. We build a new architecture. And the ecosystem that sits on top of our architecture is 10x more complex today than it was 2 years ago. And that’s fairly obvious because the amount of software that the world is building on top of architecture is growing exponentially and AI is advancing very quickly. So bringing that whole ecosystem on top of multiple chips is hard.
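
To make the fixed-power arithmetic in the quote above concrete, here is a minimal Python sketch. The tokens-per-joule baseline and revenue-per-token figures are illustrative assumptions of mine, not NVIDIA numbers; the point is only that with a fixed power budget, revenue scales linearly with performance per watt.

```python
# A minimal sketch of the fixed-power-budget argument: a data center is capped
# in megawatts, AI-factory revenue is proportional to tokens generated, so
# revenue scales directly with performance per watt. TOKENS_PER_JOULE_BASELINE
# and REVENUE_PER_TOKEN are illustrative assumptions, not NVIDIA figures.

SECONDS_PER_YEAR = 3600 * 24 * 365
POWER_BUDGET_MW = 100                  # the "100-megawatt data center" example
TOKENS_PER_JOULE_BASELINE = 1.0        # arbitrary baseline efficiency
REVENUE_PER_TOKEN = 1e-6               # illustrative dollars per token

def annual_revenue(perf_per_watt_multiple: float) -> float:
    """Revenue for a power-capped AI factory at a given perf/watt multiple."""
    joules_per_year = POWER_BUDGET_MW * 1e6 * SECONDS_PER_YEAR
    tokens_per_year = joules_per_year * TOKENS_PER_JOULE_BASELINE * perf_per_watt_multiple
    return tokens_per_year * REVENUE_PER_TOKEN

for multiple in (1, 2, 4, 8):
    print(f"{multiple}x perf/watt -> ${annual_revenue(multiple):,.0f}/year")
```

Because power is the binding constraint, an 8x improvement in performance per watt generates 8x the tokens, and hence roughly 8x the revenue, from the same facility.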

NVIDIA’s management thinks that only consumer AI and search currently have well-developed AI use cases, and the next wave will be agentic AI, robotics, and sovereign AI

We’ve really only tapped consumer AI and search and some amount of consumer generative AI, advertising, recommenders, kind of the early days of software. The next wave is coming, agentic AI for enterprise, physical AI for robotics and sovereign AI as different regions build out their AI for their own ecosystems. And so each one of these are barely off the ground, and we can see them.

NVIDIA’s management sees the upcoming Rubin family of GPUs as being a big step-up from the Blackwell family

The next transition will slot right in. Blackwell Ultra will slot right in. We’ve also already revealed and been working very closely with all of our partners on the click after that. And the click after that is called Vera Rubin and all of our partners are getting up to speed on the transition of that and so preparing for that transition. And again, we’re going to provide a big, huge step-up.

NVIDIA’s management sees AI as having the opportunity to address a larger part of the world’s GDP than any other technology has ever had

No technology has ever had the opportunity to address a larger part of the world’s GDP than AI. No software tool ever has. And so this is now a software tool that can address a much larger part of the world’s GDP more than any time in history.

NVIDIA’s management sees customers still actively using older families of NVIDIA GPUs because of the high level of programmability that CUDA has

People are still using Voltas and Pascals and Amperes. And the reason for that is because there are always things that — because CUDA is so programmable you could use it — one of the major use cases right now is data processing and data curation. You find a circumstance that an AI model is not very good at. You present that circumstance to a vision language model, let’s say, it’s a car. You present that circumstance to a vision language model. The vision language model actually looks at the circumstances and said, “This is what happened and I wasn’t very good at it.” You then take that response — the prompt and you go and prompt an AI model to go find in your whole lake of data, other circumstances like that, whatever that circumstance was. And then you use an AI to do domain randomization and generate a whole bunch of other examples. And then from that, you can go train the model. And so you could use the Amperes to go and do data processing and data curation and machine learning-based search. And then you create the training data set, which you then present to your Hopper systems for training. And so each one of these architectures are completely — they’re all CUDA-compatible and so everything runs on everything. But if you have infrastructure in place, then you can put the less intensive workloads onto the installed base of the past. All of our GPUs are very well employed.
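
The quote above describes a data-curation workflow, so here is a schematic Python sketch of that flywheel. The helper functions are stubs I have invented to stand in for real vision-language-model, search, and training infrastructure; the point is the division of labour Huang describes, where older GPUs curate and the newest GPUs train.

```python
# A schematic sketch (not Tesla/NVIDIA code) of the data-curation flywheel
# described above, with stub helpers standing in for real infrastructure.

def vlm_describe(clip: str) -> str:
    # Stub: a vision language model describes the circumstance the AI model
    # handled poorly ("this is what happened and I wasn't very good at it").
    return f"hard case like {clip}"

def search_data_lake(lake: list[str], query: str) -> list[str]:
    # Stub: machine-learning-based search finds similar circumstances
    # across the whole data lake.
    return lake[:3]

def domain_randomise(clip: str) -> str:
    # Stub: domain randomisation generates synthetic variations of a case.
    return f"randomised({clip})"

def curate(failure_clip: str, lake: list[str]) -> list[str]:
    """Runs on the installed base of older GPUs (Pascal/Volta/Ampere)."""
    description = vlm_describe(failure_clip)
    similar = search_data_lake(lake, description)
    synthetic = [domain_randomise(c) for c in similar]
    return similar + synthetic

# The curated set is then presented to the newest GPUs (e.g. Hopper) for training.
training_set = curate("unprotected left turn", ["clip_a", "clip_b", "clip_c", "clip_d"])
print(f"{len(training_set)} curated clips ready for training")
```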

Paycom Software (NYSE: PAYC)

Paycom’s management rolled out an AI agent six months ago to its service team, and has since seen improved immediate response rates for clients and a reduction of over 25% in service tickets from a year ago; Paycom’s AI agent is driving internal efficiencies, higher client satisfaction, and higher Net Promoter Scores

Paycom’s AI agent, which was rolled out to our service team 6 months ago, utilizes our own knowledge-based semantic search model to provide faster responses and help our clients more quickly and consistently than ever before. As responses continuously improve over time, our client interactions become more valuable, and we connect them faster to the right solution. As a result, we are seeing improved immediate response rates and have eliminated service tickets by over 25% compared to a year ago…

…With automations like AI agent, we are realizing internal efficiencies, driving increasing client satisfaction and seeing higher Net Promoter Scores.

PayPal (NASDAQ: PYPL)

One of the focus areas for PayPal’s management in 2025 is raising efficiency with the help of AI

Fourth is efficiency and effectiveness. In 2024, we reduced headcount by 10%. We made deliberate investments in AI and automation, which are critical to our future. This year, we are prioritizing the use of AI to improve the customer experience and drive efficiency and effectiveness within PayPal.

PayPal’s management sees AI as a huge opportunity for PayPal given the company’s volume of data; PayPal is using AI on its customer-facing side to more efficiently process customer support cases and interactions with customers (PayPal Assistant has been rolled out and it has cut down phone calls and active events for PayPal); PayPal is using AI to personalise the commerce journey for consumers; PayPal is also using AI for back-office productivity and risk decisions

[Question] The ability to use AI for more operating efficiency. And are those initiatives that are requiring some incremental investment near term? Or are you already seeing sort of a positive ROI from that?

[Answer] AI is opening up a huge opportunity for us. First, at our scale, we saw 26 billion transactions on our platform last year. We have a massive data set that we are actively working and investing in to be able to drive our effectiveness and efficiency…

First, on the customer-facing side, we’re leveraging AI to really become more efficient in our support cases and how we interact with our customers. We see tens of millions of support cases every year, and we’ve rolled out our PayPal Assistant, which is now really cutting down phone calls and active events that we have. 

We also are leveraging AI to personalize the commerce journey, and so working with our merchants to be able to understand and create this really magical experience for consumers. When they show up at a checkout, it’s not just a static button anymore. This really can become a dynamic, personalized button that starts to understand the profile of the consumer, the journey that they’ve been on, perhaps across merchants, and be able to enable a reward or a cash-back offer in the moment or even a Buy Now, Pay Later offer in a dynamic experience…

In addition, we also are looking at our back office and ensuring that not just on the engineering and employee productivity side, but also in things like our risk decisions. We see billions and billions of risk decisions that often, to be honest, were very manual in the past. We’re now leveraging AI to be able to understand globally what are the nature of these risk decisions and how do we automate these across both risk models as well as even just ensuring that customers get the right response at the right time in an automated fashion.

Salesforce (NYSE: CRM)

Salesforce ended 2024 (FY2025) with $900 million in Data Cloud and AI ARR (annual recurring revenue), up 120% from a year ago; management has never seen products grow at this rate before, especially Agentforce

We ended this year with $900 million in Data Cloud and AI ARR. It grew 120% year-over-year. We’ve never seen products grow at these levels, especially Agentforce.

Salesforce’s management thinks building digital labour (AI agents) is a much bigger market than just building software

I’m sure you saw those ARK slides that got released over the weekend where she said that she thought this digital labor revolution, which is really like kind of what we’re in here now, this digital labor revolution, this looks like it’s anywhere from a few trillion to $12 trillion. I mean, I kind of agree with her. I think this is much, much bigger than software. I mean, for the last 25 years, we’ve been doing software to help our customers manage their data. That’s very exciting. I think building software that kind of prints and deploys digital workers is more exciting.

Salesforce’s unified platform, under one piece of code, combining customer data and an agentic platform, is what gives Agentforce its accuracy; Agentforce already has 3,000 paying customers just 90 days after going live; management thinks Agentforce is unique in the agentic capabilities it is delivering; Salesforce is Customer Zero for Agentforce; Agentforce has already resolved 380,000 service requests for Salesforce, with an 84% resolution rate, and just 2% of requests require human escalation; Agentforce has accelerated Salesforce’s sales-quoting cycles by more than 75%, and in Q4 Salesforce increased AE (account executive) capacity while driving productivity up 7% year-over-year; Agentforce is helping Salesforce engage more than 50 leads per day, freeing up the sales team for higher-value conversations; management wants every Salesforce customer to be using Agentforce; Data Cloud is at the heart of Agentforce; management is seeing customers across every industry deploying Agentforce; management thinks Salesforce’s agentic technology works better than that of many other providers, and that other providers are just whitewashing their technology with the “agent” label; Agentforce is driving growth across Salesforce’s portfolio; Salesforce has prebuilt over 170 specialised Agentforce industry skills; Agentforce’s 3,000 customers come from a diverse set of industries

Our formula now really for our customers is this idea that we have these incredible Customer 360 apps. We have this incredible Data Cloud, and this incredible agentic platform. These are the 3 layers. But that it is a deeply unified platform, it’s a deeply unified platform, it’s just one piece of code, that’s what makes it so unique in this market…

…It’s this idea that it’s a deeply unified platform with one piece of code all wrapped in a beautiful layer of trust. And that’s what gives Agentforce this incredible accuracy that we’re seeing…

…Just 90 days after it went live, we already have 3,000 paying Agentforce customers who are experiencing unprecedented levels of productivity, efficiency and cost savings. No one else is delivering at this level of capability…

…We’re seeing some amazing results on Salesforce as Customer Zero for Agentforce. Our digital labor force is resolving tens of thousands of customer service inquiries, freeing our human employees to focus on the most nuanced issues and customer relationships. We’re seeing tremendous momentum and success stories emerge as we execute our vision to make every company, every single company, every customer of ours, an Agentforce company, that is, we want every customer to be an Agentforce customer…

…We also continued phenomenal growth with Data Cloud this year, which is the heart of Agentforce. Data Cloud is the fuel that powers Agentforce and our customers are investing in it…

…We’re seeing customers deploy Agentforce across every industry…

…You got to be aware of the false agent because the false agent is out there where people can use the word agent or they kind of — they’re trying to whitewash all the agent, the thing, everywhere. But the reality is there is the real agents and there are the false agents, and we’re very fortunate to have the real stuff going on here. So we’ve got a lot more groundbreaking AI innovation coming…

…Today, we’re live on Agentforce across service and sales, our business technology organization, customer support and more. And the results are phenomenal. Since launching on our Salesforce help portal in October, Agentforce has autonomously handled 380,000 service requests, achieving an incredible 84% resolution rate and only 2% of the requests require human escalation. And we’re using Agentforce for quoting, accelerating our quoting cycles by more than 75%. In Q4, we increased our AE [account executive] capacity while still driving productivity up 7% year-over-year. Agentforce is transforming how we do outbound prospecting, already engaging more than 50 leads per day with personalized outreach and timely follow-ups, freeing up our teams to focus on high-value conversation. Our reps are participating in thousands of sales coaching training sessions each month…

…Agentforce is revolutionizing how our customers work by bringing AI-powered insights and actions directly into the workflows across the Customer 360 applications. This is driving strong growth across our portfolio. Sales Cloud and Service Cloud both achieved double-digit growth again in Q4. We’re seeing fantastic momentum with Slack, with customers like ZoomInfo, Remarkable and MIMIT Health using Agentforce and Slack to boost productivity…

…We’ve prebuilt over 170 specialized Agentforce industry skills and a team of 400 specialists, supporting transformations across sectors and geographies…

…We closed more than 3,000 paid Agentforce deals in the quarter. As customers continue to harness the value of AI deeply embedded across our unified platform, it is no surprise that these customers average nearly 4 clouds. And these customers came from a diverse set of industries with more than half in technology, manufacturing, financial services and HLS.
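
For a quick sense of scale, here is the back-of-envelope arithmetic on the Customer Zero figures quoted above; the inputs are Salesforce's own reported numbers, and the outputs are just the implied counts.

```python
# Implied counts from Salesforce's reported Agentforce figures:
# 380,000 service requests, 84% resolution rate, 2% human escalation.
requests = 380_000
resolved = int(requests * 0.84)     # handled autonomously
escalated = int(requests * 0.02)    # passed to a human agent

print(f"resolved autonomously: {resolved:,}")   # 319,200
print(f"escalated to humans:   {escalated:,}")  # 7,600
```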

Lennar, the USA’s largest homebuilder, has been a Salesforce customer for 8 years and is deploying Agentforce to fulfill its management’s vision of selling all kinds of new products; jewelry company Pandora, an existing Salesforce customer, is deploying Agentforce with the aim of handling 30%-60% of its service cases through it; pharmaceutical giant Pfizer is using Agentforce to augment its sales teams; Singapore Airlines is now an Agentforce customer and wants to deliver service through it; Goodyear is using Agentforce to automate and increase the effectiveness of its sales efforts; Accenture is using Agentforce to coach its sales team and expects to achieve higher win rates; Deloitte is using Agentforce and expects to achieve significant productivity gains

We’ve been working with Lennar, the nation’s largest homebuilder. And most of you know Lennar is really an incredible company, and they’ve been a customer of ours for about 8 years…

…You probably know Stuart Miller, Jon Jaffe, amazing CEOs. And those co-CEOs called me and said, “Listen, these guys have done a hackathon around Agentforce. We’ve got 5 use cases. We see incredible opportunities on our margin, incredible opportunities in our revenue. And do you have our back if we’re going to deploy this?” And we said, “Absolutely. We’ve deployed it ourselves,” which is the best evidence that this is real. And they are just incredible, their vision as a homebuilder providing 24/7 support, sales leads through all their digital channels. They’re able to sell all kinds of new products. I think they’re going to sell mortgages and insurance and all kinds of things to their customers. And the cool thing is they’re using our sales product, our service product, marketing, MuleSoft, Slack, Tableau, they use everything. But they are able to leverage it all together by realizing that just by turning it on, they get this incredible Agentforce capability…

…I don’t know how many of you know about Pandora. If you’ve been to a shopping center, you will see the Pandora store. You walk in, they have this gorgeous jewelry. They have these cool charm bracelets. They have amazing products. And if you know their CEO, Alex, he’s absolutely phenomenal…

…They’re in 100 countries. They employ 37,000 people worldwide. And Alex has this great vision to augment their employees with digital labor. And this idea that whether you’re on their website or in their store, or whatever it is, that they’re going to be able to do so much more with Agentforce. They already use — first of all, they already use Commerce Cloud. So if you’ve been to pandora.com and bought their products — and if you have it, by the way, it’s completely worthwhile. It’s great. And you can experience our Commerce Cloud, but it’s deeply integrated with our Service Cloud, with Data Cloud. It’s the one unified platform approach. And now they’re just flipping the switch, turning agents on, and they’re planning to deliver 30% to 60% of their service cases with Agentforce. That is awesome. And I really love Alex’s vision of what’s possible….

…The last customer I really want to hit on, which I’m so excited about, is Pfizer. And Albert is an incredible CEO. They are doing unbelievable things. They’ve been a tremendous customer. But now they’re really going all in on our Life Sciences Cloud…

…And with Agentforce, sales agents, for example, with Pfizer, that’s — they’ve got 20,000 customer-facing employees and customer-facing folks. That is just a radical extension for them with agents…

…I’m sure a lot of you — like, I have flown in Singapore Air. You know what? It’s a great airline. The CEO, Goh, is amazing. And he has a huge vision that also came out of Dreamforce, where — they’ve already delivered probably the best service of any airline in the world — they want to deliver it through agents. So whether you’re doing it with service or sales or marketing or commerce or all the different things that Singapore Air is doing with us, you’re going to be able to do this right on Singapore Air…

…Goodyear is partnering with us on their transformation, using Agentforce to automate and increase the effectiveness of their sales efforts. With Agentforce for Field Service, Goodyear will be able to reduce repair time by assisting technicians with answers to vehicle-related questions and autonomously scheduling field tech appointments…

…Accenture is using Agentforce Sales Coach, which provides personalized coaching and recommendations for sales teams, which is expected to lead to higher win rates. And Deloitte is projecting significant productivity gains and saved workforce hours as they roll out Agentforce over the next few years.

Salesforce’s management expects modest revenue contribution from Agentforce in 2025 (FY2026); contribution from Agentforce is expected to be more meaningful in 2026 (FY2027)

Starting with full fiscal year ’26. We expect revenue of $40.5 billion to $40.9 billion, growth of approximately 7% to 8% year-over-year in nominal and constant currency. And for subscription and support revenue, we expect growth of approximately 9% year-over-year in constant currency…

…On Agentforce, we are incredibly excited about the customer momentum we are seeing. However, the adoption cycle is still early as we focus on deployment with our customers. As a result, we are assuming a modest contribution to revenue in fiscal ’26. We expect the momentum to build throughout the year, driving a more meaningful contribution in fiscal ’27.

Salesforce has long had a mix of per-seat and consumption pricing models; for now, Agentforce is a consumption product, but management sees Agentforce evolving to a mix of per-seat and consumption pricing models; in 2024 Q4 (FY2025 Q4), a customer signed a $7 million Agentforce contract alongside a $13 million contract for other Salesforce products; based on early engagement with Agentforce customers, management sees significant future upside to Salesforce’s pricing structure; Agentforce’s pricing will also take into account whether Agentforce brings other human-based clouds along with it into a customer; Agentforce is currently creating some halo around Salesforce’s other products

We’ve kind of started the company out with the per user pricing model, and that’s about humans. We price per human, so you’re kind of pricing per human. And then we have products, though, that are also in the consumption world as well. And of course, those started in the early days, things like our sandboxes, even things like our Commerce Cloud, even our e-mail marketing product, our Marketing Cloud. These are consumption-based products we’ve had for years…

…Now we have these kind of products that are for agents also, and agents are also a consumption model. So when we look at our Data Cloud, for example, that’s a consumption product. Agentforce is a consumption product. But it’s going to be a mix. It’s going to be a mix between what’s going on with our customers with how many humans do they have and then how many agents are they deploying…

…In the quarter, we did a large transaction with a large telecommunications company… we’re rebuilding this telecommunications company. So it’s Sales Cloud, it’s Service Cloud, it’s Marketing Cloud. It’s all of our core clouds, but then also it’s Agentforce. And the Agentforce component, I think, was maybe $7 million in the transaction. So she was buying $7 million of Agentforce. She bought $13 million in our products for humans, and I think that was about $20 million in total…

…We will probably move into the near future from conversations as we price most of our initial deals to universal credit. It will allow our customers far more flexibility in the way they transact with us. But we see this as a significant upside to our pricing structures going forward. And that’s what we’ve seen in the early days with our engagement with customers…

…Here’s a transaction that you’re doing, let’s say, a customer comes in, they’re very interested in building an agentic layer on their company, is that bringing other human-based clouds along with it?…

…[Question] Is Agentforce having a bit of a halo effect around some of your other products, meaning, as we are on the journey to get more monetization from Agentforce, are you seeing pickups or at least higher activity levels in some of your other products?

[Answer] That’s exactly right. And we’re seeing it in the way that our customers are using our technology, new ideas, new workflows, new engagements. We talked about Lennar as an example, their ability to handle leads after hours that they weren’t able to get back to or respond to in a quick time frame are now able to touch and engage with those leads. And that, of course, flows into their Salesforce automation system. And so we are seeing this halo effect with our core technology. It is making every single one of our core apps better as they deliver intelligence, underpinning these applications.

Salesforce’s management sees the combination of apps, data, and agents as the winning combination in an AI-world; management disputes Microsoft’s narrative that software apps will become a dumb database layer in an AI-dominated world, because it is the combination of apps, data, and agents that is important

I don’t know any company that’s 100% agents. I don’t know of any company that doesn’t need automation for its humans. I don’t know of any company that doesn’t need a data cloud where it needs a consistent common data repository for all of its agents to gain their intelligence. And I don’t know of any company that’s not going to need an agentic layer. And that idea of having apps, data and agents, I think, is going to be the winning combination…

…[Question] As part of that shift to agentic technology, there’s been a lot of debate about the SaaS technology and the business model. The SaaS tech stack that you built and pioneered, how does that fit into the agentic world? Is there a risk that SaaS just becomes a CRUD database?

[Answer] I’ve heard that Microsoft narrative, too. So I watched the podcast you watched, and that’s a very interesting idea. Here’s how I look at it, which is, I believe there is kind of a holy trinity here of AI CRM, which is the apps, the data and the agents. And these three things have to kind of work together. And I kind of put my money where our mouth is where we kind of built it and we delivered it. And you can see the 380,000 conversations that we had as point of evidence here in the last 90 days on our service and with a very high resolution rate of 84%. You can go to help.salesforce.com, and you can see that today.

Now Microsoft has had Copilot available for, I think, about 2 years or more than 2 years. And I know that they’re the reseller of OpenAI and they’ve invested, they kind of repackaged this ChatGPT, whatever. But where on their side are they delivering agents? Where in their company have they done this? Are they a best practice? Because I think that while they can say such a thing, do they have humans and agents working together to create customer success? Are they rebalancing their workforce with humans and agents? I think that it’s a very interesting point that, yes, the agentic layer is very important, but it doesn’t operate by itself. It operates with data, with a Data Cloud that has to be federated through your company, to all your data sources. And humans, we’re still here.

Salesforce’s management is seeing Agentforce deliver such tremendous efficiency in Salesforce’s customer support function that some customer-support roles may be rebalanced into other roles; management is currently seeing AI coding tools improve the productivity of Salesforce’s engineering team by 30% and thinks even more productivity can be found; management will not be expanding Salesforce’s engineering team this year, but will grow the sales team

We really are seeing tremendous efficiency with help.salesforce.com. So we may see the opportunity to rebalance some of those folks into sales and marketing and other functions…

…We definitely have seen a lot of efficiency with engineering and with some of the new tools that I’ve seen, especially some of these high-performance coding tools. One of the key members of my staff who’s here in the room with us has just showed me one of his new examples of what we’re able to do with these coding tools, pretty awesome. And we’re not going to hire any new engineers this year. We’re seeing 30% productivity increase on engineering. And we’re going to really continue to ride that up…

…We’re going to grow sales pretty dramatically this year. Brian has got a big vision for how to grow the sales organization, probably another 10% to 20%, I hope, this year because we’re seeing incredible levels of demand.

Salesforce’s management thinks that AI agents are one of the catalysts to drive GDP growth

So if you want productivity to go up and you want GDP to grow up and you want growth, I think that digital labor is going to be one of the catalysts to make that happen.

Shopify (NASDAQ: SHOP)

Shopify launched its first AI-powered search integration with Perplexity in 2024

Last year, we… launched our first AI-powered search integration with Perplexity, enabling new ways for buyers to find merchants.

One of Shopify’s management’s focus areas in 2025 is to continue embracing AI by investing more in Sidekick and other AI capabilities that help merchants launch and grow faster; management wants to shift Shopify towards producing goal-oriented software; management believes Shopify is well-positioned as a leader for commerce in an AI-driven world

We will continue to embrace the transformative potential of AI. This technology is not just a part of the future, it is redefining it. We’ve anticipated this. So we’re already transforming Shopify into a platform where users and machines work seamlessly together. We plan to deepen our investment in Sidekick and other AI capabilities to help not just brand-new merchants to launch, but also to help larger merchants scale faster and drive greater productivity. Our efforts to shift towards more goal-oriented software will further help to streamline operations and improve decision-making. This focus on embracing new ways of thinking and working positions us not only as the platform of choice today, but also as a leader for commerce in the AI-driven era with a relentless focus on cutting-edge technology.

Shopify’s management believes Shopify will be one of the major net beneficiaries in the AI era as the company is leveraging AI really well, such as its partnerships with Perplexity and OpenAI

I actually think Shopify will very much be one of the major net beneficiaries in this new AI era. I think we are widely recognized as one of the best companies that foster long-term partnership. And so when it comes to partnership in AI, whether it’s Perplexity, where we’re now powering their search results with incredible product across the Shopify product catalog or OpenAI where we’re using — we have a direct set of their APIs to help us internally, we are really leveraging it as best as we can.

In terms of utilising AI, Shopify’s management sees 2 angles; the 1st angle is Shopify using AI to help merchants with mundane tasks and allow merchants to focus only on the things they excel at; the 2nd angle is Shopify using AI internally to make developers and customer-support teams more effective (with customer-support teams, Shopify is using AI to handle low-quality conversations with customers)

[Question] A question in regards to AI and the use of AI internally. Over the last year or so, you’ve made significant investments. Where are you seeing it operationally having the most impact? And then what has been the magnitude of productivity gains that you’ve seen?

[Answer] We think about it in sort of 2 ways. The first is from a merchant perspective, how can we make our merchants way more successful, get them to do things faster, more effectively. So things like Sidekick or media editor or Shopify Inbox, Semantic Search, Sidekick, these are things that now — that every merchant should want when they’re not just getting started, but also scaling their business. And those are things that are only available from Shopify. So we’re trying to make some of the more mundane tasks far more easy to do and get them to focus on things that only they can — only the merchants can do. And I think that’s an important aspect of what Shopify will bring…

…Internally, however, this is where it gets really interesting, because not only can we use it to make our developers more effective, but also, if you think about our support organization, now we can ensure that our support team is actually having very high-quality conversations with merchants, whereas a lot of low-quality conversations, things like configuring a domain or a C name or a user name and password issue, that can be handled really elegantly by AI.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s AI accelerators revenue more than tripled in 2024 and accounted for close to mid-teens percent of overall revenue in 2024; management expects AI accelerators revenue to double in 2025; management sees really strong AI-related demand in 2025

Revenue from AI accelerators, which we now define as AI GPU, AI ASICs and HBM controller for AI training and inference in the data center, accounted for close to mid-teens percent of our total revenue in 2024. Even after more than tripling in 2024, we forecast our revenue from AI accelerator to double in 2025 as the strong surge in AI-related demand continues…

…[Question] Try to get a bit more clarity on the cloud growth for 2025. I think, longer term, without a doubt, the technology definitely has lots of potential for demand opportunities, but I think — if we look at 2025 and 2026, I think there could be increasing uncertainties coming from maybe [indiscernible] spending, macro or even some of the supply chain challenges. And so I understand the management just provided a pretty good guidance for this year for sales to double. And so if you look at that number, do you think there is still more upside than downside as we go through 2025?

[Answer] I certainly hope there is upside, but I hope I get — my team can supply enough capacity to support it. Does that give you enough hint? 

TSMC’s management saw a mixed year of recovery for the global semiconductor industry in 2024 with strong AI-related demand but mild recovery in other areas

2024 was a mixed year of recovery for the global semiconductor industry. AI-related demand was strong, while other applications saw only a very mild recovery as macroeconomics condition weighed on consumer sentiment and the end market demand. 

TSMC’s management expects a mid-40% revenue CAGR from AI accelerators in the 5 years starting from 2024 (the previous forecast was for a 50% CAGR over the same 5-year period, but off a lower assumed 2024 base); management expects AI accelerators to be the strongest growth driver for TSMC’s overall HPC platform and overall revenue over the next few years

Underpinned by our technology leadership and broader customer base, we now forecast the revenue growth from AI accelerators to approach a mid-40% CAGR for the 5-year period starting off the already higher base of 2024. We expect AI accelerators to be the strongest driver of our HPC platform growth and the largest contributor in terms of our overall incremental revenue growth in the next several years.

TSMC’s management expects a 20% revenue CAGR in USD terms in the 5 years starting from 2024, driven by growth across all its platforms; management thinks that in the next few years, TSMC’s smartphone and PC end-markets will have higher silicon content and faster replacement cycles, driven by AI-related demand, which will in turn drive robust demand for TSMC’s chip manufacturing services; the AI-related demand in the smartphone and PC end-markets is related to edge AI

Looking ahead, as the world’s most reliable and effective capacity provider, TSMC is playing a critical and integral role in the global semiconductor industry. With our technology leadership, manufacturing excellence and customer trust, we are well positioned to address the growth from the industry megatrend of 5G, AI and HPC with our differentiated technologies. For the 5-year period starting from 2024, we expect our long-term revenue growth to approach a 20% CAGR in U.S. dollar term, fueled by all 4 of our growth platform, which are smartphone, HPC, IoT and automotive…

…[Question] I believe that 20% starting from a very — already very high base in 2024 is a really good long-term objective but just wondering that, aside from the strong AI demand, what’s your view on the traditionals, applications like PC and the smartphone, growth and particularly for this year.

[Answer] This year is still mild growth for PC and smartphone, but everything is AI related, all right, so you can start to see why we have confidence to give you a close to 20% CAGR in the next 5 years. AI: You look at a smartphone. They will put AI functionality inside, and not only that. So the silicon content will be increased. In addition to that, actually the replacement cycle will be shortened. And also they need to go into the very advanced technology because of, if you want to put a lot of functionality inside a small chip, you need a much more advanced technology to put those [indiscernible]. Put all together, that even smartphone, the unit growth is almost low single digit, but then the silicon and the replacement cycle and the technology migration, that give us more growth than just unit growth; similar reason for PC…

…On the edge AI, in our observation, we found out that our customers start to put up more neural processing inside. And so we estimated the 5% to 10% more silicon being used. [ Can it be ] every year 5% to 10%? Definitely it is no, right? So they will move to next node, the technology migration. That’s also to TSMC’s advantage. Not only that, I also say that, the replacement cycle, I think it will be shortened because of, when you have a new toy that — with AI functionality inside it, everybody want replacing, replace their smartphone, replace their PCs. And [ I count that one ] much more than the — a mere 5% increase.
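
A toy compounding check puts the two guidance figures quoted above in perspective. "Mid-40%" is assumed to be 45% here purely for illustration; both series are indexed to 100 at the 2024 base.

```python
# Compounding check on TSMC's guidance: a mid-40% CAGR for AI-accelerator
# revenue and ~20% CAGR for total revenue, over the 5 years starting from
# the 2024 base (indexed to 100). "Mid-40%" assumed to be 45% here.

def compound(base: float, cagr: float, years: int) -> float:
    return base * (1 + cagr) ** years

print(f"AI accelerators: 100 -> {compound(100, 0.45, 5):.0f}")  # ~641, >6x in 5 years
print(f"Total revenue:   100 -> {compound(100, 0.20, 5):.0f}")  # ~249, ~2.5x in 5 years
```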

TSMC’s upcoming A16 process technology is best suited for specific HPC (high-performance computing) products, which means it is best suited for AI-related workloads

We will also introduce A16 featuring Super Power Rail or SPR as a separate offering. TSMC’s SPR is an innovative, best-in-class backside power delivery solution that is first in the industry to incorporate a novel backside metal scheme that preserves gate density and device width flexibility to maximize product benefit. Compared with N2P, A16 provide a further 8% to 10% speed improvement at the same power or 15% to 20% power improvement at the same speed, and additional 7% to 10% chip density gain. A16 is the best suitable for specific HPC product with a complex signal route and dense power delivery network. Volume production is scheduled for second half 2026.

TSMC’s management thinks that the US government’s latest AI restrictions will only have a minimal impact on the company’s business

[Question] Overnight, the U.S. seems to put a new framework on restricting China’s AI business, right? So I’m wondering whether that will create some business impact to your China business.

[Answer] We don’t have all analysis yet, but the first look is not significantly. It’s manageable. So that meaning that, my customers who are being restricted [ or something ], we are applying for the special permit for them. And we believe that we have confidence that they will get some permission, so long as they are not in the AI area, okay, especially automotive industry. Or even you talk about crypto mining, yes.

TSMC’s management does not want to reveal the level of demand for AI-related ASICs (application-specific integrated circuits) from the cloud hyperscalers, but they are confident that the demand is real, and that the cloud hyperscalers will be working with TSMC as they all need leading-edge technology for their AI-related ASICs

[Question] Broadcom’s CEO recently laid out a large SAM for AI hyperscalers building out custom silicon. I think he was talking about million clusters from each of the customers he has in the next 2 or 3 years. What’s TSMC’s perspective on all this? 

[Answer] I’m not going to answer the question of the specific number, but let me assure you that, whether it’s ASIC or it’s graphic, they all need a very leading-edge technology. And they’re all working with TSMC, okay, so — and the second one is, is the demand real. Was — is — as a number that my customers said. I will say that the demand is very strong.

AI makes up all of the current demand for CoWoS (chip on wafer on substrate) capacity that TSMC’s management is seeing, but they think non-AI-related demand for CoWoS will come in the near future from CPUs and servers; there are rumours of a cut in orders for CoWoS, but management is not seeing any cuts; when asked whether HBM (high bandwidth memory), rather than CoWoS, is the key constraint on AI demand, management declined to comment on other suppliers but noted that TSMC’s own capacity remains very tight; advanced packaging, which includes CoWoS, was over 8% of TSMC’s revenue in 2024 and will be over 10% in 2025; its gross margin is better than before, but still lower than the corporate average

[Question] When can we see non-AI application such as server, smartphone or anything else can be — can start to adopt CoWoS capacity in case there is any fluctuation in the AI demand?

[Answer] Today is all AI focused. And we have a very tight capacity and cannot even meet customers’ need, but whether other products will adopt this kind of CoWoS approach, they will. It’s coming and we know that it’s coming.

[Question] When?

[Answer] It’s coming… On the CPU and on the server chip. Let me give you a hint…

…[Question] About your CoWoS and SoIC capacity ramp. Can you give us more color this year? Because recently there seemed to be a lot of market noises. Some add orders. Some cut orders, so I would like to see your view on the CoWoS ramp.

[Answer] That’s a rumor. I assure you. We are working very hard to meet the requirement of my customers’ demand, so “cut the order,” that won’t happen. We actually continue to increase, so we are — again I will say that. We are working very hard to increase the capacity…

…[Question] A question on AI demand. Is there a scenario where HBM is more of a constraint on the demand, rather than CoWoS which seems to be the biggest constraint at the moment? 

[Answer] I don’t comment on other supplier, but I know that we have a very tight capacity to support the AI demand. I don’t want to say I’m the bottleneck. TSMC, always working very hard with customer to meet their requirement…

…[Question] So we have observed an increasing margin of advanced packaging. Could you remind us the CoWoS contribution of last year? And do you expect the margin to kind of approach the corporate average or even exceed it after the so-called — the value reflection this year?

[Answer] Overall speaking, advanced packaging accounted for over 8% of revenue last year. And it will account for over 10% this year. In terms of gross margins, it is better. It is better than before but still below the corporate average. 

AI makes up all of the current demand for SoIC (system on integrated chips) that TSMC’s management is seeing, but they think non-AI-related demand for SoIC will come in the future

Today, SoIC’s demand is still focused on AI applications, okay? For PC or for other area, it’s coming but not right now.

Tesla (NASDAQ: TSLA)

Tesla’s management thinks Tesla’s FSD (Full Self Driving) technology has grown up a lot in the past few years; management thinks that car-use can grow from 10 hours per week to 55 hours per week with autonomous vehicles; autonomous vehicles can be used for both cargo and people delivery; FSD currently works very well in the USA, and will soon work well everywhere else; the constraint Tesla is currently experiencing with autonomous vehicles is in battery packs; FSD makes traffic commuting safer; FSD is currently on Version 13, and management believes Version 14 will have a significant step-improvement; Tesla has launched the Cortex training cluster at Gigafactory Austin, and it has played a big role in advancing FSD; Tesla will launch unsupervised FSD in June 2025 in Austin; Tesla already has thousands of its cars driving autonomously daily in its factories in Fremont and Texas, and Tesla will soon do the same in Austin and elsewhere in the world; Tesla’s solution for autonomous vehicles is a generalised AI solution which does not need high-precision maps; Tesla’s unsupervised FSD already works outside of Austin even though it will launch only in Austin in June 2025, as management just wants to be cautious; management thinks Tesla will release unsupervised FSD in many parts of the USA by end-2025; management’s safety standard for FSD is for it to be far, far, far superior to humans; management thinks Tesla will have unsupervised FSD in almost every market this year

For a lot of people, like their experience of Tesla autonomy is like if it’s even a year old, if it’s even 2 years old, it’s like meeting someone when they’re like a toddler and thinking that they’re going to be a toddler forever. But obviously not going to be a toddler forever. They grow up. But if their last experience was like, “Oh, FSD was a toddler.” It’s like, well, it’s grown up now. Have you seen it? It’s like walks and talks…

…My #1 recommendation for anyone who doubts is simply try it. Have you tried it? When is the last time you tried it? And the only people who are skeptical, the only people who are skeptical are those who have not tried it.

So a car goes — a passenger car typically has only about 10 hours of utility per week out of 168, a very small percentage. Once that car is autonomous, my rough estimate is that it is in use for at least 1/3 of the hours per week, so call it, 50, maybe 55 hours of the week. And it can be used for both cargo delivery and people delivery…

That same asset, the thing that — these things that already exist with no incremental cost change, just a software update, now have 5x or more the utility than they currently have. I think this will be the largest asset value increase in human history…

…So look, the reality of autonomy is upon us. And I repeat my advice, try driving the car or let it drive you. So now it works very well in the U.S., but of course, it will, over time, work just as well everywhere else…

…Our current constraint is battery packs this year but we’re working on addressing that constraint. And I think we will make progress in addressing that constraint…

…So a bit more on full self-driving. Our Q4 vehicle safety report shows continued year-over-year improvement in safety for vehicles. So the safety numbers, if somebody has supervised full self-driving turn on or not, the safety differences are gigantic…

…People have seen the immense improvement with version 13, and with incremental versions in version 13 and then version 14 is going to be yet another step beyond that, that is very significant. We launched the Cortex training cluster at Gigafactory Austin, which was a significant contributor to FSD advancement…

…We’re going to be launching unsupervised full self-driving as a paid service in Austin in June. So I talked to the team. We feel confident in being able to do an initial launch of unsupervised, no one in the car, full self-driving in Austin in June…

…We already have Teslas operating autonomously unsupervised full self-driving at our factory in Fremont, and we’ll soon be doing that at our factory in Texas. So thousands of cars every day are driving with no one in them at our Fremont factory in California, and we’ll soon be doing that in Austin and then elsewhere in the world with the rest of our factories, which is pretty cool. And the cars aren’t just driving to exactly the same spot because, obviously, it all — [ implied ] at the same spot. The cars are actually programmed with where — with what lane they need to park into to be picked up for delivery. So they drive from the factory end of line to their destination parking spot and to be picked up for delivery to customers and then doing this reliably every day, thousands of times a day. It’s pretty cool…

…Our solution is a generalized AI solution. It does not require high-precision maps of locality. So we just want to be cautious. It’s not that it doesn’t work beyond Austin. In fact, it does. We just want to be — put our toe in the water, make sure everything is okay, then put a few more toes in the water, then put a foot in the water with safety of the general public and those in the car as our top priority…

…I think we will most likely release unsupervised FSD in many regions of the country of the U.S. by the end of this year…

…We’re looking for a safety level that is significantly above the average human driver. So it’s not anywhere like much safer, not like a little bit safer than human, way safer than human. So the standard has to be very high because the moment there’s any kind of accident with an autonomous car, that immediately gets worldwide headlines, even though about 40,000 people die every year in car accidents in the U.S., and most of them don’t even get mentioned anywhere. But if somebody [ scrapes a shed ] within autonomous car, it’s headline news…

…But I think we’ll have unsupervised FSD in almost every market this year, limited simply by regulatory issues, not technical capability. 
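
The utilisation arithmetic in the quotes above is easy to verify. The sketch below just works through Musk's own figures: roughly 10 of the 168 hours in a week today, versus at least a third of the week once the car is autonomous.

```python
# Worked arithmetic behind the utilisation claim: a passenger car is used
# ~10 of the 168 hours in a week, while an autonomous car could be in use
# for at least 1/3 of the week ("call it 50, maybe 55 hours").
HOURS_PER_WEEK = 168
human_driven_hours = 10
autonomous_hours = HOURS_PER_WEEK / 3      # ≈ 56 hours

print(f"utilisation today:      {human_driven_hours / HOURS_PER_WEEK:.1%}")      # ~6%
print(f"autonomous utilisation: {autonomous_hours / HOURS_PER_WEEK:.1%}")        # ~33%
print(f"utility multiple:       ~{autonomous_hours / human_driven_hours:.1f}x")  # ~5.6x
```

That ~5.6x multiple is where the "5x or more the utility" claim, and hence the asset-value argument, comes from.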

Tesla’s management thinks the compute needed for Optimus will be 10x that of autonomous vehicles, even though a humanoid robot has 1,000x more uses than an autonomous vehicle; management has seen the cost of training Optimus (or AI, in general) dropping dramatically over time; management thinks Optimus can produce $10 trillion in revenue, and that will still make the training needs of $500 billion in compute a good investment; management realises their revenue projections for Optimus sound insane, but they believe in it (sounds like a startup founder trying to get funding from VCs); it’s impossible for management to predict the exact timing for Optimus because everything about the robot has to be designed and built from the ground up by Tesla (nothing could be bought off-the-shelf by Tesla), but management thinks Tesla will build a few thousand Optimus robots by end-2025, and that these robots will be doing useful work in Tesla’s factories in the same timeframe; management’s goal is to ramp up Optimus production at a far faster rate than anything has ever been ramped; Optimus can even do delicate things such as play the piano; Optimus is still not design-locked for production; Tesla might be able to deliver Optimus to external clients by 2026 H2; management is confident that at scale, Optimus will be cheaper to produce than a car

The training needs for Optimus, our Optimus humanoid robot are probably at least ultimately 10x what is needed for the car, at least to get to the full range of useful role. You can say like how many different roles are there for a humanoid robot versus a car? A humanoid robot has probably 1,000x more uses and more complex things than in a car. That doesn’t mean the training scales by 1,000 but it’s probably 10x…

…It doesn’t mean like — or Tesla’s going to spend like $500 billion in training compute because we will obviously train Optimus to do enough tasks to match the output of Optimus robots. And obviously, the cost of training is dropping dramatically with time. But it is — it’s one of those things where I think long-term, Optimus will be — Optimus has the potential to be north of $10 trillion in revenue, like it’s really bananas. So that you can obviously afford a lot of training compute in that situation. In fact, even $500 billion training compute in that situation will be quite a good deal…

…With regard to Optimus, obviously, I’m making these revenue predictions that sound absolutely insane, I realize that. But they are — I think they will prove to be accurate…

…There’s a lot of uncertainty on the exact timing because it’s not like a train arriving at the station for Optimus. We are designing the train and the station and in real time while also building the tracks. And sort of like, why didn’t the train arrive exactly at 12:05? And like we’re literally designing the train and the tracks and the station in real-time while like how can we predict this thing with absolute precision? It’s impossible. The normal internal plan calls for roughly 10,000 Optimus robots to be built this year. Will we succeed in building 10,000 exactly by the end of December this year? Probably not. But will we succeed in making several thousand? Yes, I think we will. Will those several thousand Optimus robots be doing useful things by the end of the year? Yes, I’m confident they will do useful things…

…Our goal is to run Optimus production faster than maybe anything has ever been ramped, meaning like aspirationally in order of magnitude, ramp per year. Now if we aspire to an order of magnitude ramp per year, perhaps, we only end up with a half order of magnitude per year. But that’s the kind of growth that we’re talking about. It doesn’t take very many years before we’re making 100 million of these things a year, if you go up by let’s say, a factor by 5x per year…

This is an entirely new supply chain, it’s entirely new technology. There’s nothing off the shelf to use. We tried desperately with Optimus to use any existing motors, any actuators, sensors. Nothing worked for a humanoid robot at any price. We had to design everything from physics-first principles to work for humanoid robot and with the most sophisticated hand that has ever been made before by far. Optimus will be also able to play the piano and be able to thread a needle. I mean this is the level of precision no one has been able to achieve…

…Optimus is not design-locked. So when I say like we’re designing the train as it’s going — we’re redesigning the train as it’s going down the tracks while redesigning the tracks and the train stations…

…I think probably with version 2, it is a very rough guess because there’s so much uncertainty here, very rough guess that we start delivering Optimus robots to companies that are outside of Tesla in maybe the second half of next year, something like that…

I’m confident at 1 million units a year, that the production cost of Optimus will be less than $20,000. If you compare the complexity of Optimus to the complexity of a car, so just the total mass and complexity of Optimus is much less than a car.
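
Two of the Optimus claims quoted above reduce to simple arithmetic, sketched below. All inputs are management's own heavily hedged figures (a few-thousand-unit start, an aspirational ~5x annual ramp, $500 billion of training compute against a $10 trillion revenue opportunity), not forecasts of mine.

```python
# Toy sketch of two Optimus claims: (1) how quickly a 5x-per-year ramp
# reaches ~100 million units from a ~10,000-unit start, and (2) why $500B
# of training compute can look small against $10T of potential revenue.

units = 10_000           # the rough internal plan for the first year
years = 0
while units < 100_000_000:
    units *= 5           # aspirational ramp rate cited by management
    years += 1
print(f"~100M units/year reached after {years} more years ({units:,} units)")  # 6 years

training_compute = 500e9       # dollars
revenue_potential = 10e12      # dollars
print(f"training compute vs revenue potential: {training_compute / revenue_potential:.0%}")  # 5%
```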

The buildout of Cortex accelerated the rollout of FSD Version 13; Tesla has invested $5 billion so far in total AI-related capex

The build-out of Cortex was accelerated because of the role — actually accelerate the rollout of FSD Version 13. Our cumulative AI-related CapEx, including infrastructure, so far has been approximately $5 billion. 

Tesla’s management is seeing significant interest from some car manufacturers in licensing Tesla’s FSD technology; management thinks that car manufacturers without FSD technology will go bust; management will only entertain situations where the volume would be very high

What we’re seeing is at this point, significant interest from a number of major car companies about licensing Tesla full self-driving technology…

…We’re only going to entertain situations where the volume would be very high. Otherwise, it’s just not worth the complexity. And we will not burden our engineering team with laborious discussions with other engineering teams until we obviously have unsupervised full self-driving working throughout the United States. I think the interest level from other manufacturers to license FSD will be extremely high once it is obvious that unless you have FSD, you’re dead.

Compared to Version 13, Version 14 of FSD will have a larger model size, longer context length, more memory, more driving-context, and more data on tricky corner cases

[Question] What technical breakthroughs will define V14 of FSD, given that V13 already covered photon to control? 

[Answer] So continuing to scale the model size a lot. We scale a bunch in V13 but then there’s still room to grow. So we’re going to continue to scale the model size. We’re going to increase the context length even more. The memory sort of like limited right now. We’re going to increase the amount of memory [indiscernible] minutes of context for driving. They’re going to add audio and emergency vehicles better. Add like data of the tricky corner cases that we get from the entire fleet, any interventions or any kind of like user intervention. We just add to the data set. So scaling in basically every access, training compute, [ asset ] size, model size, model context and also all the reinforcement learning objectives.

Tesla has difficulties training AI models for autonomous vehicles in China because China does not allow Tesla to transfer training videos out of the country, while the US government does not allow Tesla to do training in China; a workaround for Tesla was to train on publicly available videos of streets in China; Tesla also had to build a simulator for its AI models to train on bus lanes in China because they are complicated

In China, which is a gigantic market, we do have some challenges because they currently don't allow us to transfer training video outside of China. And then the U.S. government won't let us do training in China. So we're in a bit of a bind there. It's a bit of a quandary. So what we're really doing is solving it by literally looking at videos of streets in China that are available on the Internet, and feeding that into our video training, so that publicly available video of street signs and traffic rules in China can be used for training, and then also putting it in a very accurate simulator. And so it will train using sim for bus lanes in China. Bus lanes in China, by the way: one of our biggest challenges in making FSD work in China is that the bus lanes are very complicated. There are literally hours of the day that you're allowed to be there and not be there. And if you accidentally go into a bus lane at the wrong time, you get an automatic ticket instantly. So bus lanes in China were kind of a big deal. We put that into our simulator and train on that; the car has to know what time of the day it is and read the sign. We'll get this solved.
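Tesla has not disclosed how its simulator encodes these rules, but the constraint Musk describes (a lane that private cars may only use outside posted hours) is simple to state in code. Here is a minimal sketch, with hypothetical restriction windows and function names invented purely for illustration:

```python
from datetime import time

# Hypothetical time-of-day bus-lane rule: a lane ordinary cars may only use
# outside posted restriction windows. The windows below are made up; real
# signs vary by city and lane.
RESTRICTED_WINDOWS = [
    (time(7, 0), time(9, 0)),    # morning peak: buses only
    (time(17, 0), time(19, 0)),  # evening peak: buses only
]

def car_may_use_bus_lane(now: time) -> bool:
    """Return True if a private car may occupy the lane at this time."""
    return not any(start <= now < end for start, end in RESTRICTED_WINDOWS)

print(car_may_use_bus_lane(time(8, 30)))  # False: inside the morning window
print(car_may_use_bus_lane(time(12, 0)))  # True: lane open to all traffic
```

A simulator can then randomise the clock and the posted windows so the driving policy learns to read the sign and check the time before entering the lane.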

Elon Musk knows LiDAR technology well because he built a LiDAR system for SpaceX that is in use today, but he thinks LiDAR is simply the wrong technology for autonomous vehicles because it has issues, and because humans drive vehicles with just their eyes and biological neural nets

[Question] You’ve said in the past about LiDAR, for EVs, that LiDAR is a crutch, a fool’s errand. I think you even told me once, even if it was free, you’d say you wouldn’t use it. Do you still feel that way?

[Answer] Obviously humans drive without shooting lasers out of their eyes. I mean, unless you're Superman. But humans drive just with passive visual: eyes and a neural net, a biological brain neural net. So the digital equivalent of eyes and a brain are cameras and digital neural nets, or AI. The entire road system was designed for passive optical neural nets. That's how the whole road system was designed and what everyone is expecting; that's how we expect other cars to behave. So therefore, that is very obviously the solution for full self-driving, the generalized solution for full self-driving, as opposed to the very specific neighborhood-by-neighborhood solution, which is very difficult to maintain, and which is what our competitors are doing…

…LiDAR has a lot of issues. Like SpaceX Dragon docks with the space station using LiDAR, that’s a program that I personally spearheaded. I don’t have some fundamental bizarre dislike of LiDAR. It’s simply the wrong solution for driving cars on roads…

…I literally designed and built our own red LiDAR. I oversaw the project and the engineering directly, and it was my decision to use LiDAR on Dragon. So we literally designed and made a LiDAR to dock with the space station. If I thought it was the right solution for cars, I would do that, but it isn't.

The Trade Desk (NASDAQ: TTD)

Trade Desk’s management continues to invest in AI and thinks that AI is game-changing for forecasting and insights on identity and measurement; Trade Desk’s AI efforts started in 2017 with Koa, but management sees much bigger opportunities today; management is asking every development team in Trade Desk to look for opportunities to introduce AI into Trade Desk’s platform; there are already hundreds of AI enhancements to Trade Desk’s platform that have been shipped or are going to be shipped in 2025

AI is providing next-level performance in targeting and optimization, but it is also particularly game-changing in forecasting and identity and measurement. We continue to look at our technology stack and ask, where can we inject AI and enhance our product and client outcomes? Over and over again, we are finding new opportunities to make AI investments…

…We started our ML and AI efforts in 2017 with the launch of Koa, but today, the opportunities are much bigger. We're asking every scrum inside of our company to look for opportunities to inject AI into our platform. Hundreds of enhancements recently shipped and coming in 2025 would not be possible without AI. We must keep the pedal to the metal, not to chest-thump on stages, which everyone else seems to be doing, but instead to produce results and win share.

Wix (NASDAQ: WIX)

Wix’s AI Website Builder was launched in 2024 and has driven stronger conversion and purchase behaviour from users; more than 1 million sites have been created and published with AI Website Builder; most new Wix users today are creating their websites through Wix’s AI tools and AI Website Builder and these users have higher rates of converting to paid subscribers

2024 was also the year of AI innovation. In addition to the significant number of AI tools introduced, we notably launched our AI Website Builder, the new generation of our previous AI site builder introduced in 2016. The new AI Website Builder continues to drive demonstrably stronger conversion and purchase behavior…

…Over 1 million sites have been created and published with the Website Builder…

…Most new users today are creating their websites through our AI-powered onboarding process and Website Builder which is leading to a meaningful increase in conversion of free users to paid subscriptions, particularly among Self Creators.

Wix’s management launched Wix’s first directly monetised AI product – AI Site Chat – in December 2024; AI Site Chat will help businesses converse with customers round the clock; users of AI Site Chat have free limited access, with an option to pay for additional usage; AI Site Chat’s preliminary results look very promising

In December, we also rolled out our first directly monetized AI product – the AI Site Chat…

…The AI Site-Chat was launched mid-December to Wix users in English, providing businesses with the ability to connect with visitors 24/7, answer their questions, and provide relevant information in real time, even when business owners are unavailable. By enhancing availability and engagement on their websites, the feature empowers businesses to meet the needs of their customers around the clock, ultimately improving the customer experience and driving potential sales. Users have free limited access with the option to upgrade to premium plans for additional usage…

…So if you're a Wix customer, you can now install an AI-powered chat on your website, and this will handle customer requests, product inquiries, and support requests. And again, it's very early days, but the preliminary results look very promising.
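Wix has not described the mechanics behind "free limited access with an option to pay for additional usage", but it maps to a standard metering pattern. Here is a minimal sketch, where the quota size, class names, and upgrade prompt are all invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical free-tier metering sketch; the quota and names are invented.
FREE_MONTHLY_QUOTA = 50  # assumed number of free AI chat replies per month

@dataclass
class SiteChatAccount:
    replies_used: int = 0
    premium: bool = False

    def can_answer(self) -> bool:
        # Premium plans are unmetered here; free plans stop at the quota.
        return self.premium or self.replies_used < FREE_MONTHLY_QUOTA

    def record_reply(self) -> None:
        self.replies_used += 1

account = SiteChatAccount()
if account.can_answer():
    account.record_reply()  # serve the AI response, then count it
else:
    print("Quota reached: prompt the site owner to upgrade.")
```

The design choice is the one the quote implies: the free tier seeds adoption, and the meter creates the upgrade path that makes the product directly monetisable.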

AI agents and assistants are an important part of management’s product roadmap for Wix in 2025; Wix is testing (1) an AI assistant for its Wix Business Manager dashboard, and (2) Marketing Agent, a directly monetizable AI agent that helps users accomplish marketing tasks; Marketing Agent is the first of a number of specialised AI agents management will roll out in 2025; management intends to test monetisation opportunities with the new AI agents

AI remains a major part of our 2025 product roadmap with particular focus on AI-powered agents and assistants…

…Currently, we are testing our AI Assistant within the Wix Business Manager as well as our AI Marketing Agent.

The AI Assistant in the Wix Business Manager is a seamlessly integrated chat interface within the dashboard. Acting as a trusted aide, this assistant guides users through their management journey by providing answers to questions and valuable insights about their site. With its comprehensive knowledge, the AI Assistant empowers users to better understand and leverage available resources, assisting with site operations and business tasks. For instance, it can suggest content options, address support inquiries, and analyze analytics—all from a single entry point.

The AI Marketing Agent helps businesses to market themselves online by proactively generating tailored marketing plans that align with users’ goals and target audiences. By analyzing data from their website, the AI delivers personalized strategies to enhance SEO, create engaging content, manage social media, run email campaigns and optimize paid advertising—all with minimal effort from the user. This solution not only simplifies marketing but also drives Wix’s monetization strategy, seamlessly guiding users toward high-impact paid advertising and premium marketing solutions. As businesses invest in growing their online presence, Wix benefits through a share of ad spend and premium feature adoption—fueling both user success and revenue growth.

We will continue to release and optimize specialized AI agents that assist our users in building the online presence they envision. We are exploring various monetization strategies as we fully roll out these agents and adoption increases.

Wix’s management is seeing Wix’s gross margin improve because of AI integration in customer care

Creative Subscriptions non-GAAP gross margin improved to 85% in Q4’24 and to 84% for the full year 2024, up from 82% in 2023. Business Solutions non-GAAP gross margin increased to 32% in Q4’24 and to slightly above 30% for the full year 2024. Continued gross margin expansion is the product of multiple years of cost structure optimization and efficiencies from AI integration across our Customer Care operations.

Wix’s management believes the opportunity for Wix in the AI era is bigger than what came before

There are a lot of discussions and a lot of theories about it. But I really believe that the opportunity is bigger than anything else, because what we have today is going to continue to dramatically evolve into something that is probably more powerful and more enabling for small businesses to be successful. Overall, the Internet has a tendency to reinvent itself every 10 years or so, right? In the '90s the Internet started and became HTML, then it became images, then later on videos, and then it became mobile, right? And then everything became interactive; everything became an application, kind of an application. And I think how websites will look in the AI universe is the next step, and I think there are a lot of exciting things we can offer our users there.

Visa (NYSE: V)

Visa is an early adopter of AI and management continues to drive adoption; Visa has seen material gains in engineering productivity; Visa has deployed AI in many functions, such as analytics, sales, finance, and marketing

We were very early adopters of artificial intelligence, and we continue to drive hard at the adoption of generative AI as we have for the last couple of years. So we've been working to embed AI and AI tooling into our company's operations broadly. We've seen material gains in productivity, particularly in our engineering teams. We've deployed AI tooling in client services, sales, finance, marketing, really everywhere across the company. And we were a very early adopter of applied AI in the analytics and modeling space, very early, like decades early. So our data science and risk management teams have, at this point, decades of applied experience with AI, and they're aggressively adopting the current generations of AI technology to enhance both our internal and our market-facing predictive and detective modeling capabilities. Our product teams are also aggressively adopting gen AI to build and ship new products.

Zoom Communications (NASDAQ: ZM)

Zoom AI Companion’s monthly active users (MAUs) grew 68% quarter-on-quarter; management has added new agentic AI capabilities to Zoom AI Companion; management will launch the Custom AI Companion add-on in April 2025; management will launch AI Companion for clinicians in March 2025; Zoom AI Companion is added into a low-end Zoom subscription plan at no added cost, and customers do not want to leave their subscriptions because of the added benefit of Zoom AI Companion; Zoom will be monetising Zoom AI Companion from April 2025 onwards through the Custom AI Companion add-on; the Custom AI Companion add-on would be $12 a seat when it’s launched in April 2025 and management thinks this price would provide a really compelling TCO (total cost of ownership) for customers; management thinks Custom AI Companion would have a bigger impact on Zoom’s revenue in 2026 (FY2027) than in 2025 (FY2026); see Point 28 for use cases for Custom AI Companion

Growth in monthly active users of Zoom AI Companion has accelerated to 68% quarter-over-quarter, demonstrating the real value AI is providing customers…

As part of AI Companion 2.0, we added advanced agentic capabilities, including memory, reasoning, orchestration and a seamless integration with Microsoft and Google services. In April, we’re launching Custom AI Companion add-on to automate workplace tasks through custom agents. This will personalize AI to fit customer needs, connect with their existing data, and work seamlessly with their third-party tools. We’re also enhancing Zoom Workplace for Clinicians with an upgraded AI Companion that will enable clinical note-taking capabilities and specialized medical features for healthcare providers starting in March…

…If you look at our low-end SMB customers, our online buyers, AI Companion is part of that at no additional cost, and it has made our service very sticky. The customers, to give a very basic example, like meeting summary, right? It works so well that more and more customers see the value…

For the high end, for sure, we understand that today's AI Companion, at no additional cost, we cannot monetize. However, in April, we are going to announce the customized Companion for interested customers, which we can monetize…

…[Question] So in April, when the AI customization, the AI Companion becomes available, I think it’s $11 or $12 a seat. Can you maybe help us understand how you’re thinking about like what’s the real use case?

[Answer] In regards to your question about what are sort of the assumptions, or what's the target in our [ head ] with the $12 Custom AI Companion SKU: I would say, starting with enterprise customers, obviously the easiest place to sort of pounce is our own customer base and talk about that, but it's certainly not just limited to that. We'll probably be giving a lot more, I would say, at Enterprise Connect, which you can see on the thing there. But I would say we've assumed some degree of monetization in FY '26, and I think you'll see more of it in FY '27. And we think that the $12 price point is going to be a really compelling TCO story for our customers; it's differentiated from what others in the market are pricing now.

The Zoom Virtual Agent feature will soon be able to handle complex tasks

Zoom Virtual Agent will soon expand reasoning abilities to handle complex tasks while maintaining conversational context for more natural and helpful outcomes.

Zoom's management believes Zoom is uniquely positioned to win in agentic AI for a few reasons, including Zoom having exceptional context from users' ongoing conversations, and Zoom's federated AI approach where the company can use the best models for each task

We’re uniquely positioned to succeed in agentic AI for several reasons:

● Zoom is a system of engagement for our users with recent information in ongoing conversations. This exceptional context along with user engagement allows us to drive greater value for customers.

● Our federated AI approach lets us combine the best models for each task. We can use specialized small language models where appropriate, while leveraging larger models for more complex reasoning – driving both quality and cost efficiency. (A minimal sketch of this routing idea follows below.)
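Zoom has not published how its federated approach decides which model handles a task, but the idea of routing simple requests to a small, cheap model and reasoning-heavy requests to a larger one can be sketched simply. Everything below (the complexity heuristic, the class names, and the generate() interface) is an assumption for illustration, not Zoom's actual system:

```python
from typing import Protocol

class Model(Protocol):
    def generate(self, prompt: str) -> str: ...

class SmallModel:
    def generate(self, prompt: str) -> str:
        return f"[small-model answer to: {prompt!r}]"

class LargeModel:
    def generate(self, prompt: str) -> str:
        return f"[large-model answer to: {prompt!r}]"

# Crude stand-in for a task classifier: long or reasoning-heavy prompts go
# to the large model; everything else stays on the small, cheap one.
COMPLEX_HINTS = ("why", "plan", "compare", "summarize")

def route(prompt: str, small: Model, large: Model) -> str:
    needs_reasoning = len(prompt) > 200 or any(h in prompt.lower() for h in COMPLEX_HINTS)
    return (large if needs_reasoning else small).generate(prompt)

print(route("What time is the stand-up?", SmallModel(), LargeModel()))
print(route("Compare our Q3 and Q4 pipelines and plan next steps.", SmallModel(), LargeModel()))
```

The cost-quality trade-off Zoom describes falls out of the routing: most traffic never touches the expensive model.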

Zoom's management is seeing large businesses choose Zoom because of the AI features in its products; management also sees these AI business services as another great way to monetise AI

You take a Contact Center, for example: why are we winning? Because of a lot of AI features, like AI Expert Assist, and a lot of AI features built into our quality management and so on and so forth. But all those business services, that's another great way for us to monetize AI.

Zoom’s management thinks Zoom’s cost of ownership with AI is lower than what competitors are offering

And I look at our AI Companion: all those AI Companion core features today come at no additional cost, right? And customers really like it because of the quality; it's getting better and better every quarter, and it's very useful, right? Not like some other competitors, right? They talk about their AI strategy, and when customers realize it, wow, it's very expensive. And the total cost of ownership is not getting better, because the cost of the value is not [ great ], but also it's not [ free ] and they always try to increase the price.

A good example of a use case for Custom AI Companion

[Question] So in April, when the AI customization, the AI Companion becomes available, I think it’s $11 or $12 a seat. Can you maybe help us understand how you’re thinking about like what’s the real use case?

[Answer] So regarding the Custom AI Companion add-on use cases: at a high level, we give customers the ability to customize for their needs. I'll give a few examples. One feature, like we have a Zoom service call video clip, and we are going to support the standard template, right? But how do we support every customer? They can have a customized template for each of their users, and this is part of AI Companion Studio, right? And also all kinds of third-party integration, right? Some customers prefer certain third-party application integrations, with their data, with their knowledge, whether the [ big scenery ], a lot of things, right? Each company is different and wants its own customization, so we can leverage our AI Companion Studio to work together with the customer to support their needs and, at the same time, monetize.

Zoom’s management expects the cost from AI usage to increase and so that will impact Zoom’s margins in the future, but management is also building efficiencies to offset the higher cost of AI

[Question] As we think about a shift more towards AI contribution, aren't we shifting more towards a consumption model rather than a seat model over time? Why wouldn't we see margin compression longer term?

[Answer] Around how to think about margins and business models, and why we don't see compression: what I would say is that what we expect to see is similar to what you saw in FY '25, which is that we're seeing an obvious increase in cost from AI, and we have an ongoing, methodical list of efficiencies to offset it. We certainly expect that broadly to continue into FY '26. So I think we feel good about our ability to moderate that. There are other things we do more holistically where we can offset costs that are maybe not AI-related in our margins, things like [ colos ], et cetera, that we've talked about previously.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Microsoft, Paycom Software, PayPal, Salesforce, Shopify, TSMC, Tesla, The Trade Desk, Wix, Visa, and Zoom. Holdings are subject to change at any time.