Still More Of The Latest Thoughts From American Technology Companies On AI (2025 Q4)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q4 earnings season.

A few weeks ago, I published Even More Of The Latest Thoughts From American Technology Companies On AI (2025 Q4). In it, I shared commentary from earnings conference calls for the fourth quarter of 2025, given by the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large.

A few more technology companies I’m watching hosted earnings conference calls for 2025’s fourth quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

With that, here is the latest commentary, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s AI-first ARR (annual recurring revenue) in 2025 Q4 (FY2026 Q1) tripled year-on-year; management thinks Adobe’s AI-first business will be the company’s next $1 billion business

Our new AI-first offerings ending ARR more than tripled year-over-year, reflecting progress against this opportunity with individuals and enterprises alike…

…What we had identified as the AI-first sort of book of business. That tripled, but that should be our next $1 billion business.

Adobe’s management thinks the company’s success in AI will be underpinned by its deep understanding of the creativity domains, its access to vast data, its delivery of complex workflows, and its great brand; enterprises are increasingly asking Adobe for help on their AI strategy in their customer experience orchestration; management thinks agentic AI will further enable outcome-focused enterprise workflows, and Adobe is uniquely able to meet the needs of enterprises in these areas; emerging new platforms have always been additive to Adobe’s market opportunity; management intends to integrate Adobe with leading AI platforms including Anthropic, Google, Microsoft, NVIDIA, and OpenAI; management is collaborating with global system integrators (GSIs) such as Accenture and Deloitte to drive technological transformation

Adobe’s continued success in AI will be underpinned by our deep understanding of creativity domains, the vast amount of data to which we have access, delivery of complex workflows driving business outcomes, and a great brand across individuals, small and medium businesses and enterprises…

…, Adobe has always been a trusted partner for enterprises and we’re increasingly being asked to help them drive their AI strategy across customer experience orchestration (CXO) globally. Enterprises are looking to the combination of employees and automation to deliver on the demands of content and marketing at scale. Agentic AI will further enable outcome-focused enterprise workflows as customers look beyond speed to elevate creative differentiation, brand governance, and personalized experiences across channels. Adobe’s end-to-end solutions are uniquely designed to meet these needs at scale…

…Emerging new platforms have always been additive to our market opportunity. In addition to Windows, Mac, iOS, Android, Chrome and Edge, we intend to integrate with leading AI platforms such as Anthropic, Google, Microsoft, NVIDIA and OpenAI, providing customers with access, choice, and flexibility. We’re jointly driving enterprise transformation at scale in collaboration with global leaders such as Accenture, Cognizant, Deloitte, dentsu, EY, IBM, Infosys, Omnicom, Publicis, PwC, Stagwell, TCS and WPP.

Adobe’s management’s approach with AI is to expand access to AI in Creative Cloud and Acrobat, reach new audiences with Firefly and Express, and automate content production in Firefly Enterprise; AI usage at Adobe is growing quickly, with record generative credit consumption; Adobe’s content automation solutions are seeing record number of API (application programming interface) calls

Our approach is to expand access to AI across our existing audiences in products like Creative Cloud and Acrobat, reach new audiences with products like Firefly and Express, and help automate content production in enterprises with Firefly Enterprise…

…AI usage continues to grow quickly, as measured through record levels of generative credit consumption…

… Our content automation solutions continue to see strong enterprise adoption, as measured through record numbers of API calls.  These metrics highlight that we are executing against our strategy to empower individuals and businesses to create content in new ways in the era of AI.

Adobe’s management’s approach with AI across Business Professionals & Consumers is to deliver AI-powered applications that reinvent how users comprehend, create and share content; AI Assistant MAU doubled year-on-year in 2025 Q4 (FY2026 Q1) and Express MAU tripled; Express is now used in 99% of US Fortune 500 companies; Adobe Acrobat Studio, introduced recently, brings all of Adobe’s AI and creative capabilities into its PDF tools and is off to a strong start

Our vision for Business Professionals & Consumers is to deliver AI-powered applications that reinvent how users comprehend, create and share content…

…PDF Spaces transforms collections of files and links into dynamic knowledge hubs that allow you to easily collaborate with others. Acrobat AI Assistant provides users conversational experiences that help them comprehend information faster and more accurately with an individual PDF or across documents in a PDF Space. Our Acrobat and Express integrations empower users to turn content they are consuming into generated presentations, infographics, audio summaries and more. It’s clear that these AI-based capabilities are resonating with users, as AI Assistant MAU doubled year over year and Express MAU tripled year over year. Express is now used in 99% of U.S. Fortune 500 companies.

In Q3, we introduced Adobe Acrobat Studio, a single offering that brings together all these AI and creative capabilities with the PDF tools users know and rely on. Subscription upgrades to offerings that include Acrobat Studio value are off to a strong start across routes to market, including Adobe.com and enterprise license renewals.

Adobe’s management is embedding Adobe products directly into chatbots; management launched Acrobat and Express for ChatGPT in 2025 Q4 (FY2026 Q1); management will soon launch similar integrations into Copilot, Claude, and Gemini; management recently launched a Photoshop conversational editing experience in ChatGPT; brands can now create ads for ChatGPT with Adobe’s tools

We are embedding Adobe’s capabilities directly into new conversational platforms. In Q1, we launched both Acrobat and Express for ChatGPT, significantly expanding the reach of our creativity and productivity workflows. You can expect to see similar integrations into Copilot, Claude and Gemini as those platforms support integrated application experiences…

…Photoshop launched a conversational editing experience in ChatGPT…

…Partnership in the OpenAI initiative to enable brands to create ads for ChatGPT

Adobe’s management’s approach with AI across Creators and Creative Professionals is to empower everyone to create, with Firefly, Adobe’s all-in-one creative AI studio, as the centerpiece; enterprises are increasingly turning to Firefly Enterprise to unlock content automation; Firefly users can access over 30 industry-leading models from both Adobe and leading AI labs; Firefly users can edit and assemble images, videos and audio with prompts and in an integrated way with Photoshop and Express; Firefly’s generative credit consumption was up over 45% sequentially in 2025 Q4 (FY2026 Q1); Firefly’s generative credit consumption is skewing toward higher-value modalities, with video generative actions up more than 8x from a year ago and audio generative actions up 2x; Firefly subscription and credit pack ending ARR was up 75% sequentially in 2025 Q4 (FY2026 Q1); Adobe’s management has continued to add new AI capabilities into Creative Cloud applications, which has led to higher AI usage and in turn, a nice ramp in purchases of Firefly credit packs; Adobe’s Creators & Creative Professionals segment saw the traditional Stock business decline faster than management expected; the entire Firefly ecosystem’s ending ARR exceeded $250 million in 2025 Q4 (FY2026 Q1)

Our strategy for Creators & Creative Professionals is to empower everyone to create – from first-time creators to seasoned professionals to large enterprises seeking to scale content production. Firefly, an all-in-one creative AI studio, is the right tool for the next generation of creators and creative professionals…

…Enterprises are increasingly turning to Firefly Enterprise to unlock a new era of content automation.

Firefly is quickly becoming the go-to destination for content generation, ideation and assembly. Users can generate with over 30 industry-leading models, including Adobe, Google and OpenAI. They can collaboratively ideate with stakeholders in Adobe Firefly Boards. They can edit and assemble image, video and audio using Firefly’s prompt-based editing capabilities with integrated Photoshop and Express web journeys. Firefly momentum is strong, with generative credit consumption growing over 45% quarter over quarter. While that growth is broad-based, generations are skewing toward higher-value modalities, with video generative actions growing more than 8x year over year and audio generative actions doubling year over year, reflecting customers moving deeper into AI-assisted creation across the full creative process. As a result, Firefly subscription and credit pack ending ARR grew 75% quarter over quarter.

Creative Cloud applications continue to embed new AI capabilities, making users far more productive. Photoshop added new partner models and support for higher resolution image generation and editing. Illustrator expanded its generative design capabilities with models from OpenAI, Ideogram, and Google to support frequent vector workflows. Premiere added AI Object Mask, which quickly became one of the most used AI features in the application. As Creative Cloud users increase AI usage, we are seeing purchases of Firefly credit packs ramp nicely…

…While Q1 had many highlights, our traditional Stock business saw a steeper decline than we expected. This shift is playing out more quickly than we had planned for and our focus remains on giving customers meaningful choice between stock and generative AI as they build their creative and marketing workflows…

Firefly ending ARR, across Firefly App, Firefly credit packs, and Firefly Enterprise, exceeded $250 million

Firefly Enterprise combines Firefly Services and Firefly Foundry; Firefly Services provides APIs for automated content production workflows, including 3D digital twin workflows, image and video resizing across every social and digital channel, campaign variant generation, and more; Firefly Foundry allows enterprises to build private, deeply tuned AI models trained on their own IP (intellectual property), and gives enterprises a commercially safe model that is able to accurately generate their branded assets; Firefly Enterprise’s new customer acquisition was up 50% in 2025 Q4 (FY2026 Q1) from a year ago; Firefly Foundry recently signed new partnerships in the media & entertainment vertical

Firefly Enterprise, the combination of Firefly Services and Firefly Foundry, is empowering the world’s largest brands to scale content production to unprecedented levels. Firefly Services provide enterprise-grade APIs, giving businesses more than 30 content production capabilities which can be run in automated workflows. These include 3D digital twin workflows for showcasing physical products, image and video resizing across every social and digital channel, and campaign variant generation and assembly for personalized marketing content. Firefly Foundry enables the world’s largest marketing teams and media companies to build private, deeply tuned AI models trained on their own IP. Unlike generic AI models, Firefly Foundry gives enterprises a commercially safe model that understands and is able to accurately generate their branded assets. Together, these products are driving measurable business outcomes, by increasing production scale, accelerating velocity and reducing costs. Firefly Enterprise new customer acquisition grew 50% year over year…

…Firefly Foundry continues to build momentum in the media & entertainment vertical, with partnerships including B5 Studios, Cantina Creative, Creative Artists Agency, United Talent Agency and WME. 
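To make the idea of API-driven content production above more concrete, here is a rough sketch of what an automated channel-resizing workflow could look like. The endpoint, parameters, and asset names below are hypothetical stand-ins of my own, not Adobe’s actual Firefly Services API:

```python
# Purely illustrative automation loop; the endpoint and parameters are
# hypothetical stand-ins, not Adobe's actual Firefly Services API.
import json
from urllib import request

# Target dimensions per channel (assumed values for illustration).
CHANNEL_SIZES = {"instagram_feed": (1080, 1080), "story": (1080, 1920),
                 "youtube_thumb": (1280, 720)}

def resize_for_channel(asset_id: str, channel: str, api_base: str, token: str) -> dict:
    """POST one resize job for one channel and return the API's JSON response."""
    width, height = CHANNEL_SIZES[channel]
    body = json.dumps({"asset": asset_id, "width": width, "height": height}).encode()
    req = request.Request(f"{api_base}/v1/resize", data=body, method="POST",
                          headers={"Authorization": f"Bearer {token}",
                                   "Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)

# One campaign hero image fanned out to every channel in an automated loop
# (commented out since it needs a live endpoint):
# for channel in CHANNEL_SIZES:
#     resize_for_channel("hero-2026-q1", channel, "https://api.example.com", "TOKEN")
```

The point of the sketch is the shape of the workflow: once assets and sizes are parameterized, producing every channel variant is a loop rather than a manual design task.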

Adobe’s management sees Adobe as the trusted partner for AI-powered Customer Experience Orchestration (CXO) for enterprises; management recently introduced new agents in Adobe Experience Platform (AEP); management recently expanded AEP’s Agent Orchestrator capabilities; AEP now handles over 35 trillion segment evaluations and more than 70 billion profile activations daily; subscription revenue for AEP and native apps grew over 30% year-on-year in 2025 Q4 (FY2026 Q1); traffic to retail sites from LLMs (large language models) was up nearly 7x during the 2025 holiday season; traffic from LLMs to retail sites converts 31% higher and generates 254% more revenue per visit; Adobe has products that help brands engage consumers across their owned properties, search, social media, LLMs and agentic channels; Adobe LLM Optimizer helps enterprises improve their websites’ discoverability by LLMs; Adobe Brand Concierge helps enterprises configure and manage agentic AI experiences on their websites and mobile apps; Adobe is in the process of acquiring Semrush and management expects Semrush to help Adobe provide a comprehensive solution for enterprises to shape brand image across their own websites, LLMs, and traditional search; over 650 customer trials for Adobe LLM Optimizer, Sites Optimizer, and Brand Concierge are underway; AEP AI Assistant is now used by 70% of all AEP customers

Adobe has become the trusted partner for AI-powered Customer Experience Orchestration (CXO) through our thought leadership, rapid innovation, and omnichannel capabilities, while providing the security, reliability, data governance, global scale, and partner ecosystem that enterprises require. 

Adobe’s unified CXO platform provides solutions for brand visibility, content supply chain and customer engagement. Adobe Experience Platform (AEP) is a leading platform for digital customer engagement and brings together new AI-powered apps and agents to transform how businesses build, deliver and optimize marketing campaigns and customer experiences, as well as reduce costs. In Q1, we introduced new AEP Agents along with expanded Agent Orchestrator capabilities, now available to all AEP customers, via a Try and Buy program. The scale of our platform has grown to over 35 trillion segment evaluations and more than 70 billion profile activations per day. Subscription revenue for AEP and native apps grew over 30% year over year, demonstrating continued momentum and value realization…

…According to Adobe Digital Insights, during the 2025 holiday season, traffic to retail sites from LLMs increased nearly 7x, bringing qualified referrals that convert 31% higher and generate 254% more revenue per visit. Adobe’s brand visibility solution, which includes Adobe Experience Manager, Adobe LLM Optimizer and Adobe Brand Concierge, empowers brands to engage consumers across their owned properties, search, social media, LLMs and agentic channels. Adobe LLM Optimizer enables enterprises to enhance the discoverability of their websites by LLMs and significantly increase their organic traffic. Adobe Brand Concierge is an AI-first application enabling businesses to configure and manage agentic AI experiences on their websites and mobile apps to guide consumers from exploration to purchase decisions, using immersive and conversational experiences. We expect our pending acquisition of Semrush will expand our offering to provide marketers with a comprehensive solution to shape how their brands appear across their own websites, LLMs, traditional search and the wider web…

…Strong customer demand for our agentic web offerings with over 650 customer trials underway for Adobe LLM Optimizer, Sites Optimizer, and Brand Concierge…

…Continued adoption and momentum for AEP AI Assistant with 70% of all AEP customers using the agentic capabilities;

Adobe’s management recently delivered innovation that enabled GenStudio-created content assets to flow directly into activation workflows across Adobe’s stack and some of the largest 3rd-party advertising platforms; Adobe GenStudio’s family of products saw ending ARR grow 30% year-on-year in 2025 Q4 (FY2026 Q1)

GenStudio is our comprehensive content supply chain offering, spanning content ideation, creation, production, and activation…

… In Q1, we delivered breakthrough innovations enabling GenStudio-created assets to flow directly into activation workflows across the Adobe stack and a broad ecosystem of advertising platforms including Amazon Ads, Google, LinkedIn, and Meta. Ending ARR for the Adobe GenStudio family of products grew over 30% year over year as the world’s leading brands and agencies increasingly turn to Adobe to power their content supply chain.

Adobe’s management thinks that only 2-3 really large LLMs (large language models) will succeed because people are not interested in the model itself but in the workflows; management thinks it’s the right strategic move for Adobe to provide a choice of models because customers can then use the right models for the right use cases; management thinks Adobe providing different models is a win-win for Adobe and the model providers, because the model providers want access to customers while Adobe wants their different model capabilities

My take on the model side would be as follows, which is there are going to be 2 or 3 really large language models that actually succeed. All of these individual models that exist, small model companies in 1 part of a media ecosystem, I just don’t see how long term they survive because people aren’t interested in just the model, they’re interested in the workflow. And so for us, offering customers with that choice was actually very strategic because we can actually then provide for all of our creative customers the right model for the right case because these all have different brands…

…As it relates to the support of all these models, I think it’s a win-win. They would like access to customers, which Adobe has, and we would like access to these different models because they have different brand attributes. And I think if you look at the larger companies like Google, we’re actually with them and with Nano Banana. It’s been a great partnership because we are providing them with a lot of customers and they’re providing us with great technology.

Okta (NASDAQ: OKTA)

Okta’s management thinks the market for securing AI agents is still early; management thinks that Okta is well positioned to help companies secure their AI agents; 91% of organisations surveyed by Okta are using AI, but only 10% have a governance strategy for their use of AI; when management speaks to customers, the customers ask how Okta can help them build and manage agents securely; management thinks that the surface area for threat actors increases as AI becomes embedded in more workflows and automations; management sees AI agents as a new identity type, and securing identities is Okta’s expertise; Okta can secure the entire agentic lifecycle and gives customers the freedom to deploy agents without any ecosystem lock-in; Okta’s solutions for securing AI agents, Auth0 for AI Agents and Okta for AI Agents, treat AI agents similarly to human users; management believes that AI agents are the future of software; Okta for AI Agents became available in early access only in January 2026; Okta’s solutions can enable organisations to observe, govern, and secure the entire life cycle of an AI agent; management thinks identity is even more important in the agentic world than before; management thinks Okta for AI Agents is more unique and differentiated than Auth0 for AI Agents; Okta for AI Agents can help customers understand what different agents are doing

I mentioned that our portfolio of new products now includes our AI products, Auth0 for AI Agents and Okta for AI Agents. It is still early for this developing market, but as the leading modern identity solution for workforce and customer identity, Okta is uniquely positioned to help organizations combat the growing security threat that AI agents represent. The reality is that the AI revolution has moved faster than today’s security frameworks. According to Okta’s AI at Work report, 91% of surveyed organizations are already using AI but only 10% have a governance strategy in place.

In meetings that I have had with customers and prospects over the past six months, the vast majority of the conversations revolve around their AI initiatives and how Okta can help them build and manage agents securely. As AI becomes embedded in more workflows and automations, the growing number of exploitable entry points—from nonhuman identities to unsecured integrations—expands the attack surface for threat actors. It is clear that in order to get AI right, you have to get identity right. Okta was built to meet this challenge…

…AI agents are simply a new identity type, and protecting them is a natural extension of what we do best. Okta’s neutral and independent identity solution is uniquely positioned to secure and govern the entire agentic lifecycle and gives customers the freedom to deploy on any agent without ecosystem lock-in, all while strengthening their security posture. Our two-pronged solution with Auth0 and Okta for AI Agents treats AI agents with the same importance as humans and gives customers everything they need to secure this powerful new technology. 

We are still in the early stages, but we believe that in a few years, agents and agentic systems will not be the exception to how enterprise software is built and operated. They will be the rule. We believe that AI agents represent nothing less than the future of software…

…Okta for AI Agents, which became available in early access in January…

…With our solutions, developers, administrators and IT teams can ensure that the entire life cycle of an AI agent from initial design through active deployment is observable, governable and secure…

…Identity is at the center of — traditionally, in legacy technology, it was always at the center. And in this agentic world going forward, it’s becoming clear to everyone, it’s even a bigger deal than it was before…

…[Question] It seems like you’ve got a real competitive advantage on the Auth0 side. Could you maybe compare, and contrast initial takes for sales cycles, competitive dynamics and velocity of each? I know it’s still early stages, but is Okta for AI Agents in a more competitive market?

[Answer] I think Okta for AI Agents is more unique and more differentiated than maybe we would have expected. I think Auth0 for AI Agents is unique and differentiated as well. But I think maybe the sentiment you’re expressing is it’s different than what we’re seeing. Customers need a solution that’s pre-integrated to all these agentic systems. I mean there’s no good way for customers to even understand what all these vendors are doing in agentic. There’s no catalog of systems that says, Salesforce is doing this. ServiceNow is doing this, AgentCore is this, Google is doing this, Microsoft is doing this. And that’s what Okta for AI Agents does. And then on top of that, it models connections and has policies for connections, connecting users to different agents and agents to systems.
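The “catalog plus connection policy” idea in the answer above can be pictured as a small graph of allowed edges. The following is a toy sketch of my own; the platform names and policy structure are hypothetical, not Okta’s actual product:

```python
# Toy sketch of a connection policy (not Okta's actual product): a catalog of
# known agent platforms plus explicit user->agent and agent->system allow-lists.
CATALOG = {"salesforce-agent", "servicenow-agent", "copilot-agent"}

USER_TO_AGENT = {("alice", "salesforce-agent"), ("alice", "copilot-agent")}
AGENT_TO_SYSTEM = {("salesforce-agent", "crm-db"), ("copilot-agent", "wiki")}

def may_act(user: str, agent: str, system: str) -> bool:
    """Allow only cataloged agents, and only along explicitly policied edges."""
    return (agent in CATALOG
            and (user, agent) in USER_TO_AGENT
            and (agent, system) in AGENT_TO_SYSTEM)

print(may_act("alice", "salesforce-agent", "crm-db"))   # True: both edges granted
print(may_act("alice", "servicenow-agent", "crm-db"))   # False: no user->agent grant
```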

A financial services platform company is an existing Auth0 customer and it picked Auth0 for AI Agents to build AI agents; the financial services platform found Auth0 for AI Agents offered enterprise-grade identity for humans and agents, and secure access to 3rd-party MCP (model context protocol) servers

An existing Auth0 customer is building AI agents as part of their leading financial services platform. These agents will help the firm’s advisers make better and faster decisions, but to do so, the agents need access to sensitive customer information, which must be least-privileged. And they need to work with existing systems and third-party services inside the financial institution. The customer picked Auth0 for AI Agents as it met their stringent requirements for a secure, extensible platform to build and deploy agentic systems. They needed a solution that offered enterprise-grade identity for humans and agents while providing secure access to third-party MCP servers, all while acting as a single source of truth.
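For a concrete picture of the token-vault pattern mentioned above, here is a minimal sketch. The flow shown, where an agent never holds long-lived credentials and instead requests short-lived, least-privileged tokens from a vault, is my own illustration; the class and method names are hypothetical, not Auth0’s API:

```python
# Illustrative sketch only: the names and flow are hypothetical, not Auth0's API.
import secrets
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class Grant:
    principal: str           # the human the agent acts on behalf of
    server: str              # third-party MCP server identifier
    scopes: frozenset        # least-privileged scopes approved for this pairing
    ttl_seconds: int = 300   # issued tokens are short-lived by design

class TokenVault:
    """Holds the long-lived credentials; agents only ever see short-lived tokens."""

    def __init__(self):
        self._grants = {}

    def approve(self, grant: Grant) -> None:
        self._grants[(grant.principal, grant.server)] = grant

    def issue_token(self, principal: str, server: str, requested: set) -> dict:
        grant = self._grants.get((principal, server))
        if grant is None:
            raise PermissionError(f"no grant for {principal} -> {server}")
        if not requested <= grant.scopes:
            raise PermissionError(f"scopes not approved: {requested - grant.scopes}")
        return {
            "token": secrets.token_urlsafe(32),   # opaque, never the raw credential
            "scopes": sorted(requested),
            "expires_at": time.time() + grant.ttl_seconds,
        }

# Usage: an agent acting for an adviser asks for read-only portfolio access.
vault = TokenVault()
vault.approve(Grant("adviser-17", "mcp://portfolio", frozenset({"read:positions"})))
token = vault.issue_token("adviser-17", "mcp://portfolio", {"read:positions"})
print(token["scopes"])   # ['read:positions']
```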

A global business and technology services provider is rolling out AI agents across multiple agent platforms and chose Okta for AI Agents to manage identities for its growing sprawl of agents; Okta is an independent agent-agnostic platform

Another notable deal that included Okta for AI Agents, which became available in early access in January was with a top global business and technology services provider. They chose Okta for AI Agents to help them discover, control and govern identities for their growing sprawl of agents. Rolling out AI agents across multiple agent platforms is key to their ongoing transformation and centralizing agentic identities in an independent agent-agnostic platform like Okta will strengthen their cybersecurity posture.

Okta for AI Agents and Auth0 for AI Agents contribute very little revenue at the moment because they are still very young products, but management thinks they can be a huge source of upside in the coming years; Okta for AI Agents and Auth0 for AI Agents will lead to higher growth in current RPO before it flows down to revenue

Okta for AI Agents is not even generally available yet, and Auth0 for AI Agents is — just was generally available at the beginning of the quarter. So it’s off to a huge start. Now the relative number is small compared to our $3 billion revenue run rate. But looking forward to next year, we’re very, very excited about the potential of these products…

…Because the agentic products are so new, it’s tough to pour too much into our assumptions about growth in terms of guidance. But I think those things could be a huge source of upside over and above the guidance in the years ahead…

…We’re not thinking about this as an opportunity just for FY ’27. This is an opportunity to be accretive to growth for FY ’28, ’29. And we’ll see the results, as you guys know, in current RPO first before we see it in revenue…

There is some confusion that Okta’s customers have between identity infrastructure and identity security; identity infrastructure and identity security are separate things, and Okta is the only company that does both; management sees both identity infrastructure and identity security as being really important for the agentic market; management is not seeing any big change in the competitive landscape for Okta in the agentic market for identity infrastructure and identity security

I think the biggest confusion people have is the distinction between identity infrastructure and identity security. And they hear the word identity, and they think if you’re sitting on top of identity and detecting threats and blocking threats, you’re also identity infrastructure. So that’s one of the big confusions. And when you look at the agentic market, they’re both really important. It’s the identity security, making sure the agents are monitored and checked that they can’t go out of bounds. But just the infrastructure, just the ability for the agents to connect and just for tracking and visibility, that’s an infrastructure play. And we’re the only company that really does both. It’s at the security layer and the infrastructure layer. So I think that is maybe a little bit of a confusion and something that we’re working hard to make sure everyone understands the advantage of that position as well…

…From an Okta standpoint, we’re not seeing any material change in the competitive behavior in our transactions yet. Of course, we’re keeping our eye on the landscape.

Okta’s management has been speaking to customers, and they see 2 ways to charge for agents: (1) a multiplier on the price of a person who uses agents, and (2) for agents not coupled to a person, a fee based on the number of connections the agent makes; it’s still early days for the pricing model Okta will adopt, but management sees the pricing as a nice step up for the company

We have these conversations with our 20,000 customers, we get really rapid feedback on how we can capture value, what would be most valuable for them, easy for them to consume. So it’s really a strategic advantage. We have this feedback loop, and we’ve actually structured the go-to-market team for AI agents to capture that feedback rapidly and feed it right back into the product teams. And what we’re seeing is that there’s really 2 ways that we charge for agents. One is like a multiplier on a person. So in the model where a human identity uses a number of agents to augment their work, there’s a multiplier on that agent or on that — what they pay for a person to what they pay for agents. And then also, there’s a — if the agent is not coupled to a person, there’s a — we sell it based on the number of connections the agent makes because that’s really the value. They want to secure those connections and filter on fine-grain access to all the back-end systems and the SaaS applications and the custom applications and data warehouses the agent connects to as they get more — the agent is more valuable as it has more fine-grained access to different things and it’s more secure. So there’s a multiple based on that. The pricing we’re working with these customers on is pretty early. So we’re — it’s a nice step up.
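The two pricing shapes described above can be made concrete with a toy calculation. All the numbers below are hypothetical, purely to illustrate the structure:

```python
# Hypothetical numbers purely to illustrate the two pricing shapes described.
SEAT_PRICE = 10.0           # $/month for a human identity (assumed)
AGENT_MULTIPLIER = 0.25     # each coupled agent priced as a fraction of a seat (assumed)
PRICE_PER_CONNECTION = 2.0  # $/month per system a standalone agent connects to (assumed)

def coupled_agent_price(agents_per_human: int) -> float:
    """Model 1: agents that augment a person are a multiplier on the seat."""
    return SEAT_PRICE * (1 + AGENT_MULTIPLIER * agents_per_human)

def standalone_agent_price(connections: int) -> float:
    """Model 2: agents not tied to a person are priced on connections, since
    fine-grained access to more systems is where the value (and risk) sits."""
    return PRICE_PER_CONNECTION * connections

print(coupled_agent_price(4))       # 20.0: one seat plus four helper agents
print(standalone_agent_price(12))   # 24.0: one autonomous agent touching 12 systems
```

Either way, the unit of pricing scales with how much work agents do, which is why management frames this as a step up from per-seat pricing.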

From a hypothetical point of view, Okta’s management thinks it’s really difficult and costly to vibe code a competing product to what Okta has built over the years because the vibe coder (1) needs to ensure there are no vulnerabilities and the product can scale, (2) is likely to incur significant inference costs, and (3) will suffer major costs if things go wrong; Okta’s management is hearing customers share views similar to their own when it comes to vibe coding; management is paranoid about competition from vibe coding and Okta is using LLMs and coding tools to build as fast as possible; customers are telling Okta that they do not want to use startups for securing AI agents and they do not want to be locked into just one provider for agents

[Question] When you look at what you’ve built over the years and the data that you’re sitting on, can you talk about sort of the structural advantages that you see over maybe some upstarts or some vibe coding alternatives?

[Answer] I think if you want to build what any SaaS company has done or what Okta has done, it’s years and years of hardening and making sure there’s no vulnerabilities and making sure it scales and it’s reliable. And it’s — if you — I don’t know what the inference cost to build that would be, but it would be pretty significant inference cost. And then if you flip it around, you just think about what’s the price of getting it wrong. And if getting it wrong, it’s hard to validate. It’s hard to prove you have it right. And if it’s wrong, you have a major security breach or you’re down and none of your agents or none of your people can access systems. So the cost of getting it wrong hypothetically and actually just the cost to do it theoretically, if it was even possible theoretically with an LLM or a tool would be pretty high. And that cost could change over time. We don’t know… But when you talk to customers and you hear their challenges and their opportunities, they — a lot of the same things are echoed. They want to identify key infrastructure pillars, and they want to standardize on them. And they see that as the unlock to hundreds of other decisions and hundreds of other builds versus buy decisions they have to make. And they’re putting foundational security, foundational identity in this bucket of things that they want to partner with a leader and trust it and go on top of that and figure everything else out. That’s what they’re telling me. And it kind of matches up with what I would think about hypothetically…

…We are paranoid. And we’re making sure that we are using all the latest technologies, LLMs, coding tools to make sure we have not only something that’s resilient and secure but has the best features and the best capabilities. And so we’re making sure that we build things internally as fast as anyone could build them because we — make no mistakes, the prize here that the whole industry is going after, which is this agentic future where digital labor is part of the TAM is a massive prize. And everyone is at some level; big picture is going to be going after this prize. And it’s exciting because it’s greatly expanded the TAM of what Okta could be…

…They’re reticent to trust a start-up with this critical piece of foundation because they know there’s going to be M&A, and they know there’s going to be start-ups going away. There are so many start-ups playing in this space that there’s bound to be a lot of failure, and they don’t want to build their whole foundation around something and have it be pulled out from under them. And the other factor that is in their minds is that they don’t want to be locked in. Think about — what’s happening at agentic and what’s happening in this world, these foundational models are moving incredibly fast. And its Anthropic foundational model that has the leap ahead and then it’s OpenAI and then it’s an open-source model and then it’s — and that’s going to continue for many years. And they don’t want to be locked into a certain stack and a certain set of tools. So they’re reticent to trust their foundational security with one provider, one platform. And back to the start-ups, they know that a bunch of these start-ups are going to get bought by the big players, so they’re thinking, even if I go with a start-up now, it’s going to get sold and then we’d be locked into Microsoft, and they don’t really want that.

Okta’s management thinks the proliferation of AI agents could massively expand Okta’s total addressable market (TAM); management thinks the CIAM (customer identity and access management) market is changing because of AI agents

Think about identity and what it’s been in the past. It’s roughly $20 billion TAM right now in terms of what people spend on the vendor data. We talk about an $80 billion TAM. I mean this could be bigger than — this could be the biggest part of cyber in a few years for sure. And it could be even bigger than that if you really think about the infrastructure that stitches together the entire agentic enterprise and is the plumbing that makes it run…

…The CIAM market is transitioning to be not just a platform for logging in and doing authentication and authorization, but it’s a platform for customers building agentic interfaces to their customers and to agents coming into their systems. So Auth0 for AI Agents, that’s what it is. It’s a token vault. It helps agentic login. It helps customers hook other AI tools up to their customer login. And so I think over time, that market is evolving into something that’s hugely impactful and value delivering for our customers.

Okta’s management is working with standards bodies in building solutions for securing AI agents, but they do not think that there will be only one set of standards that will dominate

They’re all trying to do a ton of things and make their services more agentic and more compelling and security and the ability to have them be more enterprise-ready is on their list, but we have to convince them to get it higher on their list. So it’s not like a competing standard is like a prioritization thing. But remember, we are — we want to provide this identity infrastructure and make sure that we give people this solid foundation to build upon. And that’s going to require standardization just because it’s not going to — you can’t use a standard piece of foundation if everyone is doing their own things in a different way, which is why we’re working with standards bodies in general. It’s not just Cross App Access, but it’s an important part of the equation. But I wouldn’t say like the whole war rests on one specific standards body or standards battle. I think it will be an evolutionary thing over the next several years.

Sea Ltd (NYSE: SE)

Monee’s credit business grew in 2025 because of its AI-driven improvements in risk underwriting capabilities; management is experimenting with transformer-based AI models to assess credit risks and the experiments are showing very good performance

Our credit business expansion in 2025 was made possible by improvement in our risk underwriting capabilities. This improvement tapped on our rich ecosystem data and advancement in AI. Over the year, we made good progress training our risk models to better understand and map how user behavior evolves over time. We are better able to assess individual repayment capacity alongside evolving market risk and dynamically adjust the credit limits as needed. Enhancing our models’ precision and performance enabled us to scale rapidly in 2025, while still maintaining a stable risk profile…

…We’re experimenting with the new AI — new risk model with the transformer structure as well to do a sort of a long sequence data training fit into our model to utilize many of the e-commerce data that we are not able to use in the traditional risk modeling, and it has been showing us very good performance.
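For readers curious what a “transformer structure” over long sequences of user behavior could look like in code, here is a minimal sketch in PyTorch. The architecture and dimensions are assumptions of mine, not Monee’s actual risk model; a real system would also incorporate timestamps, positional information, and far richer features:

```python
# A minimal sketch (assumed architecture, not Sea's actual model) of scoring
# credit risk from a long sequence of e-commerce behavioral events.
import torch
import torch.nn as nn

class SequenceRiskModel(nn.Module):
    def __init__(self, n_event_types=1000, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(n_event_types, d_model)   # one ID per event type
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)                   # default-probability logit

    def forward(self, event_ids):                 # (batch, seq_len) of event-type IDs
        h = self.encoder(self.embed(event_ids))   # contextualize the whole history
        return torch.sigmoid(self.head(h.mean(dim=1)))   # pool, score in [0, 1]

model = SequenceRiskModel()
events = torch.randint(0, 1000, (8, 512))   # 8 users, 512 behavioral events each
print(model(events).shape)                  # torch.Size([8, 1])
```

The appeal of this structure over traditional tabular risk models is visible in the input shape: the model consumes the raw event history directly, which is the “long sequence data” management says conventional risk modeling could not use.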

Sea’s management has directed a lot of investments into AI for the Shopee business; for each AI investment in Shopee, management looks at the ROI (return on investment); Sea has used AI to improve the take rate on its advertising business; management recently rolled out multi-modal search for Shopee and the roll-out has delivered clear ROI; management is using AI to help sellers on Shopee; customers are able to talk to Shopee’s sellers with the help of AI and this helps sellers upsell and reduce manpower costs; Shopee has AI-powered tools for sellers to create pictures, videos, and descriptions of their products, and the tools have a fairly positive ROI

I think if you look at the e-commerce side, we do spend quite a lot of effort on the AI. I think you mentioned about AI investment there. For every — for the investment on the e-commerce for AI, we also look at the positive return of investment across the initiatives.

For example, if you look at one of the area we spend on AI is our search recommendation and also ad systems. The uplift on our ad take rate is a consequence of many of our AI efforts. For example, how do we actually expand the description for our products, we can understand the product better. For example, how can we expand the queries from the users, we can understand user intention better. Recently, we also rolled out a multimodal search in our platform as well. So user can search a picture plus a long description, and we are able to serve that just similar to how Gemini or ChatGPT would do. I think all those AI investment has a clear ROI.

We also spent quite a lot of effort using AI to help our sellers. For example, if you go to many of our countries, you can talk to the sellers with the help of AI already. So we built an AI chatbot for our sellers. Our sellers can customize it for their own purposes. This will help the seller to reduce their manpower and also make it not only reduce cost, but also have the better upsell for the buyers. And we also have tools for the seller to create videos and picture descriptions for their products, et cetera. All those typically come with a fairly positive return on investment for our ecosystems.
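The multimodal search Shopee’s management describes, searching with a picture plus a long description, is commonly built on a shared embedding space. Here is a rough sketch with stand-in encoders; it is not Shopee’s implementation, and the fusion scheme is an assumption of mine:

```python
# Sketch of multimodal retrieval (not Shopee's implementation): embed the query
# image and text into one shared space, fuse them, and rank by cosine similarity.
import numpy as np

rng = np.random.default_rng(0)
DIM = 128

def encode_image(image) -> np.ndarray:      # placeholder for a real image tower
    return rng.normal(size=DIM)

def encode_text(text: str) -> np.ndarray:   # placeholder for a real text tower
    return rng.normal(size=DIM)

def unit(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

def multimodal_query(image, text: str, alpha: float = 0.5) -> np.ndarray:
    """Fuse the two modalities; weighting image vs. text by alpha is assumed."""
    return unit(alpha * unit(encode_image(image)) + (1 - alpha) * unit(encode_text(text)))

# Product embeddings would be precomputed offline with the same towers.
catalog = rng.normal(size=(10_000, DIM))
catalog /= np.linalg.norm(catalog, axis=1, keepdims=True)

q = multimodal_query(image=None, text="red running shoes with white soles")
top5 = np.argsort(-(catalog @ q))[:5]   # cosine similarity = dot product of unit vectors
print(top5)
```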

Tencent (OTC: TCEHY)

AI is benefitting Tencent’s game content development, user engagement, and marketing efficiency; management believes that Tencent’s business has a high degree of resilience in the age of AI because of (1) network effects, (2) a connection between the digital and physical world, (3) licensing requirements, (4) unique resources, (5) low take rates, and (6) proprietary data; AI can enable faster game development, but the gaming industry is already in a state of oversupply, and game quality, which depends on human creativity, will be the key success factor; management thinks games will benefit from AI as people will have more time on hand

AI contributes meaningfully to game content development, user engagement, and marketing efficiency. Video Accounts total time spent increased over 20% on upgraded recommendation algorithms and enriched content ecosystem. Our marketing services revenue growth outperforms the industry, benefiting from our upgraded ad tech model and newly introduced automatic campaign solution, AiM Plus…

…AI will affect every part of the technology industry, but some products and services are inherently more resilient than others. We believe that some of the characteristics of resilience would include network effects arising from consumer to consumer to content creator, and consumer to business interactions in descending order of strength. That’s number one. Number two, deep supply chain integration linking the worlds of bits with the world of atoms. Number three, stringent regulatory and licensing requirements. Number four, scarce or unique resources, including physical and intellectual properties. Number five, take rates that are low compared to value provided or cost of switching. And number six, private data that is closed and interactive in nature. Using these criteria, we look across our major existing businesses. Our conclusion, which is supported by usage trends, is that each one of them has got a high degree of inherent resilience.

In particular, for our communication services, including Weixin, QQ, and Tencent Meeting, people use them to connect and interact with other people, largely their families, friends, and colleagues, and business partners. We believe this need for human interaction, together with the network effects and closed nature of the data arising from these interactions, have resulted in communication services being extremely sticky in the face of competing non-AI services in the past and will continue to be resilient versus AI-based services in the future.

Moving on to our games. They are also very resilient as our multiplayer games, especially PVP games, also enjoy network effects. Similar to sports, they are team-based in nature, and players play with and against other players. Just as people prefer to participate themselves or watch the teams they support compete in sports rather than watching AI sports, game players continue to enjoy the interaction with other humans that our games provide…

…While AI will enable more games to be made faster, the game industry is already in a position of excess supply, with 200,000 new games on mobile and 18,000 new games released on Steam every year. The limiting factor is that new games need to be high quality and more innovative than the best existing games, which in turn requires human creativity on top of cutting-edge technology. Game is a natural beneficiary of AI proliferation, also when people have more time at hand.

Our fintech services are also resilient as they depend on difficult to secure and retain licenses which are limited in nature and also set the boundary on how innovations can be introduced in an industry. We have also invested decades building a payment network of difficult to replicate rails into partner banks, merchants, and connecting them with more than 1 billion consumers, which brings its own network effects. Our mobile payment take rates are already among the lowest in the world, which we believe makes competing with us on price highly uneconomical.

Tencent’s management is deploying AI to strengthen the company’s core businesses; management thinks Tencent is at the forefront in China and globally in strengthening its core businesses with AI; Tencent is using generative AI in its games business to speed up content production, acquire new users, retain existing users, and improve the gameplay experience; Tencent is using generative AI in its marketing services to improve ad conversions and user experiences, allow advertisers to create more ads, and provide the AiM Plus automated advertising campaign solution; Tencent is using AI to enhance content recommendation for Video Accounts; Tencent is using AI to improve content production efficiency for digital content; Tencent is providing AI agents within its enterprise software products; Tencent is using AI in the Fintech business to improve credit scoring and fraud detection; management has integrated AI into Weixin to enhance the user experience in a wide range of areas; the improvements include AI agents which autonomously interact on behalf of users within Weixin functionalities (see Point 3 for more on using HunYuan to build AI agents in Weixin); management thinks the trend of AI agents, such as OpenClaw, being controlled through users’ existing communication apps means that Weixin and QQ will be the most efficient places for users to interact with AI agents; management thinks Tencent is already seeing very good ROI (return on investment) when applying AI to the company’s existing businesses

We believe that in each of our core businesses, we are now at the forefront of their respective industries in China and often globally in utilizing AI with positive initial results demonstrated by user engagement and revenue trends.

In games, we are deploying generative AI to accelerate in-game content production, enabling us to produce more content within our big games. We’re using generative AI to facilitate new user acquisition and existing user retention through measures such as targeted ads and personalized daily highlight reels. We’re enriching the core gameplay experience with AI features such as virtual teammates in PVP games and realistic non-player characters in PVE games. These initiatives are one reason why Tencent’s games are more and more evergreen, and our revenue growth of 22% in 2025 outperformed the 7% growth of the global games industry.

For marketing services, we scaled up our advertising foundation model to provide more relevant ads to more targeted users, boosting ad conversions for advertisers and providing better user experiences at the same time. We provide generative AI-powered ad creative solutions, enabling advertisers to create more ads which are more relevant to smaller set of users and more efficiently. We introduced our automated ad campaign solution, AiM Plus, under which advertisers can automate targeting, bidding, and placement, improving their return on marketing investments and increasing their budget allocation to us. These initiatives contributed substantially to Tencent’s marketing services revenue growth of 19% in 2025, outstripping the overall China ad industry growth of 14%.

For Video Accounts, deploying a longer sequence AI model which captures more of a user’s signals to enhance content recommendation is boosting user growth, engagement, and content distribution. Total time spent on Video Accounts increased more than 20% in 2025, and Video Accounts is now the second-largest short video service by DAU in China.

For digital contents, we utilize AI in content production, improving production workflow efficiency, and providing visually compelling special effects. AI also helps in content distribution through more intelligent content recommendations across music, videos, and literature.

We’re using AI in enterprise software to provide features such as AI agents that can take notes on and summarize concurrent meetings for users, and AI agents that generate intelligent summaries of customer service history for merchants. Our enterprise software products, WeCom and Tencent Meeting, are leaders in their categories in China in terms of usage and revenue.

For Fintech, we utilize lightweight AI models to enhance credit scoring processes and facilitate fraud detection, contributing to us sustaining better than industry non-performing loan rates…

…We have also integrated AI to enhance a range of existing user experiences within Weixin, including content consumption, information retrieval, and merchandise recommendation and customer service. We’re building AI agents which autonomously interact on behalf of users within Weixin functionalities, especially Mini Programs. The excitement around OpenClaw illustrates that people recognize AI can unlock computer use capabilities to improve their daily lives but also illustrate the risks around unleashing unsupervised AI. We want AI agents in Weixin to deliver AI productivity that’s beneficial to the general public as well as early adopters, and which will boost ecosystem activity and naturally generate revenue…

…OpenClaw is upgrading AI from thinking to doing via autonomous workflows and continuous task execution. Users control this new generation of AI tools through command line interfaces in their existing communication apps, which generally means Weixin and QQ in China, as it’s the most efficient for users to interact with digital agents in a place and format where they are already interacting with human contacts…

…We have already seen very good ROIs when we apply AI into our existing businesses, right? You know, so if you look at the breakdown of our financials, you know, if you look at the financials on a combined basis and then sort of we break it out and saying, oh, you know, these are the financials with existing businesses plus the investment into AI for supporting these businesses, right? You know, the growth is actually quite strong and if you exclude the investment in new AI products, then you know, the operating leverage is clearly there.

Tencent’s management sees substantial opportunities from configuring a strong foundational model for the company’s core customer-facing use cases; management thinks Tencent is not at the forefront when developing frontier models, but the company has revamped its AI-building capabilities; version 3 of Tencent’s foundation model, HunYuan, is now in testing and it is a step-improvement compared to version 2; management thinks Tencent’s 3D text-to-image and world models are early category leaders; management believes that users of AI agents will have access to multiple foundation models, but integrating HunYuan with Weixin will enable Weixin to have unique agentic capabilities; management spent RMB 7 billion on HunYuan and Yuanbao in 2025 Q4 alone, and RMB 18 billion in 2025, and expects to more than double the investment in 2026; management is confident that the investments in HunYuan and Yuanbao will lead to monetisation; management thinks the AI race is not just one race of model-building, but there are many different races taking place, so they are not worried about Tencent being relatively late; management believes that HunYuan will eventually be a SOTA (state of the art) model in the future

At the foundation model layer, we see substantial opportunities from combining a strong foundation model with configuration for core use cases such as chatbot, coding, multimodal, and agentic applications.

Although we’re not the first mover in large language models, having already revamped our team, improved our data quality, and rebuilt our AI infrastructure for pre-training and reinforcement learning, we’re now iterating more intelligent models at a faster pace. HunYuan 3.0 is in internal testing and currently represents a bigger step in capabilities versus HunYuan 2.0 than HunYuan 2.0 was versus HunYuan 1.0.

For multimodal capabilities, our 3D text-to-image and world models are early category leaders and will increasingly benefit from leveraging our proprietary data and abundant use cases…

…AI agents are currently powered by a multiplicity of foundation models, and we expect that users at the application level will continue to have access to a range of models. However, improving the performance of HunYuan will enable us to offer new, unique to Weixin agentic capabilities. The Weixin and HunYuan teams will work increasingly closely together going forward…

…Our spending on our two biggest new AI products, HunYuan and Yuanbao, was CNY 7 billion in the Q4 of 2025 and CNY 18 billion for the full year. These figures are only for HunYuan and Yuanbao and exclude AI initiatives supporting our existing products and services, as well as exclude costs arising from providing GPUs to external customers via Tencent Cloud. We expect to more than double these investments in HunYuan, Yuanbao, and other new AI products in 2026, which we intend to fund from increasing earnings from our core businesses…

…Over time, we’re confident that monetization will follow usage for these new AI products…

…[Question] I have one question regarding the comment quite a few times that we mentioned that we are not a first mover or we are even a latecomer in AI. In the U.S., we have also observed that it’s becoming very difficult for some of the latecomers to catch up, even for those that have very high resources in terms of compute, talents, and data. How does management get comfortable and confident that we won’t be following the same path in terms of, you know, lagging behind, not able to catch up, around areas of compute, models, and applications?

[Answer] If you are playing just one game, then basically it’s hard to sort of, you know, catch up on one game, right? You know, if you view AI as sort of, you know, a multiple of different games, then, you know, there are new opportunities, new frontier that’s opened all the time… All these elements can be packaged together, you know, in the new race of AI. It’s not sort of, you know, one race. It’s actually sort of, you know, a world of many, many races… I think, you know, that will, you know, increasingly manifest itself and as a result, there will be a lot of opportunities for different players to come up and innovate from behind. I’m not sort of, you know, very worried about, you know, being late, but I’d be worried about, you know, if we’re not innovating fast enough…

…Our HunYuan 3.0 is gonna be much better than HunYuan 2.0, and that’s actually just the starting point. I think, you know, over time, we’ll be able to iterate the training of our model faster and, you know, I’m very confident that, you know, if we focus on that, you know, we’ll reach SOTA at some point in time.

Tencent’s management thinks building AI chatbots is not the best way to use AI to help people; management thinks AI chatbots are competing with internet search; management is still finding product-market-fit for Tencent’s chatbot, Yuanbao; management will be deploying HunYuan 3.0 in Yuanbao in the near future and they think this will improve Yuanbao’s user experience; Tencent’s management is seeing that consumers in China are not willing to pay for AI subscriptions, unlike in the USA; management thinks Tencent’s consumer AI products, when introduced to Chinese consumers, will have to be seen as investments upfront because the company can’t charge for them at the moment, but management still thinks the AI products will generate a very attractive return over time

Some observers in Chinese tech are single-mindedly focused on AI chatbots as the only means for bringing AI to users. We believe this mindset is overly simplistic because AI can help people in a multitude of ways beyond powering an information advice app. We believe that AI chatbot applications are largely competing with search applications rather than with every other application. For Yuanbao, our own AI chatbot app, we’re focused on finding product market fit and use cases which belong in a chatbot AI app. We’re rapidly iterating Yuanbao to enhance its user experience by providing better search integration, improved speech recognition, easier access to multimodal capabilities, and exploration around group chat, which we believe will increase usage and user retention of the app. In the coming months, as we deploy HunYuan 3.0 in Yuanbao, we believe the core user experience will step up further…

…You know, we would be seeing new investments first, right? You know, there’s not that much of a revenue, especially in the context of China. Unlike in the U.S. where you can actually get consumers to pay subscriptions and you can get companies to pay for, you know, coding agents at a very high cost. In China, those are not sort of that available. I think these will present themselves as investments upfront. Over time, we believe, you know, we’ll be able to generate revenue from these new AI products and they would generate, you know, very attractive return for us over time.

Tencent’s management has introduced productivity-enhancing AI tools for OpenClaw; management sees OpenClaw as a decentralised model for how AI works, beyond just having two major chatbots; management thinks that users of OpenClaw will want OpenClaw to work with multiple models

Speaking of OpenClaw, we have introduced a number of AI tools for enhancing productivity, including WorkBuddy, QClaw, and Tencent Cloud Lighthouse. We provide downloadable skills to easily put these tools to use from our SkillHub…

…I think OpenClaw is actually a very exciting concept, right? You know, it actually sort of presents a decentralized model or a decentralized regime for, you know, how AI works in this world…

…For some time, right, AI seems to be sort of, you know, everybody is trying to fight to become the AI, AGI hegemon or monopoly. You know, there seems to be a point in it which like people said, “Oh, if there’s one model which is AGI, then, you know, it would rule over everybody,” right? You know, the reality is it’s not, right? You know, you have multiple models becoming, you know, very strong and, you know, they specialize in different kinds of activities, right? One in chatbot, the other one in coding, and the other one in multimodal. You also have open source, which are, you know, pretty good. You have a lot of other models which sort of, you know, fast followers too. Then there was a time in which, you know, in the to-C world [referring to ChatGPT and Claude], there seems to be, the chatbot being sort of, you know, the single entry point. Now with Claw, you can see, you know, it opens up a completely decentralized regime where, you know, many companies can have their own Claw, and the Claw can be using all kinds of different models…

…If you use these OpenClaws, then you know you go into them, and you have a choice. Do you want to use, you know, model A, which is, you know, very high performance and high price per token, or, you know, model Z that’s medium performance and very low price per token, or models, you know, B through Y in the middle? You know, that’s part of the appeal of OpenClaw. You know, HunYuan is, you know, one of those models that is available. You know, we believe with the capabilities of the HunYuan team now in place, that going forward, HunYuan will get better faster, and therefore consumers will naturally increasingly opt to use HunYuan. I don’t think it will be a monopoly situation.

Tencent’s management thinks the company’s investments in AI will follow a similar experience with Tencent Cloud; Tencent Cloud was a late entrant into cloud services in China, but management was patient and knew that Tencent Cloud had scale right from the start; Tencent Cloud focused on high-quality services starting in 2022, which pressured revenue growth for some time, but Tencent Cloud ended up achieving operating profit breakeven in 2024; Tencent Cloud faced revenue headwinds in 2025 because of GPU-supply constraints, but it still grew revenue and earnings; Tencent Cloud is facing a better pricing environment recently, partly because of AI demand; management has ordered a substantially higher volume of compute for Tencent Cloud in 2026, which would facilitate revenue growth; cloud services providers in China were suffering for years because the supply of infrastructure was ample, but the supply is now constrained; management will be passing Tencent Cloud’s higher supply costs to customers

I would like to present a case study on Tencent Cloud as the latest example on how we develop our services into market leaders with economic returns over time. That would follow games, payments, and long-form video. We expect it will be the same for our new AI products. Tencent Cloud was a relative late entrant in cloud services. However, we committed to a patient and long-term investment strategy, believing that it had scale from the start due to Tencent itself being the biggest single end user for a range of technology infrastructure in China, and that it could provide differentiated services arising from Tencent’s unique insights, ecosystem, and capabilities. For example, we believe that we were the first cloud service provider in China to fully recognize the stepped-up capabilities of AMD’s recent generations of CPUs, becoming AMD’s largest partner in the country, and that our cloud video streaming service is the industry leader in terms of streaming quality. 

After a period where Tencent Cloud prioritized the revenue growth somewhat misguided by other industry participants, in 2022, we aggressively restructured Tencent Cloud to focus on high-quality services rather than chasing high revenue but low-value-added activities such as reselling and customizing projects. This pivot cost us several quarters of revenue growth, but it enabled Tencent Cloud to achieve operating profit breakeven in 2024, up from significant losses in prior years. During 2025, although Tencent Cloud continued to face revenue headwinds due to limited availability of GPU for external customers as we prioritize our internal needs, it grew revenue and sharply improved earnings, achieving CNY 5 billion adjusted operating profit. In recent months, we’re seeing a better pricing environment, especially for memory and CPU, which, along with robust AI demand and overseas expansion, allowing Tencent Cloud to grow revenue at a faster rate. Moving through the year, we have ordered a substantially higher volume of compute, which should also facilitate revenue growth…

…For years the industry has suffered because the cloud services providers in China were operating at very low margins. One of the reasons they operated at very low margins was because, you know, if there was a new entrant or if the customers wanted to source infrastructure directly, they were able to telephone the supplier and, you know, order the infrastructure that they wanted from the supplier of, you know, CPU or GPU or DRAM. You know, that’s no longer the case. You know, now, the supply is booked out months, quarters, in some cases, years in advance. You know, the supplier is prioritizing the biggest, most regular customers, which are the hyperscalers such as ourselves. Therefore, you know, the smaller cloud providers no longer have certainty that they can source supply, and they need to come to the hyperscalers. You know, the hyperscalers have been operating at low margins and so, you know, when the demand picks up, then, you know, we almost sort of as an industry have no choice but to pass through higher prices. You have seen a number of price increases in China cloud in the last 24 hours as a result…

…We seek to deliver, you know, more value through, you know, enrichment. Enrichment means that, you know, at a minimum, if you have, you know, compute, you can rent it out bare metal and you get a certain low price and low margin. You know, preferably you rent it out. You subdivide it and virtualize it into tokens, and then you get a higher price and higher margin per unit of compute. Ideally, you bundle it into a platform as a service or software as a service. Then you can get, you know, the best pricing and the best margins. That’s part of the journey that we’ve been on, and that’s part of, you know, how Tencent Cloud has moved from a very substantial losses four years ago to pretty substantial profits last year.
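To make the “enrichment” ladder concrete, here is a minimal sketch of the arithmetic, using purely hypothetical prices and costs (none of these figures come from Tencent), showing how revenue and margin per unit of compute might step up from bare metal, to tokens, to a PaaS/SaaS bundle:

```python
# Hypothetical illustration of the "enrichment" ladder described above.
# All prices and costs are invented placeholders, not Tencent figures.

COST_PER_GPU_HOUR = 2.00  # assumed fully-loaded cost of one GPU-hour

tiers = {
    # tier: assumed revenue earned per GPU-hour of underlying compute
    "bare metal rental": 2.40,               # thin markup over cost
    "tokens (virtualized inference)": 3.50,  # same hour resold as metered tokens
    "PaaS / SaaS bundle": 6.00,              # hour bundled into a software offering
}

for tier, revenue in tiers.items():
    margin = revenue - COST_PER_GPU_HOUR
    print(f"{tier:32s} revenue ${revenue:.2f}/GPU-hr, "
          f"margin ${margin:.2f} ({margin / revenue:.0%})")
```

The same GPU-hour earns progressively more the further up the stack it is sold, which is the journey from bare-metal losses to platform profits that management describes.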

Tencent’s management added Tencent CodeBuddy to Weixin’s developer toolkit, enabling developers to create mini-programs using natural language; management provided developers of AI native mini-programs with free compute resources

For Mini Programs, total user time spent increased over 20% year-on-year, driven by workplace productivity tools, mini-games, and novels. We added Tencent CodeBuddy to our developer toolkit, enabling developers to create mini-programs using natural language input, and we provided developers of AI native mini-programs with free compute resources.

Tencent’s management is using AI in Delta Force to improve user engagement and development efficiency

Delta Force leverages AI coding for development efficiency and deploys AI-powered companions to enhance user engagement. 

The Marketing Services segment’s revenue was up 17% year-on-year in 2025 Q4, driven by improved ad targeting, expansion of closed-loop marketing services, and tailoring of ad formats for specific advertiser use cases; management will be deepening collaboration of Marketing Services with e-commerce platforms; management has increased the inventory for rewarded video ads and Video Accounts; Weixin Search’s overall query volume grew rapidly in 2025 Q4 because of AI enhancements to search results, driving commercial query volume

For marketing services, revenue increased 17% year-on-year to CNY 41 billion. We experienced rapid growth from the internet services and local services categories, partially offset by slower growth from the e-commerce category due to platforms temporarily shifting budget from marketing to subsidies, and also from the financial services category due to the impact of policy changes affecting online lending during the quarter. Growth drivers included improved ad targeting, expanding our closed loop marketing services, and tailoring ad formats for specific advertiser use cases, such as ads that are playable previews of the mini games being advertised.

Entering 2026, we have deepened collaboration with e-commerce platforms, facilitating their merchants advertising within Tencent, and we’ve increased the inventory for rewarded video ads and Video Accounts, which have contributed to faster year-on-year marketing services revenue growth in the Q1 to date versus in the Q4 of last year.

At a product level, Video Accounts total time spent increased due to upgrades to the content recommendation algorithm, enabling faster growth in ad impressions while our ad load remained lower than peers. Better conversion rates contributed to more marketing spending for Mini Shops merchants. For Mini Programs, consumers engaging more with mini-games and mini-dramas attracted more marketing spend from the mini-game and mini-drama studios. Weixin Search overall query volume grew at a rapid rate due to AI enhancements to search results, driving growth in commercial query volume, while search pricing also increased.

Tencent’s management has obtained additional AI compute through leasing, through purchasing imported GPUs (likely referring to NVIDIA’s GPUs), and through purchasing domestic GPUs; the priority use-cases for Tencent’s AI compute is for HunYuan and the company’s new AI products; management currently does not want Tencent to design its own AI chips; management thinks there are many options for AI inference chips in China, and this has brought down the cost of inference chips; management wants Tencent to leverage the best training chips to build models

In terms of GPU constraints then, we’ve been quite actively provisioning more compute, and that will be coming on stream progressively and increasingly quickly through this year, especially the H2 of the year. You know, that additional compute comes from leasing capacity. It comes from us purchasing higher-end imported GPUs which are now becoming available again, and it comes from us purchasing the increasing quantity of domestically China-designed GPUs. In terms of utilizing the compute for different use cases, you know, the priority right now is, you know, HunYuan and our new AI products more generally…

…[Question] We’re seeing a growing number of your tech peers are prioritizing the development of in-house chip design capabilities. I’m just curious where in-house chip development fits into Tencent’s own AI priorities.

[Answer] I think at this point of time, it’s not the most critical thing that we’ll be focused on. So if you look at the chip, you know, there is, you know, a difference between training chip and inference chip, right? You know, and for training chip, it’s actually very, very difficult to design and manufacture, and you actually want to have access to the most state-of-the-art, you know, training chips to the extent possible and in the most flexible way so that, you know, you can actually sort of keep training for the best model.

And then, you know, if you’re talking about inference, right, you know, I think inference, it’s mostly for cost. I think for cost at this point in time, there’s actually a lot of different suppliers in China, which is actually very different from, let’s say, in the training space, right, where there’s essentially one player or two players who can actually command a very, very high margin, right? You know, in the inference world, people basically sort of, you know, are earning much lower margin, and there are many more solutions and, you know, options. So, I think, you know, the key for us is actually sort of leverage the best training chips to train the best model at this point in time, and there’s a lot of value in being focused.

Tencent’s management thinks it’s really difficult right now to tell which layer of the AI technology stack will be commodities

[Question] If we think about the AI stack between, you know, the models, the orchestration layer, the application layer and so on, which parts would you say are most critical for Tencent to be best in breed versus, you know, areas where we think these will be commoditized?

[Answer] I think at this point in time, it’s actually very dynamic, right? You know, you’re in a fast-moving market. I think, you know, it’s very difficult for someone to say sort of, you know, oh, you know, there will be one layer more important than the others, right? You know, I think, you know, we have the resources, we have the people, we have the team to actually invest in all these layers.

It’s currently not possible to use AI to build games completely from scratch

There is not yet the capability to create games, you know, completely from scratch using AI for a number of reasons that we can get into.

Tencent’s management is seeing AI create demand for memory chips in two ways, namely, (1) GPUs requiring high memory capacity, and (2) AI creating software that requires memory to execute

You know, when people utilize the agentic tools that we’ve been discussing, they’re using them and they create software. You know, that software, you know, then primarily, it needs to be executed. When it executes, most of it is not executing on a GPU. It’s executing on CPU, and then as it executes, it creates, you know, memory demands. It’s not just, you know, GPU, DRAM, HBM where we’re seeing demand picking up. It’s also, you know, CPU. It’s, you know, regular RAM. It’s SSD. It’s hard disk drive.

Veeva Systems (NYSE: VEEV)

Veeva’s management thinks core systems of record such as Veeva will incorporate and work seamlessly with AI and not be replaced by it; Anthropic’s recent launch of Claude for Life Sciences has Veeva as a launch partner; management thinks LLM (large language model) providers’ launches of life sciences products will not cannibalise Veeva’s products; management thinks AI is a very positive thing for Veeva because it helps Veeva create and improve its software faster; management thinks core systems of records will be used by both agents and humans; management thinks it’s still early days of AI and it will play out over 10-20 years; management thinks the LLM providers and Veeva will have a symbiotic relationship; management thinks the LLM providers will not be interested in industry-specific software

There’s a lot of hype and fear that AI will replace today’s software systems. The reality is, not all software is the same. Core systems of record like Veeva, SAP, and Workday are essential and will incorporate and work seamlessly with AI, not be replaced by it…

…[Question] Anthropic made a lot of noise when they launched Claude for Life Sciences and signed up a lot of deals and maybe lost in that was Veeva is an enabling and launch partner of Claude for Life Sciences. So Peter, how should we be thinking about the opportunity for Veeva to work with Anthropic, OpenAI, all the different kind of model providers out there, provide your domain expertise, provide the workflow expertise and kind of have a rising tide lifts all boats situation rather than obviously the current market view of it being more cannibalistic?

[Answer] I certainly don’t view it being cannibalistic for Veeva, absolutely not. I mean let me state that clearly. AI is a very positive thing…

…And these core systems are going to be used by agents as well as human users. Yes, that’s new. But these systems are essential, and they’re not going away…

…So we’re really in these early days of AI and people get a lot of hype and they think it’s going to play out over 1 or 2 months. It’s not. It’s going to play out over 10 or 20 years…

…Specifically for Veeva, AI, that’s going to help us create and improve our core systems faster than before. So that’s where it will help our software development but not at the expense of quality, predictability, regulatory compliance and the real value that customers depend on…

…Anthropic or OpenAI and others, that’s an engine, and their engine will be used for a lot of things. They will be used by the Veeva applications or by custom applications that customers develop. So yes, it’s good for those large model providers. Now they have to watch their profitability, et cetera, but they’re an engine in the new wave of cloud computing. So that’s the new AWS, et cetera. So it’s a good business there. But just as AWS itself and also Microsoft Azure, Google Cloud, et cetera, that was very good business for those hyperscalers. But I think what sometimes gets lost is that that actually enabled Veeva. You couldn’t have built the industry cloud for life sciences. You couldn’t have built that long tail of applications without those cloud infrastructure providers. And it’s the same way here with these large language models. Veeva could not build the AI applications that we’re going to build without these foundational LLMs. So I don’t know if I’ll use this word correctly. I think the word is symbiotic. I think so…

…I don’t think the AI vendors are really making industry-specific software applications, right? It takes a lot of dedication and effort to do that. So I think it’s a very symbiotic relationship. Just like the cloud area, yes, Amazon didn’t make industry-specific applications either. I don’t really see — why would somebody like Anthropic do that, right? They’re going to make broad applications and applications for coding itself, et cetera. That’s what I feel would happen.

Veeva’s management thinks the agentic layer will provide far broader value than LLMs (large language models); management thinks AI agents is a substantial opportunity for Veeva; Veeva has Vault CRM Free Text Agent that captures rich, compliant call notes; Veeva has PromoMats agents that deliver approved content faster; management will be introducing regulatory and safety agents in 2026 (FY2027); management thinks building industry-specific AI is difficult and requires proprietary data, sophisticated logic, domain expertise, and more; management thinks Veeva’s agents, if built well, can provide a lot of value to customers; management thinks Veeva is in a great position to lead in industry-AI for the life sciences industry; management is making great progress on Veeva’s first two AI agents for safety, and they will be launched in April 2026; management is pleased with the progress of PromoMats agents; there are early adopters who are live with PromoMats agents; management thinks their approach to data is resonating with the life sciences industry when building AI use cases; customers are excited about the PromoMats (Promotional Materials) agents because the agents really work and the customers have been burnt by failed AI experiments; management is seeing PromoMats agents delivering very clear ROI (return on investment) for customers; the two AI agents for safety that will be launched in April 2026 provides clear value for customers because they automate workflows that would require expensive labour; management thinks it’s still early to nail down the right pricing model, but Veeva will be going with a token-based pricing model; management is seeing most customers go with Veeva’s agents instead of them building their own agents with Veeva AI

While the major large language models are the catalyst for this shift, the agentic layer provides far broader and more diverse value. The agentic transformation underway represents a substantial opportunity for Veeva and life sciences. With our core systems of record spanning the industry’s most critical functions and unique datasets, we can deliver industry-specific AI deeply integrated into our core applications. 

For example, Vault CRM Free Text Agent captures rich, compliant call notes for deeper customer insights. PromoMats agents help deliver approved content faster. Regulatory and safety agents coming this year can streamline health authority interactions and safety case processing. And this is just the beginning.

Building reliable industry-specific AI across a wide range of use cases for a highly regulated industry is hard. It takes time, focus, and the right skills. It integrates proprietary datasets, sophisticated logic, validated processes, and depends on specialized domain expertise and safeguards to maintain compliance and data integrity. If done well, our agents will provide significant value for customers and Veeva.  

It’s early days for industry AI, and we are in a great position to lead. We have a well-established life sciences cloud that’s expanding to connect the industry, strong momentum with Veeva AI, and much more innovation on the way…

…We are also making great progress on our first two Veeva AI Agents in safety, Case Intake and Case Narrative coming in April. Customer interest is high as the industry looks to AI to drive efficiency in safety case processing…

…I am also pleased with the progress of Veeva AI for PromoMats. A number of early adopters are now live, more projects are underway, and the success of these agents is generating a lot of interest…

…Our unique and modern approach to data is resonating with the industry, providing a harmonized data foundation that fits seamlessly with our commercial software. High quality, standardized and connected data is critical for speed and efficiency and is a required foundation for AI…

…For example, in the promotional materials management area, they’re pretty excited that they can have a winning AI application that really works and is really durable and is from Veeva, because a lot of them have been burned on a lot of experiments. But it’s not easy for customers to admit failed experiments because that’s just the dynamics. You don’t like to admit that. And failed is too hard of a word. Sometimes the experiment doesn’t work out, but it’s not a failure. You got a lot of learnings. But the experiments that can actually scale, they’re rare so far, and they know we won’t do things unless we can scale them…

…[Question] Can you maybe speak to early proof points that you’re seeing on AI agents that, I guess, you’re planning to roll out over the course of the year? Are there any sort of ROI or tidbits from clients that you’re hearing that you can kind of comment on ahead of these releases?

[Answer] The one that’s farthest along, and we have multiple projects underway, is the commercial content area. And that — the ROI is just very clear. It’s faster content, lower cost to create that content, and that’s what it’s all about. Lower cost to create that content, I won’t quote specific numbers, but that’s pretty clear to quantify. Faster content just means better launches. That means that drives the top line before the patent on that product expires. So I get asked by that — by customers all the time. They know in the age of really omni-channel experience for their customers, which are patients and health care providers, omni-channel experience that includes AI doctors and large language models, the speed that you can get your content out there in a compliant way is just going to be critical. So the old way of approving content is just not going to suffice anymore…

…In terms of AI, it’s pretty clear there in — there’s a lot of human processing of case intake and case narrative generation that’s done by people. That’s not necessarily that high risk, but it has to be done well. And it’s expensive to hire those people, and it’s not easy. So in safety, it’s just very clear. It’s about replacing that type of labor with automation, with AI software…

…It is, as you said, still quite early. As we’re starting this year, we’re really expecting to be using a token-based pricing model, and so that gives us a little bit of predictability around the margin profile. But that may evolve over time…
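As a rough illustration of the predictability management mentions, here is a minimal sketch of a token-based pricing model; the per-token rates are invented for illustration and are not Veeva’s actual pricing:

```python
# Hypothetical token-based billing sketch; rates are invented, not Veeva's.
PRICE_PER_1K_TOKENS = 0.050  # assumed price charged to the customer
COST_PER_1K_TOKENS = 0.020   # assumed underlying LLM inference cost

def monthly_agent_bill(tokens_used: int) -> tuple[float, float]:
    """Return (revenue, gross profit) for a month of agent usage."""
    revenue = tokens_used / 1000 * PRICE_PER_1K_TOKENS
    cost = tokens_used / 1000 * COST_PER_1K_TOKENS
    return revenue, revenue - cost

for tokens in (1_000_000, 10_000_000, 100_000_000):
    revenue, profit = monthly_agent_bill(tokens)
    # Gross margin stays a fixed 60% of revenue however usage scales,
    # which is the margin predictability referred to above.
    print(f"{tokens:>12,} tokens -> revenue ${revenue:>9,.2f}, "
          f"gross profit ${profit:>9,.2f}")
```

Because price and cost are both set per token, the gross margin percentage is locked in regardless of how heavily customers use the agents.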

…[Question] Within Veeva AI, what is the mix of customer adoption you’re seeing right now between prepackaged agents that you’ve built and custom agents that they’re building using Veeva AI?

[Answer] The bulk of it is with our agents that we’re designing. So part of it is, I guess, our agents are probably a little more robust than our custom tooling right now. But if you look at our agents, there’s detailed work in the agents, right? There’s detailed data curation. There’s detailed testing pipelines. There’s a lot of logic in the agents, right? When we talk about AI agents, there’s a lot of specific logic written in our Java code, and that’s hard and needs great product management. So in general, customers would rather get that solution rather than build it themselves.

Veeva’s management is not seeing AI considerations being a major theme with the company’s customer wins in 2025 Q4 (FY2026 Q4)

[Question] I wanted to ask if Veeva is starting to see some programs funded maybe in the name of AI readiness. I would imagine for a top 20 to commit to Veeva in any of the R&D areas, RTSM, quality, safety, it would seem you’re going eyes wide open into really viewing Veeva as a future foundation for everything AI related that is to come. And so I’m wondering if there’s an AI influence that you’re starting to see that’s contributing to the strong demand here at year-end.

[Answer] I wouldn’t say that’s a broad theme. There are cases, and it varies by area. More of the theme is, hey, we need core systems that will scale, or their existing systems are aging. So we talked about a top 20 safety win. There, their existing systems, because they were doing other things over the past years, had lots of deferred maintenance, and that was going to become a critical risk for the company, so they have to get that in. There are times where it will help our data business. They’re trying to clean up their reference data because they know AI is not going to work otherwise because, okay, garbage in, garbage out. So there’s a little bit of that, but more it’s just modernizing, getting rid of legacy and looking for increased automation. AI is — really, the goal there is automation, right? That’s the goal. But AI is not the only way you do automation. Part of it is you do automation through a system to have clean workflow. So it’s a driver, but I wouldn’t say it’s a major driver.

Veeva’s management is seeing life sciences companies group AI players into 4 buckets, namely, (1) the LLM providers, (2) the point solution providers, (3) their own in-house development teams, and (4) core application providers such as Veeva; when life science companies talk to Veeva about AI, they want Veeva to provide more AI solutions that are tightly integrated with their core systems because they trust the company; Veeva’s management thinks the company’s customers really want it to win in AI applications

They bucket into 3 — maybe 4 types of people that might be able to help them. One is the infrastructure providers, the LLM providers themselves, Anthropic, OpenAI, Microsoft in that camp, Amazon, NVIDIA, those types of things, what — how can they be leveraged there? And then they would look for point solution providers. There’s a specialized group of people in the specialized department, and they can do this proof of concept or maybe you scale it for me here. And then there’s their own employees doing custom software, and then there’s system integrators. And then you get the core application people like Veeva, like Workday, like SAP…

…When they’re generally talking to us, they want us to provide more AI solutions that are tightly integrated with their core systems because they trust Veeva, and they know we deliver quality and really know when we say something is going to work, it’s going to work, right, because our reputation is on the line versus a small start-up can just say whatever they want…

…Our customers really want us to win in AI applications. And so we have a right to win, and we just have to execute.

Veeva’s management thinks the real bottlenecks in life sciences is not the pace of drug discovery, but finding patients for clinical trials, and the pace of a patient getting the right drug for treatment; management thinks these bottlenecks are where AI can play the biggest role, and where Veeva can help; management thinks AI cannot really speed up clinical trials

[Question] Given how mission-critical this is and maybe how much it can be tied not just to better revenue outcomes but more importantly, better patient and better health care outcomes and better societal outcomes, do you see an opportunity to not just automate and drive faster time to value and efficiency but even leveraging AI within the Veeva platform to allow for better drug development, safer drugs out of the market, basically better outcomes rather than just faster time to value?

[Answer] Drug discovery is one thing, and there’s a lot of focus on that. And yes, that will get faster, but that’s not the real bottleneck. The real bottleneck is the clinical trial, the experiment that’s done in the human. And we’re always going to have to do those experiments in the human, and the human biology runs at the same speed. So that always has to be done, and the bottleneck now is finding the patients around the world that can get in those trials. So that’s one.

But the biggest bottleneck by far is there’s a patient somewhere out there in the world. They’re diagnosed with something by a doctor. How long did it take them to get diagnosed? And when did they get the right medicine that will best treat them? That’s where 90% of the value in life sciences is lost, because of that impediment, the basics of is the patient informed. Can they get to the right doctor? Is the right doctor informed? Is the payer informed? It’s — that’s where 90% of the value is lost. And I said value is lost, but on the other side, there’s a lot of people who don’t get treated correctly or timely around the world. And that affects productivity. That affects their family…

…So this is really important for us, and AI can definitely, definitely, definitely bridge that gap. AI doctors and large language models can help bridge that gap between doctors and patients, so maybe that 90% inefficiency goes down to 50%, and that will be a tremendous boon. And yes, Veeva will definitely play a part in that by connecting our customers, the industry, to its external ecosystem. And its external ecosystems are clinical researchers, patients and doctors and regulators. And the industry is not well connected, and AI is going to provide a better method to do that…

…About AI speeding up clinical trials, I think AI can speed up some maybe in the start-up and in the close down but not that much really. It’s still based on the clinical protocol of the medicine, which is based on the time of the human body it takes to deal with that medicine and to prove it out and then the patient recruitment, which I don’t think is actually an AI problem, the patient recruitment. So speed it up some but not so much in clinical trials.

Veeva’s broad product suite is an advantage for customers when they are trying to implement AI

Let’s say they’re doing something with us in safety and they start doing an AI solution with us in safety. And 2 years from now, they go with us in clinical data management, and a year later, they put in an AI solution for clinical data management. Well, that AI solution is going to work with their safety solution pretty much out of the box. And that’s a benefit they never planned for they’re going to get. So I think customers start to see that it kind of fits together with Veeva.

Veeva’s management thinks customers are starting to realise that Veeva is the only company that can provide AI solutions that are also connected to all their other systems; management thinks customers are also starting to realise it’s not so easy to build and maintain their own AI solutions

But I think they’re starting to realize that if you want to have a potential future where you have a great core safety system that has safety AI on top of it and is connected to your other systems in your company, Veeva is the only place you’re going to do that unless you’re going to build it yourself. I think most people are also starting to realize now that it’s not that easy to build and maintain these things themselves. So that’s kind of what’s leaning into our favor on the AI.

Wix (NASDAQ: WIX)

Wix’s management thinks AI and the acquisition of Base44 has dramatically expanded Wix’s market opportunity; the addition of Base44 has allowed users to build applications, content, and websites that are much more powerful and sophisticated than before

What started as a simple do-it-yourself website builder has grown into the leading online presence creation platform serving not just self creators, but also businesses of all sizes as well as professional designers and developers. In recent years, the web has undoubtedly become much more AI-first. That shift is redefining how and what people build online. AI has dramatically expanded the world of what is possible and created new dimensions that had not existed before. As a result, Wix’s market opportunity today is exponentially larger than in 2025, primarily driven by our expansion into the application space facilitated by our acquisition of Base44…

…With the addition of Base44 to our platform, users can now build tailored software applications, smart mobile applications, pro-level visual content, and, of course, websites, all so much more powerful and sophisticated than ever before. These are all things you can create on Wix today, which is incredible, but the possibilities ahead are much, much bigger.

Wix Harmony is a first-of-its-kind website builder that blends visual editing with vibe coding; Wix Harmony is an AI layer that spans the entire Wix experience; Wix Harmony was launched in English in January 2026 and management will expand Wix Harmony globally in other languages; management is very pleased with the early conversion and monetisation of Wix Harmony; management intends to make Wix Harmony the default Wix experience for new and existing users over time; management expects negligible AI inference costs associated with Wix Harmony in 2026; management is not seeing Wix Harmony and Base44 cannibalise each other’s customer base; management built Wix Harmony for the self-creator market; users of Wix Harmony are using it for the same purposes as the old Wix; Wix Harmony currently does not support a database, but will soon do so; early users of Wix Harmony have better conversion, faster monetization, and higher ARPU (average revenue per user)

Wix Harmony is the first-of-its-kind website creation platform that blends intuitive visual editing with the flexibility and power of vibe coding. Wix Harmony provides a unified AI layer that spans across the full Wix experience, allowing for a real AI partner to be with you every step of the way as you create, manage, and grow an online presence or business. After launching in English in January, we are now expanding Wix Harmony globally in other languages, and I am very pleased with the early performance we are seeing, particularly across conversion and monetization metrics. We believe Wix Harmony has the potential to fundamentally reshape how individuals and small businesses build and scale online, not just on Wix, but across the Internet as it becomes increasingly AI-driven. Over time, we plan to gradually make Harmony the default experience for new and existing users, an evolution we anticipate will drive meaningful long-term impact across conversion, engagement, retention, and monetization…

…Negligible AI inference costs associated with Wix Harmony as a result of proactive infrastructure optimization completed last year…

…[Question] Just stepping back, what types of businesses or applications are you seeing users set up with Base44? And how much crossover is there with what you see on Wix’s core platform?

[Answer] We do not see any kind of competition, and you can see that they mostly have very different usage, as you can see now. Clearly, Harmony is accelerating and Base44 is accelerating. So, obviously, we do not think they take from each other…

…Harmony is a product we built for the self creators…

…We are pretty much seeing everybody using Harmony that was using Wix before. So it is everything from personal websites to the hair salon website to large companies and enterprises, so pretty much everybody. At this stage, Harmony does not support a database, but that will be added soon…

…[Question] On Harmony, just curious what the early cohort KPIs that you are seeing there in terms of conversion, ARPU, attach rate, churn, relative to the traditional cohorts and how durable you see these KPIs across your geos?

[Answer] We see a very good performance of the new cohorts. We actually see a better conversion, faster monetization, and also higher ARPU. So we believe, we hope that this strong trend will continue. Again, I think that it is too early, but we feel very positive about the first reaction and performance of this product.

Base44 expands Wix’s reach into vibe coding; Base44’s user base is scaling rapidly, with the number of new Base44 users today at nearly 2/3 of the number of new Wix users; Base44 has reached $100 million of ARR (annualised recurring revenue) just 1 year after its founding and 9 months after Wix’s acquisition (Base44’s ARR was just a few million dollars when Wix acquired it); management is starting to see Base44 being used by enterprises from different industries to build their own software solutions; Base44’s current growth is completely organic as Base44 has no sales team; management believes the potential for vibe coding still lies ahead as the technology reaches the broader online population; 1/3 of Base44’s AI inference costs today are for free users; Base44 has a positive non-GAAP gross margin today; management thinks Base44 has a tROI (time to return on investment) of less than one year; management thinks there is a great opportunity for partners to use Base44 in the future; Base44 is driving users who joined Wix 10-15 years ago to become paid users

The second new pillar of our strategy is Base44, our leading Vibe coding platform that expands our reach into the vast world of software creation and significantly grows our TAM…

…Base44’s user base is scaling rapidly. Today, the number of new users joining Base44 is nearly two-thirds of the number of new users joining Wix…

…Just one year after Maor founded the company and nine months after our acquisition, Base44 recently reached approximately $100,000,000 of ARR, placing it among the fastest growing software platforms in history. While Base44 is already emerging as a top platform to build lightweight personal projects, we are seeing adoption from a growing community of businesses and enterprise-sized organizations too. Companies in the tech, banking, and healthcare industries, as well as government organizations and nonprofits, are using Base44 to build customized software solutions. We are seeing users develop their own CRM capabilities, product and project management tools, ERP systems, workflow automation frameworks, and financial reporting applications.

Importantly, this momentum and growth is completely organic. With no sales team at Base44 today, self-propelled adoption by enterprise-size organizations demonstrates the strength of the platform as well as our successful marketing execution…

…I believe the real potential still lies ahead as Vibe coding permeates beyond early tech-forward adopters to the broad online population…

…Base44 finished the year with approximately $59 million of ARR, above our expectations at the time of acquisition. Excitingly, Base44 recently reached approximately $100 million in ARR, a major milestone that underscores our rapid growth and growing market leadership. Strong ARR growth was driven by product innovation that has resonated, a rapidly expanding user base, improving conversion and consistent upgrade and renewal trends…

…Approximately one third of Base44’s AI inference costs today is attributed to token consumption of free users…

….Even after incorporating AI-related costs associated with free users into cost of revenue, Base44’s non-GAAP gross margin is positive today and is expected to improve as the year progresses…

…Base is a very young company, very young product. And, by the way, this is why we are very also conservative about the guidance. But right now, based on the information that we have, based on the history that we already have, we are looking at less than one year of tROI and this is how we manage the acquisition cost…

…Base44 has a ton of interesting things for our partners that they can actually use for their customers, and it is more revenue stream for them. So we believe that although right now most of it is self creator-led, we believe that it is a great opportunity also for partners to use Base44 in the future…

…Base44 is a very young product, yet on the Wix cohorts, we are seeing people converting who joined us ten or fifteen years ago. That is amazing…

Wix’s partnership with OpenAI is not built on APIs in the standard way, but rather, it’s built on two AIs that are collaborating

[Question] In addition to the apps partnership with OpenAI, do you see potential opportunities in terms of how Wix websites are navigated and searched by OpenAI in the future, particularly ChatGPT?

[Answer] It is not APIs in the standard way; it is essentially two intelligences that are discussing and working together to give you a website. And that is a fantastic pattern that can be grown a lot.

Wix’s management has given Wix users the ability to open their websites for LLMs to crawl and read if they want to; Wix users can even give LLMs more content than what is offered over a website

As for how OpenAI or any other LLM can read Wix sites, we support pretty much everything. We support, of course, text. If our customers choose so, we can make the text visible and easy to crawl, and built in a way that is very easy for the LLMs to process. And we also have ways to give the LLMs more than just the content that we normally offer over the website, because LLMs like to read a lot of content, while humans tend to want to read less.
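Mechanically, serving an AI crawler a fuller view of a site than a human visitor sees can be as simple as branching on the crawler’s user-agent. The sketch below illustrates that general pattern only; the bot names and page content are hypothetical examples, and Wix has not disclosed its actual implementation:

```python
# Generic sketch of serving LLM crawlers a fuller plain-text view of a page.
# The bot names and page content are hypothetical; this is not Wix's code.

LLM_CRAWLER_AGENTS = ("GPTBot", "ClaudeBot", "PerplexityBot")

PAGE_HTML = "<html><body><h1>Acme Salon</h1><p>Book online.</p></body></html>"
PAGE_FULL_TEXT = (
    "Acme Salon\n"
    "Services: haircut, coloring, styling.\n"
    "Hours: Mon-Sat 9am-7pm. Walk-ins welcome.\n"
    "Full price list, FAQ, and policies follow...\n"  # LLMs happily read long text
)

def render_page(user_agent: str) -> str:
    """Serve a content-rich plain-text view to known AI crawlers,
    and the normal HTML page to everyone else."""
    if any(bot in user_agent for bot in LLM_CRAWLER_AGENTS):
        return PAGE_FULL_TEXT
    return PAGE_HTML

print(render_page("Mozilla/5.0 (Windows NT 10.0)"))            # browser -> HTML
print(render_page("GPTBot/1.0 (+https://openai.com/gptbot)"))  # crawler -> full text
```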


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet (parent of Google), Amazon, Meta Platforms, Microsoft, Okta, Salesforce, Sea, Tencent, Veeva Systems, and Wix. Holdings are subject to change at any time.

What We’re Reading (Week Ending 21 December 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 21 December 2025:

1. 100% IRR in Ice Cream – Joe Raymond

Imagine finding a 70-year-old ice cream company with a teens ROE trading for less than net cash and only 4x earnings.

Talk about mouthwatering!

That’s exactly the position Jim Mitchell found himself in in 1990…

…One such illiquid gem was Eskimo Pie Corporation (EPIE), a neat company with an interesting history…

…When Jim Mitchell started buying EPIE in 1990, Reynolds Metals owned 88% of the outstanding shares. The other 12% traded OTC.

Thus, Eskimo Pie was controlled, illiquid, non-reporting, and unlisted.

Music to Mitchell’s ears!

The business itself was perfectly satisfactory…

…Average annual operating profit was $1.6 million and ROE was in the low-teens. Aside from a blip in 1986 and ’87, EPIE earned a healthy profit every year. By the end of 1990, the company had a cash reserve of $12 million…

…I can’t think of a single case where buying a decent, stable business with a multi-decade history of profitability at a negative EV, single digit earnings multiple, and huge discount to book value hasn’t resulted in a home run return.

And Eskimo Pie certainly classifies as a home run.

Reynolds Metals decided to spin off EPIE and complete an IPO in 1992, less than two years after Jim started buying the stock.

Returns are typically favorable when you can buy a non-marketed minority interest on the pink sheets and later sell that same asset once it’s listed after a promoted IPO.

Mitchell Partners made 6.6x on Eskimo Pie in 19 months.
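The negative-EV arithmetic in the excerpt is worth making explicit. The sketch below uses the article’s stated figures ($12 million of cash, roughly $1.6 million of average operating profit, a 4x earnings multiple); using operating profit as a stand-in for earnings to back out the market cap is my simplifying assumption, not the article’s:

```python
# Worked sketch of the Eskimo Pie setup. Cash, operating profit, and the 4x
# multiple come from the article; using operating profit as a stand-in for
# earnings (and hence the implied market cap) is a simplifying assumption.

net_cash = 12.0          # $m cash reserve at end of 1990, per the article
operating_profit = 1.6   # $m average annual operating profit, per the article
pe_multiple = 4.0        # "only 4x earnings", per the article

market_cap = pe_multiple * operating_profit  # ~$6.4m implied
enterprise_value = market_cap - net_cash     # market cap minus net cash

print(f"Implied market cap: ${market_cap:.1f}m")
print(f"Enterprise value:   ${enterprise_value:.1f}m")  # negative: ~-$5.6m
# At a negative EV, a buyer of the whole company would effectively be paid
# to take ownership of a business earning ~$1.6m a year.
```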

2. Weird Events, Part 2: Some quick hits $ADVM $DXLG $PETS $PGRE $WBD – Andrew Walker

ADVM was a tiny little biotech company that announced a deal to get bought by Eli Lilly in late October…

…The stock closed at $4.18/share the day before the merger was announced, and ADVM sold for $3.56/share in cash plus a CVR. The CVR could be potentially very valuable; if both milestones hit, it would be worth another $8.91/share. That is, of course, a big if; the tender docs valued the CVR at $1.72/share, for a risk-adjusted fair value of the whole acquisition of $5.28/share ($3.56 in cash plus the risk-adjusted CVR)…
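Those figures let you back out roughly what probability the tender docs were implicitly assigning to the CVR paying out in full. Treating the CVR as a single all-or-nothing payout is a simplification (there are really two separate milestones), but the sketch below shows the arithmetic:

```python
# Back out the implied odds from the CVR figures quoted above. Treating the
# CVR as one all-or-nothing payout is a simplification of the two milestones.

cash_per_share = 3.56   # $, cash consideration
cvr_max_payout = 8.91   # $, CVR value if both milestones hit
cvr_fair_value = 1.72   # $, risk-adjusted CVR value per the tender docs

implied_probability = cvr_fair_value / cvr_max_payout
total_fair_value = cash_per_share + cvr_fair_value

print(f"Implied probability of full payout: {implied_probability:.1%}")  # ~19.3%
print(f"Risk-adjusted deal value: ${total_fair_value:.2f}/share")        # $5.28
```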

…I wanted to highlight two things related to insider purchases and stock grants I don’t think I’ve ever seen in a merger before:

  • On the night of December 8th (i.e. after market close on the last day the stock traded), the CEO and COO filed form 4s that showed they had bought, in total, ~178k shares on the last two days the stock traded. That is not a small purchase; ADVM had ~22m shares outstanding, so on the last few days of trading the CEO and COO bought almost 1% of the company on the open market. It also materially increased their ownership; the form 4 from the CEO noted he owned ~201k shares after the purchases, and he bought ~128k shares…. so more than 60% of his ownership came on these last-second purchases. The COO’s buys similarly stocked him up; he ended up with ~80k shares and he bought 50k of them right before the merger closed.
  • Why am I highlighting it? I’ve just never seen insiders so eager to get their hands on stock right into merger close before. Given that this merger has an enormous CVR component to it, I think it’s interesting that the insiders weren’t blacked out from buying before the deal closed. I’m also a little disappointed they timed their form 4s to come after the stock had stopped trading; if I saw a CEO and COO trying so desperately to increase their ownership in a CVR / right before a merger close, I can assure you I would not have had a basically meaningless position!
  • After the merger closed, ADVM filed a bunch of form 4s for directors and insiders. That’s not unheard of… but what is weird is that the COO, CMO, CEO, and CFO all had a PSU share acquisition listed in the filings. These are not small grants; the CEO was granted 500k PSUs, which is more than 2% of the company! If you read the footnotes of the form 4, it notes that the PSUs were granted on September 12 to vest two days after the completion of a change of control or a significant out-licensing.
  • Why am I highlighting this? PSUs granted to encourage a change of control obviously aren’t weird…. but these are enormous grants and I do not believe they were disclosed until the merger had closed (there were no form 4s filed in September or October, the only two 8ks filed in September and October make no mention of the PSUs, and I don’t see it in the Q3 10-Q…. that basically covers the whole range of filings, so unless I’m missing something I have no clue where else they could have been disclosed!). That is…. strange on a whole host of levels…

…There was a really weird day on August 11. WOW was supposed to report earnings that morning; instead, they delayed earnings till after market close. After the market closed, WOW announced a definitive deal to go private alongside their earnings. Ever since then, I’ve had my eye open for companies that delay their earnings out of nowhere from morning to afternoon.

It happened again last week. DXLG was originally scheduled to report earnings on the morning of December 4. After market on December 3, they pushed earnings from the 4th to the morning of the 11th. On the morning of the 11th, they pushed earnings to after market on the 11th….. at which time they announced a merger of equals with FullBeauty.

What was particularly interesting here is DXLG had an activist (Fund 1) who had offered to buy them for $3/share last December, so you could have some idea the company was in play when they delayed earnings multiple times.

Obviously I will be on high alert for the next delayed earnings set up!

3. Exclusive: How China built its ‘Manhattan Project’ to rival the West in AI chips – Fanny Potkin

In a high-security Shenzhen laboratory, Chinese scientists have built what Washington has spent years trying to prevent: a prototype of a machine capable of producing the cutting-edge semiconductor chips that power artificial intelligence, smartphones and weapons central to Western military dominance, Reuters has learned.

Completed in early 2025 and now undergoing testing, the prototype fills nearly an entire factory floor. It was built by a team of former engineers from Dutch semiconductor giant ASML (ASML.AS) who reverse-engineered the company’s extreme ultraviolet lithography machines, or EUVs, according to two people with knowledge of the project…

…Nevertheless, China still faces major technical challenges, particularly in replicating the precision optical systems that Western suppliers produce.

The availability of parts from older ASML machines on secondary markets has allowed China to build a domestic prototype, with the government setting a goal of producing working chips on the prototype by 2028, according to the two people.

But those close to the project say a more realistic target is 2030, which is still years earlier than the decade that analysts believed it would take China to match the West on chips…

…Chinese electronics giant Huawei plays a key role coordinating a web of companies and state research institutes across the country involving thousands of engineers, according to the two people and a third source.

The people described it as China’s version of the Manhattan Project, the U.S. wartime effort to develop the atomic bomb…

…Until now, only one company has mastered EUV technology: ASML, headquartered in Veldhoven, Netherlands. Its machines, which cost around $250 million, are indispensable for manufacturing the most advanced chips designed by companies like Nvidia and AMD—and produced by chipmakers such as TSMC, Intel, and Samsung.

ASML built its first working prototype of EUV technology in 2001, and told Reuters it took nearly two decades and billions of euros in R&D spending before its machines produced their first commercially-available chips in 2019…

… One veteran Chinese engineer from ASML recruited to the project was surprised to find that his generous signing bonus came with an identification card issued under a false name, according to one of the people, who was familiar with his recruitment.

Once inside, he recognized other former ASML colleagues who were also working under aliases and was instructed to use their fake names at work to maintain secrecy, the person said. Another person independently confirmed that recruits were given fake IDs to conceal their identities from other workers inside the secure facility.

The guidance was clear, the two people said: Classified under national security, no one outside the compound could know what they were building—or that they were there at all.

The team includes recently retired, Chinese-born former ASML engineers and scientists—prime recruitment targets because they possess sensitive technical knowledge but face fewer professional constraints after leaving the company, the people said…

…ASML’s most advanced EUV systems are roughly the size of a school bus, and weigh 180 tons. After failed attempts to replicate its size, the prototype inside the Shenzhen lab became many times larger to improve its power, according to the two people.

The Chinese prototype is crude compared to ASML’s machines but operational enough for testing, the people said.

China’s prototype lags behind ASML’s machines largely because researchers have struggled to obtain optical systems like those from Germany’s Carl Zeiss AG, one of ASML’s key suppliers, the two people said.

4. The Hermès heist: how an heir to the luxury dynasty was swindled out of $15bn of shares – Avantika Chilkoti

Its founder, Nicolas Puech, was the largest individual shareholder in Hermès, a luxury-goods firm. From 2004 he owned nearly 6% of the company, a stake that would now be worth €13bn ($15bn). Puech, who is part of the Hermès family, has no children. The entirety of his vast fortune was destined for the Isocrates foundation, which he had set up in 2011 on the advice of his Swiss banker of 24 years, Eric Freymond…

…Bernard Arnault, the founder of LVMH, is credited with transforming the luxury sector from a smattering of small labels into a multi-billion-dollar global industry. He has assembled his empire by taking over smaller businesses including Louis Vuitton, Dior, Moët & Chandon, and assorted watchmakers and jewellers, earning him the nickname, “the wolf in cashmere”. In 1999 Arnault tried to acquire Gucci, but failed. Then Hermès caught his eye…

…Arnault’s team got in touch with Freymond and the pair met in secret on several occasions to negotiate a deal. Puech often joined them. Freymond was tasked with identifying family members keen to sell their shares and discreetly transferring their stakes to Arnault. Puech’s part in the affair remains unclear. In court documents, Puech is quoted as saying he saw no “objection” to the deal but never agreed to sell his own stake (which would have been worth around €500m at the end of 2008)…

…LVMH never managed to accumulate enough Hermès stock to block decisions made by the family. The Hermès heirs rallied together to prevent a takeover. On December 5th 2010 they announced the creation of a new family holding company, H51, into which dozens of heirs deposited more than 50% of the firm’s capital, more or less locking up their equity for the next 20 years. (In 2022 the deadline was extended to 2041.)

Meanwhile, the French financial regulator, Autorité des Marchés Financiers (AMF), opened an investigation into the acquisition of Hermès stock by LVMH. In June 2013 it concluded that the information LVMH had provided was insufficiently accurate, precise and sincere. The AMF fined LVMH €8m (€2m less than the maximum possible fine). This paled into insignificance next to the €3.8bn in capital gains that LVMH reported on its investment in Hermès, thanks to the rise in the company’s share price. (When asked to comment on the matters raised in this article, LVMH shared a press release that it issued last week after renewed interest in its dealings with Hermès: “LVMH and its shareholder [sic] firmly reiterate that they have never, at any time, diverted shares of Hermès International in any manner and that they hold no ‘hidden’ shares—contrary to the implications put forward by Mr Nicolas Puech, who has chosen to turn to the French courts after being dismissed on numerous occasions by the Swiss judiciary.”…

…The failure of the tie-up with LVMH was a huge disappointment for Freymond, who had expected to pocket a small fortune for his services. According to Glitz, a French publication, he filed a complaint against Arnault claiming 10% of the capital gains LVMH made on its Hermès stock. Freymond reportedly employed private detectives to investigate what he believed to be backroom dealing by Arnault and provided evidence which he claimed showed that LVMH, despite Arnault’s denials, had indeed planned to take over Hermès. Freymond, says Glitz, withdrew his complaint in 2019.

Despite everything, Puech stood by his banker. Charlotte Bilger, a judge who oversaw Hermès’s criminal complaint for several years, told me that Puech was “in complete denial” and even wrote to the court asking her to stop pursuing the case against Freymond. “He seemed to be someone who was easily manipulated,” said Bilger. She compared Puech to Prince Myshkin, the guileless hero of Fyodor Dostoyevsky’s novel, “The Idiot”…

…After decades of denial, Freymond admitted to the magistrates that he had sold Puech’s shares to LVMH. He said that Puech was “perfectly informed” and met Arnault 14 times, including at Arnault’s apartment in Paris and his chateau in Bordeaux. “It was Mr Puech who made the decision, who was enthusiastic and eager to move forward for the simple reason that he had a score to settle with his family,” claimed Freymond.

This, too, Puech strenuously denied. He acknowledged that he had met Arnault several times and said that Arnault had given him presents including a travel bag. Arnault had been “friendly”, he added. “He told me, ‘Just call me Bernard.’” But Puech maintained he never agreed to sell his shares. “Often, I assumed that Mr Freymond had spoken to Mr Arnault before and I would arrive somewhat as a figurehead, as an important member of the Hermès family,” he said. The Parisian investigators found that millions of shares belonging to Puech were sold in 2008, in some cases for less than €100 per share. The stock is now worth more than 20 times that.

Where exactly Puech’s bearer shares ended up may remain a mystery for ever. In 2014, after the AMF investigation into the stock acquisitions had been completed, LVMH and Hermès reached a truce. LVMH agreed to hand all its Hermès stock to its own shareholders: two Hermès shares for every 41 in LVMH.

The Hermès shares that were scattered between LVMH’s shareholders are impossible to trace. Christian Dior, the largest investor in LVMH, distributed the stock to its own investors. The Arnaults, who ended up with 8.5% of Hermès, began to sell off their stake, according to data from company reports. They handed over much of what was left in 2017 as a step in LVMH taking full control of Dior.

An audit commissioned by Puech’s lawyers established that he still had 535,899 Hermès shares at the end of 2013. But those were progressively sold, so that by 2021 he no longer had any shares in his family firm.

It appears that Freymond funnelled over €100m of assets out of Puech’s accounts, often to benefit himself and his circle. Documents cited by the Parisian magistrates show that transfers of 200,000 Hermès shares and €26.4m were made to Noor Capital, an Emirati investment firm managed by an associate of Freymond’s, Olivier Couriol, who has been named in press reports in connection with fraud and money laundering. Another €25.8m of Puech’s money was put into Hydroma, a Canadian firm with hydrogen projects in Mali—in a series of small purchases, made in quick succession at increasing prices, that a magistrate described as “quite unusual”. (Couriol could not be reached for comment.)

Freymond also opened various joint bank accounts with Puech, depositing €35.8m at one private Swiss bank. Freymond said this money was used to fund the pair’s travels and “common projects”. Puech said he had no knowledge of any joint accounts…

…Puech, once among the world’s richest men, now appears to be worse off than his caretaker. According to documents reviewed by the magistrates, the 82-year-old is penniless. He doesn’t even own the house in the Swiss Alps. Earlier this month Reuters reported that Puech had lodged a civil case against Arnault in Paris in May (when I asked LVMH if its boss had been summoned by magistrates in the ongoing criminal case, it declined to comment).

5. John E. Olson, Analyst Who Was an Early Skeptic of Enron, Dies at 83 – James R. Hagerty

He just didn’t get it. That was the verdict of senior Enron executives on John E. Olson, a securities analyst at Merrill Lynch.

When Enron was flying high in the 1990s, Olson was one of the few analysts who was publicly skeptical about the outlook for the company, an operator of gas pipelines that had diversified into a complex array of businesses, including electricity sales, a power plant in India, and derivative contracts allowing traders to bet on weather patterns.

Olson, who died of cancer Dec. 9 at the age of 83, found the company’s financial statements too opaque to explain exactly how it was making the profits it reported. While most analysts rated the company’s stock a strong buy, Olson called it the equivalent of a hold, a rating widely understood as a polite way to recommend selling.

In the spring of 1998, Enron executives complained to investment bankers at Merrill and threatened to cut that firm out of a lucrative role in a securities offering. A few months later, Olson left Merrill. He said the firm had threatened to take away his stock options and other benefits if he didn’t retire early. Merrill executives said his job had been eliminated in a restructuring. (Merrill itself was sold to Bank of America in 2008 during the financial crisis.)

In any case, Merrill raised its ratings on Enron. A Merrill investment banker sent an internal memo in January 1999 saying that relations between the two firms had been patched up, clearing the way for more investment-banking fees, according to documents later released by a Senate subcommittee…

…Six months later, in December 2001, Enron collapsed into bankruptcy. Top Enron executives eventually were found guilty of fraud that concealed enormous financial risks.

Though he was consistently skeptical, Olson was surprised by Enron’s sudden collapse. In September 2001, after Enron shares had fallen about 70% from peak levels, he saw the stock as a bargain and raised his rating to strong buy, less than three months before the bankruptcy filing. Olson explained later that he had thought there was still a solid trading business to be salvaged. “You couldn’t see how bad some of the failures were,” he told the Washington Post, “because they’d buried the bodies.”


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in ASML and TSMC. Holdings are subject to change at any time.

An Easy Rule-of-Thumb to Avoid Frauds

Check a company’s net income and cash flow.

I recently wrote briefly about Intellego Technologies (SSE: INT) and how the company had possibly committed fraud:

“Earlier this week, the company’s CEO was arrested on “suspicion of gross fraud”, and SEK 100 million of its cash reserves were seized by Swedish authorities. The trading of Intellego Technologies’ shares on the Swedish stock market was also suspended.

Although not much is known yet of the apparent misdeeds conducted by the CEO, the “gross fraud” is “related to [Intellego Technologies’] press releases and quarterly reports in 2025.””

There is one tell-tale sign that Intellego Technologies’ management was highly likely to have engaged in chicanery: a huge discrepancy between the company’s net income and operating cash flow, as shown in Table 1. This is a well-known red flag for possible financial wrongdoing, described in forensic accountant Howard Schilit’s book, Financial Shenanigans.

Table 1; Source: TIKR

Not every company whose net income and operating cash flow diverge for some time is fraudulent. Netflix* is a great counterexample. Table 2 shows the company’s net income and operating cash flow over the past decade. The two financial numbers took very different paths from 2015 to 2019 before eventually converging.

Table 2; Source: TIKR

Nonetheless, a good rule of thumb for avoiding fraudulent companies in the stock market is to watch a company’s net income and operating cash flow. If the net income looks much better than the operating cash flow for some time, it pays to look beneath the hood.

*There are no certainties in the world of finance, so there is still a very remote possibility that Netflix is a fraud, although the probability decreases with each passing year. In any case, Netflix’s net income and operating cash flow took different paths from 2015 to 2019 because the company was investing heavily in developing its in-house content library, which required cash upfront.
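
For readers who want to mechanise the check, below is a minimal sketch of the screen, assuming made-up figures rather than the actual numbers in Table 1; the 0.5 threshold is likewise an illustrative assumption, and a flag is a prompt to dig deeper, not a verdict of fraud.

```python
# A minimal sketch of the rule of thumb above: compare cumulative net income
# with cumulative operating cash flow over several years and flag companies
# where reported profits persistently fail to show up as cash.

def earnings_quality_flag(net_income, operating_cash_flow, max_gap_ratio=0.5):
    """Return True if cumulative operating cash flow is weak relative to
    cumulative net income (a red flag worth investigating, not proof of fraud)."""
    total_ni = sum(net_income)
    total_ocf = sum(operating_cash_flow)
    if total_ni <= 0:
        return False  # the screen only applies to companies reporting profits
    return total_ocf / total_ni < max_gap_ratio

# Hypothetical five-year figures, in millions, for illustration only
suspect = earnings_quality_flag(
    net_income=[10, 25, 40, 70, 110],      # profits march upward...
    operating_cash_flow=[2, 1, -3, 4, 5],  # ...but the cash never arrives
)
print("Look beneath the hood" if suspect else "No obvious divergence")
```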


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have a vested interest in Netflix. Holdings are subject to change at any time.

What We’re Reading (Week Ending 24 August 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 24 August 2025:

1. The deep transformation of China’s consumption structure: a complex picture beyond “downshifting” – Robert Wu and Dongfan Ma

From a macro and traditional industry perspective, China’s consumer market does show signs of weakness:

Growth slowdown: Over the past three years, the annualized growth of total retail sales of consumer goods has fallen significantly compared to the ~10% seen between 2010 and 2020, highlighting weaker macro consumption momentum.

Pressure on traditional sectors: In 2024, the catering industry in Beijing and Shanghai saw profit declines of 80–90%. Hotel average daily rates kept falling, and airline ticket prices dropped consistently across 2024 and 2025. Together, these figures underpin the concerns about sluggish consumption.

Yet, another set of data paints a very different picture.

Entertainment boom: The concert economy remains in an extremely overheated state, with shows across genres selling out instantly — acting as the “contrarian” force in the consumption market.

Non-essential consumption growth: Products like Pop Mart’s designer toys or Lao Pu Gold’s jewelry — both considered non-essentials — are seeing robust growth, defying the conventional wisdom that such categories should be hit hardest during consumption downgrades.

Segment upgrades: Pet-related spending remains strong, with treats and premium pet food turning into hotspots, suggesting stable or even rising purchasing power among certain groups.

Lower-tier market vitality: Categories like household goods in third- and fourth-tier cities continue to show resilient demand for quality.

This contradiction makes clear that a single pessimistic lens is no longer sufficient to describe the reality of China’s consumer market. At its core lies a deeper structural transformation…

…What China’s consumer market is undergoing is not a simple story of expansion or contraction, but a profound structural transformation characterized by multiple forces:

Channel: Social and livestream commerce is displacing offline and traditional e-commerce.

Supply: Flexible chains and rapid product iteration are overtaking traditional production models.

Market: Downward tier integration is reshaping consumption layers.

Corporate Strategy: A shift from “ad-driven + distributor networks” to “private domain operations + digital reach.”

If we focus only on traditional offline retail, distributor-based brands, or oversupplied catering chains, the picture appears bleak — a “consumption winter.” But if we turn to social commerce (already nearly 10% of retail, still growing at 30% annually), new brand growth, and supply chain-enabled rapid iteration, we see instead a “consumption spring.”

2. AI x Commerce – Justine Moore and Alex Rampell

The internet’s most profitable business model has always been simple: running search ads on monetizable queries. When you search “how many protons are in a cesium atom,” Google makes no money. When you search “best tennis racket,” it prints cash…

…Google could lose 95% of search volume and still grow revenue, as long as it retains the valuable queries, which are largely commerce-related…

…The nature of an impulse buy means that you won’t be doing research in advance or consulting with an expert, so there’s limited opportunity for AI agents to play a role. However, the algorithms that guide your attention will continue to improve, enabling advertisers to target you with the right product at the right time. And it will be easier for brands to create hyper-personalized marketing materials that draw you in…

…You probably already have brands and SKUs that you know and love when it comes to everyday essentials, so an AI research agent won’t be particularly helpful unless you’re adding a new product to the lineup (like if you get a dog and need to pick their food). But AI should play a role when it comes to sourcing and purchasing items. For example, if you regularly get the same laundry detergent, your AI agent could monitor and buy on your behalf if the price dips below a certain level…

…Lifestyle purchases – when you’re purchasing items that you don’t buy regularly (especially if they’re a bit more spendy, like a luxury handbag), you’re likely going to want to evaluate various options to make sure you’re picking the best one. But researching and aggregating the choices, and ranking them across various criteria, is time-consuming. Imagine deputizing an AI agent to do the grunt work for you and come back with a recommendation that explains why a specific SKU is the perfect choice for you based on your past purchases, what it knows about your preferences, and even things like your body type and what colors look best with your eyes…

…Functional purchases – these items are important because they are typically (1) a meaningful financial investment, and (2) a product you’ll use every day, likely over several years. This means that you want to feel very confident that the product meets your needs and will hold up over time. You may feel comfortable purchasing a product that your AI research agent recommends. But you’ll likely want to have a more in-depth conversation with a subject-matter expert (an AI “consultant”) about different options…

… Life purchases – there are only a few “life purchases” you’ll make (e.g. a home, car, wedding, or college education). These are expensive and meaningful, so you’ll likely spend months – if not years – evaluating options. You’ll do your own research online, but there’s a decent chance that you’ll also speak with experts and try out the options (e.g. touring wedding venues or homes, test driving a car, visiting a college). It’s hard to imagine people fully outsourcing these decisions to AI…

…As agents become the new interface for buying, both platforms are well-positioned — Amazon with end-to-end control, Shopify perhaps more so with distributed ownership across millions of stores and growing consumer touchpoints. It doesn’t matter if a consumer search starts with Google or ChatGPT if the destination merchant is hosted by Shopify…

…AI’s potential is first and foremost bottlenecked by content, not compute. Most product reviews are noisy, gamed, or overly polarized. Agents need access to structured, trustworthy, real-time feedback. Let’s say you’re looking for the “best” blender. In a perfect world, your AI would order every blender, test them all for a week in your kitchen (with your home robot!), decide which one you like best, and then send the rest back. But today AI just summarizes the web, and cannot turn shilled junk into honest analysis…

…The best AI-native experiences will capture data directly in the user journey that contributes to better recommendations. Imagine an AI agent that infers information about what to recommend to you (or others) from data that’s not typically present on product description pages or reviews. This could be direct (e.g. next time you open the app, it asks you a few specific questions about your last purchase), or more passive (e.g. it looks at how long you linger on a specific item or feature and maybe even asks follow-ups if you’re hesitating).

Until these foundations are in place, LLMs will remain clever summarizers — not true commercial agents. But this is happening fast.

3. Why zero-click panic is overblown – Mike Elgan

The idea is that when you want information, you go to an AI chatbot like GPT-5, ask a question, get an answer, and move on with your life without clicking through to the websites that monetize with advertising or subscriptions. And even when you “Google it,” Google’s direct answers, knowledge panels, and AI overviews often give users a zero-click answer.

The crisis: AI companies are getting rich by giving away other people’s content for free. Every time someone gets an answer from a chatbot instead of visiting a website, that’s money being transferred from content creators to AI companies. The media ecosystem will be strangled by this “zero-click crisis.”

But the trend might not turn out as bad as some think.

The reason is that while most people might turn out to be zero-clickers, a minority of people are likely to keep on clicking…

…Most importantly for people who care about quality information — AI provides a narrow, generic and average worldview.

In other words, on that last point, getting your information about the world from AI will make you average, not exceptional. And some people will want to be exceptional.

Many, but certainly not most, information-seeking people will continue to click through to original sources, seek out original sources, follow original sources, pay for original sources and patronize advertising…

…Let’s take a look at the advertising that everyone points to when gnashing teeth about the zero-click crisis.

Well over 99% of Google users who click through to content websites never buy anything from the ads they see on those sites.

Far less than 1% of Google users (between 0.3% and 0.6%) do sometimes buy something after seeing an ad.

That tiny minority pays for all the content that every Google user sees. More than 99% get a free ride, subsidized by the people who buy the ads…

…For the past century, advertiser-supported content has been paid for entirely by a small minority of people with the means and desire to buy the advertised products.

I suspect our zero-click future will look a lot like our most-people-don’t-buy-the-advertised-product past.

In other words, the zero-click people are the same majority of people who used to click through to ad-supported or subscription-supported content sites and then never buy or subscribe to anything.

If a non-contributor stays on the ChatGPT website and never pays for the content, or if a non-contributor clicks through to an ad-supported website and never buys the advertised products — what’s the difference?

Content supporters — people who buy ads and especially people who pay subscriptions — will continue to support quality content with their wallets.

The minority who want exceptional, rather than average, information will have to seek out that exceptional information, subscribe to it and (as people who buy things) will be seen as extremely valuable to advertisers.

4. Bitcoin treasuries – Oliver Sung

In case you’ve missed the financial news, Bitcoin treasuries (some call them “digital asset treasuries,” or “DATs”; others dub them “crypto holdcos”; still others abbreviate them to “BTCOs”) are simply companies that buy Bitcoin and park it on their balance sheet. Any company could do this, but the point is that a pure-play Bitcoin treasury shouldn’t have much of an operating business attached, making the entity a vehicle to “invest in” (or rather “hold”) Bitcoin through a corporate wrapper…

…The whale of Bitcoin treasuries is Strategy—formerly MicroStrategy—led by Michael Saylor. He pioneered the model, having now amassed 630k Bitcoin (as of Q2 2025), or 3% of all Bitcoin ever to be in existence…

…With help from ZIRP and a volatile stock, Saylor discovered he could issue 0% (or close to it) convertible bonds to fund further Bitcoin purchases. If you ask why Saylor wouldn’t just issue equity instead, the answer is that the convertibles were issued at a premium and wouldn’t dilute the share count before they came in-the-money. That’s when he found his masterstroke: to keep raising money to fuel his newly discovered perpetual motion machine by marketing newly issued Strategy securities at premiums to the share price, he, ironically, had to borrow a term from conventional finance that Bitcoin certainly lacked: yield.

“Bitcoin yield” is not to be confused with the yield earned on your cash flow-generating assets. No, Bitcoin yield is the period-to-period percentage change in the ratio between the company’s Bitcoin holdings and its diluted shares. In other words, it’s the change in Bitcoin per share. But it’s a smokescreen—another way to say that new investors fund “yield” for old investors. The yield that reaches old investors comes straight from newcomers’ pockets. Because the “Ponzi” label has been thrown around Bitcoin forever, this is easily brushed off by Bitcoiners. But here the label fits, just not on Bitcoin itself. Ponzi, in this case, describes how Strategy and other Bitcoin treasuries operate: publicly boasting Bitcoin yield as shareholder value, while obfuscating the fact that the yield stems not from any operations but from new investors hoping to get a high Bitcoin yield themselves…
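
To make the definition concrete, here is a small sketch of the Bitcoin-yield arithmetic as the article describes it; the holdings and share counts below are hypothetical, not Strategy’s actual figures.

```python
# "Bitcoin yield" as defined above: the period-to-period percentage change
# in Bitcoin held per diluted share. All figures below are hypothetical.

def bitcoin_yield(btc_start, shares_start, btc_end, shares_end):
    """Percentage change in BTC per diluted share between two periods."""
    per_share_start = btc_start / shares_start
    per_share_end = btc_end / shares_end
    return (per_share_end / per_share_start - 1) * 100

# The treasury issues new shares at a premium and buys more BTC with the
# proceeds; BTC per share rises, so the reported "yield" is positive even
# though the gain is funded entirely by the incoming shareholders.
print(f"{bitcoin_yield(500_000, 250_000_000, 630_000, 290_000_000):.1f}%")  # 8.6%
```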

…Many of the zombie companies, persuaded by the promise of easy money and good ol’ wealth transfer, pulled it off—perhaps to their own surprise—enriching insiders in the process.

Metaplanet, formerly known as Red Planet Japan, is a former budget hotel operator in Japan turned aggressive Bitcoin treasury. Since pivoting in 2024, it has expanded its share count by some 400%, with the market cap reaching almost $7bn at its peak from $13mn, currently priced at 2x its Bitcoin holdings. Metaplanet counts Eric Trump, the son of the US president, as strategic adviser.

While The Smarter Web Company, a web designer, isn’t the first or only UK-listed company to do this (there are about a dozen), it certainly was a pioneer. Shortly after its shares were admitted to trading on the Aquis Stock Exchange in April this year, the company announced a 10-year Bitcoin treasury plan. From a market cap of GBP3.7mn at the time of listing, shares of SWC quickly exploded past GBP1bn (now sitting at GBP550mn).

And unsurprisingly, the POTUS jumped on the bandwagon too. After minting a monumental amount of money and legalized bribes from launching $Trump coin three days before inauguration, the President wasn’t done squeezing crypto. Trump Media recently raised $2.4bn to buy Bitcoin, modelled after Saylor’s blueprint (and personally recommended to the Trumps by Saylor himself), which followed the President’s establishment of a US Strategic Bitcoin Reserve that currently holds 200k Bitcoins. The President owns 40% of Trump Media with an implied market value of ~$2bn…

…As for Saylor’s Bitcoin treasury valuation model illustrated above (Bitcoin NAV + Bitcoin $ gain x multiple), it’s absurd. The premise—that the appreciation of Bitcoin should be treated like recurring profit and capitalized accordingly—is lunacy. It’s like saying that because you expect the $500k house you live in (let’s say it’s your entire net worth) to appreciate to $550k next year, your net worth is not $500k, and not $550k, but a whole $2mn with a 30x multiple on the appreciation. It doesn’t surprise me that Saylor believes this nonsense, since he, having missed econ class 101 by the evidence of this clip, thinks that cash, which is priced at the risk-free rate, carries a cost of capital of 15% (then proceeding to botch basic math by saying 12% of $325bn is $32bn).
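
The house analogy is easy to check with the article’s own numbers; here is a quick sketch of the “NAV + expected gain x multiple” arithmetic, using exactly the figures from the paragraph above.

```python
# The house analogy worked out: treating one year's expected appreciation
# as a recurring profit and capitalizing it at an arbitrary multiple.
net_worth = 500_000                # the house, i.e. the entire "NAV"
expected_gain = 550_000 - 500_000  # one year's hoped-for appreciation
multiple = 30                      # the arbitrary capitalization multiple

absurd_valuation = net_worth + expected_gain * multiple
print(absurd_valuation)            # 2000000, four times the actual net worth
```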

I wish the world would allocate its precious resources and brainpower to more productive pockets of the economy than what we discussed today. I know that’s wishful thinking. Stuff like this happens all the time, but speculation has clearly raised the stakes since the pandemic. The writing on the wall hasn’t dried yet. Saylor et al’s vision for Bitcoin treasuries is that the scheme runs far enough that Bitcoin approaches “hyperbitcoinization”: the point where sponsors believe the price stabilizes (some peg it at $10-20mn per coin). The pools of fiat are so vast that the sponsors aren’t anywhere close to running out of convincing new buyers of these products, and so are willing to floor the pedal to make these things more ingrained in the financial system. (I think you know what that implies.) It sure helps keep the scheme going when people—usually Gen Zs—run around hyping Strategy as an “infinite money glitch” and Saylor himself calling it a “quadratically reflexive engineered instrument”. (You can’t make this stuff up.)

The whole thing raises an odd paradox: How are all of the Bitcoin treasuries going to buy more Bitcoin if every big holder of Bitcoin can cash in bigger by launching their own Bitcoin treasuries? If there’s a massive wealth transfer to be taken simply by moving Bitcoins onto public markets, then everyone with a pile of Bitcoins will want that premium for themselves.

Now for what you’ve been waiting for: how do you bank on this? The answer is, I won’t. I wouldn’t short any type of absurdity in a million years—not even with long-dated options…

…And if you’re already long invested in Strategy or any new shiny Bitcoin treasury, the best action you can take is to copy what the insiders and promoters are doing: sell.

“On the one hand, we’ve capitalized on the most innovative technology and capital asset in the history of mankind. On the other hand, we’re possibly the most misunderstood and undervalued stock in the US and potentially in the world.”—Michael Saylor

5. Constraints, and challenges of value capture in the AI race – Abdullah Al-Rezwan

Another bit that I thought was interesting in the Acquired interview was their point about how they think about creating leverage through AI:

…we always like to say the way we think about an AI-first company is we’re building a machine to produce happy customers…And I think that’s important because if something comes off the machine’s assembly line malformed, you don’t just fix that thing. You say, what part of the machine broke to produce the malformed item?

And so, just as it relates to, for example, software engineering, we have this philosophy: when Cursor, which is the most popular co-pilot for software engineers to write code, and now has some more agentic flavors to it, produces incorrect code, our philosophy is don’t fix the code; fix the context that Cursor had that produced the bad code. And I think that’s a big difference when you’re trying to make a company driven by AI. So essentially, if you just fix the code, you’re not adding leverage. If you go back and ask: what context did this coding AI not have that, had it had it, would have produced the correct code? So I don’t want to pretend we’re perfect here, but that’s the way we think about it. I really like thinking of our business as a machine…

…The Information pointed out yesterday how the token price seems to be stable in recent months compared to the last couple of years. The subscription model just doesn’t seem appropriate in many of the use cases. For example, this Reddit post points out how one dev basically consumed $50k worth of tokens while paying $200 for the monthly subscription. This is, of course, a business model problem…

…It may be tempting to think it won’t be that difficult to capture value over time. While I have no doubt that SOTA model developers will get better at it, there is a long list of revolutionary technologies that had a hard time capturing value. Let me share a personal example. Recently, I opted for a “ChatGPT Pro” subscription ($200/month) just to see if there is a noticeable difference between the Plus and Pro subscriptions. One of my family members asked me to run a query that had important career implications for her. After I sent ChatGPT Pro’s response, she was really glad and told me that it would probably have cost her $1,000 to get such information if not for ChatGPT. At first, I thought even $200/month could be considered incredible value if it can solve at least one such problem every couple of months. The only problem is that when I ran the same query on Gemini 2.5 Pro, for which I pay $20/month, it also came up with a very, very good response. ChatGPT Pro was slightly better in some marginal details, but now I was starting to feel $200/month wasn’t worth it for those marginal improvements.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google and Gemini), Amazon, and Shopify. Holdings are subject to change at any time.

More Of The Latest Thoughts From American Technology Companies On AI (2025 Q1)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q1 earnings season.

Last month, I published The Latest Thoughts From American Technology Companies On AI (2025 Q1). In it, I shared commentary in earnings conference calls for the first quarter of 2025, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2025’s first quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

Here they are, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management sees the Firefly App as a place for creative professionals to generate images, video, audio and vectors from a single place with unmatched creative control; the Firefly App also supports 3rd-party models from Google, OpenAI, and others, with more coming soon; Firefly is attracting new customers for Adobe with first-time subscribers up 30% sequentially in 2025 Q1 (FY2025 Q2); management recently rolled out new Firefly offerings such as (1) Firefly Image Model 4 for life-like images, (2) Firefly Image Model 4 Ultra for impeccable detail, and (3) the Firefly Video Model; users of Firefly can now collaborate with other users through Firefly Boards; management is monetising Firefly through new Firefly App subscription plans (ranging from US$10 per month to US$200 per month) and the Creative Cloud Pro plan; traffic to the Firefly App was up 30% sequentially in 2025 Q1 (FY2025 Q2); paid subscriptions to the Firefly App nearly doubled sequentially in 2025 Q1 (FY2025 Q2); Firefly has powered 24 billion generations (20 billion in 2024 Q4) since its launch in March 2023; management believes that the only commercially safe way to build AI models is to build them with content whose creators are willing participants in the process, and this is how Firefly was trained; companies are choosing Firefly because of its commercial safety; management thinks Firefly will be the ultimate creative destination because even if it’s used only for ideation, users will want intellectual property that is safe for production; management sees Creative Cloud Pro (CC Pro) as the place where Adobe’s AI and generative capabilities will increasingly be best available

The Firefly App is a new destination for AI-assisted content ideation, creation and production with Adobe’s comprehensive family of commercially safe Firefly creative models and an expansive ecosystem of third-party models. Firefly empowers creative professionals to generate images, video, audio and vectors from a single place with unmatched creative control, iterate on their creations through Adobe’s creative apps and seamlessly deliver them into production. Our support for third-party models, including from Google, OpenAI and Black Forest Labs gives creators the flexibility to choose the AI that works best for them, with Firefly upholding our standards for IP safety and transparency…

…The Firefly App is attracting new users to the Adobe franchise with first-time subscribers growing 30% quarter-over-quarter…

…Earlier this quarter, we launched the new Firefly Image Model 4 for life-like images and the Firefly Image Model 4 Ultra for impeccable detail in complex visuals. We also made the Firefly Video Model generally available for the first time, empowering creators to generate 4K footage from text prompts and images with unprecedented creative control and extend video clips in our tools like Premiere Pro…

…In addition to supporting our own Firefly Models, the Firefly App now supports a growing family of third-party models for creative ideation. Firefly offers the flexibility to explore the diverse aesthetic styles of Google’s Imagen and Veo models, OpenAI’s GPT-image model and Black Forest Labs’ Flux image model, with Runway, Ideogram, Fal.ai, Luma and Pika coming soon. With the release of the Firefly Boards public beta earlier this quarter, creators can now ideate and collaborate when generating content with Firefly and our third-party models.

To monetize this incredible innovation, we have introduced a comprehensive set of offerings aimed at new and existing creators and creative professionals across all routes to market. The new Firefly App subscription plans are ideal for creators starting their creative journey and are now globally available. Creative Cloud Pro, which combines Creative Cloud All Apps and the Firefly App represents the best value for content creation and is now available in North America. Creative Cloud Pro will be released in other geographies over the next few months…

…Traffic to the Firefly App grew over 30 percent quarter over quarter and paid subscriptions nearly doubled in the same period…

…Excitement for and adoption of generative AI innovation, such as Generative Fill in Photoshop, Generative Remove in Lightroom, Generative Expand in Illustrator, Generative Extend in Premiere Pro, video generation in the Firefly App and production workflows in Firefly Services, continues to accelerate with over 24 billion cumulative generations exiting Q2…

…One of the core things that we believe from the very beginning is that the right transparent and really the only commercially safe way to build these models is to do it on a set of content that — where the contributors are themselves excited and willing participants in the process. And so we have trained our Firefly models, as many of you know, on Stock and other content that we have access to. We do have a contributor fund that pays out to those individuals. And as a result, we feel like we’re in a very advantaged position when it comes to people choosing models. I’ll say, especially in enterprises, we see a lot of companies selecting Firefly partially because of the quality, partially because of the controllability of it but also very, very strongly because of the commercial safety of it…

…I think Firefly, with the support for all of those models, will be the ultimate creative destination. And to, I think, punctuate what David said, in the enterprise, the value proposition that we have resonates because even if you use it for ideation, you’re not going to use something that’s not being designed to be the intellectual property being correct for production…

…We can meet the needs of creators with the new Firefly plans that we’ve released recently, whether it’s Firefly Standard for, in the U.S., $10; Firefly Pro for $20; or Firefly Premium, which is unlimited access to video generation as well, for $200 a month…

…All of the AI and generative capability will increasingly be best available for our customers through the CC Pro application.

Adobe’s management sees marketing professionals being required to create huge amounts of personalised content and this is where Adobe’s AI-powered vertical solutions can help; management is seeing increasing demand from customers for personalisation capabilities in Adobe’s Digital Experience suite of products

Marketing professionals need to create an unprecedented volume of compelling content and optimize it to deliver personalized digital experiences across channels, including mobile apps, e-mail, websites, social media and advertising platforms. They’re looking for agility and self-service as well as integrated workflows with their creative teams and agencies. To achieve this, enterprises require custom commercially safe models and purpose-built agents tailored to address the inefficiencies of the content supply chain. Marketing practitioners, Chief Marketing Officers and Chief Digital Officers need solutions that enable them to acquire, engage and delight customers across a variety of channels and geographies. 

Adobe’s strategy is to deliver a comprehensive marketing technology platform leveraging AI to offer vertical solutions that integrate content, customer data and profiles across journeys in both B2B and B2C industries. Adobe GenStudio and Firefly Services are revolutionizing the content supply chain across enterprises, empowering marketers to activate personalized on-brand content across millions of touch points. For marketing professionals, Adobe Experience Platform and apps and purpose-built agents are redefining the future of customer connection by enabling real-time orchestration of content, data and journeys…

At our scale, the bigger metric that we track is in DX. Let’s talk about DX. And in DX, how much of this technology that we have been delivering is being adopted? What is the scale at which we’re driving, whether it’s campaigns, whether it’s engagement through e-mail or SMS, the amount of transactions that are going through AEP and apps? And all of that is because the agility of marketing and the ability to personalize these experiences with customers is dramatically increasing. So that’s one underlying trend that we clearly see. And the demand for that is only increasing and not decreasing. 

Adobe’s AI-influenced ARR (annual recurring revenue) is in the billions; the book of business from Adobe’s AI-first products is tracking ahead of management’s target of $250 million in ending ARR by end-FY2025; management thinks Adobe is still very early in AI monetisation and feels good about it

While our AI influenced ARR is already contributing billions of dollars, our AI book of business from AI-first products, such as Acrobat AI assistant, Firefly App and Services and GenStudio for Performance Marketing is tracking ahead of the $250 million ending ARR target by the end of fiscal 2025…

…It’s very early in terms of the AI monetization, but we’re very advanced in terms of how much innovation we’ve delivered. And so it feels really good right now.

Adobe’s management thinks the infusion of conversational experiences in Adobe Acrobat and generative AI models in Express is allowing users to combine the 2 products in novel ways; Adobe’s Acrobat and Express products have combined monthly active users of more than 700 million, up 25% year-on-year; Express capabilities within Acrobat saw adoption grow 3x sequentially and 11x year-on-year in 2025 Q1 (FY2025 Q2); there’s an over-75% year-on-year increase in students gaining access to Acrobat AI Assistant and/or Express premium plans; Acrobat AI Assistant and Express added 35,000 new businesses in 2025 Q1 (FY2025 Q2), with Express adding 8,000; monthly active users (MAUs) of Acrobat’s AI Assistant and Express’s generative AI grew 3x year-on-year in 2025 Q1 (FY2025 Q2); Acrobat AI Assistant saw the number of questions asked nearly double sequentially in 2025 Q1 (FY2025 Q2)

Our investments in conversational experiences in Acrobat and generative AI models in Express allow users to combine the 2 products in novel ways that empower users to accelerate their time to insight and ability to create compelling presentations. Sales professionals can gather industry reports on a prospect, use AI system to quickly identify effective sales conversations and automatically generate a pitch deck with Express. A social media marketer can ask AI Assistant for help identifying buying behaviors in market research documents and use that information to create better TikTok videos in Express…

…We’re seeing steady growth across our family of Acrobat and Express products with combined monthly active user growth accelerating to over 25% year-over-year and crossing 700 million monthly active users as Acrobat users increasingly rely on Acrobat AI Assistant to enhance content consumption and Express to create richer PDFs, customized presentations and animated designs. Due to increasing customer demand for creative functionality through Acrobat, we saw an approximately 3x quarter-over-quarter and approximately 11x year-over-year increase in the adoption of Express capabilities within Acrobat…

…With students, we’re driving over 75% year-over-year increase in students gaining access to Acrobat AI Assistant and/or Express premium plans. These products are also seeing strong adoption by businesses with over 35,000 new businesses added in Q2. Express alone added around 8,000 new businesses this quarter, approximately 6x growth year-over-year including companies such as Microsoft, ServiceNow, Workday, Intuit and top sports leagues like MLB, the NFL and Premier League…

…Use of generative AI features continues to grow quickly with AI Assistant MAU in Acrobat and generative AI MAU in Express growing over 3x year over year; Acrobat AI Assistant engagement continues to accelerate with the number of questions asked nearly doubling quarter over quarter;

Adobe’s management has launched GenStudio Foundation to provide visibility and actionable insights into campaign plans, projects and assets; Adobe has GenStudio for Performance Marketing for users to create on-brand content for websites and social media; GenStudio for Performance Marketing grew 45% sequentially in 2025 Q1 (FY2025 Q2); management thinks Adobe can work well with Meta even though Meta is increasing usage of AI to automate advertising creation

We launched GenStudio Foundation, a unified interface to bring together data from our full suite of content supply chain applications providing visibility and actionable insights into campaign plans, projects and assets. GenStudio for Performance Marketing empowers teams to create their own on-brand content, supporting ad creation and activation for Google, LinkedIn, Meta, Microsoft, Snap and TikTok…

…Momentum for GenStudio for Performance Marketing with growth of over 45 percent quarter over quarter…

…[Question] With respect to outside of your traditional competitive environment, maybe just coopetition with vendors like Meta where it — at least it’s a little harder for some investors to understand given their increasing usage of AI to automate kind of ad creation and campaign optimization. To what extent does that overlap versus partner with some of the GenStudio offerings?

[Answer] In terms of the ad platforms, obviously, their primary goal is to grow the ad revenue. The best way to do that is to make sure that the creative is optimized and the ROI from the advertisers’ perspective, is clear to the advertisers, which is where our marketing stack and everything that we’re doing around GenStudio for Performance Marketing comes together really well.

Adobe’s management is seeing high enterprise demand for and adoption of Firefly Services and Custom Models for marketing use cases; solutions that customers desire from Firefly Services and Custom Models include video reframe and support for 3rd-party models; Adobe collaborated with Coca-Cola to develop the AI-powered Project Fizzion on Firefly Services and Custom Models; Project Fizzion can scale creative output up to 10x faster while reducing misinterpretation of brand guidelines in AI content; Firefly Services and Custom Models within the GenStudio solution had 4x year-on-year growth in ARR (annual recurring revenue) in 2025 Q1 (FY2025 Q2)

We’re seeing high enterprise demand for and adoption of Firefly Services and Custom Models to automate and scale on-brand content production for marketing use cases…

…We are building on the momentum behind Firefly Services and Custom Models, addressing additional highly desired solutions, including video reframe and support of third-party models for automation and cost efficiency.

With The Coca-Cola Company, we co-developed a new AI-powered design intelligence system called Project Fizzion, built on Firefly Services and Custom Models. Project Fizzion is designed to scale creative output up to 10x faster while tackling the common challenge of misinterpreting brand guidelines in AI-powered content…

…Continued demand for Firefly Services and Custom Models as part of the GenStudio solution, resulting in 4x year-over-year ARR growth.

The Adobe Experience Platform (AEP) has the AEP AI Assistant that allows users to interact with data through natural language; management has introduced native AI agents into AEP that can orchestrate customer journeys in real time; the NFL (National Football League) in the USA is using AEP to enable all 32 clubs in the league to scale personalized fan touch points across different channels; management recently introduced 11 AI agents, including the most recent Product Support Agent, to help Adobe’s customers improve the experiences they deliver to their own customers; the AI agents leverage the Adobe Experience Platform; companies such as Wegmans Food Markets and dentsu Merkle are already using the Product Support Agent; AEP’s subscription revenue grew over 40% year-on-year in 2025 Q1 (FY2025 Q2)

Adobe Experience Platform and native applications are central to delivering unified, personalized customer experiences. With the introduction of AEP AI Assistant, we’ve extended the platform’s value by enabling teams across the business to interact with data through natural language, streamlining ingestion, insight generation, audience segmentation and experience delivery. Building on this momentum, we are now expanding AEP with native AI agents that intelligently orchestrate customer journeys in real time. These innovations empower our customers to leverage their first-party customer data and deliver more relevant high-impact advertising experiences rooted in direct customer relationships.

The National Football League expanded our global partnership combining content data and journeys to deliver a new level of AI-powered fan experiences. Adobe will enable all 32 clubs to scale personalized fan touch points across NFL channels through project management, audience and campaign development, creative production and performance optimization.

At Adobe Summit in March, we introduced the Adobe AI platform with an agentic layer to scale Customer Experience Orchestration. We unveiled 10 agents purpose built for creative, marketing and technology teams that leverage Adobe Experience Platform to act intelligently and in alignment with business goals. These agents coordinate across systems to accelerate the delivery of exceptional experiences. We recently launched a Product Support Agent to help enterprises anticipate, troubleshoot and resolve operational issues.

Customers like Wegmans Food Markets, and dentsu Merkle are already using it to streamline onboarding and feature deployment and drive faster resolutions and greater efficiency…

…Strong demand for AEP and native apps, with Q2 subscription revenue growing over 40 percent year over year.

Adobe’s core creative business subscription revenue has been accelerating over the past few quarters, driven by AI features

In terms of the pricing part of that equation, we talked about the increased value that we have in Creative Cloud Pro. That gives us some opportunity to match the value we’re providing with the pricing, and then in terms of the value is around Firefly Services and GenStudio. So that’s really the growth algorithm. The thing to note is that as we go down this path, some of this will take some time to play out because we have — for the quantity side, we have premium and lower-priced offers. But we’re starting to see the early signs of that. And if you do the math — and I’ll maybe turn it over to Dan. If you do the math, our core creative business subscription revenue has been accelerating over the past few quarters…

…If you take a look at the supplemental disclosure that we provided between the subscription revenue for creative and marketing professionals, the subscription revenue for DX, you can pretty quickly derive what the subscription revenue is for the Creative and Creative Pro audience that we serve. And I think what you’ll see is, in the current quarter, it growing 10.1% year-over-year, which is up from 10% in Q1. And when you think about the acceleration over the last 4 or 5 quarters, in the year ago period, that same 10.1% would have been about 7.9%, so just over 2% acceleration over the last 4 quarters.

Adobe’s management is not looking to increase Adobe’s headcount dramatically because employees are using AI to become more efficient

We’re not really looking to grow our head count very dramatically. We are finding a lot more efficiency. People are using AI to be more efficient within the enterprise.

Meituan (OTC: MPNGY)

Meituan’s management sees 3 layers in AI, which are infrastructure, products, and work; Meituan has made good progress in 2025 Q1 in all 3 AI layers

When we talk about AI, I think there are at least 3 layers, the AI infrastructure and the AI in products and AI at work. So that’s how we view AI. And this quarter, we iterate our foundation large language model, and we have launched a new AI application and services for external users. At the same time, we also enhanced the suite of employee productivity to boost our own efficiency and improve the work experience. So it’s fair to say we have made good progress on all 3 fronts.

Meituan continued tweaking its foundational LLM (large language model) in 2025 Q1; Meituan launched a new AI application and services for external users in 2025 Q1; Meituan’s in-house large language model named Longcat can now seamlessly switch between reasoning and non-reasoning modes; Longcat’s performance in both reasoning and non-reasoning modes is at the leading edge; Meituan updated its voice interaction model, Longcat F, in 2025 Q1 and its performance now closely approaches OpenAI’s GPT-4o

On AI infrastructure. We continue to increase our investment for large language model and allocating resources not only to infrastructure CapEx but also to recruiting top-tier AI talents and to ensure our foundation of large language model is among the best tier in China. And during this quarter, we made continuous upgrade to our LongCat, large language model. The enhanced model can now seamlessly switch between reasoning and non-reasoning modes with the performance in both modes reaching the caliber of China’s leading models. Now we have also updated our end-to-end voice interaction model, LongCat F. So this updated model demonstrate advanced capabilities in understanding nuanced information, including the emotion or contextual environments and engaging in natural voice conversation. So it performance closely approach that of GPT-4o.

Meituan will soon launch an AI-powered business assistant for the food service industry; the food service industry’s AI business assistant will help with dish selection, new store location selection, menu development, and store operations

Next month in June, we plan to launch Kangaroo [foreign language]. It will be an AI-powered business decision assistant for the food service industry. It will act as an intelligent operational assistant for food service merchants and industry professionals covering 4 key scenarios, the cuisine dish selection and the new store location selection and menu development and store operations.

A key priority in Meituan’s AI initiatives is to use AI to enhance employee productivity and the workplace experience; about 52% of new code in Meituan is generated by AI, with over 90% of team members in some teams using AI coding tools intensively; management’s goal is to gradually achieve 100% adoption of AI coding tools across all engineers; Meituan has a no-code platform that is widely adopted internally, with 62% of product managers and 28% of business analysts using it; management has launched the no-code platform for public users free of charge; public users have created 9,410 applications with the no-code platform, with more than 1,600 of them published and actively used

We believe developing internal AI tools, as AI at work. We want to use AI to enhance employee productivity and the workplace experience. That remains a key priority in our AI initiative. So in the last quarter, we continued to improve the AI coding capabilities for engineers and actively promote internal adoption of AI coding. So currently, about 52% of new code in our company is generated by AI. And in some R&D teams, over 90% of the team members use AI coding tools intensively. And our goal is to gradually achieve 100% adoption across all engineers.

And we have our own no-code platform, and it’s for all employees and it has been widely adopted internally. The no-code platform allows user to quickly generate applications through natural language dialogue without requiring prior coding experience. And no-code is now used by all professional roles within our company, including product managers, user experience designers, business analysts, HR and finance staff. They leverage no-code for creating product prototypes, interactive pages and efficiency tools, with 62% of product managers and 28% of business analysts using the no-code platform internally. Last week, we launched the no-code platform for public users free of charge. And the URL is nocode.cn. And users can bring various creative ideas to life without adding coding skill…

…On nocode.cn, users have created 9,410 applications, with more than 1,600 of them published and in active use. 

MongoDB (NASDAQ: MDB)

MongoDB’s management thinks the company’s document model database more accurately reflects the messiness of real-world data and provides customers with greater flexibility, faster time to market and the ability to scale without re-architecting; management thinks MongoDB is exceptionally well-positioned as AI changes application-development and business operations, because AI applications require unstructured data; management sees MongoDB as having 3 things that modern AI applications need – which are (1) real-time data, (2) powerful search, and (3) smart retrieval – all in 1 platform; management thinks MongoDB’s integration of embeddings, text search, vector search, and operational data is a unique differentiator for developers when building AI applications

MongoDB’s document model and the associated platform enables developers to more easily represent the messiness of real-world data, which includes understanding relationships between structured and unstructured data and managing data that is constantly evolving and changing. This fundamental architectural advantage provides customers greater flexibility, faster time to market and the ability to scale without re-architecting…

…As AI redefines how applications are built and how businesses operate, MongoDB is exceptionally well positioned. Real-world AI applications require high-quality, context-rich and often unstructured data to deliver trustworthy outputs…

…MongoDB now brings together 3 things that modern AI-powered applications need: real-time data, powerful search and smart retrieval. By combining these into one platform, we make it dramatically easier for developers to build intelligent, responsive apps without stitching together multiple systems…

…We have best-in-class Voyage embeddings to improve the accuracy of these results to help people get comfortable with using AI. And by integrating text search, vector search and embeddings and operational data, that’s a unique differentiator. It makes the developer’s life easy, reduces cost and complexity. And so we feel we’re well positioned for this, but it’s still early as most enterprises are still early in the adoption of AI.

MongoDB’s management sees competitors retrofitting JSON and vector support on existing relational (or tabular) databases but the retrofits fail in production for AI, unlike MongoDB’s approach of being a native JSON and document-model database; management thinks that the fact that the retrofitting is happening indicates that tabular architecture databases do not suit AI applications; management thinks that recent Postgres acquisitions made by Databricks and Snowflake show that OLTP (online transaction processing) or operational data stores are the strategic high ground for AI applications, and they are where AI inference happens; management thinks inference is the big market for AI applications; management thinks the acquisitions by Databricks and Snowflake show that it is really hard to build an OLTP datastore; management thinks the acquisitions by Databricks and Snowflake are not a big deal; management thinks that both relational databases and document databases can win; management sees the popularity of Postgres as a function of the consolidation of the SQL database market; management thinks that comparing MongoDB purely with Postgres is incomplete, it should be comparing MongoDB with Postgres plus many other services

In their desire to keep up with evolving customer needs, some vendors are retrofitting their products, such as adding JSON or vector support as afterthoughts, which are superficial and brittle. This is a tacit admission that MongoDB’s approach of using JSON and the document model is the best way to model real-world data. These features may check the box, but they fall apart in production, leading to performance bottlenecks, operational headaches and spiraling infrastructure costs. Fundamentally, these vendors are constrained by their relational underpinnings. It’s important to understand that superficial compatibility with modern data types is not the same as deeply integrated production-grade functionality. MongoDB, by contrast, was purpose-built to address these needs natively…

…[Question] If you look this week at — we saw Snowflake kind of moved and — make the move towards Postgres. We saw Databricks kind of doing something there. Can you kind of frame that?

[Answer] I think the moves by both Databricks and Snowflake, I think, validate one thing that OLTP or the operational data store is the strategic high ground, especially for AI. That’s where inference happens. Inference is the big market. That’s where everyone wants to go, and you need to have an operational data store to do that. And I think the other thing it points out is building organically an OLTP store is really hard, especially when you need to meet the requirements of enterprise scale, availability, resiliency and security. And both organizations had signaled that they were working on organic approaches. Snowflake talked about Unistore, Databricks have talked about their own organic efforts, and it’s clear that they couldn’t make it happen. So this is not an easy task.

The second point I’d make is that they’re buying small Postgres companies. Neon, I would say, was in the vibe coding space. And I would say Crunchy Data is a small relational company based in South Carolina. I would say that it’s not clear to me why the world needs a 15th or 16th Postgres derivative database. I think we’ll find that out. And I think there’s also some noise about how 80% of Neon’s instances are provisioned via code. I should point out that nearly 80% of MongoDB instances on Atlas are provisioned via code. And so we do that to help our customers provision and scale clusters very, very quickly…

…We believe that the fact that Postgres and other relational platforms are now adding JSON is a tacit admission that the core tabular architecture just doesn’t get the job done in the world of AI. Developers need to be able to model real-world data, which is complex, messy, nested, which means it has highly interdependent relationships and is constantly evolving and changing. And then when you look at the fact that they’ve bolted on these capabilities, if you have a document size greater than 2 kilobytes, it’s going to deliver very poor performance…

…[Question] A key part of the bull narrative for Mongo has been that document databases would steadily take share from relational and then Mongo would become the default general-purpose database for modern apps. I guess my question is, does the rising popularity of Postgres among developers and a strong ecosystem it has, as we see from stuff like what Databricks did and what the cloud guys were doing. Does that suggest that relational just may have greater long-term relevance than initially anticipated?

[Answer] One is that this is a big market. It’s a $100 billion-plus market, so there can be multiple winners, right? Second, the Postgres popularity is really a function of the consolidation of the SQL market. People are leaving Oracle, leaving SQL Server, leaving MySQL and going to Postgres…

…A lot of people compare MongoDB to Postgres, and that’s actually a false comparison. By us embedding keyword search, by us embedding a native vector search, by us embedding embedding models, you’re really comparing MongoDB to Postgres plus Elastic plus Pinecone plus something like Cohere…

…I would tell you that Postgres is a tabular database, much like all relational databases.

MongoDB’s management is hearing from customers that high-accuracy is important in AI adoption; MongoDB’s acquisition of Voyage helps MongoDB meet customers’ need for accuracy in AI applications; Voyage has leading embedding and reranking models that allow users to feed their data into AI models; Voyage’s latest release, Voyage 3.5, outperforms the next best embedding model and reduces storage costs by more than 80%; management will soon enable MongoDB users to seamlessly generate embeddings from data sitting within MongoDB in a private preview

We continually hear from large enterprises that high accuracy is a critical requirement to drive wide-scale adoption of AI. Our recent acquisition of Voyage AI enhances our ability to serve this need. Embeddings are the bridge between a large language model and a customer’s private data. Voyage’s leading embedding and reranking models allow customers to feed precise and relevant context into LLMs, significantly improving the accuracy and reliability of the output of AI applications…

…With the release of Voyage 3.5, we’ve taken another step forward, meaningfully outperforming the next best embedding models while reducing storage costs by more than 80%…

…We acquired Voyage. That’s going to be natively part of the platform. We’re going to — later this month, we will enable people to seamlessly generate embeddings from data sitting inside MongoDB, and that will be in private preview. So that’s within 4 months of the acquisition.
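To make the combination of embeddings and operational data concrete, here is a minimal sketch of what such a workflow can look like, using the voyageai Python client and a MongoDB Atlas Vector Search aggregation stage. The cluster URI, database, collection, and index names are placeholders, and the in-database embedding generation management describes was still in private preview at the time, so the embedding step here runs client-side.

```python
# Minimal sketch: embed documents with Voyage AI, store and query them in
# MongoDB Atlas. Assumes an Atlas cluster with a vector search index named
# "vector_index" on the "embedding" field; all names are placeholders.
import voyageai
from pymongo import MongoClient

vo = voyageai.Client()  # reads VOYAGE_API_KEY from the environment
coll = MongoClient("mongodb+srv://<cluster-uri>")["support"]["articles"]

# 1. Embed private data and store it alongside the operational record.
docs = ["How do I reset my password?", "Billing cycles run monthly."]
embeddings = vo.embed(docs, model="voyage-3.5", input_type="document").embeddings
coll.insert_many([{"text": t, "embedding": e} for t, e in zip(docs, embeddings)])

# 2. At query time, embed the question and run a $vectorSearch aggregation.
query_vec = vo.embed(
    ["password reset"], model="voyage-3.5", input_type="query"
).embeddings[0]
results = coll.aggregate([
    {"$vectorSearch": {
        "index": "vector_index",
        "path": "embedding",
        "queryVector": query_vec,
        "numCandidates": 100,
        "limit": 3,
    }},
    {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
])
for r in results:
    print(r["text"], r["score"])
```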

Startups and enterprises are using MongoDB for their AI applications; MongoDB has some high-profile AI customers using its platform

Start-ups and mature companies are using MongoDB to help to deliver the next wave of AI-powered applications to their customers, including Cursor, Haleon, Vonage, the Financial Times and LG Uplus…

…We have some high-profile AI customers already on our platform and lots of other smaller customers.

MongoDB’s management continues to see enterprises being early in the adoption of AI; management thinks that the barriers to adoption of AI are limited skills with AI and lack of trust in AI because of the risk of hallucination; some early use cases that management has seen with AI are around operating efficiency, chatbots, and domain-specific software; management thinks that the real enduring value will come when enterprises build custom AI apps, because there is no competitive advantage for an enterprise in using an AI application that the enterprise’s competitors can also use

We see thousands of customers building thousands of apps on MongoDB, and that’s growing quarter-over-quarter. We are seeing some high-profile, well-known AI companies. I mentioned Cursor on the call, and there’s some — a few other high-profile companies who are building on top of MongoDB. And obviously, those businesses are really taking off. But what we see is that enterprises are still early in the adoption of AI. The barriers include a limited set of skills and experience with AI, and trust with AI systems that are probabilistic, which is another way of saying the risk of hallucinations. And so we see obviously some early use cases around operating efficiency, chatbots, codegen and domain-specific ISVs like Harvey that customers are using…

…But the real enduring value will come when people start building custom AI apps. And the point I want to make is that anyone can use an ISV to run their business, but that doesn’t give them a competitive advantage because their competitors could use the same ISV. What really gives them a competitive advantage is building custom solutions around using AI to transform their business, whether it is to seize new opportunities, respond to new threats, or drive more operating efficiency.

Examples of messy real-world data that are really difficult to work with in relational databases but that are easy with MongoDB

If you want to model a message that has attachments or reactions or is part of a threaded conversation, how do you do that in a structured table? If you want to deal with adding new fields or new values and all that, how do you — for example, if you have a user who has multiple phone numbers, how do you model that quickly? How do you deal with nested structures, right, where a customer record could include past orders, each with their own line items and order history? It’s much more difficult, whereas you can model that so much more easily in MongoDB. How do you deal with messy, inconsistent data that there is no uniformity to?
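As a concrete illustration of the point, here is a hedged sketch of how that kind of nested customer record maps one-to-one onto a single MongoDB document, where a normalized relational schema would need several joined tables; all field names are invented for illustration.

```python
# A customer with multiple phone numbers and nested past orders, modeled as a
# single document the way the data exists in the real world. In a normalized
# relational schema this would span at least four joined tables (customers,
# phones, orders, line_items). Field names are illustrative only.
from pymongo import MongoClient

customer = {
    "name": "Ada Lovelace",
    "phones": [  # multiple phone numbers: just an array
        {"type": "mobile", "number": "+1-555-0100"},
        {"type": "work", "number": "+1-555-0199"},
    ],
    "orders": [  # nested structures: each order carries its own line items
        {
            "order_id": 1001,
            "status": "shipped",
            "line_items": [
                {"sku": "A-1", "qty": 2, "price": 9.99},
                {"sku": "B-7", "qty": 1, "price": 24.50},
            ],
        }
    ],
    "preferences": {"newsletter": True},  # new fields need no schema migration
}

coll = MongoClient("mongodb://localhost:27017")["shop"]["customers"]
coll.insert_one(customer)

# Query straight into the nested structure, no joins required.
print(coll.find_one({"orders.line_items.sku": "A-1"}, {"name": 1}))
```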

NVIDIA (NASDAQ: NVDA)

NVIDIA’s Data Center revenue again had incredibly strong growth in 2025 Q1, driven by AI factory build outs and the ramp of the Blackwell family of chips

Data Center revenue of $39 billion grew 73% year-on-year…

…AI factory build-outs are driving significant revenue…

…Our Blackwell ramp, the fastest in our company’s history, drove a 73% year-on-year increase in Data Center revenue. Blackwell contributed nearly 70% of Data Center compute revenue in the quarter with the transition from Hopper nearly complete.

AI workloads on NVIDIA’s chips have now transitioned strongly to inference; NVIDIA’s management is seeing a huge jump in inference demand; major NVIDIA customers, such as OpenAI, Microsoft, and Google, are seeing huge leaps in AI token generation; Microsoft processed 100 trillion tokens in 2025 Q1, up 5x year-on-year; inference-serving startups have tripled their token generation rate and revenues

AI workloads have transitioned strongly to inference…

…We are witnessing a sharp jump in inference demand. OpenAI, Microsoft and Google are seeing a step-function leap in token generation. Microsoft processed over 100 trillion tokens in Q1, a fivefold increase on a year-over-year basis…

…Inference serving startups are now serving models using B200, tripling their token generation rate and corresponding revenues for high-value reasoning models such as DeepSeek-R1, as reported by Artificial Analysis.

The US government recently issued export controls to China on NVIDIA’s H20 chips, which caused the company to write off the value of the chips the company can no longer sell; NVIDIA’s management believes China’s AI accelerator market will grow to nearly US$50 billion; management thinks NVIDIA’s loss of access to the Chinese market will harm the company’s business, and benefit the company’s competitors in China and elsewhere; as a percentage of total Data Center revenue, NVIDIA’s Data Center revenue in China was slightly below management’s expectations in 2025 Q1 and was down sequentially; management expects a large decline in China data center revenue in 2025 Q2; Singapore is used by many of NVIDIA’s large customers for centralized invoicing and the NVIDIA products billed under Singapore are shipped elsewhere; nearly all of NVIDIA’s H100, H200, and Blackwell Data Center revenue billed to Singapore was for orders from US customers; management sees that half of the world’s AI researchers are based in China; management thinks that the AI platform that wins China will lead globally; because of the US government’s latest export controls, the Chinese AI market is effectively closed to the US; management sees China moving on with AI with or without the US, and the export controls weakening the US’s position; management thinks the US government’s assumption that China cannot make AI chips is clearly wrong; management sees China’s DeepSeek and Qwen as among the best open-source AI models, and these models have gained traction outside of China; management thinks the US wins when top open-source models, even those from China, are built on American infrastructure

On April 9, the U.S. government issued new export controls on H20, our data center GPU designed specifically for the China market. We sold H20 with the approval of the previous administration. Although our H20 has been in the market for over a year and does not have a market outside of China, the new export controls on H20 did not provide a grace period to allow us to sell through our inventory. In Q1, we recognized $4.6 billion in H20 revenue, which occurred prior to April 9, but also recognized a $4.5 billion charge as we wrote down inventory and purchase obligations tied to orders we had received prior to April 9. We were unable to ship $2.5 billion in H20 revenue in the first quarter due to the new export controls. Losing access to the China AI accelerator market, which we believe will grow to nearly $50 billion, would have a material adverse impact on our business going forward and benefit our foreign competitors in China and worldwide…

…China as a percentage of our Data Center revenue was slightly below our expectations and down sequentially due to H20 export licensing controls. For Q2, we expect a meaningful decrease in China data center revenue. As a reminder, while Singapore represented nearly 20% of our Q1 billed revenue as many of our large customers use Singapore for centralized invoicing, our products are almost always shipped elsewhere. Note that over 99% of H100, H200, and Blackwell Data Center compute revenue billed to Singapore was for orders from U.S.-based customers…

…With half of the world’s AI researchers based there, the platform that wins China is positioned to lead globally. Today, however, the $50 billion China market is effectively closed to U.S. industry…

…China’s AI moves on with or without U.S. chips. It has the compute to train and deploy advanced models. The question is not whether China will have AI, it already does. The question is whether one of the world’s largest AI markets will run on American platforms. Shielding Chinese chip makers from U.S. competition only strengthens them abroad and weakens America’s position. Export restrictions have spurred China’s innovation and scale…

…The U.S. has based its policy on the assumption that China cannot make AI chips. That assumption was always questionable, and now it’s clearly wrong. China has enormous manufacturing capability. In the end, the platform that wins the AI developers wins AI. Export controls should strengthen U.S. platforms, not drive half of the world’s AI talent to rivals…

…DeepSeek and Qwen from China are among the best open source AI models. Released freely, they’ve gained traction across the U.S., Europe and beyond…

…DeepSeek also underscores the strategic value of open source AI. When popular models are trained and optimized on U.S. platforms, it drives usage, feedback and continuous improvement, reinforcing American leadership across the stack. U.S. platforms must remain the preferred platform for open source AI. That means supporting collaboration with top developers globally, including in China. America wins when models like DeepSeek and Qwen run best on American infrastructure.

Blackwell’s ramp is the fastest product ramp in NVIDIA’s history; management believes the introduction of the GB200 NVL architecture within the Blackwell family allows users to achieve the lowest cost per inference token; management has seen a significant improvement in manufacturing yields for the GB200 NVL; GB200 NVL is now generally available; major hyperscalers are each deploying nearly 1,000 NVL72 racks, or 72,000 Blackwell GPUs, per week, and are on track to increase their deployment-pace in 2025 Q2; Microsoft has already deployed tens of thousands of Blackwell GPUs for OpenAI, and Microsoft is ramping up to hundreds of thousands of Blackwell GPUs; major CSPs (cloud services providers) are already sampling GB300 systems, with production expected later in 2025 Q2; the GB300’s drop-in design will allow CSPs to seamlessly transition the systems and manufacturing used for GB200; software optimisations have already improved the performance of the Blackwell family by 1.5x in May 2025; NVIDIA has brought the Blackwell family of chips to mainstream gaming; compared to the Hopper family, the Blackwell family of chips has 40x higher speed and throughput, which is critical in driving down the cost of inference

Our Blackwell ramp, the fastest in our company’s history, drove a 73% year-on-year increase in Data Center revenue. Blackwell contributed nearly 70% of Data Center compute revenue in the quarter with the transition from Hopper nearly complete. The introduction of GB200 NVL was a fundamental architectural change to enable data center-scale workloads and to achieve the lowest cost per inference token. While these systems are complex to build, we have seen a significant improvement in manufacturing yields, and rack shipments are moving at strong rates to end customers. GB200 NVL racks are now generally available for model builders, enterprises and sovereign customers to develop and deploy AI. On average, major hyperscalers are each deploying nearly 1,000 NVL72 racks or 72,000 Blackwell GPUs per week and are on track to further ramp output this quarter. Microsoft, for example, has already deployed tens of thousands of Blackwell GPUs and is expected to ramp to hundreds of thousands of GB200s with OpenAI as one of its key customers…

…Sampling of GB300 systems began earlier this month at the major CSPs, and we expect production shipments to commence later this quarter. GB300 will leverage the same architecture, same physical footprint and the same electrical and mechanical specifications as GB200. The GB300 drop-in design will allow CSPs to seamlessly transition their systems and manufacturing used for GB200 while maintaining high yields…

…While Blackwell is still early in its life cycle, software optimizations have already improved its performance by 1.5x in the last month alone…

…This past quarter, we brought Blackwell architecture to mainstream gaming with its launch of GeForce RTX 5060 and 5060 Ti, starting at just $299. The RTX 5060 also debuted in laptops, starting at $1,099. These systems doubled the frame rate and slashed latency. These GeForce RTX 5060 and 5060 Ti desktop GPUs and laptops are now available…

…Compared to Hopper, Grace Blackwell is some 40x higher speed and throughput. And so this is going to be a huge, huge benefit in driving down the cost while improving the quality of response with excellent quality of service at the same time.
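That 40x claim is ultimately about inference economics, so a rough back-of-the-envelope helps show why throughput, not hardware price, dominates cost per token. All numbers below are invented placeholders purely to show the arithmetic, not NVIDIA’s figures.

```python
# Back-of-the-envelope: how a throughput multiple translates into cost per
# million tokens. All inputs are illustrative placeholders, not NVIDIA data.
def cost_per_million_tokens(rack_cost_per_hour: float, tokens_per_second: float) -> float:
    tokens_per_hour = tokens_per_second * 3600
    return rack_cost_per_hour / tokens_per_hour * 1_000_000

hopper = cost_per_million_tokens(rack_cost_per_hour=100.0, tokens_per_second=10_000)
# Assume ~40x the throughput at, say, 3x the hourly running cost.
blackwell = cost_per_million_tokens(rack_cost_per_hour=300.0, tokens_per_second=400_000)

print(f"Hopper-era:    ${hopper:.2f} per million tokens")     # ~$2.78
print(f"Blackwell-era: ${blackwell:.2f} per million tokens")  # ~$0.21
# Even at 3x the hourly cost, 40x throughput cuts cost per token by ~13x.
```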

NVIDIA Dynamo can increase the AI inference throughput of Blackwell NVL72 by 30x for AI reasoning models; Capital One reduced its AI chatbot’s latency by 5x with Dynamo; in the latest MLPerf inference results, GB200 NVL72 delivered up to 30x higher inference throughput than NVIDIA’s 8-GPU H200 submission

NVIDIA Dynamo on Blackwell NVL72 turbocharges AI inference throughput by 30x for the new reasoning models sweeping the industry. Developer engagements increased, with adoption ranging from LLM providers such as Perplexity to financial services institutions such as Capital One, who reduced agentic chatbot latency by 5x with Dynamo.

…In the latest MLPerf inference results, we submitted our first results using GB200 NVL72, delivering up to 30x higher inference throughput compared to our 8-GPU H200 submission on the challenging Llama 3.1 benchmark. This feat was achieved through a combination of tripling the performance per GPU as well as 9x more GPUs all connected on a single NVLink domain.

NVIDIA’s CUDA software ecosystem has improved the inference performance of the Hopper family of chips by 4x over 2 years

We increased the inference performance of Hopper by 4x over 2 years. This is the benefit of NVIDIA’s programmable CUDA architecture and rich ecosystem.

There were nearly 100 NVIDIA-powered AI factories in flight in 2025 Q1, up 2-fold year-on-year; the number of GPUs in each AI factory also doubled from a year ago; management has line of sight to tens of gigawatts of AI data center projects requiring NVIDIA AI infrastructure; there are many more AI factories that have yet to be announced

The pace and scale of AI factory deployments are accelerating with nearly 100 NVIDIA-powered AI factories in flight this quarter, a twofold increase year-over-year, with the average number of GPUs powering each factory also doubling in the same period…

…We have a line of sight to projects requiring tens of gigawatts of NVIDIA AI infrastructure in the not-too-distant future…

…In the remarks, Colette mentioned there’s some 100 AI factories being built. There’s a whole bunch that haven’t been announced.

NVIDIA’s management sees AI agents as a new digital workforce that can handle simple as well as very complex tasks; management has used the Llama model architecture to build the Llama Nemotron family of open reasoning models for agentic AI; the Nemotron models are available as NVIDIA inference microservices (NIMs); management has improved the accuracy and inference speed of the Nemotron models by 20% and 5x, respectively; large enterprises including Accenture and Microsoft are using Nemotron

We envision AI agents as a new digital workforce capable of handling tasks ranging from customer service to complex decision-making processes. We introduced the Llama Nemotron family of open reasoning models designed to supercharge agentic AI platforms for enterprises. Built on the Llama architecture, these models are available as NIMs, or NVIDIA inference microservices, with multiple sizes to meet diverse deployment needs. Our post-training enhancements have yielded a 20% accuracy boost and a 5x increase in inference speed. Leading platform companies, including Accenture, Cadence, Deloitte, and Microsoft are transforming work with our reasoning models.

Cisco used NVIDIA NeMo microservices to improve its code assistant’s accuracy by 40% and its response time by 10x; NASDAQ used NVIDIA NeMo to improve the accuracy and response time of its AI platform’s search capabilities by 30% each; Shell used NVIDIA NeMo to improve its custom LLM’s accuracy by 30%, with NeMo’s parallelism techniques accelerating model training time by 20%

NVIDIA NeMo microservices are generally available across industries that are being leveraged by leading enterprises to build, optimize and scale AI applications. With NeMo, Cisco increased model accuracy by 40% and improved response time by 10x in its code assistant. NASDAQ realized a 30% improvement in accuracy and response time in its AI platform’s search capabilities. And Shell’s custom LLM achieved a 30% increase in accuracy when trained with NVIDIA NeMo. NeMo’s parallelism techniques accelerated model training time by 20% when compared to other frameworks.

Yum! Brands will use NVIDIA AI in 500 of its restaurants this year, before expanding to 61,000 restaurants over time, to improve operations; cybersecurity companies such as CrowdStrike are using NVIDIA AI for agentic workflows; CrowdStrike achieved 2x faster detection triage with 50% less compute cost through NVIDIA AI

We also announced a partnership with Yum! Brands, the world’s largest restaurant company to bring NVIDIA AI to 500 of its restaurants this year and expanding to 61,000 restaurants over time to streamline order-taking, optimize operations and enhance service across its restaurants. For AI-powered cybersecurity, leading companies like Check Point, CrowdStrike and Palo Alto Networks are using NVIDIA’s AI security and software stack to build, optimize and secure agentic workflows, with CrowdStrike realizing 2x faster detection triage with 50% less compute cost.

NVIDIA’s networking revenue increased sequentially in 2025 Q1; NVLink 72 offers 14x the bandwidth of PCIe Gen 5; NVLink 72 can carry 130 terabytes per second of bandwidth in a single rack (the world’s peak internet traffic is also around 130 terabytes per second); NVLink shipments in 2025 Q1 exceeded $1 billion; NVIDIA recently announced NVLink Fusion, which (1) allows hyperscalers to connect semi-custom CPUs and accelerators to NVIDIA racks, and (2) allows ASIC and CPU providers to connect to NVIDIA racks; management thinks Spectrum-X (NVIDIA’s Ethernet networking solution) offers the highest throughput and lowest latency networking solution for AI; Spectrum-X had strong sequential and year-on-year growth; Spectrum-X is widely adopted by major CSPs and consumer internet companies; Google Cloud and Meta became Spectrum-X customers in 2025 Q1; NVIDIA has introduced silicon photonic switches to Spectrum-X and Quantum-X, which increase an AI factory’s power efficiency by 3.5x, network resiliency by 10x, and time-to-market by 1.3x; management sees NVIDIA as having 3, maybe 4, networking platforms right now; latency matters a lot in AI, so achieving low latency in AI networking is important; Spectrum-X has improved the utilisation of Ethernet in AI clusters from as low as 50% to as high as 85%-90%

Sequential growth in networking resumed in Q1 with revenue up 64% quarter-over-quarter to $5 billion. Our customers continue to leverage our platform to efficiently scale up and scale out AI factory workloads. 

We created the world’s fastest switch, NVLink, for scale up. Our NVLink compute fabric in its fifth generation offers 14x the bandwidth of PCIe Gen 5. NVLink 72 carries 130 terabytes per second of bandwidth in a single rack, equivalent to the entirety of the world’s peak Internet traffic. NVLink is a new growth vector and is off to a great start with Q1 shipments exceeding $1 billion.

At COMPUTEX, we announced NVLink Fusion. Hyperscale customers can now build semi-custom CPUs and accelerators that connect directly to the NVIDIA platform with NVLink. We are now enabling key partners, including ASIC providers such as MediaTek, Marvell, Alchip Technologies and Astera Labs as well as CPU suppliers such as Fujitsu and Qualcomm, to leverage NVLink Fusion to connect our respective ecosystems.

For scale out, our enhanced Ethernet offerings deliver the highest throughput, lowest latency networking for AI. Spectrum-X posted strong sequential and year-on-year growth and is now annualizing over $8 billion in revenue. Adoption is widespread across major CSPs and consumer Internet companies, including CoreWeave, Microsoft Azure, Oracle Cloud and xAI. This quarter, we added Google Cloud and Meta to the growing list of Spectrum-X customers. We introduced Spectrum-X and Quantum-X silicon photonics switches, featuring the world’s most advanced co-packaged optics. These platforms will enable next-level AI factory scaling to millions of GPUs by increasing power efficiency by 3.5x and network resiliency by 10x, while accelerating customer time to market by 1.3x…

…We now have 3 networking platforms, maybe 4. The first one is the scale-up platform to turn a computer into a much larger computer. Scaling up is incredibly hard to do. Scaling out is easier to do, but scaling up is hard to do. And that platform is called NVLink… In addition to InfiniBand, we also have Spectrum-X… the last one is BlueField, which is our control plane…

…In the case of AI, you have a lot of computers working together. And the traffic of AI is insanely bursty. Latency matters a lot because the AI is thinking and it wants to get work done as quickly as possible, and you’ve got a whole bunch of nodes working together…

…We enhanced Ethernet, added capabilities like extremely low latency, congestion control, adaptive routing, the type of technologies that were available only in InfiniBand to Ethernet. And as a result, we improved the utilization of Ethernet in these clusters, these clusters are gigantic, from as low as 50% to as high as 85%, 90%. And so the difference is, if you had a cluster that’s $10 billion, and you improved its effectiveness by 40%, that’s worth $4 billion. It’s incredible. And so Spectrum-X has been really, quite frankly, a home run.
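Both headline numbers in this section are simple arithmetic, and they check out. The sketch below reproduces the utilization math behind the “$4 billion” remark and the 130 TB/s rack bandwidth figure, taking NVLink 5’s published 1.8 TB/s per GPU as the only outside input.

```python
# Reproducing the two pieces of arithmetic in the quotes above.

# 1. Spectrum-X: value of improved Ethernet utilization on a giant cluster.
cluster_cost = 10e9                    # "$10 billion" cluster, per the quote
util_before, util_after = 0.50, 0.90   # "from as low as 50% to ... 90%"
value_unlocked = cluster_cost * (util_after - util_before)
print(f"Utilization gain worth ${value_unlocked / 1e9:.0f}B")      # -> $4B

# 2. NVLink 72: aggregate scale-up bandwidth in one rack.
# NVLink 5 provides 1.8 TB/s per GPU, roughly 14x a PCIe Gen 5 x16 slot.
gpus_per_rack = 72
per_gpu_tb_s = 1.8
print(f"Rack bandwidth ~{gpus_per_rack * per_gpu_tb_s:.0f} TB/s")  # -> ~130 TB/s
```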

NVIDIA’s GeForce is the largest AI personal computing footprint for developers; NVIDIA added AI laptop models in 2025 Q1 that can run Microsoft’s Copilot+; NVIDIA’s DGX Spark and DGX Station deliver 1 petaflop and 20 petaflops, respectively, of AI compute in a desktop form factor; DGX Spark and DGX Station will be available later in 2025

With a 100 million user installed base, GeForce represents the largest footprint for PC developers. This quarter, we added to our AI PC laptop offerings, including models capable of running Microsoft’s Copilot+…

…DGX Spark delivers up to 1 petaflop of AI compute while DGX Station offers an incredible 20 petaflops and is powered by the GB300 Superchip. DGX Spark will be available in calendar Q3 and DGX Station later this year.

NVIDIA’s Omniverse is being adopted even more widely by leading software companies; TSMC used Omniverse to save months of work by designing fabs virtually; Foxconn used Omniverse to accelerate thermal simulations by 150x; Pegatron used Omniverse to reduce assembly line defect rates by 67%; GE Healthcare is using Omniverse to develop robotic imaging and surgery systems.

We have deepened Omniverse’s integration and adoption into some of the world’s leading software platforms, including Databricks, SAP and Schneider Electric. New Omniverse Blueprints such as Mega, for at-scale robotic fleet management, are being leveraged by KION Group, Pegatron, Accenture and other leading companies to enhance industrial operations. At COMPUTEX, we showcased Omniverse’s great traction with technology manufacturing leaders, including TSMC, Quanta, Foxconn and Pegatron. Using Omniverse, TSMC saves months of work by designing fabs virtually, Foxconn accelerates thermal simulations by 150x, and Pegatron reduced assembly line defect rates by 67%…

…GE Healthcare is using the new NVIDIA Isaac platform for health care simulation, built on NVIDIA Omniverse, and using NVIDIA Cosmos to speed the development of robotic imaging and surgery systems.

NVIDIA’s automotive revenue had strong growth in 2025 Q1, driven partly by the ramp of self-driving technologies; NVIDIA is partnering with GM (General Motors) to build next-gen vehicles with NVIDIA AI, simulation, and accelerated computing; NVIDIA is now in production with its full-stack solution for Mercedes-Benz

With our Automotive group. Revenue was $567 million, down 1% sequentially but up 72% year-on-year. Year-on-year growth was driven by the ramp of self-driving across a number of customers and robust end demand for NEVs. We are partnering with GM to build the next-gen vehicles, factories and robots using NVIDIA AI, simulation and accelerated computing. And we are now in production with our full-stack solution for Mercedes-Benz starting with the new CLA, hitting roads in the next few months. 

NVIDIA recently announced Isaac GROOT N1, the world’s first open fully customizable foundation model for humanoid robots; NVIDIA recently launched Cosmos World Foundation models; leading robotics companies have begun using Isaac and Cosmos; management is very bullish on the development of robotics and thinks future manufacturing plants in the US will deeply incorporate robotics

We announced Isaac GR00T N1, the world’s first open, fully customizable foundation model for humanoid robots, enabling generalized reasoning and skill development. We also launched new open NVIDIA Cosmos World Foundation models. Leading companies, including 1X, Agility Robotics, Figure AI, Uber and Waabi, have begun integrating Cosmos into their operations for synthetic data generation, while Agility Robotics, Boston Dynamics, and XPENG Robotics are harnessing Isaac’s simulation to advance their humanoid efforts…

…The era of robotics is here, billions of robots, hundreds of millions of autonomous vehicles and hundreds of thousands of robotic factories and warehouses will be developed…

…Regarding onshore manufacturing, President Trump has outlined a bold vision to reshore advanced manufacturing, create jobs and strengthen national security. Future plants will be highly computerized and built on robotics. We share this vision.

NVIDIA’s management sees reasoning models as being compute-intensive and requiring hundreds to thousands times more tokens per task than one-shot inference models; management thinks reasoning models are driving a step-function surge in inference demand

Reasoning AI enables step-by-step problem-solving, planning and tool use, turning models into intelligent agents. Reasoning is compute-intensive, requiring hundreds to thousands of times more tokens per task than previous one-shot inference. Reasoning models are driving a step-function surge in inference demand.
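The compute implication is easy to quantify: hold the number of tasks fixed, and token (and therefore GPU) demand scales linearly with tokens per task. A toy calculation, with invented task counts and a conservative multiplier from the quoted range:

```python
# Toy illustration of why reasoning models drive a step-function in inference
# demand: demand scales linearly with tokens per task. Numbers are invented.
tasks_per_day = 1_000_000
one_shot_tokens_per_task = 500
reasoning_multiplier = 100   # low end of "hundreds to thousands of times more"

one_shot_demand = tasks_per_day * one_shot_tokens_per_task
reasoning_demand = one_shot_demand * reasoning_multiplier

print(f"One-shot:  {one_shot_demand / 1e9:.1f}B tokens/day")   # 0.5B
print(f"Reasoning: {reasoning_demand / 1e9:.1f}B tokens/day")  # 50.0B
```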

NVIDIA’s management sees AI scaling laws as being firmly intact, with inference now being a new driver

AI scaling laws remain firmly intact, not only for training, but now inference too requires massive scale compute.

TSMC’s new plants in the USA are for manufacturing of NVIDIA’s chips; other important chip-manufacturing partners of NVIDIA, besides TSMC, are also investing in US manufacturing; NVIDIA has made substantial long-term purchase commitments for US-made chips; management’s goal for US-manufacturing of AI chips is “From chip to supercomputer, built in America, within a year”; management sees the USA as always being NVIDIA’s largest market and home to the largest installed base of NVIDIA’s infrastructure

TSMC is building 6 fabs and 2 advanced packaging plants in Arizona to make chips for NVIDIA. Process qualification is underway with volume production expected by year-end. SPIL and Amkor are also investing in Arizona, constructing packaging, assembly and test facilities. In Houston, we’re partnering with Foxconn to construct a 1 million square foot factory to build AI supercomputers. Wistron is building a similar plant in Fort Worth, Texas. To encourage and support these investments, we’ve made substantial long-term purchase commitments, a deep investment in America’s AI manufacturing future. Our goal: From chip to supercomputer, built in America, within a year. Each GB200 NVL72 rack contains 1.2 million components and weighs nearly 2 tons. No one has produced supercomputers on this scale. Our partners are doing an extraordinary job…

…The U.S. will always be NVIDIA’s largest market and home to the largest installed base of our infrastructure.

NVIDIA’s management is seeing the US government, under the Trump administration, changing its tune on AI diffusion rules; the US government now has a new policy to promote US AI technology with trusted partners; NVIDIA’s management is seeing the US government as wanting US AI technology to lead

On AI Diffusion Rule, President Trump rescinded the AI Diffusion Rule, calling it counterproductive, and proposed a new policy to promote U.S. AI tech with trusted partners. On his Middle East tour, he announced historic investments. I was honored to join him in announcing a 500-megawatt AI infrastructure project in Saudi Arabia and a 5-gigawatt AI campus in the U.A.E. President Trump wants U.S. tech to lead. The deals he announced are wins for America, creating jobs, advancing infrastructure, generating tax revenue and reducing the U.S. trade deficit.

NVIDIA’s management thinks every country now sees AI as a core technology for the next industrial revolution

Every nation now sees AI as core to the next industrial revolution, a new industry that produces intelligence and essential infrastructure for every economy. Countries are racing to build national AI platforms to elevate their digital capabilities. At COMPUTEX, we announced Taiwan’s first AI factory in partnership with Foxconn and the Taiwan government. Last week, I was in Sweden to launch its first national AI infrastructure. Japan, Korea, India, Canada, France, the U.K., Germany, Italy, Spain and more are now building national AI factories to empower start-ups, industries and societies.

NVIDIA’s management is seeing plenty of enterprise-data living on-premises, so NVIDIA is moving AI into enterprises instead of waiting for enterprises to shift to the cloud

We’re going to see AI go into enterprise, which is on-prem. Because so much of the data is still on-prem, access control is really important, it’s really hard to move all of — every company’s data into the cloud. And so we’re going to move AI into the enterprise. And you saw that we announced a couple of really exciting new products: our RTX Pro enterprise AI server that runs everything enterprise and AI; our DGX Spark and DGX Station, which is designed for developers who want to work on-prem. And so enterprise AI is just taking off.

NVIDIA’s management thinks 6G technology will be built on AI

Telcos, today, a lot of the telco infrastructure will be, in the future, software-defined and built on AI. And so 6G is going to be built on AI.

NVIDIA’s management thinks agentic AI has really dispelled a lot of worries people had over AI hallucinations

AI really busted through. As for concerns about hallucination or AI’s ability to really solve problems, I think a lot of people are crossing that barrier and realizing how incredibly effective agentic AI and reasoning AI are.

Okta (NASDAQ: OKTA)

Okta’s new products, including Identity Threat Protection with Okta AI, had strong contribution in 2025 Q1

New products such as Okta Identity Governance, Okta Privileged Access, Okta Device Access, Fine Grained Authorization, Identity Security Posture Management and Identity Threat Protection with Okta AI had another quarter of strong contribution.

Okta’s latest advancements help organisations protect AI systems; Okta has been protecting nonhuman identities, or NHIs, for a long time, but NHIs have boomed in recent times with the rise of AI agents; in 2024, only 15% of organisations were confident of their ability to secure NHIs, and Okta has products that help solve this problem; Okta’s products to secure NHIs also help secure human identities; Okta’s products to secure NHIs ensure AI interactions remain governed under Zero Trust policies; Okta’s Auth0 platform now has Auth for GenAI, which solves the problem of AI agents creating unsecured NHIs; Auth for GenAI has had a successful developer preview, and general availability (GA) is expected in the coming months; Auth for GenAI currently uses a usage-based pricing model; Auth for GenAI is useful for both large and small companies; management is seeing a lot of interest in the Auth for GenAI developer preview from small companies; management thinks the problem of NHIs will become even more prominent as more and more AI projects enter production-mode; management thinks Okta will win with NHIs in the AI age because it is the only company with a complete solution

Our newest advancements help organizations protect their employees, customers and AI systems. The key themes at Showcase this year were: one, how Okta is protecting nonhuman identities, or NHIs; and two, how Auth0 is helping developers build secure AI agents. NHIs have been around for a long time. What’s new is how the recent boom in AI agents has resulted in exponential growth in NHIs. NHIs include service accounts, shared accounts, machines and tokens. NHIs often operate outside traditional identity governance frameworks and can leave organizations vulnerable to security risks. In fact, last year, only 15% of organizations said they are confident in their ability to secure NHIs. Okta addresses this problem with Identity Security Posture Management and Okta Privileged Access. By combining these 2 products, customers can discover, secure and manage NHIs with an end-to-end secure identity fabric to secure both human identities and NHIs across a single system. This integrated approach protects non-federated and privileged identities, ensuring AI-driven automation and machine-to-machine interactions remain governed under Zero Trust policies while continuously monitoring NHI risks and vulnerabilities across the enterprise…

…Auth for GenAI addresses the problem of AI agents creating unsecured NHIs by enabling developers to integrate secure identity into their Gen AI applications. This helps ensure that AI agents have built-in authentication, fine-grained authorization, async workflows and secure API access. Auth for GenAI secures AI agents at every step without slowing them down, providing developers with the trusted tools and flexibility they need. The product has had a successful developer preview, and we expect the GA launch this summer…

…Auth for GenAI is a usage-based pricing model. So it’s the number of requests to Auth0. So it’s monetized in a similar way to the way Auth0 is now…

…I think that space is — there are big companies building things that could be taking advantage of Auth for GenAI, but it’s also a lot of smaller companies, too. Every small company start-ups trying to innovate around AI agents. And I know a lot of the interest in the developer preview around Auth for GenAI has been from small companies…

…When you look at our Identity Security Posture Management, its ability to detect these NHI and you look at our privileged solution and our general access management solution, which allows companies to secure those nonhuman identities, it’s very relevant for a company even if they’re just POC in these agents. And they’re in a proof of concept. They’re not really in production. It just puts us — shines a light on this problem as they think about moving to production. So that’s a very important aspect of this dynamic in the market. Now we do think as more of these projects move into production, it’s really, really going to force this issue even more. And so I think we’re going to see further acceleration as more and more companies move into production…

…[Question] Follow up on, as you say, the nonhuman side of the business. And the broader question is why do you think Okta will win in that environment? And I think a lot of investors assume it is going to be a big market. Pricing may be different. But why does Okta win versus when we were at RSA talking to CyberArk or SailPoint or Saviynt, whoever it is, all think that they’re in a position to win, particularly since our take, it sounds like governance will be part of identity with agents, more so than, say, just access.

[Answer] I think today, it’s because we’re the only one with a complete solution. And we have this breadth of products that can help solve this problem from detection to vaulting to governance workflows. And I’m talking specifically about NHIs. But that’s only kind of the entry to the race. Now we have to execute well, and we have to keep innovating.
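For readers unfamiliar with the mechanics, securing a nonhuman identity typically means the agent authenticates as its own first-class identity rather than borrowing a human’s credentials. Below is a minimal sketch of the standard OAuth 2.0 client-credentials flow that products in this space, Auth for GenAI included, build on; the tenant domain, audience, and downstream API are placeholders, not Okta’s actual configuration.

```python
# Minimal sketch: an AI agent authenticating as its own machine identity via
# the standard OAuth 2.0 client-credentials flow. The tenant domain, audience
# and downstream API below are placeholders.
import os
import requests

TOKEN_URL = "https://your-tenant.auth0.com/oauth/token"  # placeholder tenant

def get_agent_token() -> str:
    resp = requests.post(TOKEN_URL, json={
        "grant_type": "client_credentials",
        "client_id": os.environ["AGENT_CLIENT_ID"],      # the agent's own NHI
        "client_secret": os.environ["AGENT_CLIENT_SECRET"],
        "audience": "https://api.example.com/orders",    # scoped to one API
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

# The agent calls the downstream API with a short-lived token tied to its own
# identity, so access can be monitored, governed and revoked like any other.
token = get_agent_token()
orders = requests.get(
    "https://api.example.com/orders",
    headers={"Authorization": f"Bearer {token}"},
)
print(orders.status_code)
```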

Adversaries are now conducting IT contracting scams with AI, and Okta has recommendations to counter these threats

I encourage you to check out a blog post we shared that highlighted Okta threat intelligence’s in-depth research on how adversaries are conducting IT contracting scams using AI and our recommendations to help mitigate these threats.

Okta’s management has been having conversations with customers that are moving AI projects from POC (proof-of-concept) to production, and how Okta can help them; management is seeing that only the most advanced enterprises are in production with AI projects right now

There’s just the conversations we’re having with customers about how important what we do is to them and how much they’re investing in everything from the traditional things we’ve helped them with cloud transformation and of course, security. But now with what’s going on with all these AI projects and moving from POCs to production and how we can help with that and how we can help them build Auth for GenAI applications…

…Only the most advanced forward-leaning enterprises are actually doing production AI right now and use cases at scale where they’re seeing tangible business benefit at scale in production.

Okta’s management thinks that MCP (model context protocol) is a big deal for AI, but also recognises that it’s still very early; management sees MCP as a way for AI agents to use technology resources; management is very excited about the possibility of adding OAuth to MCP; the pricing model for OAuth within MCP is to-be-determined (TBD)

The MCP is a big deal, as you all know. And the way I think about it is it’s basically a way to — it’s almost like a new Internet. It’s a new way to communicate with tools and technology in a way that these LLMs and these emerging set of browsers and user agents on the AI Internet can use all these resources. And that’s very exciting. People don’t — people forget that if you look at the internals of the web, HTTP, the tag for a browser is actually called a user agent and it uses HTTP to connect to web resources. Well, MCP could be a new kind of Internet where the clients are actually AI agents, not user agents and they can talk to these MCP servers. So it’s very exciting from a shifting of the industry and a shifting the capabilities of what these kinds of software systems could do. But it’s also very early. We’re talking about a protocol that was announced, I think, 6 weeks ago. And everyone’s running around, adding MCP servers to their capabilities and developers are experimenting with what this means. We’re very excited about the ability to work with the standards bodies and the community to add actual OAuth to the MCP, so authentication and OAuth protocol to the MCP protocol and handshake there…

…The way MCP will be monetized and how — if we add product capabilities to extend what an authentication handshake is to an MCP server, that’s — we haven’t built that yet, and we haven’t released that yet. So that will be TBD there.
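To anchor the analogy McKinnon draws, it helps to see the shape of the thing: MCP carries JSON-RPC between an AI-agent client and a tool server, and the handshake Okta wants standardized attaches an OAuth access token to that exchange. The sketch below is a rough illustration only; the server URL and tool name are hypothetical, and MCP’s authorization details were still being worked out in the standards bodies at the time of these remarks.

```python
# Sketch of an MCP-style interaction: the client is an AI agent (not a browser
# "user agent"), the transport carries JSON-RPC, and an OAuth access token
# authenticates the call. Server URL, tool name and token are hypothetical.
import requests

MCP_SERVER = "https://mcp.example.com/rpc"  # hypothetical MCP server endpoint
ACCESS_TOKEN = "..."                        # obtained via an OAuth flow

call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",                 # MCP's method for invoking a tool
    "params": {
        "name": "search_knowledge_base",    # hypothetical tool on the server
        "arguments": {"query": "Q1 identity incidents"},
    },
}

resp = requests.post(
    MCP_SERVER,
    json=call,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
print(resp.json())
```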

A lot of large global companies are still using on-premise identity technologies and this is an opportunity for Okta, especially when these companies want to take advantage of AI, as cloud-migration is necessary for AI

We still have tons of room to grow inside the Global 2000, and really the top 5,000 biggest companies and organizations in the world are a tremendous opportunity for us. A lot of those organizations have invested a lot in on-premise technology and a lot in on-premise identity, with big identity teams that they spend a lot of money on, a lot of cost there. And those companies are dealing with all the change around cloud migration, which has been going on for years and years and years, and the focus on security. And now, with all of them trying to take advantage of the AI revolution, there’s another catalyst for them to change and upgrade their identity system.

Okta’s management does not see AI-agent apps as being a big accelerant for Okta’s Customer Identity business, but the overall trend is still towards buying instead of building when it comes to customer identity solutions

[Question] When you first started talking about the customer identity opportunity, I think to us, it kind of made a lot of sense why your customers would choose to buy this stuff instead of building it out of the box. That was, I guess, more for the traditional SaaS world. So what I’m trying to understand is there seems to be a lot of newfound excitement on the customer identity side as we head into this agentic world. Is there anything about a future of agent-based apps that is going to make it even more of a no-brainer to go with buying this out of the box from you guys on the customer identity side instead of trying to develop it themselves compared to maybe the old school SaaS world?

[Answer] In general, the trend is toward more buy, less build. And I think AI probably is — I’m not sure it’s a huge accelerant of that. I think it’s probably on trend, just because the solutions are getting better. If you go 10 years ago, there weren’t really good customer identity solutions that were easy to use, reliable and scalable. Then Auth0 came along with an amazing developer experience and was easy to start using and then upsell over time. And that continues. And I think moving to the world of AI and agents and embedding customer identity inside of those apps, I don’t know if it’s materially different, but it’s on a trend line toward buying these solutions versus building.

Okta’s management is building a whole set of capabilities and products for AI agents that are not released or announced yet

This whole agentic revolution and agents working on your behalf, I think that’s a whole other set of capabilities and products that we’re thinking about and building, and we haven’t released and announced them yet. But there’s a whole layer on top of what we talk about service accounts and tokens and API access. That’s actually tracking the agent and knowing what that means and knowing what security posture you want and what governance, life cycles, et cetera, et cetera.

Salesforce (NYSE: CRM)

Salesforce’s management thinks Informatica will enhance Salesforce’s data advantage in AI; Informatica is important in helping Salesforce customers harmonise their data for AI applications

If you can imagine this idea that you want to deploy all of this incredible agentic data, well, you’ve got to get your data right. And Informatica combined with Salesforce’s Data Cloud, combined with Tableau, combined with other key assets that we’re going to bring to bear, this is what is creating this incredible data business…

…Today, for our customers, they all want to get there. They all have the hunger to do that. They all want to have this great success, but it takes some time for them to start to build their data sets. And that is why the Informatica acquisition is so important, because they all need to not only translate their data to build their master data management, they need to harmonize their data. They need to do all these things. And we see that and we go into these customers and we’re like, “Let’s go.” And they’re like, “We can do some, but we can’t do all.” And the reason they can’t do all is because their whole enterprise data set is not fully harmonized, which is why…

In enterprise AI, especially agentic AI, preparation of data sets is very important; management sees the existence of data-silos in enterprises as a key obstacle in enterprises more widely adopting AI

I think everyone who is going through an AI transformation, every business, including mine, we’re going to talk about some great businesses that are going through transformations whether it’s Pepsi or Falabella or OpenTable, et cetera, but every AI transformation is a data transformation. And you don’t see it on the consumer side because when you’re using a consumer AI, you have to remember that the data set has kind of been prefabricated for you. That is the training data and everything is put together. It’s an amalgamated data set applied to this consumer AI model. That’s not how an enterprise AI really works. You have to have your enterprise data together to get the result that you want…

…If you can imagine this idea that you want to deploy all of this incredible agentic data, well, you’ve got to get your data right…

…The enterprise has data sets that are highly controlled, highly governed and highly secured. And these data sets are everything from your customer data set to your financial data set to your HR data set, and the reality is that not all enterprise data is available to all users. Like, for example, you work, Kash, at Goldman Sachs. You can’t see all the Goldman Sachs customer information. There’s regulations around that. You can’t see all the employees’ salary information. You don’t have access to all the Goldman Sachs financial data. So when you’re using these models, they’re not just giving you access to all of this stuff. Are they, Kash? No, they have to be tightly controlled. But if I’m a Goldman Sachs customer, and I want to come in and I want to ask about my account balance or information about my — who I am and what my portfolio looks like or what my opportunities are or even if I’m a Goldman Sachs employee and I want information on — the general information on benefits or how to enable myself or how to sell products more efficiently to customers, all of those things could easily happen right now with the agentic platform. However, there’s a lot of things that could not happen as I kind of just amplified, and that is kind of the constraint.

Salesforce has closed 8,000 Agentforce deals since launch, of which half are paid; Agentforce has handled over 750,000 requests on Salesforce’s help site, lowering cases by 7% year-on-year; 800 customers are already in production with Agentforce; management has launched hundreds of prebuilt Agentforce templates; management has introduced the new Flex Credits consumption-based pricing model for Agentforce after customer feedback; management will add FedRAMP High authorisation for Agentforce in June 2025; Agentforce is delivering AI agents to both employees and consumers; management thinks Salesforce is already delivering more agents than any other company in the world; Agentforce reached $100 million in AOV (annual order value) in only a few months and is the fastest product to do so in Salesforce’s history, even without being fully deployed; 30% of Agentforce’s bookings in 2025 Q1 (FY2026 Q1) came from customers increasing consumption; Salesforce’s internal use of Agentforce has already reduced its hiring needs, driving $50 million in savings; Agentforce is growing faster than any product management has seen before; Agentforce helps pull customers into other Salesforce products; Agentforce deals in 2025 Q1 (FY2026 Q1) included an average of 4 other clouds; Salesforce’s top 6 deals in 2025 Q1 (FY2026 Q1), which have an average TCV (total contract value) of $34 million, mostly have Agentforce and Data Cloud as anchors

Salesforce has closed over 8,000 deals since launching Agentforce, of which half are paid. On help.salesforce.com, Agentforce has handled over 750,000 requests, cutting case volume by 7% Y/Y…

…We’ve got 800 customers already in production with Agentforce, including amazing companies like ENGIE, which has been an incredible success story, with incredible velocity and conversations at OpenTable, Finnair, Grupo Globo and Falabella…

…We have launched hundreds of prebuilt Agentforce templates for different industries, roles, tasks, making it faster and easier for customers to deploy Agentforce…

…Earlier this month, we introduced our Flex Credits. It’s a new consumption-based pricing model. That’s how we’ve tuned our pricing after a huge amount of customer feedback…

…Next month, we’re going to add FedRAMP High authorization for Agentforce, so the U.S. public sector can also experience this incredible success…

…Agentforce does agentic augmentation for employees. Agentforce is also doing it directly to consumers. I think that we are really delivering at this point probably more agents and more conversations and more capability to more enterprises than any other vendor in the world. I really see us as the #1 agent platform already…

…It’s only been a few months. In fact, Agentforce reached more than $100 million in AOV. It’s much faster than any product in our history, and we’re not even fully deployed on all geographies, currencies or languages…

…Even though Agentforce is only in its second quarter, 30% of its bookings also came from customers increasing their consumption…

…In customer support, Agentforce has handled 750,000 cases and is on track to surpass 1 million help portal requests this quarter, cutting case volume by 7% year-over-year. As a result, we have reduced some of our hiring needs, enabling us to rebalance and redeploy 500 customer support employees to higher impact data plus AI roles by year-end, driving $50 million in savings…

…I don’t think the word agent was even on our earnings call a year ago. Maybe it wasn’t even on our earnings call 9 months ago. But it started to appear, and when we released the product end of October, it’s November, December, January, February, March, April, here we are in May. So just think about in a relatively short period of time, I’ve never seen in my career over 45 years in enterprise software this idea that we now have 8,000 customers, 4,000 of whom are paying, many of them who are at scale deployments where this is working in months. It just makes no sense actually to me…

…When we sell an Agentforce, we’re not just dropping some box off and saying, okay, we sold an Agentforce. We’re pulling all of our clouds in. And I’m sure that you heard like, for example, in the example I think of Pepsi, they have 11 of our clouds. So when we’re pulling in Agentforce, where all the other products are coming along with it…

…We took all the deals, all the Agentforce deals for the quarter. On average, there were 4 other clouds on those deals…

…I look at the top 6, the top 6, which on average, $34 million of TCV on average on each of them. On those 6, 5 of them have Data Cloud as an anchor and also Agentforce as an anchor. The 1 customer that didn’t buy, the top 6 on Data Cloud is because they bought in Q4 a multimillion-dollar deal Data Cloud. They set the data foundation before they went to adding more clouds and Agentforce. On the top 6 on Agentforce, on the top 6 deals, 5 bought Agentforce. The one that didn’t buy is the one that, Srini, you know very well. We are negotiating now the extension to Agentforce.

Data Cloud surpassed 22 trillion records in 2025 Q1 (FY2026 Q1), up 175% year-on-year; nearly 60% of Salesforce’s top 100 deals in 2025 Q1 (FY2026 Q1) included both Data Cloud and AI; 50% of Data Cloud’s new bookings in 2025 Q1 (FY2026 Q1) came from existing customers; Salesforce’s Data Cloud and AI ARR (annual recurring revenue) exceeded $1 billion in 2025 Q1 (FY2026 Q1), up more than 120% year-on-year; Salesforce closed more than 30 net new bookings exceeding $1 million that included both Data Cloud and AI; Salesforce’s top 6 deals in 2025 Q1 (FY2026 Q1), which have an average TCV (total contract value) of $34 million, mostly have Agentforce and Data Cloud as anchors; Salesforce had 3x more Data Cloud deals in 2025 Q1 (FY2026 Q1) compared to a year ago

In this quarter, our Data Cloud, just our Data Cloud surpassed 22 trillion records, up 175% year-over-year. Nearly 60% of our top 100 deals included investments in both Data Cloud and AI…

…50% of Data Cloud’s Q1 new bookings came from existing customers. I think that’s really important because it really speaks to the adoption of the product and the incredible usage by the customers who have it…

…Data Cloud and AI ARR grew more than 120% year-over-year, and it’s more than a $1 billion part of our business…

…In Q1, we closed more than 30 net new annual bookings over $1 million that include both data and AI…

…I look at the top 6, the top 6, which on average, $34 million of TCV on average on each of them. On those 6, 5 of them have Data Cloud as an anchor and also Agentforce as an anchor. The 1 customer that didn’t buy, the top 6 on Data Cloud is because they bought in Q4 a multimillion-dollar deal Data Cloud. They set the data foundation before they went to adding more clouds and Agentforce. On the top 6 on Agentforce, on the top 6 deals, 5 bought Agentforce. The one that didn’t buy is the one that, Srini, you know very well. We are negotiating now the extension to Agentforce…

…We had 3x more Data Cloud deals in Q1 than we had the year before.

Salesforce’s management has the ADAM framework for thinking about agents, data, apps, and metadata for AI; management thinks the ADAM framework is necessary for companies to achieve success with agentic AI; a new Tableau product, named Tableau Next, is an example of Salesforce’s ADAM framework

When I talk about agents and data and apps and metadata, that’s what we really call our ADAM framework. It’s in our experience to see now these 4 elements, the app, the data, the agents and the metadata, that make Salesforce unique, that companies need to achieve the real promise of agentic AI…

…If you were in San Diego, you saw Tableau Next. And what you saw was the DataFam. That’s the Tableau community kind of fully inspired because not only were they looking at Tableau Next, this incredible new product, but what they saw was Tableau, the Tableaus they love. And they also saw an agentic layer, and they saw it deeply integrated into our data cloud and all running on our metadata platform. That’s our ADAM framework, the agents, the data, the apps, the metadata all together…

…In this new agentic AI era, every company is going to say that they have agents. Well, I think every company does say that they have agents. But without these 4 parts of what we call ADAM, the — really the agents, the data, the apps, the metadata framework, you’re just not really able to deliver this complete experience for the enterprise, including delivering digital labor.

Salesforce’s management continues to see Slack as the interface for users to converse with Salesforce’s AI agents; every Slack user gets a digital teammate when Agentforce is deployed in Slack; Salesforce’s own sales agent within Slack is improving the efficiency of Salesforce’s sales teams by saving 44,000 hours of work annually; pairing Data Cloud with the sales agent has led to a significant reduction in lead-routing time from 20 minutes to 19 seconds

Slack is, of course, where I believe you’re going to really begin and end every Agentforce conversation. It’s the conversational interface for managing all of your work across apps, systems, teams. And Service Cloud, Sales Cloud, Tableau Next, any Salesforce app can live inside Slack…

…With Agentforce in Slack, every employee has a digital teammate that can make notes for your meeting, summarize your Slack channels. And you really see like AI taking place on Slack when you look at Slack recap or you look at agents just coming right into your channels to talk to you in real time…

…Our sales agent in Slack is transforming how our teams sell. Our AEs have already logged over 21,000 interactions, simplifying everyday sales activity, saving our teams over 44,000 hours annually. Further, Data Cloud is amplifying that impact, cutting lead routing from 20 minutes to 19 seconds in Slack.

Finnair is using Agentforce for customer service; Agentforce is in thousands of conversations a week with Finnair customers; using Agentforce, Finnair aims to automate 80% of customer service queries and reduce rep onboarding time by 25%; management sees the airline industry as a big opportunity for Agentforce

Finnair is using Agentforce to help manage customer service for 12 million passengers. Agentforce is already having thousands of conversations a week with Finnair customers, and the airline is aiming to automate 80% of customer service queries and reduce new rep onboarding time by 25% with Agentforce…

…We’re talking to so many airlines about how they not only can use all our Customer 360 apps, not just the Data Cloud, not just our meta platform but build this agentic capability around the airline. This is going to be a huge opportunity for that entire industry, which is so customer service obsessed.

Latin American retailer Falabella started using Agentforce in its Colombia business a few months ago, deployed through WhatsApp; Falabella’s Agentforce experience was so successful that what was a low six-figure deal (about $300,000) has become a $1 million deal

Here’s this company that’s pioneering Agentforce just a couple of months ago in their Colombia business. And then it’s so successful, they’re actually deploying it on WhatsApp, which we hadn’t really seen before. And they’re using WhatsApp. The customers are coming in. They’re coming in and, “Hey, what’s my order? What’s going on?” And this what’s my order use case is the main thing that’s driving Falabella, and boom, all of a sudden, they go, “You know what, this is working so well. We’re going all over Latin America,” and what was kind of, I think, a low 6-figure deal. I mean, Miguel is going to have to come in here and tell me, turned into like a $1 million deal overnight…

…Yes, it was $300,000, right, from just Colombia.

OpenTable is using Agentforce, starting with restaurants before deploying it to employees, and now consumers

OpenTable, we’ve been talking about this story for a while, which is [ Glenn ] is doing a great job deploying Agentforce. And he started with the restaurants. Then, he did employees. And now he’s like doing the consumers, and this is an incredible thing that OpenTable has been so successful.

Brazilian media conglomerate Grupo Globo bought Agentforce in 2024 Q4 (FY2025 Q4); Agentforce has since increased Grupo Globo’s customer retention rate by 22%

Another Latin American success is Grupo Globo. The Brazilian media conglomerate purchased Agentforce in Q4. In less than 3 months, Agentforce basically boosted Globo’s retention rate by 22%, driving revenue upgrades, cross selling, converting nonsubscribers.

Large Japanese enterprises are very excited about Agentforce and are using it to build agentic layers around their businesses

We’ve talked about the speed of which Agentforce is gone, but it’s not just a U.S. phenomenon. It’s an international phenomenon. And as I mentioned last week, I was in Japan, and one of our customers in Japan, Fujitsu, is really doing some amazing things. But when I heard at the rate and scale and speed that they want to deploy the product, and their vision in terms of how it can be all encompassing for an agentic layer around the entire company, I really just could not believe it. I really sat with 5 of the largest Japanese companies. And I think somehow every company’s imagination has been captured that they have this idea that they can build an agentic layer around their company.

Salesforce’s management is seeing that the rate of innovation in AI is far exceeding customer adoption

This idea that agents are kind of starting to provision to become digital labor, this is exceeding my expectation that it crosses industries. It’s crossing geographies. And as I said, all of this is really just happening in only 6 months. By the time we get to Dreamforce, which is still another 6 months ahead, I expect another huge massive transformation. We’re starting to cut the code right now on what will be one of the main releases of Dreamforce. And when we look at what will come as the release after Dreamforce, our technology, our product doesn’t look at all like what it looked like just a few months ago. So we’re moving very, very fast. And I think that I really would say this hasn’t really happened too many times in the last 30, 40 years. The rate of innovation far exceeds the rate of customer adoption.

Salesforce’s management thinks that most of the AI models are within 3-6 months of innovation of each other; management thinks the models have not improved a lot in accuracy because they are all trained on the same datasets

When we all are using ChatGPT or Gemini, or You.com or Perplexity or Anthropic or any of these models or an open source model or DeepSeek, okay, all of these models are mostly the same. They’re within 3 to 6 months of innovation of each other. We all know that. And then all these models are trained on mostly the same datasets because there’s only so much data that they can be trained on. Now there’s some synthetic data, but it doesn’t mean very much to a lot of these models. That’s why, by the way, that these models still have not improved a lot of their accuracy in the consumer side.

Salesforce’s management thinks Salesforce has been the best technology company in the world at building an agentic layer around itself; Salesforce’s AI agents will surpass 1 million customer support conversations this quarter, which has driven a dramatic reduction in the number of people needed to handle customer issues; Salesforce is Agentforce’s Customer Zero

What is it going to take to get this transformation to happen, where we have a much bigger agentic wrapper around Goldman Sachs, your company, or around all companies? We’ll look at my company to start. I think we’ve probably done the best of maybe any tech company. We’ve done now — this quarter, we’ll pass through 1 million conversations in customer support. It’s a dramatic reduction in the amount of human beings who have had to get involved to answer customers’ issues. I don’t think any other tech company at scale has delivered this capability. It is a proof point without any doubt that Salesforce has been able to deliver on its vision of digital labor, and Agentforce’s #1, Customer Zero, Salesforce. So we eat our own dog food, and this is amazing.

Salesforce’s management considers proclamations from some AI experts that AI will very soon cause massive job losses in white-collar work to be alarmist given the current state of AI

[Question] The CEO of Anthropic recently commented that AI could wipe out 50% of entry-level white-collar jobs and drive unemployment a lot higher, unfortunately. And since you’ve been very astute and very ahead of the curve on commoditization of LLMs and you’ve been very outspoken on the topic of digital labor, I’m curious just to get your thoughts on that concept.

[Answer] In terms of the amount of white-collar jobs that are going to disappear, you’re all experts at this point in the current generation of AI. You’re using it every day. We’re all using it. It doesn’t matter who I speak to. Probably all of your children, all of your family members are using it, and you can see how it’s impacted. Like people are smarter. They get their medical labs. They ask, “Well, what do you think about this?” But then when you call your doctor, sometimes the doctor goes, “Well, actually, that’s not completely true.” And we’re kind of at this point where it’s very good on some things but not for everything. And because of that, even in the enterprise, while there’s a lot of things that we can do, edit this press release or write me this speech or whatever, but the reality is, oh, you’re probably still going to want to get in there and work on it. And I think we all know that. So look, we’re at an exciting moment in AI, and maybe we’re moving into this world where there’s going to be like these AI prophets and obviously, I’m a huge fan of Dario’s. He’s great, amazing person, incredible company, wonderful. But some of these comments, I think, are alarmist and get a little aggressive in the current form of AI today.

Sea Ltd (NYSE: SE)

Sea’s management thinks AI will help Sea’s business on the consumer-facing side and with internal product improvement; on the consumer-facing side, Sea has used AI to improve search recommendations and advertising efficiency, help sellers create better product descriptions, and help sellers create videos based on images or descriptions of products; management measures the returns of Sea’s AI-related investments through click-through rates and conversion rates; management is seeing that most of Sea’s large AI-related investments on the consumer-facing side have delivered a positive return on investment (ROI); for internal product improvements, management is using AI to filter counterfeit products and detect fraud, among other areas; management measures the ROI of AI investments for internal product improvements through cost savings, and most such AI investments have a positive ROI

For the AI investment, we believe that AI will make a big change to our industry, both from a consumer-facing side and also from our internal product improvement…

…One of the big improvement that we did is on our search recommendations and our ad. So we’re deploying AI solution to help us to target our user a lot more efficient when users search us and when people come to our app, so we can recommend more accurate products to them and also help us to have better efficiency on the ad product. That’s why we can improve the ad take rate over time. Another example is the AIGC production that we can help our seller to create for their product descriptions. We have been increasing the video coverage for our product description a lot over time, and part of that is driven by the — we are enabling the seller to create videos based on the images or based on some of the descriptions. And typically, for this investment, we always have a very clear ROI measurement for any of the investment, as I shared before, whether we are spending our AI resources on better our ads, we’re spending our AI resources on better the product descriptions, we measure the return on investment on this through our click-through rate, measure our investment through our conversion rate. And most of our investments so far, anything in meaningful size, has been positive return for any investment with AI resources…

…We are also investing quite a lot on improving our internal productivities, for example, that we’re using AI to help our internal listing team to filter the product in our marketplace a lot more efficient so we can discover the counterfeit, the fraud, et cetera, in a lot cheaper way. And again, those — for all those things we measure based on our AI investments versus the savings that we have typically bring a positive return. 
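
For readers who want to see the mechanics of this kind of ROI measurement, here is a toy sketch in Python; the funnel formula and every number in it are hypothetical illustrations of mine, not Sea’s actual methodology:

```python
# Toy sketch of measuring the ROI of a consumer-facing AI investment by the
# incremental revenue implied by lifts in click-through and conversion rates.
# All figures are hypothetical, not Sea's.

def funnel_revenue(sessions: int, ctr: float, cvr: float, aov: float) -> float:
    """Revenue from a funnel: sessions -> clicks (ctr) -> orders (cvr) -> revenue (aov)."""
    return sessions * ctr * cvr * aov

# Hypothetical funnel before and after an AI-driven recommendation upgrade.
baseline = funnel_revenue(sessions=1_000_000, ctr=0.020, cvr=0.05, aov=25.0)
with_ai = funnel_revenue(sessions=1_000_000, ctr=0.026, cvr=0.06, aov=25.0)

ai_cost = 10_000.0  # hypothetical cost of the AI resources for the period
roi = (with_ai - baseline - ai_cost) / ai_cost
print(f"Incremental revenue: ${with_ai - baseline:,.0f}; ROI on AI spend: {roi:.0%}")
# Prints: Incremental revenue: $14,000; ROI on AI spend: 40%
```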

Tencent (OTC: TCEHY)

Tencent’s management is seeing AI make tangible contributions to Tencent’s businesses, such as performance advertising and evergreen games

During the first quarter of 2025, our high-quality revenue streams sustained their solid growth trajectory. AI capabilities already contribute tangibly to business such as the performance advertising and evergreen games. 

Tencent’s management has stepped up Tencent’s spending on AI opportunities, such as the Yuanbao app and AI in Weixin; management believes the operating leverage from Tencent’s existing revenues will absorb the costs associated with AI investments and contribute to the company’s growth; Tencent’s AI investments are in the form of both capital expenditures and operating expenses; some of Tencent’s AI investments are already generating revenue, such as through (1) improved advertising targeting, (2) improved content recommendation which increases user time-spent, (3) more time spent in games from usage of AI, and (4) cloud revenue from the deployment of GPUs, or graphics processing units; other AI investments will need more time to deliver a return on investment (ROI) and these investments will lead to lower-margin growth in the short-term compared to recent quarters

We also stepped up our spending on new AI opportunities such as Yuanbao application and AI in Weixin. We believe the operating leverage from our existing high-quality revenue streams will help absorb the additional costs associated with these AI-related investments and contribute to healthy financial performance during this investment phase. We expect this strategic AI investment will create value for users and society and generate substantial incremental returns for us over the longer term…

…As we have highlighted in the prior quarter earnings call, we are stepping up investments in AI in the form of capital expenditures as well as operating expenses. Some of these GPU and AI investments already generate revenue for us, such as improved ad targeting, which boosts ad revenue; improved content recommendation, which boosts user time spent and thus ad revenue; usage of AI within evergreen games, which boosts user engagement and thus game revenue; and deployment of GPUs and AI across our computing infrastructure, APIs and platform solutions, which generates cloud revenue.

For our other GPU and AI investments, which are more long cycle in nature, there’s a natural time lag between making the investments and those investments starting to generate significant revenue for us. During this time lag period, we expect the costs of those GPU and AI investments to offset our underlying operating leverage, resulting in a temporary smaller gap between our revenue and operating profit growth rate than we have achieved in recent quarters. That said, we’re confident that our stepped-up investment in longer-cycle AI projects will create substantial long-term value for our users, business and shareholders.  

Tencent is in the early stages of rolling out AI features for Weixin, such as (1) Yuanbao (Tencent’s AI chatbot) within Weixin chat, (2) AI answers within Weixin Search, (3) AI tools for content creators for easier content production, and (4) an AI coding assistant to make it easier to create Mini Programs in Weixin

We’re in the early stages of rolling out AI features within Weixin. Users can now add Yuanbao as a Weixin context for seamless AI interaction within Weixin Chat, providing context-aware responses and facilitating content discovery while leveraging the Weixin ecosystem and the worldwide web. Weixin Search is now starting to include results powered by large language models, including the fast thinking model Hunyuan Turbo S, and the chain of thoughts reasoning models Hunyuan T1 and DeepSeek R1. We provide AI tools so that content creators can generate images matching the text of their official accounts articles and generate video effects for video accounts videos utilizing preset templates. We reduced the Mini Programs development time via an AI coding assistant for creating AI programs that supports natural language prompts and image inputs. 

The Marketing Services segment’s revenue was up 20% year-on-year in 2025 Q1 because of higher user engagement and AI upgrades of the advertising platform; Marketing Services revenue grew across all major advertising categories; management has upgraded the Marketing Services segment’s advertising platform with enhanced generative AI capabilities to accelerate advertising creation and live-streaming content; management is using LLMs (large language models) to deliver better advertising recommendations

For Marketing Services, our revenue grew 20% year-on-year to RMB 32 billion, benefiting from higher user engagement, ongoing AI upgrades to our ad platform and a strengthening transaction ecosystem within Weixin…

…On the ad tech front, we upgraded our advertising platform with enhanced generative AI capabilities such as ad generation and video editing tools to accelerate ad creation and digital human solutions to facilitate live streaming activities for content creators and merchants. We’re using large language models to deepen our systems, understanding of merchandise and of user interests across our apps and so deliver better ad recommendations.

AI-related revenue within Tencent Cloud grew quickly in 2025 Q1, driven by demand for GPUs (graphics processing units), APIs (application programming interfaces), and platform solutions; Tencent Cloud’s growth was constrained by GPU availability

AI-related revenue within Tencent Cloud grew quickly year-on-year, driven by increased customer demand for GPUs, APIs and platform solutions, although constrained by limited GPU availability. 

Tencent’s management thinks there’s room for both a general AI agent and a Weixin-specific AI agent that sits within the Weixin ecosystem; management believes that as Tencent’s AI chatbots, Yuanbao and iMA, improve and evolve over time, they can answer questions better and will be able to interact with other apps and external APIs (application programming interfaces); management thinks that Yuanbao and iMA are similar to AI agents developed by peers; management believes that Tencent can create a unique AI agent that connects with users within Weixin’s ecosystem

So on Agentic AI, it’s a very hot concept, right? And the idea is actually, oh, the AI can actually help you to complete a very complicated tasks that involve many different steps as well as the use of tools and maybe in connection with other apps. So if we look at that concept, then there is a general Agentic AI, which everybody can do. Essentially, you create this agent and you go out to the world and try to complete tasks for your user. But at the same time, there’s also an Agentic AI that can sit within Weixin and the unique ecosystem of Weixin. And I think those are two different products…

…I think we are creating that capability within some of our AI native products such as Yuanbao and iMA, over time, as these AIs continue to evolve to increase in terms of their capability. So in the very beginning, these AIs actually answer questions very quickly. So those are the sort of quick response. And then over time, they include — they start including the chain of thoughts, a long thinking reasoning model and you can answer complicated questions. And over time, the capability can actually allow them to start doing more complicated tasks. So they start evolving to have Agentic capability, and they will be interacting with all other apps and programs and external APIs to help the users. So that would continue to evolve. And it’s not that much different from other Agentic AIs provided by our peers. 

But on the other hand, right, within the Weixin ecosystem, I think there is the opportunity for us to create a pretty unique Agentic AI that connects with the unique components of the Weixin ecosystem, including the social graph, including the communications and community capability, including the content ecosystem, such as our Official Accounts and Video Accounts and all the millions of Mini Programs that exist within Weixin, which actually sort of gets into all kinds of information as well as transactional and operative capabilities across many different verticals of applications. So I think that would be extremely unique compared to other more general Agentic AIs, and that’s sort of a very differentiated product for us. 

Tencent’s management thinks AI business models include (1) increasing advertising revenue through AI targeting, and (2) GPU rentals; management sees GPU rentals as a low-priority business; management thinks the subscription model for AI services will not be an important business model within China

In terms of your question on AI business models, I think if you look at advertising, it’s directly augmented by AI because AI can actually help to improve the targeting capability of our ads. And when we deliver better results, then it translates directly into additional advertising revenue. And I think that is a big opportunity that we are already realizing in our performance ads, but there’s more opportunity to develop over time. Now I think transaction is actually very closely tied to advertising, right? When you have advertising that leads to direct transactions and then advertising value actually goes up significantly. And I think that’s the way we are actually also trying to increase our advertising revenue. That’s another component and pillar of our advertising revenue growth driver. 

GPU rental is sort of directly related to cloud business, and that’s more like a reselling business mostly. And to a large extent, right now, we are putting it on a lower priority because — especially when there’s a short supply of GPUs, right, then GPU rental is a lower priority for us.

And subscriptions, I think it’s not the most likely business model for AIs in China, right? Now everybody is actually providing AIs for free. So the subscription model, which exists outside of China, I think it’s not going to be mainstream business model for AI in China.

Tencent’s management sees a long runway for growth in both the Domestic and International Games businesses; one driver of growth is the use of AI, in ways such as deploying an AI coach for new players and helping to prevent cheating

We do believe we have a long runway for our domestic and indeed international game revenue growth looking forward. And there’s many reasons, but just to pick on three for now. First of all, we talked extensively this time last year about some of the changes we were making to how we envisage and therefore, how we operate and therefore, who operates our biggest Domestic Games. And you can see that we have made those changes and they’re bearing the fruit that we hoped they would bear and we see them bearing more fruit going forward.  A second driver or enabler of that long runway, is the utilization of AI, which we think is particularly beneficial to the big competitive multiplayer games that we’ve talked about extensively and that represent the majority of our domestic game revenue. And that’s the case because while there’s many ways that we can and we’re starting to deploy AI within games some of the most interesting include using AI to help coach new players, to help accompany existing players, to help prevent cheating and hacking and so forth. And all of those are particularly important within competitive multiplayer games.

Tencent’s management is seeing users of Tencent’s new AI services use them for asking questions, following up with more questions, and analysing photos

At this stage, I think we’re trying to create functionalities and user experiences that would leverage AI and try to see what may or may not stick with the users. So as I said, right, the users sort of like to ask questions, like to interact with the AI with further follow-up questions. And when we put in various functionalities such as allowing photos to be analyzed and sort of people use it. So there are a lot of functionalities, which right now we have put in, and we’re starting to get to see people like them a lot or are not using it that much

NVIDIA’s H20 chip was banned in 2025 Q1 and there are now new BIS guidelines (effectively new chip controls), but Tencent has a good stockpile of AI chips; management will use the stockpile of AI chips to generate immediate returns and also to train Tencent’s models; management believes that Tencent can achieve very good training results even with small chip clusters, so the company’s current stockpile of chips will be sufficient to train models for a few more generations; management thinks that the concept of scaling laws embraced by American technology companies, where AI models need to be trained on ever-larger chip clusters, is outdated; management sees Tencent having a larger need for GPUs around inference, especially if the company moves toward agentic AI; to improve inference efficiency and reduce GPU-reliance, management thinks Tencent can leverage software optimisations, customise AI models depending on the use cases, and use other chips (such as ASICs, or application-specific integrated circuits) that are available in China

On the GPU front, it’s actually a very dynamic situation, right? So there — since the last earnings call, we have seen an H20 ban. And then after that, there was the BIS new guidelines that just came in overnight. So it’s a very dynamic situation, and we just sort of have to manage the situation, on one end, sort of in a completely compliant way, and on the other end, sort of we try to figure out the right solution for us to make sure that our AI strategy can still be executed. So the good thing that we are in is that, number one, I think we have a pretty strong stockpile of chips that we acquired previously, and that would be very useful for us in executing our AI strategy. And if you look at the allocation of the usage of these chips, obviously, they will be used for the applications that will generate immediate return for us. So for example, in the advertising business as well as content recommendation product, right? We actually would be using a lot of these GPUs to generate results and generate return for us. Secondly, in terms of the training of our large language models, they will be of the next priority. And the training actually requires higher-end chips. And the good thing on that front is that over the past few months, right, we start to move off the concept or the belief of the American tech companies, which they call the scaling law, which require continuous expansion of the training cluster. And now we can see even with a smaller cluster, you can actually achieve very good training results. And there’s a lot of potential that we can get on the post-training side, which do not necessarily meet very large clusters. So that actually sort of help us to look at our existing inventory of high-end chips and say, we should have enough high-end chips to continue our training of models for a few more generations going forward.

And then the larger need for GPUs are actually sort of around inferences and especially sort of when you see a growth in demand for inference on the user side as well as when we move into the chain of thoughts reasoning model, it actually requires many more tokens to answer a complicated question. And if we move into Agentic AI, right, it requires even more tokens, there’s actually a lot of need on the inference side. But on the inference side, there’s actually a lot of work that could be done for us to manage the need.

One is just sort of leveraging software optimization. I think there’s still quite a bit of room for us to keep on improving the inference efficiency, right? So if you can improve inference efficiency 2x, then basically, that means the amount of GPUs get doubled in terms of capacity. So that’s actually a very good way of investing our resources to improve on the inference efficiency. And the other approach is we can customize different sizes of models, especially some applications do not require very large models, right, and we can tailor-made models and distill models so that they can be used for different use cases, and that can actually save on the inference usage of GPUs. And finally, we actually sort of can potentially make use of other chips, compliant chips available in China or available for us to be imported as well as ASICs and GPUs in some cases for smaller models inferences. So I think there are a lot of ways to which we can fulfill the expanding and growing inference needs, and we just need to sort of keep exploring these venues and spend probably more time on the software side rather than just force buying GPUs.
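
To see the arithmetic behind management’s point that software efficiency substitutes for hardware, here is a toy sketch in Python; the fleet size and throughput figures are hypothetical numbers of mine, not Tencent’s:

```python
# Toy sketch: how inference-efficiency gains stretch a fixed GPU fleet.
# All figures are hypothetical illustrations, not Tencent's.

def effective_qps(gpus: int, qps_per_gpu: float, efficiency_multiplier: float) -> float:
    """Queries per second a fixed fleet can serve after software optimisation."""
    return gpus * qps_per_gpu * efficiency_multiplier

FLEET = 1_000          # hypothetical number of inference GPUs
BASE_QPS_PER_GPU = 10  # hypothetical per-GPU throughput before optimisation

print(effective_qps(FLEET, BASE_QPS_PER_GPU, 1.0))  # 10000.0 queries per second
print(effective_qps(FLEET, BASE_QPS_PER_GPU, 2.0))  # 20000.0 queries per second

# A 2x efficiency gain serves the same demand as doubling the fleet, which is
# why management frames software optimisation, and routing simple queries to
# smaller distilled models, as alternatives to buying more GPUs.
```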

Tencent’s management is unsure of when some of Tencent’s investments in AI will pay off because they see the whole world as being in uncharted territory when it comes to AI investments, but Tencent has historically experienced a pay-off within a 1-2 year timeframe for investments in new areas; management expects a narrowing of the gap between Tencent’s revenue growth and operating profit growth, but still expects positive operating leverage

[Question] You guys mentioned earlier in your opening remarks, smaller gap between revenue and operating profit. Can you kind of elaborate a bit more on this, the magnitude and what kind of extensive period that we’re talking about?

[Answer] We’re at uncharted territory, not only for Tencent, but for the whole world in terms of the deployment of artificial intelligence. So I don’t have necessarily a very high degree of confidence in these statements. But if you’re thinking about measuring the duration, then the past may be the best guide to the future in that Tencent has been through many time periods where we have cultivated a new product toward critical mass and substantial popularity ahead of monetizing that product. And typically, the duration of those gaps between investment to cultivate versus monetization and revenue generation would be in the sort of 1-year to 2-year time range. So obviously, it will depend on what our peer companies do in China, obviously, will depend on consumer habits, on advertiser habits. But I think that’s a reasonable time frame to think about.  In terms of magnitude, I won’t go beyond what we said earlier, which is referring to a narrowing. So we don’t expect the delta between revenue growth and operating profit growth that we experienced this quarter to continue. There will be a narrowing. But on the other hand, we don’t expect our operating leverage to turn negative either.

Tencent’s AI investments are mostly in the form of capital expenditures, but there’s also incremental marketing expenses for Yuanbao and salaries for AI engineers

In terms of what costs other than CapEx or really depreciation could cause that narrowing, then CapEx depreciation is, by far, the most important. We do have some incremental marketing expenses for Yuanbao, although not so much for AI within Weixin. And then we referenced the fact that engineers with expertise in AI are expensive, but that’s more of a sort of mix comment rather than an aggregate headcount comment. We don’t see a step-up in headcount. We’ll continue to manage headcount closely, but we observe that engineers with that AI expertise are rightly well paid.

Historically, Tencent’s banner ads had 0.1% click-through rates while feed ads had 1%, but with AI, management has seen certain ad inventories reach a 3% click-through rate; management thinks no one knows the upper limit of AI-powered advertising click-through rates; AI can also benefit Tencent’s advertising revenue by showing more appealing content to consumers to increase their time spent, but the increase in click-through rates is still the most important improvement from AI

A big part of the uplift that AI is providing to advertising revenue today can be quantified in the form of the click-through rate on ads. And historically, banner ads achieved a roughly 0.1% click-through rate. Feed ads achieved roughly 1.0% click-through rate. With the benefit of AI, we have seen that the click-through rate on certain ad inventories can improve toward 3.0%, for example.  And then the question is, what’s the upper limit on that click-through rate. And at this point, no one knows the answer because it almost becomes philosophical if you had complete information or insight into a consumer if you had the ability to infer what the consumer wants or the consumer given their prior behaviors should want and then deliver an ultra targeted ad to that consumer, then it’s very hard to say that the upper limit should be X percent rather than Y percent…

We can use AI to target more appealing content to the consumer, which means they spend more time in the feed, which means they then view more ads, but I think that ad click-through rate is perhaps the most important.
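
To make concrete why these click-through rates matter so much, here is a toy calculation in Python; the impression count and price per click are hypothetical numbers of mine, and only the 0.1%, 1.0%, and 3.0% click-through rates come from management’s comments:

```python
# Toy illustration: at a fixed number of ad impressions and a fixed price per
# click, ad revenue scales linearly with the click-through rate (CTR).
# Impressions and price per click are hypothetical; CTRs are from the call.

impressions = 10_000_000  # hypothetical daily ad impressions
price_per_click = 0.50    # hypothetical average price per click, in RMB

for label, ctr in [("banner", 0.001), ("feed", 0.010), ("AI-targeted", 0.030)]:
    clicks = impressions * ctr
    print(f"{label:>12}: CTR {ctr:.1%} -> {clicks:,.0f} clicks, RMB {clicks * price_per_click:,.0f}")

# Moving inventory from a 0.1% CTR to a 3.0% CTR is a 30x increase in clicks,
# and therefore in revenue, before any change in traffic or pricing.
```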

Veeva Systems (NYSE: VEEV)

Veeva’s management announced the Veeva AI initiative in April 2025; Veeva AI will see the company build AI into its applications across clinical and commercial; management thinks the addition of AI will significantly improve productivity for customers; Veeva AI agents will have application-specific context and direct access to Veeva data; the first release of Veeva AI will happen in December 2025; the first 2 of Veeva’s AI Agent solutions, CRM Bot and MLR Bot, are also planned for December 2025; Veeva AI is part of Veeva’s overall AI strategy, which includes the Veeva Direct Data API and the Veeva AI Partner Program; management sees the Vault CRM product as a fast path to AI productivity for many customers; management thinks Veeva AI can improve the efficiency of the life sciences industry by 15% in the next few years; management thinks Veeva AI’s efficiency gains will manifest because the AI technology will be deeply embedded in core applications; early reception from customers to Veeva AI has been very positive; management will charge an appropriate license fee for Veeva AI that balances revenue growth for the company with broad customer adoption; customer response to a demo of the CRM Bot was great

Announced in April, Veeva AI is a major initiative for us with a clear vision that’s focused on delivering tangible value. We’re building AI into Vault Platform and Veeva applications across all major areas from clinical to commercial. Adding AI – through AI Agents and AI Shortcuts – to our core applications can significantly improve productivity for customers and the industry. Veeva AI Agents have application-specific context and direct, secure access to Veeva application data, documents, and workflows. AI Shortcuts enable end users to set up personal AI-powered automations for their most frequent user-specific tasks. The first release of Veeva AI is planned for December 2025. Our first two AI Agent solutions, CRM Bot and MLR Bot in commercial, are also planned for year end. Veeva AI is part of our overall AI strategy which also includes the Veeva Direct Data API and the Veeva AI Partner Program, which are both available and operating well today…

…We showed our AI Agents – CRM Bot, Voice Control, Compliant Free Text, and MLR Bot. Vault CRM will be a fast path to highly productive AI for many of our customers…

…I think if you look over the next 3, 4, 5 years out to 2030, I think Veeva can help increase life sciences efficiency by 15% or so with Veeva AI…

…Why I’m so bullish on it? Because Veeva has the core applications, and we’re building the AI very deeply embedded in the core applications. So when we build AI, we’re not building a generic AI. We’re building a medical legal regulatory approval agent, a CRM agent that does pre-call planning, a safety AI agent that can transcribe pretext into a safety case, so deep AI applications. And you need the deep core applications and the AI working together. And that’s where the magic will happen. It’s just very, very, very clear to me…

…[Question] Understanding it’s Veeva AI just recently kind of rolled out. Any initial feedback from customers and how you think in the coming years, how it might impact your overall business?

[Answer] The reception from customers is very positive because it just makes sense. It’s not a lot of hype. They need the AI working with the core applications…

…I think Veeva AI is something that we will charge an appropriate license fee for. So I think it will be a net positive for Veeva. We don’t have that packaging worked out yet. We do want to price it so that it can be very reasonable and broadly adopted, help the industry move forward. And yes, that certainly help our revenue all the time…

…When I showed the demo of Veeva AI, one example of what Veeva AI can do with CRM Bot, you can just see the aha moment go with the customers because what they want is they want AI to help them with the engagement planning, right, do all that work. And then all the data entry afterwards, do that work so they can focus on the engagement in their field. And you just see the light bulbs going on.

Veeva’s management sees the core AI technology as settling down a little; management thinks it’s clear that AI is a new computing paradigm that can produce new kinds of automation; management thinks that core applications will still be very relevant despite the automation that AI can deliver

I think we — the core technology has settled down a little bit in terms of large language models, what’s going on there. And then it’s very clear that this AI is a new computing paradigm. It’s something that can automate certain things that humans can do, which basic software, traditional software couldn’t do that. It doesn’t work like a human. This is nondeterministic computing. It can automate some things that a human can do, but it doesn’t obviate the need for a core application.

Veeva’s management thinks that AI can deliver the biggest positive productivity impacts in the pharma industry within the sales function

Where is AI likely to be and across sales, marketing or service. Yes, it’s a good question. Overall, the way to think about the pharma industry is the human relationships, the sales organizations, the spend on the sales force is very significant and very meaningful. And if you can provide productivity gains and effectiveness gains for the field team, you have a very significant impact. So I think there’s a — we’re seeing a lot of focus in the sales side, which is why one of our first agents will be in the CR in the core CRM space, the CRM bot. We think we can make customers significantly more productive from a field team perspective. So it’s not to say that we’re not seeing investment in other areas, certainly, customer service. Case intake as an example. There’s a lot of examples on the marketing side. But I would prioritize sales higher given the size, the importance, the relationships and the potential impact for it to have.

Veeva’s management thinks the pharma industry has problems with fragmented data (which hampers the use of AI), and has not been able to produce deep industry-specific AI yet; management thinks Veeva can help with both problem areas

The industry still has fragmented data and getting the data to work together, getting the data into the software so you can make decisions and you can get insights fast about that. That is still a challenge for the industry. It’s not a solved problem yet…

…We talk a lot about the excitement around AI, but there’s also a lot of unsolved problems in the AI space. And part of that is bringing together or the industry hasn’t yet been able to bring together very industry-specific processes with deep industry-specific AI. That’s a problem that they’ve made investments. They haven’t often seen the full return on their investment in some of the AI projects. And I think that’s another area where they’re excited about our ability to help them over time.

Wix (NASDAQ: WIX)

Wix’s management recently introduced a new AI-powered product, Wixel, which is a stand-alone visual design platform for things other than websites; Wixel constantly chooses and optimises the best AI models for each task behind the scenes, which is a unique feature among similar offerings; Wixel helps make image and video editing more accessible; Wix has partnered with Microsoft to integrate Wixel’s capabilities into Microsoft Copilot; it’s still very early days for Wixel and management expects Wixel to evolve meaningfully throughout 2025; management is currently treating Wixel as a separate subscription priced at $79 a year and is still testing pricing; management believes legacy players will find it hard to change their user interface and experience as they already have a big customer base, creating differentiation for Wixel

Earlier this month, we introduced Wixel, our new stand-alone visual design platform that extends Wix’ vast design expertise beyond websites for the first time. Wixel marks the beginning of our next-generation approach to visual design, combining Wix’s intuitive creation tools and user-friendly interface with the power of generative AI. This platform combines the best AI models on the market today tailored for specific image needs, including object, background editing and much more with a constant pipeline of new AI enhancements. This makes Wixel unique from everything else available on the market. It handles the complexity of today’s high-end AI technology behind the scenes, choosing and continuously optimizing the best models for each task. This allows our users to always have access to the most advanced and up-to-date tools for image generation and editing…

Our goal is to give total control over photo and video editing to everyone, the same way we did for website creation. Wixel is for Wix users, for entrepreneurs, freelances and business owners who already rely on Wix to build and grow online. It’s for the millions of DeviantArt artists, who want to add an easy to use yet powerful editing tool to their toolkit, without sacrificing the quality of their art…

… Excitingly, we partnered with Microsoft to integrate Wixel’s capabilities into Microsoft Copilot. This collaboration allows Microsoft 365 users, small business owners, students and everyday creators, to design in a smarter, more intuitive way with Wixel.  Though this launch is a cornerstone of our product road map, we are still very early in the journey with plenty of work ahead in order to achieve our vision for Wixel. In the coming year, you can expect the platform to evolve meaningfully with breakthrough capabilities. As we continue to innovate, I’m excited to see how Wixel reshapes the digital creation space…

[Question] Some thoughts on pricing, how you landed at $79 a year, and how you’re trying to strike the balance between monetization and adoption?…

[Answer] …When it comes to Wixel, we don’t try to build another drag and drop editing environment, which I think all the tools that you’re referring to are a drag and drop editing environment. What we’re trying to do is really how would — if you would think in the 5 years from today, how you could edit images content with AI, how would that look like? And we’re trying to build that into Wixel. So I think the way that the tool itself behaves is very different than the traditional editing environment. Now I’m not saying that they cannot do that. I’m sure, they can, there are a lot of smart people there. I’m just saying that if you try to rebuild your tools into this thinking about how will the universe look in 5 years or how would AI look in 5 years, you’ll find that you have to change a lot of the user interface, a lot of the experience, a lot of the underlying technologies in those existing tools, which I believe is a bit of a challenge when you have a lot of users.

Wix’s management recently launched Astro, a new AI assistant embedded within the Wix dashboard; management expects Astro to improve user engagement, boost package upgrades, and reduce churn; management plans to launch more AI agents

We also introduced Astro, our new AI assistant embedded within the Wix dashboard. Astro simplifies the user journey by guiding users, surfacing relevant tools and insights and helping them complete key tasks. We expect Astro to improve user engagement, boost package upgrades and reduce churn over the long term. And it’s only the first in a series of AI agents we plan to roll out.

Wix’s management recently launched new AI-powered tools for website automations and customisations; the tools include (1) the creation of dynamic content based on site-visitor characteristics, (2) a no-code interface for users to drive business outcomes, and (3) the automation of advanced business workflows

Additionally, we launched new AI-powered tools for website automations and real-time site customization, including adaptive content application, Wix Functions and Wix Automations. These features are designed to make our platform smarter and more efficient while delivering highly personalized experiences to site visitors…

…This suite includes:

  • Adaptive content application: a tool designed to personalize website experiences for site visitors by generating dynamic content based on visitor characteristics and instructions, ultimately enhancing engagement and user experience
  • Wix Functions: a no-code interface that allows users to customize outcomes for various business scenarios, enabling businesses to operate more smoothly and effectively
  • Wix Automations: a builder designed to support advanced business workflows with a highly intuitive, fully customizable automation engine

These tools help businesses effortlessly optimize their operations for enhanced efficiency, while ensuring a seamless visitor experience without performance drawbacks like increased load times.

Wix’s management recently launched the Wix Model Context Protocol (MCP) Server, an infrastructure advancement that lets users leverage natural-language prompts to connect Wix’s business functionality with their preferred AI tools; at Stripe’s recent conference, Wix’s team demonstrated how the Wix MCP Server can be used to generate code for fully functional payment solutions

Finally, we rolled out the Wix Model Context Protocol or MCP Server, a key infrastructure advancement that allows users to leverage natural-language prompts to seamlessly connect Wix’ comprehensive business functionality with their preferred compatible AI-powered tools. The Wix MCP Server enables AI-driven app development for users to build custom experiences on top of Wix or manage their Wix-based business using natural language and AI coding assistance. As the use case presented at Stripe’s recent conference, our team demonstrated how to use LLMs to generate reliable code for fully functional payment solutions. They built a complete website that accepts online payments via credit cards, Apple Pay and Google Pay through Wix Payments and Stripe.
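
Wix has not published the code behind this demo, but for readers curious about what an MCP server looks like mechanically, here is a minimal sketch using Anthropic’s official mcp Python SDK; the tool name and payment logic are hypothetical stand-ins of mine, not Wix’s actual API:

```python
# Minimal, hypothetical MCP server sketch using the official `mcp` Python SDK
# (pip install mcp). The tool below is a stand-in, not Wix's actual API: it
# shows how a server exposes a typed "tool" that an LLM client can discover
# and call with arguments derived from natural-language prompts.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-payments")

@mcp.tool()
def create_payment_link(amount_cents: int, currency: str, description: str) -> str:
    """Create a (fake) payment link for the given amount."""
    # Hypothetical stand-in for a real payments integration.
    return f"https://pay.example.com/checkout?amount={amount_cents}&cur={currency}"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for an MCP-compatible client
```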

Wix’s management thinks that agencies, which are the customers of Wix’s Partners business, still have a big role to play today and in the future even as AI agents proliferate; management thinks AI agents today still need to evolve significantly in order to achieve big goals; management sees agencies picking up AI technologies faster than consumers and small businesses

Well, in theory, right, in theory, if we look at the far future, then why would you need an agency, right? Because in theory, you can just tell the AI, hey, build this website for me, change those things, now make it successful. And — but practically, we’re not there yet. I think there’s a big distance that we have to have for those AI agents to evolve in order to be able to help you actually achieve all of those goals. Even when we are trying to build this exact agents to do each one of those, there’s still a lot of human interactions and I think a lot of expertise that the human can bring to help it. So I think there is a lot of room for agencies even in the next — in the years to come. Currently, when we look at the AI data, I would say that agencies probably pick up technologies faster than consumers and small businesses. So we’ve actually kind of gave them a bit of a shift in terms of what they can do. 

Wix’s management is optimistic about vibe coding but it’s still a young technology that produces code that tends to break over time; management thinks vibe coding will help to expand Wix’s market reach

I think vibe coding is a super exciting concept. It’s still very early. And so things tend to break. After a while, they’re not stable. They’re not good at SEOs or search engine optimization. There’s a lot of things that need to get there to be mature in order for it to be a viable product for our customers. Just the simplest one is if you edit something, right, it takes 4 minutes for any small change to happen, right? In the best case scenario, it’s 4 minutes. So moving a button will take you a few minutes. So there’s a lot of super exciting potential in vibe coding…

…We’re going to start by — with, of course, a few things including the ability to code components into the Wix Editor, which is one of the obvious things that we’re going to be doing.  I do think that this will allow us and companies like us to expand our market reach because things that you could not have done traditionally on website building platforms, right, now you’ll be able to do because you are able to write this custom-code without coding. So I do — I’m very optimistic. I think it’s going to present to us a lot of really interesting opportunities. But I want to emphasize again, it’s really a young technology, it’s still not stable.

Wix’s management thinks websites will be structurally different in the AI age; management is using Google less when searching for information; management believes that the complexity of building websites in the age of AI will increase, which will benefit website builders such as Wix

[Question] Help us expand our mind, so to speak, on whether websites somehow kind of need to be like structurally different in the AI era, particularly from like a utility and discoverability perspective, and kind of how do you position for that?

[Answer] I do believe that there is a big change coming. I know that for myself, I’m using ChatGPT more than Google when I search for things now. ChatGPT digests a lot of content from the Internet and tries to give you this condensed version, and there are advantages and there are disadvantages, right?…

…LLMs today work by just crawling the Internet, which, of course, is not good enough. It’s not going to provide you any knowledge about whether my hairdresser will have an appointment in 2 days, right? And so we’re starting to see the first layer of protocols: Microsoft just announced one, and Anthropic announced MCP, which is a way for an LLM to query complicated services, a way for the agent to learn how to ask an API. We just announced that we support it, and everything we offer is now available for MCP…

…I do also believe that in many ways, that will help platforms like Wix, because the complexity of building a website that knows how to offer its services through APIs and MCP to LLMs, and how to do the equivalent of SEO for LLMs, is just going to make building a website 10x harder, right? So if today you can take somebody who knows how to write HTML and CSS and, in theory, build a decent website, then in a year, that will be impossible. I think about the complexity that will be created by those tools, and the speed of innovation, right? MCP was announced 1.5 months ago, already released to [indiscernible] I think about 1.5 months ago. And so the complexity and the need to support and to accelerate, I think that is something that will actually help all the website and content-building platforms because it’s going to be much harder to do it with your own internal team.
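
To make the hairdresser-appointment example from the quote above concrete, here is a minimal sketch of the server side: a business capability exposed as an MCP tool using FastMCP from the same official Python SDK. The tool name and calendar data are my own illustrative assumptions, not a real Wix or Anthropic API.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("hairdresser-booking")

@mcp.tool()
def check_availability(date: str) -> list[str]:
    """Return open appointment slots for a given ISO date (YYYY-MM-DD)."""
    # A real implementation would query the business's booking system;
    # this stub returns canned data for illustration.
    fake_calendar = {"2025-05-20": ["10:00", "14:30"]}
    return fake_calendar.get(date, [])

if __name__ == "__main__":
    mcp.run()  # serve the tool over stdio to any MCP-capable LLM client
```

This is the “equivalent of SEO for LLMs” idea in miniature: instead of hoping a crawler infers your opening hours, you publish a typed, queryable interface that an agent can discover and call.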


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet (parent of Google), Meituan, Meta Platforms, Microsoft, MongoDB, Okta, Salesforce, Sea Ltd, Tencent, Veeva Systems, and Wix. Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI (2025 Q1)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q1 earnings season.

The way I see it, artificial intelligence (or AI) really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

We’re in the thick of the action of the latest earnings season for the US stock market – for the first quarter of 2025 – and I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:

With that, here are the latest commentary, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb’s management thinks that designing end-to-end travel is very difficult and travelers often find planning travel to be very complicated, so travelers do it very infrequently; management thinks that a great user interface is the key to designing a great end-to-end travel experience for Airbnb users, and AI will be an important way to do it

I think a lot of companies have tried to design end-to-end travel. I think designing end-to-end travel is very, very hard. There’s this funny thing: one of the most common start-up ideas for entrepreneurs is to do a travel planning app, and yet travel planning apps almost always fail. So it’s almost like a riddle: why do travel planning apps fail when everyone keeps trying to build them? And the reason why is because planning travel is very complicated. In fact, it’s so complicated that many people have assistants, and a big part of their job is to plan travel for them. And yet you use it infrequently. So it’s a very difficult thing to do and you do it infrequently. And so therefore, a lot of companies have failed to design a so-called connected trip. So I think to do this, a lot of it is to design a really good user experience. And I think that’s one of the things that we’re going to try to do: to really design a great end-to-end experience, to be able to book your entire trip, and much more. I think the user interface will be important. I think AI will be an important way to do this as well…

…We’re focused on making everything instant book and easy to use. We’re trying to make sure that the end-to-end travel experience is really, really wonderful with great Airbnb design, and we’re going to bring more AI into the application so that Airbnb, you can really solve your own problems with great self-solve through AI customer service agents.

Airbnb’s management recently rolled out an AI customer service agent; 50% of Airbnb’s US users are already using the customer service agent and it will soon be rolled out to 100% of Airbnb’s US users; management thinks Airbnb’s AI customer service agent is the best of its kind in travel, having already led to a 15% reduction in users needing to contact human agents; the AI customer service agent will be more personalised and agentic in the years ahead

We just rolled out our AI customer service agent this past month. 50% of U.S. users are now using the agent, and we’ll roll it out to 100% of U.S. users this month. We believe this is the best AI-supported customer service agent in travel. It’s already led to a 15% reduction in people needing to contact live human agents, and it’s going to get significantly more personalized and agentic over the years to come.

Alphabet (NASDAQ: GOOG)

AI Overviews in Search now has more than 1.5 billion monthly users; AI Mode has received early positive reaction; usage growth of AI Overviews continues to increase nearly a year after its launch; management is leaning heavily into AI Overviews; management released AI Mode in March as an experiment; AI Mode searches are twice as long as traditional search queries; AI Mode is getting really positive feedback from early users; the volume of commercial queries on Google Search has increased with the launch of AI Overviews; AI Overviews is now available in 15 languages and 140 countries; AI Overviews continues to monetise at a similar rate to traditional Search; reminder that ads within AI Overviews were launched on mobile in the USA in late-2024; an example of longer search queries in AI Mode is product comparisons; management sees AI Overviews in Search and Gemini as 2 distinct consumer experiences; management thinks of AI Mode as a way to discover how the most advanced users are using AI-powered search

AI Overviews is going very well with over 1.5 billion users per month, and we are excited by the early positive reaction to AI Mode…

…Nearly a year after we launched AI Overviews in the U.S., we continue to see that usage growth is increasing as people learn that Search is more useful for more of their queries. So we are leaning in heavily here, continuing to roll the feature out in new countries to more users and to more queries. Building on the positive feedback for AI Overviews, in March, we released AI Mode, an experiment in labs. It expands what AI Overviews can do with more advanced reasoning, thinking and multimodal capabilities to help with questions that need further exploration and comparisons. On average, AI Mode queries are twice as long as traditional search queries. We’re getting really positive feedback from early users about its design, fast response time and ability to understand complex, nuanced questions…

…As we’ve mentioned before, with the launch of AI Overviews, the volume of commercial queries has increased. Q1 marked our largest expansion to date for AI Overviews, both in terms of launching to new users and providing responses for more questions. The feature is now available in more than 15 languages across 140 countries. For AI Overviews, overall, we continue to see monetization at approximately the same rate, which gives us a strong base in which we can innovate even more…

…On ads in AI Overviews: late last year, we launched them within AI Overviews on mobile in the U.S. And this builds on our previous rollout of ads above and below. So this was a change that we have…

…I mentioned people typing in longer queries. There’s a lot more complex, nuanced questions. People are following through more. People are appreciating the clean design, the fast response time and the fact that they can kind of be much more open-ended, can undertake more complicated tasks. Product comparisons, for example, has been a positive one, exploring how tos, planning a trip…

…On AI-powered search and how we see our consumer experience: look, I do think Search and Gemini, obviously, will be 2 distinct efforts, right? I think there are obviously some areas of overlap, but they also expose very, very different use cases. And so for example, in Gemini, we see people iteratively coding and going much deeper on a coding workflow, as an example. So I think both will be around…

…AI Mode is the tip of the tree for us pushing forward on an AI-forward experience. There will be things which we discover there which will make sense in the context of AI Overviews, and which I think will flow through to our user base. But you almost want to think of what the most advanced 1 million people are using Search for, then the most advanced 10 million people, and then how 1.5 billion people use Search.

Alphabet’s management rolled out Alphabet’s latest foundation model, Gemini 2.5, in 2025 Q1; Gemini 2.5 is widely recognised as the best model in the industry; Gemini 2.5 Pro debuted at No.1 on the Chatbot Arena in 2025 Q1 by a significant margin; active users in AI Studio and the Gemini API are up 200% since the start of 2025; Alphabet introduced Gemini 2.5 Flash in April 2025; Gemini models are now found in all of Alphabet’s 15 products with at least 0.5 billion users each; Alphabet is upgrading Google Assistant on mobile devices to Gemini, and will also upgrade tablets, cars, and devices that connect to phones later this year; the Pixel 9a phone with Gemini integration was launched to strong reviews; the Gemini Live camera feature, among others, will soon be rolled out to all Android devices

This quarter was super exciting as we rolled out Gemini 2.5, our most intelligent AI model, which is achieving breakthroughs in performance, and it’s widely recognized as the best model in the industry…

…We released Gemini 2.5 Pro last month, receiving extremely positive feedback from both developers and consumers. 2.5 Pro is state-of-the-art on a wide range of benchmarks and debuted at #1 on the Chatbot Arena by a significant margin. 2.5 Pro achieved big leaps in reasoning, coding, science and math capabilities, opening up new possibilities for developers and customers. Active users in AI Studio and Gemini API have grown over 200% since the beginning of the year…

…Last week, we introduced 2.5 Flash, which enables developers to optimize quality and cost…

…All 15 of our products with 0.5 billion users now use Gemini models…

We are upgrading Google Assistant on mobile devices to Gemini. And later this year, we’ll upgrade tablets, cars and devices that connect to your phones such as headphones and watches. The Pixel 9a launched to very strong reviews, providing the best of Google’s AI offerings like Gemini Live and AI-powered camera features. And Gemini Live camera and screen sharing are now rolling out to all Android devices, including Pixel and Samsung S25.

Google Cloud is offering the industry’s widest range of TPUs and GPUs; Alphabet’s 7th generation TPU, Ironwood, has 10x better compute power and 2x better power efficiency than the previous generation TPU; Google Cloud is the first cloud provider to offer NVIDIA’s Blackwell family of GPUs; Google Cloud will be offering NVIDIA’s upcoming Rubin family of GPUs

Complementing this, we offer the industry’s widest range of TPUs and GPUs and continue to invest in next-generation capabilities. Ironwood, our seventh-generation TPU and most powerful to date, is the first designed specifically for inference at scale. It delivers more than a 10x improvement in compute power over our recent high-performance TPU while being nearly twice as power efficient. Our strong relationship with NVIDIA continues to be a key advantage for us and our customers. We were the first cloud provider to offer NVIDIA’s groundbreaking B200 and GB200 Blackwell GPUs, and we’ll be offering their next-generation Vera Rubin GPUs.

Alphabet’s management is rolling out the company’s latest image and video generation models; Alphabet has launched its open-sourced Gemma 3 model in March 2025; Gemma models have been downloaded more than 140 million times; Alphabet is developing robotics AI models; Alphabet has launched a multi-agent AI research system called AI Co-Scientist; the AlphaFold model has been used by more than 2.5 million researchers

Our latest image and video generation models, Imagen 3 and Veo 2, are rolling out broadly and are powering incredible creativity. Turning to open models. We launched Gemma 3 last month, delivering state-of-the-art performance for its size. Gemma models have been downloaded more than 140 million times. Lastly, we are developing AI models in new areas where there’s enormous opportunity, for example, our new Gemini Robotics models. And in health, we launched AI Co-Scientist, a multi-agent AI research system, while AlphaFold has now been used by over 2.5 million researchers.

Google Cloud’s AI developer platform, Vertex AI, now has more than 200 foundation models available, including Alphabet’s in-house models and third-party models

Our Vertex AI platform makes over 200 foundation models available, helping customers like Lowe’s integrate AI. We offer industry-leading models, including Gemini 2.5 Pro, 2.5 Flash, Imagen 3, Veo 2, Chirp and Lyria, plus open-source and third-party models like Llama 4 and Anthropic.
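
As a sense of how customers consume those models, here is a minimal sketch of calling a Gemini model on Vertex AI with the vertexai Python SDK (pip install google-cloud-aiplatform). The project ID, region, and model ID are placeholders I chose for illustration.

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Placeholders: substitute your own GCP project and region.
vertexai.init(project="my-gcp-project", location="us-central1")

# One of the 200+ foundation models on Vertex AI; the same call pattern
# applies whether the model is Google's own or a partner's.
model = GenerativeModel("gemini-2.5-pro")
response = model.generate_content("Summarize the key drivers of cloud AI demand.")
print(response.text)
```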

Google Cloud is the leading cloud platform for building AI agents; Google Cloud has an open source framework for building AI agents and multi-agent systems called Agent Development Kit; Google Cloud has a low-code agent-building tool called Agent Designer; KPMG is using Google Cloud to deploy AI agents to employees; Google Cloud has the Google Agentspace product that helps employees in organisations use AI agents widely; Google Cloud offers pre-packaged AI agents across various functions including coding and customer engagement; Alphabet is working on agentic experiences internally and deploying it across the company; Alphabet’s customer service teams have deployed AI agents to dramatically enhance the user experience and is teaching Google Cloud customers how to do so

We are the leading cloud solution for companies looking to the new era of AI agents, a big opportunity. Our Agent Development Kit is a new open-source framework to simplify the process of building sophisticated AI agents and multi-agent systems. And Agent Designer is a low-code tool to build AI agents and automate tasks in over 100 enterprise applications and systems.

We are putting AI agents in the hands of employees at major global companies like KPMG. With Google Agentspace, employees can find and synthesize information from within their organization, converse with AI agents and take action with their enterprise applications. It combines enterprise search, conversational AI or chat and access to Gemini and third-party agents. We also offer pre-packaged agents across customer engagement, coding, creativity and more that are helping to provide conversational customer experiences, accelerate software development, and improve decision-making…

…Particularly with the newer models, I think we are working on early agentic workflows and how we can get those coding experiences to be much deeper. We are deploying it across all parts of the company. Our customer service teams are deeply leading the way there. We’ve both dramatically enhanced our user experience as well as made it much more efficient to do so. And we are actually bringing all our learnings and expertise in our solutions through cloud to our other customers. But beyond that, all the way from the finance team preparing for this earnings call to everything, it’s deeply embedded in everything we do.
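
For a flavour of what “building sophisticated AI agents” with the Agent Development Kit looks like, here is a minimal sketch assuming the google-adk Python package’s documented Agent API (pip install google-adk). The tool, instruction, and model ID are my own illustrative choices, not from the earnings call.

```python
from google.adk.agents import Agent

def lookup_order_status(order_id: str) -> dict:
    """Return the shipping status for an order (stubbed for illustration)."""
    return {"order_id": order_id, "status": "shipped"}

# An agent is declared as a model plus instructions plus plain-Python tools;
# the framework handles tool-calling, state, and multi-agent composition.
root_agent = Agent(
    name="support_agent",
    model="gemini-2.0-flash",  # model ID is illustrative
    instruction="Answer customer questions; call tools to check order status.",
    tools=[lookup_order_status],
)
```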

Waymo is now serving 250,000 trips per week (was 150,000 in 2024 Q4), up 5x from a year ago; Waymo launched its paid service in Silicon Valley in 2025 Q1; Waymo has expanded in Austin, Texas, and will launch in Atlanta later this year; Waymo will launch in Washington DC and Miami in 2026; Waymo continues to make progress in airport access and freeway driving; management thinks Alphabet will not be able to scale Waymo by themselves, so partners are needed

Waymo is now safely serving over 0.25 million paid passenger trips each week. That’s up 5x from a year ago. This past quarter, Waymo opened up paid service in Silicon Valley. Through our partnership with Uber, we expanded in Austin and are preparing for our public launch in Atlanta later this summer. We recently announced Washington, D.C. as a future ride-hailing city, going live in 2026 alongside Miami. Waymo continues progressing on 2 important capabilities for riders, airport access and freeway driving…

More businesses are adopting Alphabet’s AI-powered campaigns; Alphabet’s recent work with AI is helping advertisers reach customers and searches where advertising would previously not be shown; Alphabet is infusing AI at every step of the marketing process for advertisers, for example, (1) advertisers can now generate a broader variety of lifestyle imagery customized to their business, (2) in PMax, advertisers can automatically source images from their landing pages and crop them, (3) on media buying, AI-powered campaigns continue to help advertisers find new customers, (4) in Demand Gen, advertisers can more precisely manage ad placements and understand which assets work best at a channel level; users of Demand Gen now see an average 26% year-on-year increase in conversions per dollar spend; when Demand Gen is paired with Product Feed, advertisers see double the conversion per dollar spend year-over-year on average; Royal Canin used Demand Gen and PMax campaigns and achieved a 2.7x higher conversion rate, a 70% lower cost per acquisition for purchases, and an 8% higher value per user

More businesses, big and small, are adopting AI-powered campaigns, and the deployment of AI across our Ads business is driving results for our customers and for our business. Throughout 2024, we launched several features that leverage LLMs to enhance advertiser value, and we’re seeing this work pay off. The combination of these launches now allows us to match ads to more relevant search queries. And this helps advertisers reach customers and searches where we would not previously have shown their ads.

Focusing on our customers, we continue to solve advertisers’ pain points and find opportunities to help them create, distribute and measure more performant ads, infusing AI at every step of the marketing process. On Audience Insights, we released new asset audience recommendations, which tell businesses the themes that resonate most with their top audiences. On creatives, advertisers can now generate a broader variety of lifestyle imagery customized to their business to better engage their customers and use them across PMax, demand gen, display and app campaigns. Additionally, in PMax, advertisers can automatically source images from their landing pages and crop them, increasing the variety of their assets. On media buying, advertisers continue to see how AI-powered campaigns help them find new customers. In Demand Gen, advertisers can more precisely manage ad placements across YouTube, Gmail, Discover and Google Display Network globally and understand which assets work best at a channel level. Thanks to dozens of AI-powered improvements launched in 2024, businesses using Demand Gen now see an average 26% year-on-year increase in conversions per dollar spend for goals like purchases and leads. And when using Demand Gen with Product Feed, on average, they see more than double the conversion per dollar spend year-over-year…

…Royal Canin combined Demand Gen and PMax campaigns to find more customers for its cat and dog food products. The integration resulted in a 2.7x higher conversion rate, a 70% lower cost per acquisition for purchases and increased the value per user by 8%.

Google Cloud still has more AI demand than capacity in 2025 Q1 (as it did in 2024 Q4) 

Recall I’ve stated on the Q4 call that we exited the year in Cloud specifically with more customer demand than we had capacity. And that was the case this quarter as well.

30% of new code at Alphabet is now generated by AI (it was 25% in 2024 Q3)

We’re continuing to make a lot of progress there in terms of people using coding suggestions. I think the last time I had said, the number was like 25% of code that’s checked in. It involves people accepting AI-suggested solutions. That number is well over 30% now. But more importantly, we have deployed more deeper flows.

Amazon (NASDAQ: AMZN)

AWS grew 17% year-on-year in 2025 Q1, and is now at a US$117 billion annualised revenue run rate (was US$115 billion in 2024 Q4); management used to think AWS could be a multi-hundred billion dollar revenue run rate business without AI and now that there’s AI, they think AWS could be even bigger; AWS’s AI business is now at a multi-billion annual revenue run rate and is growing triple-digits year-on-year; the shift from on-premise to the cloud is still a huge tailwind for AWS, and now even more so as companies that want to realize the full potential of AI will need to shift to the cloud; AWS is currently still supply constrained, and a lot more new chips will be landing in the coming months; management thinks that the supply chain issues with chips will get better as the year progresses

AWS grew 17% year-over-year in Q1 and now sits at a $117 billion annualized revenue run rate…

…Before this generation of AI, we thought AWS had the chance to ultimately be a multi-hundred billion dollar revenue run rate business. We now think it could be even larger…

…Our AI business has a multibillion-dollar annual revenue run rate, continues to grow triple-digit year-over-year percentages and is still in its very early days…

…Infrastructure modernization is much less sexy to talk about than AI, but fundamental to any company’s technology and invention capabilities, developer productivity, speed and cost structure. And for companies to realize the full potential of AI, they’re going to need their infrastructure and data in the cloud…

…During the first quarter, we continued to see growth in both generative AI business and non-generative AI offerings as companies turn their attention to newer initiatives, bring more workloads to the cloud, restart or accelerate existing migrations from on-premises to the cloud and tap into the power of Generative AI…

…We — as fast as we actually put the capacity in, it’s being consumed. So I think we could be driving — we could be helping more customers driving more revenue for the business if we had more capacity. We have a lot more Trainium2 instances and the next generation of NVIDIA’s instances landing in the coming months…

…I do believe that the supply chain issues and the capacity issues will continue to get better as the year proceeds.

Management is directing Amazon to invest aggressively in AI; Amazon is building 1000-plus AI applications across the company; the next generation of Alexa is Alexa+; Amazon is using AI in its fulfilment network, robotics, shopping, and more

If you believe your mission is to make customers’ lives easier and better every day, and you believe that every customer experience will be reinvented with AI, you’re going to invest very aggressively in AI, and that’s what we’re doing. You can see that in the 1,000-plus AI applications we’re building across Amazon. You can see that with our next generation of Alexa, named Alexa+. You can see that in how we’re using AI in our fulfillment network, robotics, shopping, Prime Video and advertising experiences. And you can see that in the building blocks AWS is constructing for external and internal builders to build their own AI solutions.

AWS’s in-house AI chip, Trainium 2, is starting to lay in capacity in larger quantities with significant appeal and demand; AWS will always be offering AI chips from multiple providers, but Trainium 2 offers a compelling option with 30%-40% better price performance; management believes that the price of inference needs to be much lower for AI to be successful, and they think the price of inference will go down; Anthropic is still building its next few models with Trainium 2

Our new custom AI chip Trainium2 is starting to lay in capacity in larger quantities with significant appeal and demand. While we offer customers the ability to do AI in multiple chip providers and will for as long as I can foresee, customers doing AI at any significant scale realize that it can get expensive quickly. So the 30% to 40% better price performance that Trainium2 offers versus other GPU-based instances is compelling. For AI to be as successful as we believe it can be, the price of inference needs to come down significantly…

…I would say that we’ve been bringing on a lot of P5, which is a form of NVIDIA chip instances, as well as landing more and more Trainium2 instances as fast as we can…

…Anthropic is running — building the next few training models on top of our Trainium2 chip on AWS…

…As they’re waiting to see the cost of inference continue to go down, which it will.
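
To put the quoted “30% to 40% better price performance” in concrete terms, here is a back-of-envelope sketch: better price performance means more work per dollar, so the cost per unit of inference falls by roughly 23% to 29%. The baseline price below is a made-up placeholder, not a real AWS rate.

```python
# Hypothetical baseline: $10 per million tokens on a GPU-based instance.
gpu_cost_per_m_tokens = 10.00

for uplift in (1.3, 1.4):  # 30% and 40% better price performance
    trainium_cost = gpu_cost_per_m_tokens / uplift
    saving = 1 - trainium_cost / gpu_cost_per_m_tokens
    print(f"{uplift:.1f}x price performance -> "
          f"${trainium_cost:.2f}/M tokens ({saving:.0%} cheaper)")
```

Run as-is, this prints roughly $7.69 (23% cheaper) and $7.14 (29% cheaper), which is the direction management wants the price of inference to keep moving.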

The latest premier Amazon Nova model was launched yesterday and it delivers frontier intelligence and industry-leading price performance; thousands of customers are already using Amazon Nova models; Amazon Nova Sonic, a speech-to-speech foundation model, was recently released and it enables developers to build voice-based AI applications; Amazon Nova Sonic has lower word error rates and higher win rates over other comparable models; AWS recently released a research preview of Amazon Nova Act, a new AI model that can perform actions within a web browser; Amazon Nova Act aims to move the current state-of-the-art accuracy of multi-step agentic actions from 30%-60% to 90%-plus

We offer our own Amazon Nova state-of-the-art foundation models in Bedrock with the latest premier model launching yesterday. They deliver frontier intelligence and industry-leading price performance, and we have thousands of customers already using them, including Slack, Siemens, Sumo Logic, Coinbase, FanDuel, Glean and Blue Origin. A few weeks ago, we released Amazon Nova Sonic, a new speech-to-speech foundation model that enables developers to build voice-based AI applications that are highly accurate, expressive and human-like. Nova Sonic has lower word error rates and higher win rates over other comparable models for speech interactions…

…We’ve just released a research preview of Amazon Nova Act, a new AI model trained to perform actions within a web browser. It enables developers to break down complex workflows into reliable atomic commands like search or checkout or answer questions about the screen. It also enables them to add more detailed instructions to these commands where needed, like don’t accept the insurance upsell. Nova Act aims to move the current state-of-the-art accuracy of multistep agentic actions from 30% to 60% to 90-plus percent with the right set of building blocks to build these action-oriented agents.
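
Since the Nova models are served through Bedrock, a minimal sketch of invoking one from Python with boto3’s Converse API looks like the following; the model ID follows Bedrock’s Nova naming as best I know, but treat it and the region as illustrative.

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="amazon.nova-pro-v1:0",  # illustrative Nova model ID
    messages=[{
        "role": "user",
        "content": [{"text": "Draft a two-sentence product description."}],
    }],
    inferenceConfig={"maxTokens": 200, "temperature": 0.3},
)
print(response["output"]["message"]["content"][0]["text"])
```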

Amazon’s management sees question-and-answer as virtually the only current use-case for AI agents, but they want AI agents to be capable of performing a wide variety of complex tasks and they have built Alexa+ to be such an agent; management launched a new lightning-fast AI agent coding experience in Amazon Q in 2025 Q1 and customers are loving it; management has made generally available GitLab Duo with Amazon Q, which enables AI agents to assist with multi-step tasks; Alexa+ is meaningfully smarter and more capable than the previous Alexa; Alexa+ is free with Prime and available for non-Prime customers at $19.99 per month; Alexa+ is just starting to be rolled out in the USA and will be introduced to other countries later in 2025; users really like Alexa+ thus far; Alexa+ is now with more than 100,000 users; Amazon already has 0.5 billion devices in people’s homes and cars that can easily distribute Alexa+; management thinks users will have to relearn a little on how to communicate with Alexa+, but the communication experience is now much better; management asked Alexa+ about good Italian restaurants in New York and Alexa+ helped to make a reservation

To date, virtually all of the agentic use cases have been of the question-answer variety. Our intention is for agents to perform wide-ranging complex multistep tasks by organizing a trip or setting the lighting, temperature and music ambience in your house for dinner guests or handling complex IT tasks to increase business productivity. There haven’t been action-oriented agents like this until Alexa+…

…This past quarter, Amazon Q, the most capable generative AI-powered assistant for accelerating software development and leveraging your own data, launched a lightning-fast new agent coding experience within the command line interface that can execute complex workflows autonomously. Customers are loving this. We also made generally available GitLab Duo with Amazon Q, enabling AI agents to assist with multi-step tasks such as new feature development and code base upgrades for Java 8 and 11, while also offering code review and unit testing, all within the same familiar GitLab platform…

…We introduced Alexa+, our next-generation Alexa personal assistant, who is meaningfully smarter and more capable than her prior self, can both answer virtually any question and take actions, and is free with Prime or available to non-Prime customers for $19.99 a month. We’re just starting to roll this out in the U.S., and we’ll be expanding to additional countries later this year. People are really liking Alexa+ thus far…

…So we’ve worked hard on that in Alexa+. We started rolling out over the last several weeks. It’s now with over 100,000 users, with more rolling out in the coming months. And so far, the response from our customers has been very, very positive…

…We’re very fortunate in that we have over 0.5 billion devices out there in people’s homes and offices and cars. So we have a lot of distribution already…

…To some degree, there will be a little bit of rewiring for people on what they can do, because you get used to patterns. I mean, even the simple thing of not having to say “Alexa” anymore: we’re all used to saying “Alexa” before we want every action to happen. And what you find is you really only have to do it the first time, and then the conversation is ongoing, where you don’t have to say “Alexa” anymore. And I’ve been lucky enough to have the alpha and the beta that I’ve been playing with for several months, and it took me a little bit of time to realize I didn’t have to keep saying “Alexa”. It’s very freeing when you don’t have to do that…

…When I was in New York, when we were announcing (we did the event way downtown), I asked her what were great Italian restaurants or pizza restaurants. She gave me a list and asked me if I wanted her to make a reservation. I said yes. And she made the reservation and confirmed the time, just like that. When you get into those types of routines and you have those types of experiences, they’re very, very useful.

The majority of Amazon’s capital expenditure (capex) in 2025 Q1 was for AWS’s technology infrastructure, including the Trainium chips

Turning to our cash CapEx, which was $24.3 billion in Q1. The majority of this spend is to support the growing need for technology infrastructure. It primarily relates to AWS as we invest to support demand for our AI services and increasingly in custom silicon like Trainium as well as tech infrastructure to support our North America and International segments. We’re also investing in our fulfillment and transportation network to support future growth and improve delivery speeds and our cost structure. This investment will support growth for many years to come.

The vast majority of successful startups are built on AWS; high-profile startups building AI coding agents are on AWS

If you look at the start-up space, the vast majority of successful start-ups over the last 10 to 15 years have run on top of AWS…

…If you just look at the growth of these coding agents in the last few months, these are companies like Cursor or Vercel, both of them run significantly on AWS.

Amazon’s management thinks that current AI apps have yet to really tackle customer experiences that are going to be reinvented and many other agents that are going to be built

What’s interesting in AI is that we still haven’t gotten to all the other customer experiences that are going to be reinvented and all the other agents that are going to be built. They’re going to take the role of a lot of different functions today. And those are — they’re — even though we have a lot of combined inference in those areas, I would say we’re not even at the second strike of the first batter in the first inning. It is so early right now.

AWS operating margin improved from 37.6% in 2024 Q1 to 39.5% in 2025 Q1, but margins will fluctuate from time to time; AWS’s margin strength is from the business’s strong growth, the impact of some continued investments, and AWS’s custom chips; the investments include software optimisations for server capacity, low-cost custom networking equipment, and power usage in data centers

AWS operating income was $11.5 billion and reflects our continued growth coupled with our focus on driving efficiencies across the business. As we said before, we expect AWS operating margins to fluctuate over time, driven in part by the level of investments we’re making at any point in time…

…We had a strong quarter in AWS, as you mentioned, the margin performance. I would attribute it to the strong growth that we’re seeing, coupled with the impact of some continued investment we’re making in innovation and technology. I’ll give you some examples. So we invest in software and process improvements and ends up optimizing our server capacity, which helps our infrastructure cost. We’ve been developing more efficient network using our low-cost custom networking gear. We’re working to maximize the power usage in our existing data centers, which both lowers our costs and also reclaims power for other newer workloads. And we’re also seeing the impact of advancing custom silicon like Graviton. It provides lower cost not only for us, but also for our customers, better price performance for them.

Apple (NASDAQ: AAPL)

Apple is currently shipping an LLM (large language model) on the iPhone 16 where some of the queries are being handled on the device itself

As you know, we’re shipping an LLM on the iPhone 16 today. And some of the queries that are being used by our customers are handled on-device, while others go to the private cloud, where we’ve essentially mimicked the security and privacy of the device into the cloud. And then others, for world knowledge, are handled with the integration with ChatGPT.

The new Mac Studio has Apple’s M4 Max and M3 Ultra chips, and it can run large language models with over 600 billion parameters entirely in memory

The new Mac Studio is the most powerful Mac we’ve ever shipped, equipped with M4 Max and our new M3 Ultra chip. It’s a true AI powerhouse capable of running large language models with over 600 billion parameters entirely in memory.
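
A quick sanity check on that claim, as a sketch: at 4-bit quantization (my assumption; Apple did not specify one), the weights of a 600-billion-parameter model need roughly 300 GB, which fits within the 512 GB of unified memory the M3 Ultra Mac Studio can be configured with.

```python
params = 600e9          # 600 billion parameters
bits_per_param = 4      # assumed 4-bit quantization
weight_bytes = params * bits_per_param / 8
print(f"~{weight_bytes / 1e9:.0f} GB of weights")  # ~300 GB
```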

Apple has released VisionOS 2.4 which unlocks the first set of Apple Intelligence features for Vision Pro users

VisionOS 2.4 unlocks the first set of Apple Intelligence features for Vision Pro users while inviting them to explore a curated and regularly updated collection of spatial experiences with the Spatial Gallery app.

Apple’s management has released iOS 18.4, which brings Apple Intelligence to more languages (including Singlish); Apple has built its own foundation models for everyday tasks; new Apple Intelligence features in iOS 18 include, Writing Tools, Genmoji, Image Playground, Image Wand, Clean Up, Visual Intelligence, and a seamless connection to ChatGPT

Turning to software. We just released iOS 18.4, which brought Apple Intelligence to more languages, including French, German, Italian, Portuguese, Spanish, Japanese, Korean and simplified Chinese as well as localized English to Singapore and India…

At WWDC24, we announced Apple Intelligence and shared our vision for integrating generative AI across our ecosystem into the apps and features our users rely on every day. To achieve this goal, we built our own highly capable foundation models that are specialized for everyday tasks. We designed helpful features that are right where our users need them and are easy to use. And we went to great lengths to build a system that protects user privacy whether requests are processed on-device or in the cloud with Private Cloud Compute, an extraordinary step forward for privacy and AI.

Since we launched iOS 18, we’ve released a number of Apple Intelligence features from helpful Writing Tools to Genmoji, Image Playground, Image Wand, Clean Up, visual intelligence and a seamless connection to ChatGPT. We made it possible for users to create movies of their memories with a simple prompt and added AI-powered photo search, smart replies, priority notifications, summaries for mail, messages and more. We’ve also expanded these capabilities to more languages and regions.

Apple’s in-house chips are designed with a neural engine that powers AI features across Apple’s products and 3rd-party apps; management thinks the neural engine makes Apple products the best devices for generative AI

AI and machine learning are core to so many profound features we’ve rolled out over the years to help our users live a better day. It’s why we designed Apple silicon with a neural engine that powers so many AI features across our products and third-party apps. It’s also what makes Apple products the best devices for generative AI.

Apple still needs more time to work on the more personalised Siri that was unveiled by management recently

With regard to the more personal Siri features we announced, we need more time to complete our work on these features so they meet our high-quality bar. We are making progress and we look forward to getting these features into customers’ hands.

Apple has low capital expenditures for AI relative to other US technology giants because it uses 3rd-party data centers so they are mostly operating expenses; Apple’s new $500 billion investment in the USA could signal more capital expenditures and data center investments

On the data center side, we have a hybrid strategy. And so we utilize third parties in addition to the data center investments that we’re making. And as I’ve mentioned in the $500 billion, there are a number of states that we’re expanding in. Some of those are data center investments. And so we do plan on making investments in that area.

Arista Networks (NYSE: ANET)

Arista Networks’ management remains confident of reaching $750 million in back-end AI revenue in 2025 even with the uncertainty surrounding US tariffs; the 1:1 ratio between front-end and back-end AI spending for Arista Networks’ products still remains, but management thinks it’s increasingly hard to parse between front-end and back-end

Our cloud and AI momentum continues as we remain confident of our $750 million front-end AI goal in 2025…

…Just a quick clarification before we go into Q&A. Jayshree meant we were reiterating our back-end goal of $750 million, not front-end AI…

…[Question] Is that 1:1 ratio for the front-end back is still intact in your perspective?

[Answer] On the front-end ratio, yes, we’ve said it’s generally 1:1. It’s getting harder and harder to measure front end and back end. Maybe we’ll look at the full AI cluster differently next year. But I think 1:1 is still a good ratio. It varies. Some of them just build a cluster and don’t worry about the front end and others worry about it entirely holistically. So it does vary, but I think the 1:1 is still a good ratio…

…[Question] You reiterated the $750 million back-end target, but you’ve kind of had this $1.5 billion kind of AI target for 2025. And just wondering, is the capability of that more dependent on kind of the tariffs given kind of some of the front-end spend?

[Answer] Regarding tariffs, I don’t think it will have a material difference on the $750 million number or the $1.5 billion. We got the demand. So unless we have some real trouble shipping it or customers change their mind, I think we’re good with both those targets for the year.

Arista Networks is progressing well with its 4 major AI customers; 1 of the 4 customers has been on NVIDIA’s InfiniBand solution for a long time, so they’ll be small for Arista Networks; 2 of the 4 are heading towards 50,000 GPU deployments by end-2025, maybe even 100,000 GPUs; 3 of the 4 customers are already in production, with the 4th progressing well towards production; management has a lot of visibility from the 4 major AI customers for 2025 and 2026 and it’s looking good; the 4 major AI customers are mostly deploying Arista Networks’ 800-gig switches

We are progressing well in all 4 customers and continue to add smaller ones as well…

…Let me start with the 4 customers. All of them are progressing well. One of them is still new to us. They’ve been in Infiniband for a long time, so they’ll be small. I would say 2 of them are heading towards 50,000 GPU deployments by end of the year, maybe they’ll be at 100 but I can be most certainly sure of 50,000, heading to 100,000. And then the other one is also in production. So I had talked about all 4 going into production. Three are already in production, the fourth one is well underway…

…[Question] If I can go back to the 4 Tier 1s that you’re working with on the AI back end and the progress that you updated on that front. Are these customers now giving you more visibility just given the tariff landscape and that you would need to sort of build inventory for some of the finished codes? And can you just update us how they’re handling the situation on that front? And particularly then, as you think about — I think the investor focus is a lot about sort of 2026 and potential sort of changes in the CapEx landscape from these customers at that point. Are you getting any forward visibility from them? Any sort of early signs for 2026 on these customers?

[Answer] We definitely have all the visibility in the world for this year, and we’re feeling good. We’re getting unofficial visibility because they all know our lead times are tied to some fairly long lead times from our partners and suppliers. So I would say 2026 is looking good. And based on our execution of 2025 and plans we’re putting together, we should have a great year in 2026 as well for AI sector specifically…

…[Question] Do you see the general cadence of hyperscalers deploying 800-gig switch ports this year? I ask because I believe your Etherlink family of switches became generally available in late 2024.

[Answer] I alluded to this earlier in 2024, the majority of our AI trials were on 400 gig at that time. So you’re right to observe that with our Etherlink portfolio really getting introduced in the second half of ’24 that a lot of our 800-gig activity has picked up in 2025, some of which will be reflected in shipments and some of it which will be part of our deferred. So it’s a good observation and an accurate one that this is the year of 800, like last year it was the year of 400.

Arista Networks’ management plans for the company to be the premier and preferred network for NVIDIA’s next-generation GPUs; Arista Networks’ Etherlink portfolio makes it easy to identify and localise performance issues in accelerated compute AI clusters

At the GTC event in March of 2025, we heard all about NVIDIA’s planned GPU road map every 12 to 18 months, and Arista intends to be the premier and preferred scale-out network for all of those GPUs and AI accelerators. Traditional GPUs have collective communication libraries, or CCLs as they’re known, that try to discover the underlying network topology using localization techniques. With this accelerated compute approach, the discrepancies between the discovered topology and the actual one can impact AI job completion times. Arista’s Etherlink portfolio highlights the accelerated networking approach, bringing that single point of network control and visibility as a differentiation. This makes it extremely crisp to identify and localize performance issues, especially as the size of the AI cluster grows to 50,000 and 100,000 XPUs with the Arista AI spine and leaf network designs.

Arista Networks’ campus portfolio provides cost-effective access points for agentic AI applications

Arista’s cognitive campus portfolio features our advanced spine with Power over Ethernet wired leaf capabilities, along with a wide range of cost-effective wireless Wi-Fi 7 indoor and outdoor access points for the newer IoT and agentic applications.

The data center ecosystem is still somewhat new to AI and the suppliers are figuring things out together

But everybody is new to AI. They’ve never really put together a network design for 4-rail or 8-rail, or worked out how it connects into the GPUs and what the NIC [network interface card] attachment is. What are the accessories in terms of cables or optics that connect? So this movement from trials to production causes us to bring a whole ecosystem together for the first time.

Arista Networks’ management thinks that when it comes to AI use-cases, Arista Networks’ products will play a far bigger role than whitebox networking manufacturers, even though whiteboxes will always be around and management is even happy to help customers build networking solutions that encompass both Arista Networks’ products and whiteboxes; Arista Networks was able to help a small AI customer build a network for a cluster of a few hundred GPUs very quickly after the customer struggled to do so with whiteboxes

I’ve always said, that white box is not new. It’s been with us since the beginning of time. In fact, when Arista got started, a couple of our customers had already implemented internally various implementations of white box. So there is a class of customers who will make the investments in engineering and operations to build their own network and manage it. And it’s a very different business model. It operates typically at 10% gross margins. I don’t think you want Arista to go there. And it’s very hardware-centric and doesn’t require the rich software foundation and investments that we’ve made. So first, I’ll start by saying we will always and will continue to coexist with white box. There are times that you’ve noticed this, too, that because Arista builds some very superior hardware, that even if they don’t use our EOS, they like to have our blue box, as I often call it, the Arista hardware that’s engineered much better than any others with a more open OS like Sonic or FBOSS or at least the attributes of running both EOS and an open-source networking system. So I think we view this as a natural part of selection in a customer base where if it’s a simple use case, they’re going to use something cost effective. But if it’s a really complex use case, like the AI spine or roles that require and demand more mission-critical features, Arista always plays a far bigger role in premium, highly scalable, highly valued software and hardware combinations than we do in a stand-alone white box. So we’ll remain coexistent peacefully, and we’re not in any way threatened by it. In fact, I would say we work with our customers to make sure as they’re building permutations and combinations of the white box, that we can work with that and build the right complement to that with our Etherlink portfolio…

…We had a customer, again, not material, who said, “I can’t get these boxes. I can’t make them run. I cannot get an AI network.” And one of my most technical sales leaders said, hey, we got a chance to build an AI cluster here for a few hundred GPUs. We jumped on it. Obviously, that customer is small and had been largely using white boxes and is now about to install an AI leaf and an AI spine, and we had to get it to them before the tariff deadline. So it’s an example of something not material, but of how quickly these decisions get made when you have the right product, right performance, right quality, right mission-critical nature, and you can deal with that traffic pattern better than anyone else can. So it happens. It’s not big because we’ve got so much commitment in a given quarter from a customer, but when it is, we act with a great deal of nimbleness and agility to do that.

Arista Networks’ management is happy to support any kind of advanced packaging technologies – such as co-packaged optics or co-packaged copper – for back-end AI networks in the company’s products; management has yet to see any major adoption of co-packaged optics for back-end AI networks

[Question] I’d love to get your latest views around co-packaged optics. NVIDIA introduced its first CPO switches, GCC, for scale-out. And I was wondering whether that had any impact on your views regarding CPO adoption in back-end AI networks in coming years.

[Answer] It’s had no impact. It’s very early days. I think you’ve seen — Arista doesn’t build optics, but Arista enables optics and we’ve always been at the forefront, especially with Andy Bechtolsheim and his team of talented tech individuals that whether it is pluggable optics with LPO or how we define the OSFP connector for MSAs or 100 gig, 400 gig, it’s something we take seriously. And our views on CPOs, it’s not a new idea. It’s been demonstrated in prototype for, I don’t know, 10 to 20 years. The fundamental lack of adoption to date on CPO, it’s relatively high failure rates and it’s mostly been in the labs. So what are some of the advantages of CPO? Well, it has a linear interface. It has lower power than DSP for long-haul optics. It has a higher channel count. And I think if pluggable optics can achieve some of that in the best of both worlds, then you can overcome that with pluggable optics or even co-packaged copper. So Arista has no religion. We will do co-package copper. We’ll do co-package optics. We will do pluggable optics, but it’s too early to call this a real production-ready product that’s still in very early experiments and trials.

Arista Networks’ management is not seeing any material pull-forward in demand for its products because of US tariffs

[Question] We know tariffs are coming later in the year. Whether the strength you’re seeing is the result of early purchases of customers ahead of tariffs in order to save some dollars?

[Answer] Even if our customers tried to pull it in and get it all by July, we would be unable to supply it. So that would be the first thing. So I’m not seeing pull-ins that are material in any fashion. I am seeing a few customers trying to save $1 here, $1 there to try and ship it before the tariff date, but nothing material. Regarding pull-ins for 4 to 6 quarters, again, our best visibility is near term. And if we saw that kind of behavior, we would see a lot of inventory sitting with our customers, which we don’t. In fact, they are asking us to ship faster and ship more.

2 years ago, Arista Networks’ management saw all its Cloud Titan customers pivot to AI and slow down their cloud spending; management is seeing more balanced spending now, with a more surgical focus on AI

2 years ago, I was very nervous because the entire cloud titans pivoted to AI and slowed down their cloud. Now we see a more balanced spend. And while we can’t measure how much of this cloud and how much of it is AI, if they’re kind of cobbled together, we are seeing less of a pivot, more of a surgical focus on AI and then a continued upgrade of the cloud networks as well. So compared to ’23, I would say the environment is much more balanced between AI and cloud.

Arista Networks’ management sees competitive advantages in the company’s hardware design, development, and operation that are hard to replicate even for its Cloud Titan customers

[Question] What functionality about the blue box actually makes it defensible versus what hyperscalers can kind of self-develop?

[Answer] Let me give you a few attributes of what I call the blue box, and I’m not saying others don’t have it, but Arista has built this as a mission, although we’re known for our software. We’re just as well known for our hardware. When you look at everything from a form factor of a one RU that we build to a chassis, we’ve got a tremendous focus on signal integrity, for example, all the way from layer 1, multilayer PCB boards, a focus on quality, a focus on driving distances, a focus on integrating optics for longer distances, a focus on driving MACsec, et cetera. So that’s a big focus. The second is hardware diagnostics. Internal to the company, we call it Arista boot. We’ve got a dedicated team focused on not just the hardware but the firmware to make it all possible in terms of troubleshooting because when these boards get super complex, you know where the failure is and you’re running at high-speed 200 [indiscernible] 30s. So things are very complex. So the ability to pinpoint and troubleshoot is a big part of what we do. And then there’s additional focus on the mechanical, the power supplies, the cooling, all of which translate to better power characteristics. Along with our partners and chip vendors, there’s a maniacal focus on not just high performance but low power. So some of the best attributes come from our blue boxes, not only for 48 ports, but all the way up to 576 ports of an AI spine or double that if you’re looking for dual capabilities. So well-designed, high-quality hardware is a thing of beauty, but also think of complexity that not everyone can do.

With neo AI cloud customers, Arista Networks’ management is observing that they are very willing to forsake NVIDIA’s GPUs and networking solutions and try other AI accelerators and Ethernet; management thinks that the establishment of the Ultra Ethernet Consortium in 2024 has a role to play in the increasing adoption of Ethernet for AI networking; with the Cloud Titans, management is also observing that they are shifting towards Ethernet; management thinks that the shift from InfiniBand to Ethernet is faster than the shift from NVIDIA’s GPUs to other companies’ GPUs

[Question] There’s a general perception that most of them are buying NVIDIA-defined clusters and networking. So I wonder if you could comment on those trends, their interest in moving past InfiniBand? And also are there opportunities developing with some of these folks to kind of multi-source their AI connectivity to different providers?

[Answer] We’re seeing more adventurous spirit in the neo-cloud customers because they want to try alternatives. So some of them are absolutely trying other AI accelerators like Lisa and AMD and my friends there. Some of them are absolutely looking at Ethernet, not InfiniBand as a scale-out. And that momentum has really shifted in the last year with the Ultra Ethernet Consortium and the spec coming out in May. I just want to give a shout-out to that team and what we have done. So I think Ethernet is a given that there’s an awful lot of legacy of InfiniBand that will obviously sort itself out. And a new class of AI accelerators we are seeing more niche players, more internal developments from the cloud titans, all of which is mandating more Ethernet. So I think between your 2 questions, I would say the progress from InfiniBand to Ethernet is faster, the progress from the ones they know and the high-performance GPU from NVIDIA versus the others is still taking time.

ASML (NASDAQ: ASML)

ASML’s management still sees AI (artificial intelligence) as the key growth driver; ASML will hit the upper end of its 2025 guidance range if AI demand continues to be strong, and the lower end if uncertainty persists among its customers

Consistent with our view from last quarter, the growth in artificial intelligence remains the key driver for growth in our industry. If AI demand continues to be strong and customers are successful in bringing on additional capacity to support the demand, there is a potential opportunity towards the upper end of our range. On the other hand, there is still quite some uncertainty for a number of our customers that can lead to the lower end of our range. 

ASML’s management is still positive on the long-term outlook for ASML, with AI being a driver for growth

Looking longer term, the semiconductor market remains strong with artificial intelligence creating growth in recent quarters, and we see some of the future demand for AI solidifying, which is encouraging.

ASML’s management thinks inference will become a larger part of AI demand going forward

I think there has been a lot of emphasis in the past quarters on the training side of life. I think more and more, which I think is logical, that you also see more and more emphasis being put on the inferencing side of the equation. So I think you will see the inferencing part becoming a larger component of AI demand on a go-forward basis.

ASML’s management is unable to tell what 2027 will look like for AI demand, but the commitment to AI chips in the next 2 years is very strong

You are looking at major investment, investment that has been committed, investment that a lot of companies believe they have to make in order to basically enter this AI race. I think the threshold to change this behavior is pretty high. And this is why — this is what our customers are telling us. And that’s also why we mentioned that, based on those conversations, we still see ’25, ’26 as growth years. That’s largely driven by AI and by that dynamic. Now ’27 starts to be a bit further away, so you’re asking us too much, I think, to be able to answer basically what AI may look like in ’27. But if you look at the next couple of years, so far, the commitment to the AI investment and, therefore, the commitment also to deliver the chips for AI has been very solid.

Coupang (NYSE: CPNG)

Coupang’s management is investing in automation (such as automated picking, packing and sorting) and machine learning to deploy inventory more precisely to improve the customer experience and reduce costs

This quarter, we saw benefits from advances in our automated picking, packing and sorting systems and machine learning utilization that deploys inventory with more precise prediction of demand. This, coupled with our focus on operational excellence, enables us to continually improve the customer experience while also lowering their cost of service.
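To make the inventory idea concrete, here is a minimal sketch of demand-prediction-driven inventory deployment. The trailing-average forecast, the numbers, and the warehouse names are my own illustrative stand-ins, not Coupang’s actual system:

```python
# Illustrative sketch (not Coupang's actual system): deploy limited inventory
# across fulfillment centers in proportion to predicted regional demand.
# A production system would use a learned demand model; a trailing average
# of daily sales stands in for it here.

def forecast_demand(daily_sales: dict[str, list[int]], window: int = 7) -> dict[str, float]:
    """Predict next-day demand per fulfillment center as a trailing average."""
    return {fc: sum(sales[-window:]) / min(window, len(sales))
            for fc, sales in daily_sales.items()}

def allocate_inventory(total_units: int, demand: dict[str, float]) -> dict[str, int]:
    """Split the available units across centers in proportion to predicted demand."""
    total_demand = sum(demand.values()) or 1.0
    return {fc: round(total_units * d / total_demand) for fc, d in demand.items()}

history = {"seoul": [120, 130, 125, 140, 150, 145, 155],
           "busan": [40, 42, 38, 45, 50, 47, 49]}
print(allocate_inventory(1000, forecast_demand(history)))  # {'seoul': 756, 'busan': 244}
```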

Datadog (NASDAQ: DDOG)

Existing customer usage growth in 2025 Q1 was in line with management’s expectations; management is seeing high growth in Datadog’s AI cohort, and stable growth in the other cohorts

Overall, we saw trends for usage growth from existing customers in Q1 that were in line with our expectations. We are seeing high growth in our AI cohort as well as consistent and stable growth in the rest of the business.

Datadog’s management continues to see increasing interest in next-gen AI capabilities and analysis; 4,000 Datadog customers at the end of 2025 Q1 used 1 or more Datadog AI integrations (was 3,500 in 2024 Q4), up 100% year-on-year; the number of companies using end-to-end data observability to manage model performance, security, and quality has more than doubled in the past 6 months; management has observed that data observability has become a big enabler of building AI workloads; the acquisition of Metaplane helps Datadog build towards a comprehensive data observability suite; management thinks data observability will be a big opportunity for Datadog

We continue to see rising customer interest in next-gen AI capabilities and analysis. At the end of Q1, more than 4,000 customers used one or more Datadog AI integrations, and this number has doubled year-over-year. With end-to-end data observability, we are seeing continued growth in customers and usage as they seek to manage end-to-end model performance, security and quality. I’ll call out the fact that the number of companies using end-to-end data observability has more than doubled in the past 6 months…

…[Question] What the vision is about moving into data observability and how consequential an opportunity it could be for Datadog?

[Answer] The field is evolving into a big enabler (or it can be the opposite, if you don’t do it right) for building AI workloads. So in other words, making sure the data is being extracted from the right place, transformed the right way and is being fed into the right AI models on the other end…

…We only had some building blocks for data observability. We built a data streams monitoring product for streaming data that comes out of queues, such as Kafka, for example. We built a data jobs monitoring product that monitors batch jobs and large transformation jobs. We have a database monitoring product that looks at the way you optimize queries and optimize database performance and cost. And by adding data quality and data pipelines, with Metaplane, we have a full suite basically that allows our customers to manage everything from getting the data from their core data storage into all of the products and AI workloads and reports they need to go populate that data. And so we think it’s a big opportunity for us.
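For readers unfamiliar with the term, data observability largely boils down to automated checks on pipelines and tables. Here is a minimal sketch of three canonical checks (freshness, volume, null rate); the thresholds and inputs are hypothetical, and this is not Datadog’s or Metaplane’s actual API:

```python
# Minimal sketch of the checks a data observability tool runs: freshness,
# volume, and null rate. All thresholds and inputs are hypothetical; this is
# not Datadog's or Metaplane's actual API.

from datetime import datetime, timedelta, timezone

def check_freshness(last_loaded_at: datetime, max_age: timedelta) -> bool:
    """Pass if the table has been updated recently enough."""
    return datetime.now(timezone.utc) - last_loaded_at <= max_age

def check_volume(row_count: int, expected: int, tolerance: float = 0.3) -> bool:
    """Pass if today's row count is within tolerance of the recent norm."""
    return abs(row_count - expected) <= tolerance * expected

def check_null_rate(nulls: int, rows: int, max_rate: float = 0.01) -> bool:
    """Pass if a key column's null rate stays within its budget."""
    return (nulls / rows if rows else 1.0) <= max_rate

checks = {
    "freshness": check_freshness(datetime.now(timezone.utc) - timedelta(hours=3),
                                 max_age=timedelta(hours=24)),
    "volume": check_volume(row_count=980_000, expected=1_000_000),
    "null_rate": check_null_rate(nulls=1_200, rows=980_000),
}
failed = [name for name, ok in checks.items() if not ok]
print("all checks passed" if not failed else f"failing checks: {failed}")
```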

Datadog’s management has improved Bits AI, and is using next-gen AI to help solve customer issues quickly and move towards auto remediation

We are adding to Bits AI, with capabilities for customers to take action with workflow automation and App Builder, using next-gen AI to help our customers remediate issues more quickly and move towards auto remediation in the future.

Datadog has made 2 recent acquisitions; Eppo is a feature management and experimentation platform; management sees automated experimentation as an important part of modern application development because of the use of AI in coding; Metaplane is a data observability platform that works well for new enterprise AI workloads; management is seeing more AI-written code in both its customers and the company itself; management thinks that as AI writes more code, more value will come from being able to observe and understand the AI-written code in production environments, which is Datadog’s expertise; the acquisitions of Eppo and Metaplane are to position Datadog for the transition towards a world of AI-written code

We recently announced a couple of acquisitions.

First, we acquired Eppo, a next-generation feature management and experimentation platform. The Eppo platform helps increase the velocity of releases, while also lowering risk by helping customers to release and validate features in a controlled manner. Eppo augments our efforts in product analytics, helping customers improve the variance and tie feature performance to business outcomes. More broadly, we see automated experimentation as a key part of modern application development, with the rapid adoption of agent-generated code, as well as more and more of the application logic itself being implemented with nondeterministic AI models.

Second, we also acquired Metaplane, the data observability platform built for modern data teams. Metaplane helps prevent, detect and resolve data availability and quality issues across the company’s data warehouses and data pipelines. We’ve seen for several years now that better data freshness and quality were critical for applications and business analytics. And we believe that they are becoming key enablers of the creation of new enterprise AI workloads, which is why we intend to integrate the Metaplane capabilities into our end-to-end data observability offerings…

…There is definitely a big transition that is happening right now, like we see the rise of AI written code. We see it across our customers. We also see it inside of Datadog, where we’ve had very rapid adoption of this technology as well…

…The way we see it is that it means that there’s a lot less value in writing the code itself, like everybody can do it pretty quickly, can do a lot of it. You can have the machine do a lot of it, and you complement it with a little bit of your own work. But the real difficulty is in validating that code, making sure that it’s safe, making sure it runs well, that it’s performing and that it does what it’s supposed to do for the business. Also making sure that when 15 different people are changing the code at the same time, all of these different changes come together and work the right way, and you understand the way these different pieces interact with each other. So the way we see it, this moves a lot of the value from writing the code to observing it and understanding it in production environments, which is what we do. So a lot of the investments we’re making right now, including some of the acquisitions we’ve announced, build towards that, making sure that we’re in the right spot.

Datadog signed a 7-figure expansion deal with a leading generative AI company; the generative AI company needs to reduce tool fragmentation; the generative AI company is replacing commercial tools for APM (application performance monitoring) and log management with Datadog, and is expanding to 5 Datadog products

We signed a 7-figure expansion as an annualized contract with a leading next-gen AI company. This customer needs to reduce tool fragmentation to keep on top of its hypergrowth in usage and employee headcount. With this expansion, the customer will use 5 Datadog products and will replace commercial tools for APM and log management.

AI-native customers accounted for 8.5% of Datadog’s ARR in 2025 Q1 (was 6% in 2024 Q4); AI-native customers contributed 6 percentage points to Datadog’s year-on-year growth in 2025 Q1, compared to 2 percentage points in 2024 Q1; management thinks AI-native customers will continue to optimise cloud and observability usage in the future; AI-native contracts that come up for renewal are healthy; Datadog has significant revenue concentration within the AI-native cohort; Datadog has more than 10 AI-native customers that are spending $1 million or more with Datadog; the strong performance of the AI-native cohort in 2025 Q1 is fairly broad-based; Datadog is helping the AI-native customers mostly with inference, and not training; when Datadog sees growth among AI-native customers, that’s growth of AI adoption because the AI-native customers’ workloads are mostly customer-facing

We saw a continued rise in contribution from AI-native customers who represented about 8.5% of Q1 ARR, up from about 6% of ARR last quarter and up from about 3.5% of ARR in the year ago quarter. AI-native customers contributed about 6 points of year-over-year revenue growth in Q1 versus about 5 points last quarter and about 2 points in the year ago quarter. We continue to believe that adoption of AI will benefit Datadog in the long term, but we remain mindful that we may see volatility in our revenue growth on the backdrop of long-term volume growth from this cohort as customers renew with us on different terms and as they may choose to optimize cloud and observability usage…

…[Question] Could you talk about what you’re seeing from some of those AI-native contracts that have already come up for renewal and just how those conversations have been trending?

[Answer] All the contracts that come up for renewal, they are healthy. The trick with the cohort is that it’s growing fast. There’s also a revenue concentration there. We now have our largest customer in the cohort, and they’re growing very fast. And on the flip side of that, we also have a larger number of large customers that are also growing. So we — I think we mentioned more than 10 customers now that are spending $1 million or more with us in that AI-native cohort and that are also growing fast…

…On the AI side, we do have, as I mentioned, one large customer, and they’re contributing more of the new revenue than the others. But we see growth in the rest of the cohort as well. So again, it’s fairly typical…

…For the AI natives, actually, what we help them with mostly is not training. It’s running their applications and their inference workloads that are customer-facing. Because training for the AI natives tends to be largely homegrown, one-off and different between each and every one of them. We expect that as and if most other companies and enterprises do significant training, this will not be the case. This will not be one-off and homegrown. But right now, it is still the AI natives that do most of the training, and they still do it in a way that’s largely homegrown. So when we see growth in the AI-native cohort, that’s growth of AI adoption because that’s growth of customer-facing workloads by and large.
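As a rough check on the arithmetic, a cohort’s contribution to year-on-year growth can be decomposed from its revenue shares. The sketch below plugs in the rounded “about” figures from the call plus Datadog’s roughly 25% reported revenue growth (my assumption, not stated in these excerpts), and treats ARR shares as revenue shares, so it only approximately reconciles with the ~6 points cited:

```python
# Back-of-envelope decomposition of cohort contribution to growth. All inputs
# are the rounded "about" figures from the call, except total_growth, which is
# my assumption; ARR shares are treated as revenue shares, so this is rough.

share_now = 0.085        # AI-native share this quarter
share_year_ago = 0.035   # AI-native share a year ago
total_growth = 0.25      # approximate total year-over-year growth (assumption)

# Contribution in points of last year's base:
# cohort_now - cohort_year_ago = share_now * (1 + total_growth) - share_year_ago
contribution = share_now * (1 + total_growth) - share_year_ago
print(f"AI-native contribution to growth: {contribution:.1%}")  # ~7.1%, vs the ~6 points cited
```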

Datadog’s management sees the trend of cloud migration as being steady; management sees cloud migration being partly driven by customers’ desires to adopt AI, because migrating to the cloud is a prerequisite for AI

[Question] What are the trend lines on the cloud migration side?

[Answer] It’s consistent with what we’ve seen before. It’s also consistent with what you’ve heard from the hyperscalers over the past couple of weeks. So I would say it’s steady, unremarkable. It’s not really trending up nor trending down right now. But we see the same desire from customers to move more into the cloud and to lay the groundwork so they can also adopt AI, because digital transformation and cloud migrations are prerequisites for that.

Datadog’s management thinks there will be more products for Datadog to build as AI workloads shift towards inferencing; management is seeing its LLM Observability product getting increasing usage as customers move AI workloads into production; management wants to build more products across the stack, from closer to the GPU to AI agents.

On the workloads turning more towards inference, there’s definitely more product to build there. We built an LLM Observability product that is getting increasing usage from customers as they move into production. And we think there’s more that we need to build, both down the stack closer to the GPUs and up the stack closer to the agents that are being built on top of these models.
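To give a sense of what “LLM observability” captures, here is a generic instrumentation wrapper that records latency, payload sizes, and errors for each model call. The record shape and function names are hypothetical; this is not Datadog’s actual LLM Observability API:

```python
# Generic sketch of LLM call instrumentation: capture latency, sizes, and
# errors per call. The telemetry record shape is hypothetical, and call_model
# is a stand-in for any model client; this is not Datadog's actual API.

import time

def observed_llm_call(call_model, prompt: str, model: str):
    """Wrap a model call and emit a telemetry record alongside the response."""
    record = {"model": model, "prompt_chars": len(prompt), "error": None}
    start = time.perf_counter()
    try:
        response = call_model(prompt)
        record["response_chars"] = len(response)
        return response, record
    except Exception as exc:
        record["error"] = repr(exc)
        raise
    finally:
        record["latency_ms"] = round((time.perf_counter() - start) * 1000, 1)
        print("telemetry:", record)  # in practice, ship this to your observability backend

# Usage with a dummy model that just echoes in uppercase:
response, _ = observed_llm_call(lambda p: p.upper(), "hello world", model="demo-model")
```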

Datadog’s management is already seeing returns on Datadog’s internal investments in AI in terms of employee productivity; in the long-term, there’s the possibility that Datadog may need lesser headcount because of AI

[Question] Internally, how do you think about AI from an efficiency perspective?

[Answer] For right now, I think we’re seeing the returns in productivity, whether that be salespeople getting more information or R&D. We’re essentially trying to create an environment where we’re encouraging the various departments to use it and learning from it. Long term, there may well be efficiency gains that can be manifested in headcount.

Mastercard (NYSE: MA)

Mastercard’s management sees contactless payments and tokenised transactions as important parts of agentic AI digital commerce; Mastercard has announced Mastercard Agent Pay, which will facilitate safe, frictionless and programmable transactions across AI platforms; Mastercard is working with important AI companies such as Microsoft and OpenAI to deliver agentic payments

Today, 73% of all in-person switched transactions are contactless and approximately 35% of all our switched transactions are tokenized. These technologies will continue to play an important role as we move forward into the next phase of digital commerce, such as Agentic AI. We announced Mastercard Agent Pay to leverage our Agentic tokens as well as franchise rules, fraud and cybersecurity solutions. Combined, these will help partners like Microsoft to facilitate safe, frictionless and programmable transactions across AI platforms. We will also work with companies like OpenAI to deliver smarter, more secure and more personalized agentic payments. The launch of Agent Pay is an important step in redefining commerce in the AI era.
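Tokenisation, which management cites as a foundation for Agent Pay, is simple to illustrate: the merchant (or, in the agentic future, the AI agent) only ever holds a surrogate token, and only the network can map it back to the real card number. The toy vault below is a concept sketch, not Mastercard’s actual design:

```python
# Toy illustration of payment tokenization: merchants and agents hold only a
# surrogate token; the network resolves it at authorization time. Concept
# sketch only, not Mastercard's actual Agent Pay protocol.

import secrets

class TokenVault:
    def __init__(self):
        self._token_to_pan: dict[str, str] = {}

    def tokenize(self, pan: str) -> str:
        """Issue a random surrogate for the real card number (PAN)."""
        token = "tok_" + secrets.token_hex(8)
        self._token_to_pan[token] = pan
        return token

    def authorize(self, token: str, amount_cents: int) -> bool:
        """Only the network's vault can resolve the token; the PAN never reaches the merchant."""
        pan = self._token_to_pan.get(token)
        return pan is not None and amount_cents > 0

vault = TokenVault()
token = vault.tokenize("5555555555554444")  # test card number, not a real PAN
print(token, vault.authorize(token, 1999))  # e.g. tok_9f2c4a1b8d3e5f60 True
```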

Mastercard closed the Recorded Future acquisition in 2024 Q4 (Recorded Future provides AI-powered solutions for real-time visibility into potential threats related to fraud); Recorded Future just unveiled the AI-powered Malware Intelligence; Malware Intelligence enables proactive threat prevention

On the cybersecurity front, Recorded Future just unveiled malware intelligence. It’s a new capability enabling proactive threat prevention for any business using real-time AI-powered intelligence insights.

Mastercard’s management sees AI as being deeply ingrained in Mastercard’s business; Mastercard’s access to an enormous amount of data is an advantage for Mastercard in deploying AI; in 2024, a third of Mastercard’s products in its value-added services and solutions segment was powered by AI

AI is deeply ingrained in our business. We have access to an enormous amount of data, and this uniquely positions us to enhance our AI’s performance, resulting in greater accuracy and reliability. And we’re deploying AI to enable many solutions in market today. In fact, in 2024, AI enabled approximately 1 in 3 of our products within value-added services and solutions.

Meta Platforms (NASDAQ: META)

Meta’s management is focused on 5 opportunities within AI, namely improved advertising, more engaging experiences, business messaging, Meta AI and AI devices; the 5 opportunities are downstream of management’s attempt to build artificial general intelligence and leading AI models and infrastructure in an efficient manner; management thinks the ROI of Meta’s investment in AI will be good even if Meta does not succeed in all the 5 opportunities.

As we continue to increase our investments and focus more of our resources on AI, I thought it would be useful today to lay out the 5 major opportunities that we are focused on. Those are improved advertising, more engaging experiences, business messaging, Meta AI and AI devices. And these are each long-term investments that are downstream from us building general intelligence and leading AI models and infrastructure. Even with our significant investments, we don’t need to succeed in all of these areas to have a good ROI. But if we do, then I think that we will be wildly happy with the investments that we are making…

…We are focused on building full general intelligence. All of the opportunities that I’ve discussed today are downstream of delivering general intelligence and doing so efficiently.

Meta’s management’s goal with the company’s advertising business is for businesses to simply tell Meta their objectives and budget, and for Meta to do all the rest with AI; management thinks that Meta can redefine advertising into an AI agent that delivers measurable business results at scale

Our goal is to make it so that any business can basically tell us what objective they’re trying to achieve like selling something or getting a new customer and how much they’re willing to pay for each result and then we just do the rest. Businesses used to have to generate their own ad creative and define what audiences they wanted to reach, but AI has already made us better at targeting and finding the audiences that will be interested in their products than many businesses are themselves, and that keeps improving. And now AI is generating better creative options for many businesses as well. I think that this is really redefining what advertising is into an AI agent that delivers measurable business results at scale.
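The “state your objective and price, we do the rest” model can be boiled down to a toy delivery rule: buy only the impression opportunities whose expected cost per result beats the advertiser’s stated price. The numbers and the greedy rule below are my own illustration, not Meta’s actual auction:

```python
# Toy sketch of outcome-based ad delivery: the advertiser supplies only a price
# per result; the system screens impression opportunities by expected cost per
# conversion. Illustrative only, not Meta's actual auction or pacing logic.

def select_opportunities(opportunities, max_cost_per_result):
    """Keep impressions whose expected cost per conversion beats the advertiser's cap."""
    chosen = []
    for predicted_conv_rate, impression_cost in opportunities:
        expected_cost_per_result = impression_cost / predicted_conv_rate
        if expected_cost_per_result <= max_cost_per_result:
            chosen.append((predicted_conv_rate, impression_cost))
    return chosen

# (predicted conversion probability, cost of the impression in dollars)
opportunities = [(0.02, 0.10), (0.001, 0.08), (0.05, 0.30)]
print(select_opportunities(opportunities, max_cost_per_result=8.00))
# [(0.02, 0.1), (0.05, 0.3)] -- the middle opportunity implies $80 per result
```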

Meta tested a new advertising recommendation model for Reels in 2025 Q1 called Generative Ads Recommendation Model, or GEM, that has improved conversion rates by 5%; 30% more advertisers are using Meta’s AI creative tools in 2025 Q1; GEM is twice as efficient at improving ad performance for a given amount of data and compute; GEM’s better efficiency helped Meta significantly scale up the amount of compute used for model training; GEM is now being rolled out to additional surfaces across Meta’s apps; the initial test of Advantage+’s streamlined campaign creation flow for sales, app and lead campaigns is encouraging and will be rolled out globally later in 2025; Advantage+ Creative is seeing strong adoption; all eligible advertisers can now automatically adjust the aspect ratio of their existing videos and generate images; management is testing a feature that uses gen AI to place clothing on virtual models; management has seen a 46% lift in incremental conversions in the testing of the incremental attribution feature and will roll out the feature to all advertisers in the coming weeks; improvements in Meta’s advertising ranking and modeling drove conversion growth that outpaced advertising impressions growth in 2025 Q1

In just the last quarter, we are testing a new ads recommendation model for Reels, which has already increased conversion rates by 5%. We’re seeing 30% more advertisers using AI creative tools in the last quarter as well…

…In Q1, we introduced our new Generative Ads Recommendation Model, or GEM, for ads ranking. This model uses a new architecture we developed that is twice as efficient at improving ad performance for a given amount of data and compute. This efficiency gain enabled us to significantly scale up the amount of compute we use for model training with GEM trained on thousands of GPUs, our largest cluster for ads training to date. We began testing the new model for ads recommendations on Facebook Reels earlier this year and have seen up to a 5% increase in ad conversions. We’re now rolling it out to additional surfaces across our apps…

…We’re seeing continued momentum with our Advantage+ suite of AI-powered solutions. We’ve been encouraged by the initial test of our streamlined campaign creation flow for sales, app and lead campaigns, which starts with Advantage+ turned on from the beginning for advertisers. In April, we rolled this out to more advertisers and expect to complete the global rollout later this year. We’re also seeing strong adoption of Advantage+ Creative. This week, we are broadening access of video expansion to Facebook Reels for all eligible advertisers, enabling them to automatically adjust the aspect ratio of their existing videos by generating new pixels in each frame to optimize their ads for full screen surfaces. We also rolled out image generation to all eligible advertisers. And this quarter, we plan to continue testing a new virtual try-on feature that uses gen AI to place clothing on virtual models, helping customers visualize how an item may look and fit…

…We continue to evolve our ads platform to drive results that are optimized for each business’ objectives and the way they measure value. One example of this is our incremental attribution feature, which enables advertisers to optimize for driving incremental conversions or conversions we believe would not have occurred without an ad being shown. We’re seeing strong results in testing so far with advertisers using incremental attribution in tests seeing an average 46% lift in incremental conversions compared to their business-as-usual approach. We expect to make this available to all advertisers in the coming weeks…

…Year-over-year conversion growth remains strong. And in fact, we continue to see conversions grow at a faster rate than ad impressions in Q1, reflecting increased conversion rates. And ads ranking and modeling improvements are a big driver of overall performance gains.
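The incremental attribution feature rests on standard holdout math: compare people who saw ads with a randomised holdout who did not, and count only the difference as incremental. The numbers below are invented to illustrate the measurement concept; they are not Meta’s methodology or results:

```python
# Generic holdout arithmetic behind "incremental conversions". All numbers are
# made up for illustration; this is not Meta's internal methodology.

def incremental_conversions(treated_users, treated_conv, holdout_users, holdout_conv):
    """Conversions beyond what the holdout baseline predicts for the treated group."""
    baseline_rate = holdout_conv / holdout_users
    expected_without_ads = baseline_rate * treated_users
    return treated_conv - expected_without_ads

inc = incremental_conversions(treated_users=100_000, treated_conv=2_300,
                              holdout_users=100_000, holdout_conv=2_000)
print(f"incremental conversions: {inc:.0f}")  # 300
print(f"relative lift: {inc / 2_000:.0%}")    # 15% over the holdout baseline
```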

Improvements in the past 6 months to Meta’s content recommendation systems have driven increases of 7% in time spent on Facebook, 6% on Instagram, and 35% on Threads; video consumption in Facebook and Instagram grew strongly in 2025 Q1 because of improvements to Meta’s content recommendation systems; management sees opportunities for further gains in improving the content recommendation systems in 2025; Meta is making progress on longer-term efforts to improve its content recommendation systems in two areas: (1) developing increasingly efficient recommendation systems by incorporating innovations from LLM model architectures, and (2) integrating LLMs into content recommendation systems to better identify what is interesting to a user; management’s testing of Llama in Threads’ recommendation systems has led to a 4% increase in time spent from launch; management is exploring how Llama can be deployed in recommendation systems for photo and video content, which management expects can improve Meta AI’s personalisation by better understanding users’ interests and preferences through their use of Meta’s apps; management launched a new feed in Instagram in the US in 2025 Q1 showing content a user’s friends have left a note on or liked, and the new feed is producing good results; management has launched the Blend experience in direct messages, which blends a user’s Reels algorithm with friends’; the increases of 7% in time spent on Facebook and 6% on Instagram seen in the last 6 months are on top of the uplift in time spent on Facebook and Instagram that management had already produced in the first 9 months of 2024

In the last 6 months, improvements to our recommendation systems have led to a 7% increase in time spent on Facebook, 6% increase on Instagram and 35% on Threads…

…In the first quarter, we saw strong growth in video consumption across both Facebook and Instagram, particularly in the U.S., where video time spent grew double digits year-over-year. This growth continues to be driven primarily by ongoing enhancements to our recommendation systems, and we see opportunities to deliver further gains this year.

We’re also progressing on longer-term efforts to develop innovative new approaches to recommendations. A big focus of this work will be on developing increasingly efficient recommendation systems so that we can continue scaling up the complexity and compute used to train our models while avoiding diminishing returns. There are promising techniques we’re working on that will incorporate the innovations from LLM model architectures to achieve this. Another area that is showing early promise is integrating LLM technology into our content recommendation systems. For example, we’re finding that LLM’s ability to understand a piece of content more deeply than traditional recommendation systems can help better identify what is interesting to someone about a piece of content, leading to better recommendations.

We began testing using Llama in Threads recommendation systems at the end of last year given the app’s text-based content and have already seen a 4% lift in time spent from the first launch. It remains early here, but a big focus this year will be on exploring how we can deploy this for other content types, including photos and videos. We also expect this to be complementary to Meta AI as it can provide more relevant responses to people’s queries by better understanding their interests and preferences through their interactions across Facebook, Instagram and Threads…

…In Q1, we launched a new experience on Instagram in the U.S. that consists of a feed of content your friends have left a note on or liked, and we’re seeing good results. We also just launched Blend, which is an opt-in experience in direct messages that enables you to blend your Reels algorithm with your friends to spark conversations over each other’s interest…

…We shared on the Q3 2024 call that improvements to our AI-driven feed and video recommendations drove a roughly 8% lift in time spent on Facebook and a 6% lift on Instagram over the first 9 months of last year. Since then, we’ve been able to deliver similar gains in just 6 months’ time with improvements to our AI recommendations delivering 7% and 6% time spent gains on Facebook and Instagram, respectively.
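The core idea of putting LLM-style content understanding into a recommender can be sketched as ranking candidates by the similarity between an embedding of a user’s interests and embeddings of post text. The bag-of-words “embedding” below is a deliberately crude stand-in for a real model, and nothing here reflects Meta’s actual architecture:

```python
# Toy content-understanding recommender: rank posts by cosine similarity
# between a user-interest embedding and post embeddings. embed() is a crude
# stand-in for a real text-embedding model; not Meta's actual system.

import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in embedding: bag-of-words counts (a real system would use a model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

user_interests = embed("trail running shoes marathon training")
posts = ["new marathon training plan for beginners",
         "celebrity gossip roundup this week",
         "best trail running shoes of the year"]
ranked = sorted(posts, key=lambda p: cosine(user_interests, embed(p)), reverse=True)
print(ranked[0])  # "best trail running shoes of the year"
```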

AI is enabling the creation of better content on Meta’s apps; the better content includes AI generating content directly for users and AI helping users produce better content; management thinks that the content created on Meta’s apps will be increasingly interactive over time; management recently launched the stand-alone Edits app that contains an ultra-high resolution, short-form video camera, and generative AI tools to remove backgrounds of video or animate still images; more features on Edits are coming soon.

AI is also enabling the creation of better content as well. Some of this will be helping people produce better content to share themselves. Some of this will be AI generating content directly for people that is personalized for them. Some of this will be in existing formats like photos and videos, and some of it will be increasingly interactive…

…Our feeds started mostly with text and then became mostly photos when we all got mobile phones with cameras and then became mostly video when mobile networks became fast enough to handle that well. We are now in the video era, but I don’t think that this is the end of the line. In the near future, I think that we’re going to have content in our feeds that you can interact with and that it will interact back with you rather than you just watching it…

…Last week, we launched our stand-alone Edits app, which supports the full creative process for video creators from inspiration and creation to performance insights. Edits has an ultra-high resolution, short-form video camera and includes generative AI tools that enable people to remove the background of any video or animate still images with more features coming soon.

Countries like Thailand and Vietnam with low-cost labour actually conduct a lot of business through Meta’s messaging apps, but management thinks this phenomenon is absent in developed economies because of the high cost of labour; management thinks that AI will allow businesses in developed economies to conduct business through Meta’s messaging apps; management thinks that every business in the future will have AI business agents that are easy to set up and can perform customer support and sales; Meta is currently testing AI business agents with small businesses in the USA and a few countries across Meta’s apps; management has launched a new agent management experience to make it easier for businesses to train their AI; management’s vision is for that to be one agent that’s interacting with a consumer regardless of where he/she is engaging with the business AI; feedback from the tests is that the AI business agents are saving businesses a lot of time and helping them determine which conversations to spend more time on

In countries like Thailand and Vietnam, where there is a low cost of labor, we see many businesses conduct commerce through our messaging apps. There’s actually so much business through messaging that those countries are both in our top 10 or 11 by revenue even though they’re ranked in the 30s in global GDP. This phenomenon hasn’t yet spread to developed countries because the cost of labor is too high to make this a profitable model before AI, but AI should solve this. So in the next few years, I expect that just like every business today has an e-mail address, social media account and website, they’ll also have an AI business agent that can do customer support and sales. And they should be able to set that up very easily given all the context that they’ve already put into our business platforms…

…We are currently testing business AIs with a limited set of businesses in the U.S. and a few additional countries on WhatsApp, Messenger and on ads on Facebook and Instagram. We’ve been starting with small business and focusing first on helping them sell their goods and services with business AIs…

…We’ve launched a new agent management experience and dashboard that makes it easier for businesses to train their AI based on existing information on their website or WhatsApp profile or their Instagram and Facebook pages. And we’re starting with the ability for businesses to activate AI in their chats with customers. We are also testing business AIs on Facebook and Instagram ads that you can ask about product and return policies or assist you in making a purchase within our in-app browser…

…No matter where you engage with the business AI, it should be one agent that recalls your history and your preferences. And we’re hearing encouraging feedback, particularly that adopting these AIs is saving the businesses that we’re testing with a lot of time and helping them determine which conversations make sense for them to spend more time on.

Meta AI now has nearly 1 billion monthly actives; management’s focus for Meta AI in 2025 is to establish Meta AI as the leading personal AI for personalization, voice conversations, and entertainment; management thinks people will eventually have an AI to talk to throughout the day on smart-glasses and this AI will be one of the most important and valuable services that has ever been created; management recently released the first Meta AI stand-alone app; the Meta AI stand-alone app is personalised to the user’s behaviour on other Meta apps, and it also has a social feed for discovery on how others are using Meta AI; initial feedback on the Meta AI stand-alone app is good; management expects to focus on scaling and deepening engagement on Meta AI for at least the next year before attempting to monetise; management saw engagement on Meta AI improve when testing Meta AI’s ability to personalize responses by remembering people’s prior queries and their usage of Meta’s apps; management has built personalisation into Meta AI across all of Meta’s apps; the top use cases for Meta AI currently include information gathering, writing assistance, interacting with visual content, and seeking help; WhatsApp has the strongest usage of Meta AI, followed by Facebook; a standalone Meta AI app is important for Meta AI to become the leading personal AI assistant because WhatsApp is currently not the primary messaging app used in the USA; management thinks that people are going to use different AI agents for different things; management thinks having memory of a user will be a differentiator for AI agents

Across our apps, there are now almost 1 billion monthly actives using Meta AI. Our focus for this year is deepening the experience and making Meta AI the leading personal AI with an emphasis on personalization, voice conversations and entertainment. I think that we’re all going to have an AI that we talk to throughout the day, while we’re browsing content on our phones, and eventually, as we’re going through our days with glasses. And I think that this is going to be one of the most important and valuable services that has ever been created.

In addition to building Meta AI into our apps, we just released our first Meta AI stand-alone app. It is personalized. So you can talk to it about interests that you’ve shown while browsing Reels or different content across our apps. And we built a social feed into it. So you can discover entertaining ways that others are using Meta AI. And initial feedback on the app has been good so far.

Over time, I expect the business opportunity for Meta AI to follow our normal product development playbook. First, we build and scale the product. And then once it is at scale, then we focus on revenue. In this case, I think that there will be a large opportunity to show product recommendations or ads as well as a premium service for people who want to unlock more compute for additional functionality or intelligence. But I expect that we’re going to be largely focused on scaling and deepening engagement for at least the next year before we’ll really be ready to start building out the business here…

…Earlier this year, we began testing the ability for Meta AI to better personalize its responses by remembering certain details from people’s prior queries and considering what that person engages with on our apps. We are already seeing this lead to deeper engagement with people we’ve rolled it out to, and it is now built into Meta AI across Facebook, Instagram, Messenger and our new stand-alone Meta AI app in the U.S. and Canada…

…The top use case right now for Meta AI from a query perspective is really around information gathering as people are using it to search for and understand and analyze information followed by social interactions from — ranging from casual chatting to more in-depth discussion or debate. We also see people use it for writing assistance, interacting with visual content, seeking help…

…WhatsApp continues to see the strongest Meta AI usage across our Family of Apps. Most of that WhatsApp engagement is in one-on-one threads, followed by Facebook, which is the second largest driver of Meta AI engagement, where we’re seeing strong engagement from our feed deep dives integration that lets people ask Meta AI questions about the content that’s recommended to them…

…I also think that the stand-alone app is going to be particularly important in the United States because WhatsApp, as Susan said, is the largest surface on which people use Meta AI, which makes sense. If you want to text an AI, having that be closely integrated and a good experience in the messaging app that you use makes a lot of sense. But while we have more than 100 million people using WhatsApp in the United States, we’re clearly not the primary messaging app in the United States at this point. iMessage is. We hope to become the leader over time. But we’re in a different position there than we are in most of the rest of the world on WhatsApp. So I think that the Meta AI app as a stand-alone is going to be particularly important in the United States to establishing leadership as the main personal AI that people use…

…I think that there are going to be a number of different agents that people use, just like people use different apps for different things. I’m not sure that people are going to use multiple agents for the same exact things, but I’d imagine that something that is more focused on kind of enterprise productivity might be different from something that is somewhat more optimized for personal productivity. And that might be somewhat different from something that is optimized for entertainment and social connectivity. So I think there will be different experiences…

…Once an AI starts getting to know you and what you care about in context and can build up memory from the conversations that you’ve had with it over time, I think that will start to become somewhat more of a differentiator.

Meta’s management continues to think of glasses as the ideal form factor for an AI device; management thinks that the 1 billion people in the world today who wear glasses will likely all be wearing smart glasses in the next 5-10 years; management thinks that building the devices people use for Meta’s apps lets the company deliver the best AI and social experiences; sales of the Ray-Ban Meta AI glasses have tripled in the last year and usage of the glasses is high; Meta has new launches of smart glasses lined up for later this year; monthly actives of Ray-Ban Meta AI glasses is up 4x from a year ago, with the number of people using voice commands growing even faster; management has rolled out live translations on Ray-Ban Meta AI glasses to all markets for English, French, Italian and Spanish; management continues to want to scale the Ray-Ban Meta AI glasses to 10 million units or more for its 3rd generation; management intends to run the same monetisation playbook with the Ray-Ban Meta AI glasses as Meta’s other products

Glasses are the ideal form factor for both AI and the metaverse. They enable you to let an AI see what you see, hear what you hear and talk to you throughout the day. And they let you blend the physical and digital worlds together with holograms. More than 1 billion people worldwide wear glasses today, and it seems highly likely that these will become AI glasses over the next 5 to 10 years. Building the devices that people use to experience our services lets us deliver the highest-quality AI and social experiences…

…Ray-Ban Meta AI glasses have tripled in sales in the last year. The people who have them are using them a lot. We’ve got some exciting new launches with our partner, EssilorLuxottica, later this year as well that should expand that category and add some new technological capabilities to the glasses…

…We’re seeing very strong traction with Ray-Ban Meta AI glasses with over 4x as many monthly actives as a year ago. And the number of people using voice commands is growing even faster as people use it to answer questions and control their glasses. This month, we fully rolled out live translations on Ray-Ban Meta AI glasses to all markets for English, French, Italian and Spanish. Now when you are speaking to someone in one of these languages, you’ll hear what they say in your preferred language through the glasses in real time…

…If you look at some of the leading consumer electronics products of other categories, by the time they get to their third generation, they’re often selling 10 million units and scaling from there. And I’m not sure if we’re going to do exactly that, but I think that that’s like the ballpark of the opportunity that we have…

…As a bunch of the products start to hit and grow even bigger than the number that I just said (which is just sort of a near-term milestone), then I think we’ll continue scaling in terms of distribution. And then at some point, just like the other products that we build out, we will feel like we’re at a sufficient scale that we’re going to primarily focus on making sure that we’re monetizing and building an efficient business around it.

Meta released the first few Llama 4 models in April 2025 and more Llama 4 models are on the way, including the massive Llama 4 Behemoth model; management thinks leading-edge AI models are critical for Meta’s business, so they want the company to control its own destiny; by developing its own models, Meta is also able to optimise the model to its infrastructure and use-cases; an example of the optimisation is Llama 4’s 17-billion-parameters-per-expert design, which comes with low latency to suit voice interactions; another example of the optimisation is the models’ industry-leading context window length which helps Meta AI’s personalisation efforts; Llama 4 Behemoth is important for Meta because all the models the company is using internally, and some of the models the company will develop in the future, are distilled from Behemoth

We released the first Llama 4 models earlier this month. They are some of the most intelligent, best multimodal, lowest latency and most efficient models that anyone has built. We have more models on the way, including the massive Llama 4 Behemoth model…

…On the LLM, yes, there’s a lot of progress being made in a lot of different dimensions. And the reason why we want to build this out is — one is that we think it’s important that for kind of how critical this is for our business that we sort of have control of our own destiny and are not depending on another company for something so critical. But two, we want to make sure that we can shape the development to be optimized for our infrastructure and the use cases that we want.

So to that end, Llama 4, the shape of the model with 17 billion parameters per expert was designed specifically for the infrastructure that we have in order to provide the low latency experience to be voice optimized. One of the key things, if you’re having a voice conversation with AI, is it needs to be low latency. So that way, when you’re having a conversation with it, there isn’t a large gap between when you stop speaking and it starts. So everything from the shape of the model to the research that we’re doing to techniques that go into it are kind of fit into that.

Similarly, another thing that we focused on was context window length. And in some of our models, we have really — we’re industry-leading on context window length. And part of the reason why we think that that’s important is because we’re very focused on providing a personalized experience. And there are different ways that you can put personalization context into an LLM, but one of the ways to do it is to include some of that context in the context window. And having a long context window that can incorporate a lot of the background that the person has shared across our apps is one way to do that…

…I think it’s also very important to deliver big models like Behemoth, not because we’re going to end up serving them in production, but because of the technique of distilling from larger models, right? The Llama 4 models that we’ve published so far and the ones that we’re using internally and some of the ones that we’ll build in the future are basically distilled from the Behemoth model in order to get the 90%, 95% of the intelligence of the large model in a form factor that is much lower latency and much more efficient.
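Distillation, the route management describes from Behemoth to the smaller Llama 4 models, has a simple core: train the student to match the teacher’s softened output distribution. Below is a minimal loss computation; real training runs in a framework and typically adds a hard-label term and a temperature-squared scaling:

```python
# Minimal knowledge-distillation loss: KL divergence between the teacher's and
# student's temperature-softened token distributions. Illustrative only; this
# is the generic technique, not Meta's actual training recipe.

import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions (T^2 scaling omitted)."""
    p = softmax(teacher_logits, temperature)  # teacher's softened targets
    q = softmax(student_logits, temperature)  # student's current predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [4.0, 1.0, 0.5]  # big model's logits over a toy 3-token vocabulary
student = [2.5, 1.5, 1.0]  # small model's logits; train it to shrink this loss
print(f"distillation loss: {distillation_loss(teacher, student):.4f}")
```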

Meta’s management is accelerating the buildout of Meta’s AI capacity, leading to higher planned investment for 2025; Meta’s capex growth in 2025 is for both generative AI and core business needs with the majority of overall capex supporting Meta’s core business; management continues to build infrastructure in a flexible way where the company can react to how the AI ecosystem develops in the coming years; management is increasing the efficiency of Meta’s workloads and this has helped the company to achieve strong returns from its core AI initiatives

We are accelerating some of our efforts to bring capacity online more quickly this year as well as some longer-term projects that will give us the flexibility to add capacity in the coming years as well. And that has increased our planned investment for this year…

…Our primary focus remains investing capital back into the business with infrastructure and talent being our top priorities…

…Our CapEx growth this year is going toward both generative AI and core business needs with the majority of overall CapEx supporting the core. We expect the significant infrastructure footprint we are building will not only help us meet the demands of our business in the near term but also provide us an advantage in the quality and scale of AI services we can deliver. We continue to build this capacity in a way that grants us maximum flexibility in how and when we deploy it to ensure we have the agility to react to how the technology and industry develop in the coming years…

…The second way we’re meeting our compute needs is by increasing the efficiency of our workloads. In fact, many of the innovations coming out of our ranking work are focused on increasing the efficiency of our systems. This emphasis on efficiency is helping us deliver consistently strong returns from our core AI initiatives.

Meta’s management sees a number of long-term tailwinds that AI can provide for Meta’s business, including making advertising a larger share of global GDP, and freeing up more time for people to engage in entertainment

Over the coming years, I think that the increased productivity from AI will make advertising a meaningfully larger share of global GDP than it is today…

…Over the long term, as AI unlocks more productivity in the economy, I also expect that people will spend more of their time on entertainment and culture, which will create an even larger opportunity to create more engaging experiences across all of these apps.

Meta’s management still expects to develop an AI coding agent sometime in 2025 that can operate as a mid-level engineer; management expects this AI coding agent to do a substantial part of Meta’s AI research and development in 2026 H2; management is focused on building AI that can run experiments to improve Meta’s recommendation systems

I’d say it’s basically still on track for something around a mid-level engineer kind of starting to become possible sometime this year, scaling into next year. So I’d expect that by the middle to end of next year, AI coding agents are going to be doing a substantial part of AI research and development. So we’re focused on that. Internally, we’re also very focused on building AI agents or systems that can help run different experiments to increase recommendations across our other AI products like the ones that do recommendations across our feeds and things like that.

Microsoft (NASDAQ: MSFT)

Microsoft’s management is seeing accelerating demand across industries for cloud migrations; there are 4 things happening to drive cloud migrations, (1) classic migration, (2) data growth, (3) growth in cloud-native companies’ consumption, and (4) growth in AI consumption, which also requires non-AI consumption 

When it comes to cloud migrations, we saw accelerating demand with customers in every industry, from Abercrombie & Fitch to Coca-Cola and ServiceNow expanding their footprints on Azure…

…[Question] On your comment about accelerating demand for cloud migrations. I’m curious if you could dig in and extrapolate a little more what you’re seeing there.

[Answer] One is, I’ll just say, the classic migration of whether it’s SQL, Windows Server. And so that sort of again got good steady-state progress because the reality is, I think everyone is now, perhaps there’s another sort of kick in the data center migrations just because of the efficiency the cloud provides. So that’s sort of one part.

The second piece is good data growth. You saw some — like Postgres on Azure — I mean, forgetting even SQL server, Postgres on Azure is growing. Cosmos is growing. The analytics stuff I talked about with Fabric. It’s even the others, whether it is Databricks or even Snowflake on Azure are growing. So we feel very good about Fabric growth and our data growth.

Then the cloud-native growth. So this is again before we even get to AI, some of the core compute consumption of cloud-native players is also very healthy. It was healthy throughout the quarter. We project it to grow moving forward as well.

Then the thing to notice is the ratio, and I think we mentioned this multiple times before, if you look underneath even ChatGPT, in fact, that team does a fantastic job of thinking about not only their growth in terms of AI accelerators they need, they use Cosmos DB, they use Postgres. They use core compute and storage. And so there’s even a ratio between any AI workload in terms of AI accelerator to others.

So those are the 4 pockets, I’d say, or 4 different trend lines, which all have a relationship with each other.

Foundry is now used by developers in over 70,000 companies, from enterprises to startups, to design, customize and manage their AI apps and agents; Foundry processed more than 100 trillion tokens in 2025 Q1, up 5x from a year ago; Foundry now has industry-leading model fine-tuning tools; the latest models from AI heavyweights including OpenAI and Meta are available on Foundry; Microsoft’s Phi family of SLMs (small language models) now has over 38 million downloads (20 million downloads in 2024 Q4); Foundry will soon introduce an LLM (large language model) with 1 billion parameters that can run on just CPUs

Foundry is the agent and AI app factory. It’s now used by developers at over 70,000 enterprises and digital natives from Atomicwork to Epic, Fujitsu and Gainsight to H&R Block and LG Electronics to design, customize and manage their AI apps and agents. We processed over 100 trillion tokens this quarter, up 5x year-over-year, including a record 50 trillion tokens last month alone. And 4 months in, over 10,000 organizations have used our new agent service to build, deploy and scale their agents.

This quarter, we also made a new suite of fine-tuning tools available to customers with industry-leading reliability, and we brought the latest models from OpenAI along with new models from Cohere, DeepSeek, Meta, Mistral, Stability to Foundry. And we’ve expanded our Phi family of SLMs with new multimodal and mini models. All-up, Phi has been downloaded 38 million times. And our research teams are taking it one step further with BitNet b1.58, a billion-parameter large language model that can run on just CPUs, coming to the Foundry.
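The reason a model like BitNet b1.58 can run on just CPUs is that its weights take only the values -1, 0, or +1 (log2 3 ≈ 1.58 bits per weight), so matrix multiplication collapses into additions and subtractions. The sketch below follows the absmean quantisation described in the BitNet b1.58 paper, heavily simplified; it is not Microsoft’s production code:

```python
# Simplified sketch of 1.58-bit (ternary) weights: quantize each weight to
# {-1, 0, +1} via absmean scaling, then compute a multiplication-free dot
# product. Follows the idea in the BitNet b1.58 paper, not Microsoft's code.

def ternarize(weights: list[float]) -> tuple[list[int], float]:
    """Scale by the mean absolute weight, then round each weight to -1, 0, or +1."""
    scale = sum(abs(w) for w in weights) / len(weights) or 1.0
    ternary = [max(-1, min(1, round(w / scale))) for w in weights]
    return ternary, scale

def matvec_row(ternary_row: list[int], scale: float, x: list[float]) -> float:
    """Multiplication-free dot product: just add or subtract activations."""
    acc = 0.0
    for w, xi in zip(ternary_row, x):
        if w == 1:
            acc += xi
        elif w == -1:
            acc -= xi
    return acc * scale

row, scale = ternarize([0.8, -0.05, -1.2, 0.4])
print(row, round(matvec_row(row, scale, [1.0, 2.0, 3.0, 4.0]), 3))  # [1, 0, -1, 1] 1.225
```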

With agent mode in VS Code, GitHub Copilot can now iterate on code, recognize errors, and fix them automatically; there are other GitHub Copilot agents that provide coding support to developers; Microsoft is previewing a first-of-its-kind SWE (software engineering) agent that can execute developer tasks; GitHub Copilot now has 15 million users, up 4x from a year ago; GitHub Copilot is used by a wide range of companies; VS Code has more than 50 million monthly active users

We’re evolving GitHub Copilot from pair to peer programmer. With agent mode in VS Code, Copilot can now iterate on code, recognize errors and fix them automatically. This adds to other Copilot agents like Autofix, which helps developers remediate vulnerabilities as well as code review agent, which has already reviewed over 8 million pull requests. And we are previewing a first-of-its-kind SWE-agent capable of asynchronously executing developer tasks. All-up, we now have over 15 million GitHub Copilot users, up over 4x year-over-year. And both digital natives like Twilio and enterprises like Cisco, HPE, Skyscanner and Target continue to choose GitHub Copilot to equip their developers with AI throughout the entire dev life cycle. With Visual Studio and VS Code, we have the world’s most popular editor with over 50 million monthly active users.

Microsoft 365 Copilot is now used by hundreds of thousands of customers, up 3x from a year ago; deal sizes for Microsoft 365 Copilot continue to grow; a record number of customers in 2025 Q1 returned to buy more seats for Microsoft 365 Copilot; new researcher and analyst deep reasoning agents can analyze vast amounts of web and enterprise data on-demand directly within Microsoft 365 Copilot; Microsoft is introducing agents for every role and business process; customers can build their own AI agents with no/low code with Copilot Studio and these agents can handle complex tasks, including taking action across desktop and web apps; 230,000 organisations, including 90% of the Fortune 500, have already used Copilot Studio; customers created more than 1 million custom agents across SharePoint and Copilot Studio, up 130% sequentially

Microsoft 365 Copilot is built to facilitate human agent collaboration, hundreds of thousands of customers across geographies and industries now use Copilot, up 3x year-over-year. Our overall deal size continues to grow. In this quarter, we saw a record number of customers returning to buy more seats. And we’re going further. Just last week, we announced a major update, bringing together agents, notebooks, search and create into a new scaffolding for work. Our new researcher and analyst deep reasoning agents analyze vast amounts of web and enterprise data to deliver highly skilled expertise on demand directly within Copilot…

…We are introducing agents for every role and business process. Our sales agent turns contacts into qualified leads and with sales chat reps can quickly get up to speed on new accounts. And our customer service agent is deflecting customer inquiries and helping service reps resolve issues faster.

With Copilot Studio, customers can extend Copilot and build their own agents with no code, low code. More than 230,000 organizations, including 90% of the Fortune 500 have already used Copilot Studio. With deep reasoning and agent flows in Copilot Studio, customers can build agents that perform more complex tasks and also handle deterministic scenarios like document processing and financial approvals. And they can now build Computer Use Agents that take action on the UI across desktop and web apps. And with just a click, they can turn any SharePoint site into an agent, too. This quarter alone, customers created over 1 million custom agents across SharePoint and Copilot Studio, up 130% quarter-over-quarter.

Azure grew revenue by 33% in 2025 Q1 (was 31% in 2024 Q4), with 16 points of growth from AI services (was 13 points in 2024 Q4); management brought capacity online for Azure AI services faster than expected; Azure’s non-AI business saw accelerated growth in its Enterprise customer segment as well as some improvement in its scale motions; management thinks the real outperformer within Azure in 2025 Q1 is the non-AI business; the strength in the AI business in 2025 Q1 came because Microsoft was able to match supply and demand somewhat, and also deliver supply early to some customers; management thinks it’s getting harder to separate an AI workload from a non-AI workload

In Azure and other cloud services, revenue grew 33% and 35% in constant currency, including 16 points from AI services. Focused execution drove non-AI services results, where we saw accelerated growth in our Enterprise customer segment as well as some improvement in our scale motions. And in Azure AI services, we brought capacity online faster than expected…

…The real outperformance in Azure this quarter was in our non-AI business. So then to talk about the AI business, really, what was better was precisely what we said. We talked about this. We knew in Q3 that we had really matched supply and demand pretty carefully and so didn’t expect to do much better than we had guided to on the AI side. We’ve been quite consistent on that. So the only real upside we saw on the AI side of the business was that we were able to deliver supply early to a number of customers…

…[Question] You mentioned that the upside on Azure came from the non-AI services this time around. I was wondering if you could just talk a little bit more about that.

[Answer] In general, we saw better-than-expected performance across our segments, but we saw acceleration in our largest customers. We call that the Enterprise segment in general. And then in what we talked about of our scale motions, where we had some challenges in Q2, things were a little better. And we still have some work to do in our scale motions, and we’re encouraged by our progress. We’re excited to stay focused on that as, of course, we work through the final quarter of our fiscal year…

…It’s getting harder and harder to separate what an AI workload is from a non-AI workload.

Around half of Microsoft’s cloud and AI-related capex in 2025 Q1 (FY2025 Q3) is for long-lived assets that will support monetisation over the next 15 years and more, while the other half is for CPUs and GPUs; management expects Microsoft’s capex in 2025 Q2 (FY2025 Q4) to increase sequentially, but the guidance for total capex for FY2025 H2 is unchanged from previous guidance (previously, the expectation was for capex in 2025 Q1 and 2025 Q2 to be at similar levels as 2024 Q4 (FY2025 Q2)); FY2026’s capex is still expected to grow at a lower rate than in FY2025; the mix of spend will shift toward short-lived assets in FY2026; demand for Azure’s AI services is growing faster than capacity is being brought online and management expects to have some AI capacity constraints beyond June 2025 (or FY2025 Q4); management’s goal with Microsoft’s data center investments is to be positioned for the workload growth of the future; management thinks pretraining plus test-time compute is a big change in terms of model-training workloads; Microsoft is short of power in fulfilling its data center growth plans; Microsoft’s data center builds have very long lead-times; in Microsoft’s 2024 Q4 (FY2025 Q1) earnings call, management expected Azure to no longer be capacity-constrained by the end of 2025 Q2 (FY2025 Q4) but demand was stronger than expected in 2025 Q1 (FY2025 Q3); management still thinks they can get better and better capital efficiency from the cloud and AI capex; Azure’s margin on the AI business now is far better than what the margin was when the cloud transition was at a similar stage

Roughly half of our cloud and AI-related spend was on long-lived assets that will support monetization over the next 15 years and beyond. The remaining cloud and AI spend was primarily for servers, both CPUs and GPUs, to serve customers based on demand signals, including our customer contracted backlog of $315 billion…

…We expect Q4 capital expenditures to increase on a sequential basis. H2 CapEx in total remains unchanged from our January H2 guidance. As a reminder, there can be quarterly spend variability from cloud infrastructure build-outs and the timing of delivery of finance leases…

…Our earlier comments on FY ’26 capital expenditures remain unchanged. We expect CapEx to grow. It will grow at a lower rate than FY ’25 and will include a greater mix of short-lived assets, which are more directly correlated to revenue than long-lived assets…

… In our AI services, while we continue to bring data center capacity online as planned, demand is growing a bit faster. Therefore, we now expect to have some AI capacity constraints beyond June…

…the key thing for us is to have our builds and lease be positioned for what is the workload growth of the future, right? So that’s what you have to [ goal ] seek to. So there’s a demand part to it, there is the shape of the workload part to it, and there is a location part to it. So you don’t want to be upside down on having one big data center in one region when you have a global demand footprint. You don’t want to be upside down when the shape of demand changes because, after all, with essentially pretraining plus test-time compute, that’s a big change in terms of how you think about even what is training, right, forget inferencing…

…We will be short power. And so therefore — but it’s not a blanket statement. I need power in specific places so that we can either lease or build at the pace at which we want…

…From land to build to build-outs can be lead times of 5 to 7 years, 2 to 3 years. So we’re constantly in a balancing position as we watch demand curves…

…I did talk about in my comments, we had hoped to be in balance by the end of Q4. We did see some increased demand as you saw through the quarter. So we are going to be a little short still, say, a little tight as we exit the year…

…[Question] You’ve said in the past that you can attain better and better capital efficiency with the cloud business and probably cloud and AI business. Where do you stand today?

[Answer] The way, of course, you’ve seen that historically is right when we went through the prior cloud transitions, you see CapEx accelerate, you build out data center footprint… You slowly fill GPU capacity. And over time, you see software efficiencies and hardware efficiencies build on themselves. And you saw that process for us for goodness now quite a long time. And what Satya’s talking about is how quickly that’s happening on the AI side of the business and you add to that model diversity. So think about the same levers plus model efficiency; those compound. Now the one thing that’s a little different this time is just the pace. And so when you’re seeing that happen, it’s pace in terms of efficiency, but also pace in terms of the build-out. So it can mask some of the progress… Our margins on the AI side of the business are far better than they were at this point when we went through the server-to-cloud transition…

…I think the way to think about this is you can ask the question, what’s the difference between a hosting business and a hyperscale business? It’s software. That’s, I think, the gist of it. Yes, for sure, it’s a capital-intensive business, but capital efficiency comes from that system-wide software optimization. And that’s what makes the hyperscale business attractive and that’s what we want to just keep executing super well on.

Microsoft’s management sees Azure as Microsoft’s largest business; management thinks that the next platform shift in technology, which is AI, is built on the last major platform, which was for cloud computing, so this benefits Microsoft

There’s nothing certain for sure in the future, except for one thing, which is our largest business is our infrastructure business. And the good news here is the next big platform shift builds on that. So it’s not a complete rebuild, having gone through all these platform shifts where you have to come out on the other side with a full rebuild. The good news here is that we have a good business in Azure that continues to grow and the new platform depends on that.

It’s possible that software optimizations with AI model development and deployment could lead to even longer useful lives for GPUs, but management wants to observe this for longer

[Question] Could we start to consider the possibility that software enhancements might extend the useful life assumption that you’re using for GPUs?

[Answer] In terms of thinking about the depreciable life of an asset, we like to have a long history before we make any of those changes. So we’re focused on getting every bit of useful life we can, of course, out of assets. But to Satya’s point, that tends to be a software question more than a hardware one.

Netflix (NASDAQ: NFLX)

Netflix’s content talent are already using AI tools to improve the content production process; management thinks AI tools can enable lower-budget projects to access top-grade VFX; Rodrigo Prieto is directing his first feature film with Netflix in 2025, Pedro Páramo, and he’s able to use AI tools for de-aging VFX at a much lower cost than The Irishman, a film Prieto worked on 5 years ago; the entire budget for Pedro Páramo is similar to the cost of VFX alone for The Irishman; management’s focus with AI is to find ways for AI to improve the member and creator experience

So our talent today is using AI tools to do set references or previs, VFX sequence prep, shot planning, all kinds of things today that kind of make the process better. Traditionally, only big budget projects would have access to things like advanced visual effects such as de-aging. So today, you can use these AI-powered tools to enable smaller budget projects to have access to big VFX on screen.

A recent example, I think, is really exciting. Rodrigo Prieto was the DP on The Irishman just 5 years ago. And if you remember that movie, we were using very cutting edge, very expensive de-aging technology that still had massive limitations, still creating a bunch of complexity on set for the actors. It was a giant leap forward for sure, but nowhere near what we needed for that film. So this year, just 5 years later, Rodrigo is directing his first feature film for us, Pedro Páramo in Mexico. Using AI-powered tools he was able to deliver this de-aging VFX to the screen for a fraction of what it cost on The Irishman. In fact, the entire budget of the film was about the VFX cost on The Irishman…

…So our focus is simple, find ways for AI to improve the member and the creator experience.

Netflix’s management is building interactive search into Netflix which is based on generative AI

We’re also building out like new capabilities, an example would be interactive search. That’s based on generative technologies. We expect that will improve that aspect of discovery for members.

Paycom Software (NYSE: PAYC)

Paycom’s GONE is the industry’s first fully automated time-off solution, utilising AI, that automates all time-off requests; prior to GONE, 10% of an organisation’s labour cost was unmanaged; GONE can generate ROI of up to 800%, according to Forrester; GONE helped Paycom be named by Fast Company as one of the world’s most innovative companies

Our award-winning solution, GONE, is a perfect example of how Paycom simplifies tasks through automation and AI. GONE is the industry’s first fully automated time-off solution that decisions all time-off requests based on customizable guidelines set by the company’s time-off rules. Before GONE, 10% of an organization’s labor cost went substantially unmanaged, creating scheduling errors, increased cost from overpayments, staffing shortages and employee uncertainty over pending time-off requests. According to a Forrester study, GONE’s automation delivers an ROI of up to 800% for clients. GONE continues to receive recognition. Most recently, Fast Company magazine named Paycom one of the world’s most innovative companies for a second time. This honor specifically recognized GONE and is a testament to how Paycom is shaping our industry by setting new standards for automation across the globe.

PayPal (NASDAQ: PYPL)

PayPal’s management is leaning into agentic commerce; PayPal recently launched the payments industry’s first remote MCP (Model Context Protocol) server to enable AI agent frameworks to integrate with PayPal APIs; the introduction of the MCP allows any business to create an agentic commerce experience; all major AI players are involved with PayPal’s annual Developer Days to engage PayPal’s developer community

At Investor Day, I told you we were leaning into agentic commerce…

…Just a few weeks ago, we launched the industry’s first remote MCP server and enabled the leading AI agent frameworks to seamlessly integrate with PayPal APIs. Now any business can create agentic experiences that allow customers to pay, track shipments, manage invoices and more, all powered by PayPal and all within an AI client. As we speak, developers are gathering in our San Jose headquarters for our annual Developer Days. Every major player in AI is represented, providing demos and engaging with our developer community.
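
For technically-minded readers, MCP (Model Context Protocol) is an open protocol built on JSON-RPC 2.0, which is what lets any AI agent framework discover and call a remote server’s tools with ordinary JSON requests. Below is a minimal, illustrative Python sketch of that wire-level flow; the endpoint URL and the create_invoice tool name are my own hypothetical placeholders rather than PayPal’s published API, and a real client would normally use an MCP SDK that also handles sessions and notifications.

```python
# A minimal, illustrative sketch of the Model Context Protocol (MCP) wire
# format: MCP is JSON-RPC 2.0, so an AI agent framework discovers and calls
# a remote server's tools with plain JSON-RPC requests. The endpoint URL and
# tool name below are hypothetical placeholders, not PayPal's published API.
import requests

ENDPOINT = "https://example.com/mcp"  # hypothetical remote MCP server

def rpc(method: str, params: dict, request_id: int) -> dict:
    """Send one JSON-RPC 2.0 request and return the parsed result."""
    payload = {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}
    response = requests.post(ENDPOINT, json=payload, timeout=30)
    response.raise_for_status()
    return response.json().get("result", {})

# 1. Handshake: the client announces the protocol version it speaks.
rpc("initialize", {"protocolVersion": "2024-11-05", "capabilities": {},
                   "clientInfo": {"name": "demo-agent", "version": "0.1"}}, 1)

# 2. Discovery: the agent framework asks the server what tools it exposes.
tools = rpc("tools/list", {}, 2)

# 3. Invocation: the agent calls a tool by name with structured arguments.
#    "create_invoice" is a made-up tool name for illustration only.
result = rpc("tools/call", {"name": "create_invoice",
                            "arguments": {"amount": "25.00", "currency": "USD"}}, 3)
print(tools, result)
```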

Shopify (NASDAQ: SHOP)

Shopify’s management recently launched TariffGuide.ai, an AI-powered tool that provides duty rates based on just a product description and the country of origin, helping merchants source the right products in minutes

And just this past week, we launched TariffGuide.ai. This AI-driven tool provides duty rates based on just a product description and the country of origin. Sourcing the right products from the right country can mean the difference between a 0% and a 15% duty rate or higher, and TariffGuide.ai allows merchants to do this in minutes, not days.
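
To make the idea concrete, here is a toy sketch of the kind of lookup such a tool performs conceptually: classify the product description, then fetch the duty rate for that classification and country of origin. The classifications, keyword rules and rates below are invented placeholders, not real tariff data, and the actual product presumably uses AI models rather than keyword matching.

```python
# Toy sketch of a duty-rate lookup: map a product description to a tariff
# classification, then look up the duty rate by (classification, origin).
# All classifications, keywords and rates are invented placeholders.
DUTY_RATES = {  # (classification, country_of_origin) -> duty rate
    ("footwear", "Vietnam"): 0.00,
    ("footwear", "China"): 0.15,
}

KEYWORDS = {"sneaker": "footwear", "boot": "footwear"}

def duty_rate(description: str, origin: str) -> float:
    """Return the duty rate for a product description and country of origin."""
    for word, classification in KEYWORDS.items():
        if word in description.lower():
            return DUTY_RATES.get((classification, origin), 0.0)
    raise ValueError("could not classify product")

# Same product, different origin: the rate swings from 0% to 15%.
print(duty_rate("canvas sneaker", "Vietnam"))  # 0.0
print(duty_rate("canvas sneaker", "China"))    # 0.15
```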

Shopify CEO Tobi Lutke penned a memo recently on his vision on how Shopify should be working with AI; AI is becoming 2nd nature to how Shopify’s employees work, where employees use AI reflexively; before any team requests for additional headcount, they need to first assess if AI can meet their goals; Shopify has built a dozen MCP (model context protocol) servers in the last few weeks to enable anyone in Shopify to ask questions and find resources more easily; management sees AI being a cornerstone of how Shopify delivers value; management is investing more in AI, but the increased investment is not a driver for the lower gross margin in Shopify’s Subscription Solutions segment in 2025 Q1; management does not expect the Subscription Solutions segment’s gross margin to change much in the near term; Shopify has shown strong operating leverage partly because of its growing internal use of AI

AI is at the core of how we operate and is transforming our work processes. For those who have not seen it, I encourage you to check out Tobi’s recent company-wide email on AI that has now been shared publicly. At Shopify, we take AI seriously. In fact, it’s becoming second nature to how we work. By fostering a culture of reflexive AI usage, our teams default to using AI first, reflexive being the key term here. This also means that before requesting additional headcount or resources, teams are required to start with assessing how they can meet their goals using AI first. This approach is sparking some really fascinating explorations and discussions around the company, challenging the way we think, the way we operate, and pushing us to look ahead as we redefine our decision-making processes. In the past couple of weeks, we built a dozen MCP servers that make Shopify’s work legible and accessible. And now anyone within Shopify can ask questions, find resources, and leverage those tools for greater efficiency. This reflexive use of AI goes well beyond internal improvements. It supercharges our team’s capabilities and drives operational efficiencies, keeping us agile. And as we continue to innovate, AI will remain a cornerstone of how we deliver value across the board…

…Gross profit for Subscription Solutions grew 19%, slightly less than the 21% revenue growth for Subscription Solutions. The lower rate was driven primarily by higher cloud and infrastructure hosting costs needed to support higher volumes and geographic expansion. Although we are investing more in AI, it is not a significant factor in this increase. Over the past 5 years, the gross margin for Subscription Solutions has centered around 80%, plus or minus a couple of hundred basis points in any given quarter, and we do not anticipate that trend changing in the near term…

…Our continued discipline on head count across all 3 of R&D, sales and marketing and G&A continued to yield strong operating leverage, all while helping us move even faster on product development aided by our increasing use of AI.

Shopify’s management rearchitected the AI engine of Sidekick, Shopify’s AI merchant assistant, in 2025 Q1; monthly average users of Sidekick has more than doubled since the start of 2025; early results of Sidekick are really strong for both large and small merchants

In Q1, key developments for Sidekick included a complete rearchitecture of the AI engine for deeper reasoning capabilities, enhancing processing of larger business datasets and accessibility in all supported languages, allowing every Shopify merchant to use Sidekick in their preferred language. And these changes, well, they’re working. In fact, our monthly average users of Sidekick continue to climb more than doubling since the start of 2025. Now this is still really early days, but the progress we are making is already yielding some really strong results for merchants, both large and small. 

Shopify acquired Vantage Discovery in 2025 Q1; Vantage Discovery works on AI-powered, multi-vector search; management thinks the acquisition will improve the overall consumer search experience delivered by Shopify’s merchants

In March, we closed the acquisition of Vantage Discovery, which helps accelerate the development of AI-powered, multi-vector search across our search, APIs, shop and storefront search offerings. This acquisition is one piece of a broader strategy to ensure that our merchants are able to continue meeting buyers regardless of where they’re shopping or discovering great products…

…The Vantage team coming in who are rock stars in AI are going to help take our search abilities to the next level.
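
“Multi-vector search” generally means that each item is represented by several embeddings capturing different aspects (say, style and material), and a query is scored against all of them at once. Here is a toy illustration with made-up vectors and weights; it is a generic sketch of the technique, not anything specific to Vantage Discovery’s implementation.

```python
# Toy sketch of multi-vector search: instead of one embedding per product,
# each product keeps several named vectors (e.g. style, material) and a
# query is scored against all of them with a weighted sum. All vectors and
# weights here are made up for illustration.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Each product: a dict of named embedding vectors (normally from an ML model).
catalog = {
    "leather boot":   {"style": np.array([0.9, 0.1]), "material": np.array([0.8, 0.3])},
    "canvas sneaker": {"style": np.array([0.2, 0.9]), "material": np.array([0.1, 0.9])},
}
weights = {"style": 0.7, "material": 0.3}  # how much each aspect matters

query = {"style": np.array([0.85, 0.2]), "material": np.array([0.7, 0.4])}

scores = {
    name: sum(weights[k] * cosine(query[k], vecs[k]) for k in weights)
    for name, vecs in catalog.items()
}
print(max(scores, key=scores.get))  # -> "leather boot"
```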

Shopify’s management is seeing more and more commerce searches starting away from a search engine; Shopify is already working with AI chatbot providers on AI shopping; management thinks that AI shopping is a huge opportunity; management thinks AI agents will be a great opportunity for Shopify too

One of the things we think about is that wherever commerce is taking place, Shopify will be there. And obviously, one of the things we are seeing is that more and more searches are starting on places beyond just somebody’s search engine. That’s a huge opportunity whereby more consumers are going to be searching for great products…

…We’ve talked about some of the partnerships in the past. You’ve seen what we’ve done with Perplexity and OpenAI. We will continue doing that. We’re not going to front run our product road map when it comes to anything, frankly. But we do think though that AI shopping, in particular, is a huge opportunity…

…[Question] How does Shopify view the emergence of AI agents in terms of do you guys see this as an opportunity or more of a threat because, on one hand, they could facilitate direct checkout with their own platforms. On the other hand, this may also unlock some new sales channel for Shopify merchants, very similar to sort of what happened with social media commerce

[Answer] We think it’s a great opportunity. Look, the more channels that exist in the world, the more complexity it is for merchants and brands, that’s where the value of Shopify really shines. So if there’s a new surface area, whether it’s through AI agents or through just simply LLMs and AI wrappers, that consumer goes to, to look for a new pair of sneakers or a new cosmetic or a piece of furniture, they want to have access to the most interesting products for the most important brands, and those are all on Shopify. So for us, we think that all of these new areas where commerce is happening is a great thing. It allows Shopify to increase its value.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s management continues to expect AI accelerators revenue to double in 2025; management has factored the bans on sales of US chips to China into TSMC’s 2025 outlook; AI-related demand outside of China appears to have become even stronger over the last 3 months

We reaffirm our revenue from AI accelerators to double in 2025. The AI accelerators we define as AI GPU, AI ASIC and HBM controllers for AI training and inference in the data center. Based on our customers’ strong demand, we are also working hard to double our CoWoS capacity in 2025 to support their needs…

…[Question] The geopolitical risk, macro concerns, is one of the major uncertainties nowadays. Last 2 days, we have like H20 being banned in China, blah, blah, blah. So how does that impact TSMC’s focus and production planning, right? Do we have enough other customers and demand to keep our advanced node capacity fully utilized? Or how does that change our long-term production planning moving forward?

[Answer] Of course, we do not comment on specific customers or product, but let me assure you that we have taken this into consideration when providing our full year’s growth outlook. Did I answer the question?…

…[Question] AI is still expected to double this year despite the U.S. ban on AI GPUs into China. And I guess, China was a meaningful portion of accelerator shipments, well over 10% of volumes. So factoring this in, it would imply your AI outlook this year, still doubling, would mean that the AI orders have improved meaningfully outside of China in the last sort of 3 months. Is that how we should interpret your comment about you still expect the business to double?

[Answer] 3 months ago, we are — we just cannot supply enough wafer to our customer. And now it’s a little bit balanced, but still, the demand is very strong. And you are right, other than China, the demand is still very strong, especially in U.S.

TSMC’s management has a disciplined approach when building capacity and management recognises how important the discipline is given the high forecasted demand for AI-related chips

At TSMC, a higher level of capital expenditures is always correlated with higher growth opportunities in the following years. We reiterate our 2025 capital budget is expected to be between USD 38 billion and USD 42 billion as we continue to invest to support customers’ growth. About 70% of the capital budget will be allocated for advanced process technologies. About 10% to 20% will be spent for specialty technologies and about 10% to 20% will be spent for advanced packaging, testing, mask making and others. Our 2025 CapEx also includes a small amount related to our recently announced additional $100 billion investment plan to expand our capacity in Arizona…

…To address the structural increase in the long-term market demand profile, TSMC employed a disciplined and robust capacity planning system. This is especially important when we have such high forecasted demand from AI-related business. Externally, we work closely with our customers and our customers’ customers to plan our capacity. Internally, our planning system involves multiple teams across several functions to assess and evaluate the market demand from both a top-down and bottom-up approach to determine the appropriate capacity build.

TSMC’s management expects the Foundry 2.0 industry to grow 10% year-on-year in 2025, driven by AI-related demand and mild recovery in other end markets; management expects TSMC to outperform the Foundry 2.0 industry in 2025

Looking at the full year of 2025, we expect Foundry 2.0 industry growth to be supported by robust AI-related demand and a mild recovery in other end market segments. In January, we had forecasted the Foundry 2.0 industry to grow 10% year-over-year in 2025, which is consistent with IDC’s forecast of 11% year-over-year growth for Foundry 2.0…

…We are confident TSMC can continue to outperform the Foundry 2.0 industry growth in 2025.

TSMC’s management thinks impact from recent AI models, including DeepSeek, will lower the barrier to future long-term AI development; TSMC’s management continues to expect mid-40% revenue CAGR from AI accelerators in the 5-years starting from 2024

Recent developments are also positive to AI’s long-term demand outlook. In our assessment, the impact from recent AI models, including DeepSeek, will drive greater efficiency and help lower the barrier to future AI development. This will lead to wider usage and greater adoption of AI models, which all require use of leading-edge silicon. These developments only serve to strengthen our conviction in the long-term growth opportunities from the industry megatrend of 5G, AI and HPC…

…Based on our planning framework, we are confident that our revenue growth from AI accelerators will approach a mid-40s percentage CAGR for the next 5 years period starting from 2024.

TSMC’s 2nd fab in Arizona will utilise N3 process technology and is already complete and management wants to speed up volume production schedule to meet AI-related demand

Our first fab in Arizona has already successfully entered high-volume production in 4Q ’24, utilizing N4 process technology with a yield comparable to our fabs in Taiwan. The construction of our second fab, which will utilize the 3-nanometer process technology, is already complete and we are working on speeding up the volume production schedule based on the strong AI-related demand from our customers. Our third and fourth fabs will utilize N2 and A16 process technologies and, with the expectation of receiving all the necessary permits, are scheduled to begin construction later this year. Our fifth and sixth fabs will use even more advanced technologies. The construction and ramp schedule for these fabs will be based on our customers’ demand.

TSMC’s management believes its A16 technology has a best-in-class backside power delivery solution that is also the first in the industry; A16 is best suited for specific HPC (high-performance computing) products, which means it is best suited for AI-related workloads; A16 is scheduled for volume production in 2026 H2

We also introduced A16 featuring Super Power Rail, or SPR, as a separate offering. Compared with N2P, A16 provides a further 8% to 10% speed improvement at the same power or 15% to 20% power improvement at the same speed and an additional 7% to 10% chip density gain. A16 is best suited for specific HPC products with complex signal routes and dense power delivery networks. Volume production is scheduled for second half 2026.

Tesla (NASDAQ: TSLA)

Tesla’s management continues to expect fully autonomous Tesla rides in Austin, Texas in June 2025; management will sell full autonomy software for Model Y in Austin; management now demarcates CyberCab as a separate product, and all of the other models (S, 3, X, Y) that are compatible with autonomous software as being robotaxis; management reiterates that once Tesla can solve for autonomy in 1 city, it can very quickly scale because Tesla’s autonomous solution is a general solution, not a city-specific solution; Tesla’s autonomous solution involves AI and a specific Tesla-designed AI chip, as opposed to expensive sensors and high-precision maps; the fully autonomous Teslas in June 2025 in Austin will be Model Ys; management expects full autonomy in Tesla’s fleet to ramp up very quickly; management is confident that Tesla will have large-scale autonomy by 2026 H2, meaning millions of fully autonomous Tesla vehicles by 2026 H2; even with the introduction of full autonomy, management thinks there will be some localised parameters – effectively a mixture of experts model – set for safety; management thinks Tesla’s autonomous solution can scale well because when the FSD (Full Self Driving) software was deployed in China, it used very minimal China-specific data and yet could work well in China; validation of Tesla’s autonomous solution will be important in determining its rate of acceptance; there are now convoys of Teslas in Austin running autonomously in testing in order to compress Tesla’s AI’s learning curve; a consumer in China used FSD on a narrow mountain dirt road; management expects FSD unsupervised to be available for personal use by end of 2025; Musk thinks the first Model Y to drive itself from factory to customer will happen later in 2025; newly-manufactured Model Ys are already driving themselves around in Tesla factories

We expect to have — be selling fully autonomous rides in June in Austin as we’ve been saying for now several months. So that’s continued…

…Unsupervised autonomy will first be sold for the Model Y in Austin, and then actually, should parse out the term for robotic taxi or robotaxi and just generally like what’s the Cybercab because we’ve got a product called the Cybercab. And then any Tesla, which could be an S, 3, X or Y that is autonomous is a robotic taxi or a robotaxi. It’s very confusing. So the vast majority of the Tesla fleet that we’ve made is capable of being a robotaxi or a robotic taxi…

…Once we can make the system work where you can have paid rides, fully autonomously with no one in the car in 1 city, that is a very scalable thing for us to go broadly within whatever jurisdiction allows us to operate. So because what we’re solving for is a general solution to autonomy, not a city-specific solution for autonomy, once we make it work in a few cities, we can basically make it work in all cities in that legal jurisdiction. So once we can make it work in a few cities in America, we can make it work anywhere in America. Once we can make it work in a few cities in China, we can make it work anywhere in China, likewise in Europe, limited only by regulatory approvals. So this is the advantage of having a generalized solution using artificial intelligence and an AI chip that Tesla designed specifically for this purpose, as opposed to very expensive sensors and high-precision maps of a particular neighborhood where that neighborhood may change or often changes and then the car stops working. So we have a general solution instead of a specific solution…

…The Teslas that will be fully autonomous in June in Austin are all Model Ys. So that is — it’s currently on track to be able to do paid rides fully autonomously in Austin in June, and then to be in many other cities in the U.S. by the end of this year.

It’s difficult to predict the exact ramp sort of week by week and month by month, except that it will ramp up very quickly. So it’s going to be like some — basically an S-curve where it’s very difficult to predict the intermediate slope of the S-curve, but you kind of know where the S-curve is going to end up, which is the vast majority of the Tesla fleet being autonomous. So that’s why I feel confident in predicting large-scale autonomy around the middle of next year, certainly the second half next year, meaning I bet that there will be millions of Teslas operating autonomously, fully autonomously in the second half of next year, yes…

…It does seem increasingly likely that there will be a localized parameter set sort of — especially for places that have, say, a very snowy weather, like I say, if you’re in the Northeast or something like this — you can think of — it’s kind of like a human. Like you can be a very good driver in California but are you going to be also a good driver in a blizzard in Manhattan? You’re not going to be as good. So there is actually some value in — you can still drive but your probability of an accident is higher. So the — it’s increasingly obvious that there’s some value to having a localized set of parameters for different regions and localities…

…You can see that from our deployment of FSD supervised in China with this very minimal data that’s China-specific, the model is generalized quite well to completely different driving styles. That just like shows that the AI-based solution that we have is the right one because if you had gone down the previous rule-based solutions, sort of like more hard-coded HD map-based solutions, it would have taken like many, many years to get China to work. You can see those in the videos that people post online themselves. So the generalized solution that we are pursuing is the right one that’s going to scale well…

…You can think of this like location-specific parameters that Elon alluded to as a mixture of experts. And if you are sort of familiar with the AI models, Grok and others, they all use this mixture of experts to sort of specialize the parameters to specific tasks while still being general…

…What are the critical things that need to get right, one thing I would like to note is validation. Self-driving is a long-tail problem where there can be a lot of edge cases that only happen very, very rarely. Currently, we are driving around in Austin using our QA fleet, but then super [ rare ] to get interventions that are critical for robotaxi operation. And so you can go many days without getting a single intervention. So you can’t easily know whether you are improving or regressing in your capacity. And we need to build out sophisticated simulations, including neural network-based video generation…

…There’s just always a convoy of Teslas going — just going all over to Austin in circles. But yes, I just can’t emphasize this enough. In order to get a figure on the long-tail things, it’s 1 in 10,000, let’s say, 1 in 20,000 miles or 1 in 30,000. The average person drives 10,000 miles in a year. So we’re trying to compress that test cycle into a matter of a few months. It means you need a lot of cars doing a lot of driving in order to compress that to do in a matter of a month what would normally take someone a year…

…I saw one guy take a Tesla on — autonomously on a narrow dirt road across like a mountain. And I’m like, still a very brave person. He’s driving along the road with no barriers where, if he makes a mistake, he’s going to plunge to his doom. But it worked…

…[Question] when will FSD unsupervised be available for personal use on personally-owned cars?

[Answer] Before the end of this year… the acid test being, can you go to sleep in your car and wake up at your destination? And I’m confident that will be available in many cities in the U.S. by the end of this year…

…I’m confident also that later this year, the first Model Y will drive itself all the way to the customer. So from our — probably from a factory in Austin and our one in here in Fremont, California, I’m confident that from both factories, we’ll be able to drive directly to a customer from the factory…

…We have — it has been put to use — it’s doing useful work fully autonomously at the factories. As Ashok was mentioning, the cars drive themselves from end of line to where they’re supposed to be picked up by a truck to be taken to the customer… It’s important to note in the factories, we don’t have dedicated lanes or anything. People are coming out every day, trucks delivering supplies, parts, construction.
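
The “mixture of experts” analogy in the excerpts above has a precise meaning in AI: a router assigns weights to several sets of expert parameters and the model blends their outputs, so parameters can specialise (say, by driving conditions) while the system stays general. Below is a generic, illustrative sketch of that gating idea; it is not Tesla’s actual architecture and all numbers are made up.

```python
# Illustrative mixture-of-experts gating, in the spirit of the
# "location-specific parameters" analogy above: a router weighs several
# expert parameter sets per input (here, per driving context), and the
# model blends their outputs. A generic MoE sketch with made-up numbers.
import numpy as np

rng = np.random.default_rng(0)
n_experts, dim = 3, 4                         # e.g. "sunny", "snow", "dense urban"
experts = rng.normal(size=(n_experts, dim))   # one weight vector per expert
router = rng.normal(size=(dim, n_experts))    # maps features -> expert logits

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(features: np.ndarray) -> float:
    gate = softmax(features @ router)      # how much each expert is trusted
    per_expert = experts @ features        # each expert's raw output
    return float(gate @ per_expert)        # blended prediction

features = np.array([1.0, -0.5, 0.2, 0.8])  # stand-in for scene features
print(forward(features))
```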

Tesla’s management expects thousands of Optimus robots to be working in Tesla factories by end-2025; management expects Optimus to be the fastest product to get to millions of units per year; management thinks Tesla can get to 1 million units annually in 4-5 years; management expects to make thousands of Optimus robots at the end of this year; there’s no existing supply chain for all of Optimus’s components, so Tesla has to build a supply chain from scratch; the speed of manufacturing of a product is governed by the speed of the slowest item in the supply chain, but in Optimus’s case, there are many, many such items since it’s so new; Optimus production is currently rate-limited by restrictions on rare-earth magnets from China but management is working on it; management still has no idea how Optimus’s supply chain will look like at maturity

Making good progress in Optimus. We expect to have thousands of Optimus robots working in Tesla factories by the end of this year, beginning this fall. And we expect to see Optimus scale faster than any product, I think, in history to get to millions of units per year as soon as possible. I think we feel confident in getting to 1 million units per year in less than 5 years, maybe 4 years. So by 2030, I feel confident in predicting 1 million Optimus units per year. It might be 2029…

…This year, we’ll make a few — we do expect to make thousands of Optimus robots, but most of that production is going to be at the end of the year…

…Almost everything in Optimus is new. There’s not like an existing supply chain for the motors, gearboxes, electronics, actuators, really anything in the Optimus apart from the AI computer, the Tesla AI computer, which is the same as the one in the car. So when you have a new complex manufactured product, it will move as fast as the slowest and the least lucky component in the entire thing. And as a first order approximation, there’s like 10,000 unique things. So that’s why anyone who tells you they can predict with precision the production ramp of a truly new product doesn’t know what they’re talking about. It is literally impossible…

…Now Optimus was affected by the magnet issue from China because the Optimus actuators in the arm to use permanent magnet. Now Tesla, as a whole, does not need to use permanent magnets. But when something is volume constrained like an arm of the robot, then you want to try to make the motor as small as possible. And then — so we did design in permanent magnets for those motors and those were affected by the supply chain by basically China requiring an export license to send out any rare earth magnets. So we’re working through that with China. Hopefully, we’ll get a license to use the rare earth magnets. China wants some assurances that these are not used for military purposes, which obviously they’re not. They’re just going into a humanoid robot. So — and it’s a nonweapon system…

…[Question] Wanted to ask about the Optimus supply chain going forward. You mentioned a very fast ramp-up. What do you envision that supply chain looking like? Is it going to require many more suppliers to be in the U.S. now because of the tariffs?

[Answer] We’ll have to see how things settle out. I don’t know yet. I mean some things we’re doing, as we’ve already talked about, which is that we’ve already taken tremendous steps to localize our supply chain. We’re more localized than any other manufacturer. And we have a lot of things kind of underway to increase the localization to reduce supply chain risk associated with geopolitical uncertainty.

Tesla’s supervised FSD (full-self driving) software is safer than a human driver; management has been using social media (X, or Twitter) to encourage people to try out Tesla’s FSD software; management did not directly answer a question on FSD pricing once the vehicle can be totally unsupervised

Not only is FSD supervised safer than a human driver, but it is also improving the lifestyle of individuals who experience it. And again, this is something you have to experience and anybody who has experienced just knows it. And we’ve been doing a lot lately to try and get those stories out, at least on X, so that people can see how other people have benefited from this…

…[Question] Can we envision when you launch unsupervised FSD that there could be sort of a multitiered pricing approach to unsupervised versus supervised similar to what you did with autopilot versus FSD in the past?

[Answer] I mean this is something which we’ve been thinking about. I mean just so now for people who have been trying FSD and who’ve been using FSD, they think the current pricing is too cheap because for $99, you’re basically getting a personal shop… I mean we do need to give people more time to — if they want to look at — like a key breakpoint is, can you read your text messages or not? Can you write a text message or not? Because obviously, people are doing this, by the way, with non-autonomous cars all the time. And if you just drive down the highway, you’ll see people texting while driving doing 80 miles an hour… So that value — it will really be profound when you can basically do whatever you want, including sleep. And then that $99 is going to seem like the best $99 you ever spent in your life.

Tesla’s management thinks Waymo vehicles are too expensive compared to Teslas; Waymo has expensive sensor suites; management thinks Tesla will have the lion’s share of the robotaxi market; a big difference between Tesla and Waymo is that Tesla is also manufacturing the cars whereas Waymo is retrofitting cars from other parties; management thinks Tesla’s vision-only approach will not have issues with cameras becoming blinded by glare because the system uses direct photon counting and bypasses image signal processing

The issue with Waymo’s cars is it costs way more money, but that is the issue. The car is very expensive, made in low-volume. Teslas are probably cost 1/4, 20% of what a Waymo costs and made in very high volume. Ironically, like we’re the ones who made the bet that a pure AI solution with cameras and [ already ] what the car actually will listen for sirens and that kind of thing. It’s the right move. And Waymo decided that an expensive sensor suite is the way to go, even though Google is very good at AI. So I’m wondering…

….As far as I’m aware, Tesla will have, I don’t know, 99% market share or something ridiculous…

…The other thing which people forget is that we’re not just developing the software solution, we are also manufacturing the cars. And like you know what like Waymo has, they’re taking cars and then trying to…

…[Question] You’re still sticking with the vision-only approach. A lot of autonomous people still have a lot of concerns about sun glare, fog and dust. Any color on how you anticipate on getting around those issues? Because my understanding, it kind of blinds the camera when you get glare and stuff.

[Answer] Actually, it does not blind the camera. We use an approach which is a direct photon count. So when you see a processed image, the image that comes from the silicon photon counter goes through a digital signal processor or image signal processor. That’s normally what happens. And then the image that you see looks all washed out because if you pointed a camera at the sun, the post-processing of the photon counting washes things out. It actually adds noise. So quite a big breakthrough that we made some time ago was to go with direct photon counting and bypass the image signal processor. And then you can drive pretty much straight at the sun, and you can also see in what appears to be the blackest of night. And in fog, we can see as well as people can — in fact, probably slightly better than the average person anyway.
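
To illustrate Musk’s point numerically: a conventional image pipeline clips and gamma-compresses raw sensor counts into 8-bit pixels, so two very bright regions (the sun and an object near it) both saturate to white, while the raw photon counts still separate them. This is a toy simulation with invented numbers, not Tesla’s pipeline.

```python
# Toy illustration of the point above: a typical image pipeline clips and
# gamma-compresses raw sensor counts into 8-bit values, so a scene with the
# sun in frame saturates and local contrast is lost, while the raw photon
# counts still separate the two regions. Numbers are invented for illustration.
import numpy as np

raw = np.array([1_000_000.0, 950_000.0])   # photon counts: sun vs. object near sun

def isp(counts, full_scale=100_000.0):
    clipped = np.clip(counts / full_scale, 0.0, 1.0)  # everything bright clips
    return np.round(255 * clipped ** (1 / 2.2))       # gamma to 8-bit

print(isp(raw))         # -> [255. 255.]  both pixels wash out to white
print(raw[0] / raw[1])  # -> ~1.05  raw counts still distinguish them
```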

Tesla’s AI software team and chip-design team was built from scratch with no acquisitions; management thinks Tesla’s team is the best

It is worth noting that Tesla has built an incredible AI software team and AI hardware chip design team from scratch, didn’t acquire anyone. We just built it. So yes, it’s really — I mean I don’t see anyone being able to compete with Tesla at present.

Tesla’s management thinks China is ahead of the USA in physical AI with respect to autonomous drones because China has the ability to manufacture autonomous drones, but the USA does not;  management thinks Tesla is ahead of any company in the world, even Chinese companies, in terms of humanoid autonomous robots 

[Question] Between China and United States, who, in your opinion, is further ahead on the development of physical AI, specifically on humanoid and also drones?

[Answer] A friend of mine posted on X, I reposted it. I think of a prophetic statement, which is any country that cannot manufacture its own drones is going to be the vassal state of any country that can. And we can’t — America cannot currently manufacture its own drones. Let that sink in, unfortunately. So China, I believe manufactures about 70% of all drones. And if you look at the total supply chain, China is almost 100% of drones are — have a supply chain dependency on China. So China is in a very strong position. And here in America, we need to tip more of our people and resources to manufacturing because this is — and I have a lot of respect for China because I think China is amazing, actually. But the United States does have such a severe dependency on China for drones and would be unable to make them unless China gives us the parts, which is currently the situation.

With respect to humanoid robots, I don’t think there’s any company in any country that can match Tesla. Tesla and SpaceX are #1. And then I’m a little concerned that on the leaderboard, ranks 2 through 10 will be Chinese companies. I’m confident that rank 1 will be Tesla.

The Trade Desk (NASDAQ: TTD)

Trade Desk’s industry-leading Koa AI tools are embedded across Kokai; adoption of Kokai is now ahead of schedule, with 2/3 of clients using it; the bulk of spending on Trade Desk now takes place on Kokai; management continues to expect all Trade Desk clients to be using Kokai by end-2025; management is confident that Kokai will be seen as the most powerful buying platform by the industry by end-2025

The injection of our industry-leading Koa AI tools across every aspect of our platform has been a game changer, and we are just getting started…

…The core of Kokai has been delivered and adoption is now ahead of schedule. Around 2/3 of our clients are now using it and the bulk of the spend in our platform is now running through Kokai. We expect all clients to be using it by the end of the year…

…I’m confident that by the end of this year, we will reflect on Kokai as the most powerful buying platform the industry has ever seen, precisely because it combines client needs with the strong point of view on where value is shifting and how to deliver the most efficient return on ad spend.

…Kokai adoption now represents the majority of our spend, almost 2/3, a significant acceleration from where we ended 2024.

Deutsche Telekom used Kokai’s AI tools and saw an 11x improvement in post-click conversions and an 18x improvement in the cost of conversions; Deutsche Telekom is now planning to use Kokai across more campaigns and transition from Trade Desk’s previous platform, Solimar, like many other Trade Desk clients

Deutsche Telekom. They’re running the streaming TV service called Magenta TV, and they use our platform to try to grow their subscriber base…

…Using seed data from their existing customers, Deutsche Telekom was able to use the advanced AI tools in our Kokai platform to find new customers and define the right ad impressions across display and CTV, most relevant to retain those new customers successfully, and the results were very impressive. They saw an 11x improvement in post-click conversions attributed to advertising and an 18x improvement in the cost of those conversions. Deutsche Telekom is now planning to use Kokai across more campaigns, a transition that is fairly typical as clients move from our previous platform, Solimar, to our newer, more advanced AI-fueled platform, Kokai.

Visa (NASDAQ: V)

Visa recently announced a new version of its Authorize.net product that features AI capabilities, including an AI agent; Authorize.net enables all different types of payments

In Acceptance Solutions, we recently announced 2 new product offerings. The first is a completely new version of Authorize.net, launching in the U.S. next quarter and additional countries next year. It features a streamlined user interface; AI capabilities with an AI agent, Anet; improved dashboards for day-to-day management; and support for in-person card readers and Tap to Phone. It will help businesses analyze data, summarize insights and adapt to rapidly changing customer trends…

…I talked about the Authorize.net platform that we’ve relaunched and we’re relaunching. That’s a great example of enabling all different types of payments. And that’s going to be, we think, a really positive impact in the market specifically focused on growing our share in small business checkout.

Visa has an enhanced holistic fraud protection solution known as the Adaptive Real-time Individual Change identification (ARIC) Risk Hub; the ARIC Risk Hub uses AI to build more accurate risk profiles

We also now provide an enhanced holistic fraud protection solution from Featurespace called the Adaptive Real-time Individual Change identification, or ARIC, Risk Hub. This solution utilizes machine learning and AI solutions to enable clients to build more accurate risk profiles and more confidently detect and block fraudulent transactions, ultimately helping to increase approvals and stop bad actors in real time. 
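
For a sense of what using machine learning to build risk profiles means mechanically, here is a generic sketch of transaction risk scoring: featurise a transaction against a cardholder’s historical profile, then score it with a trained classifier. This is a standard pattern with toy data, not Featurespace’s actual ARIC implementation.

```python
# Generic sketch of ML-based transaction risk scoring in the spirit of the
# description above: featurize a transaction against the cardholder's own
# historical profile, then score it with a trained classifier. A standard
# pattern with toy data, not Featurespace's actual ARIC implementation.
from sklearn.linear_model import LogisticRegression
import numpy as np

# Features per transaction: [amount / user's median amount,
#                            new merchant? (0/1), foreign country? (0/1)]
X_train = np.array([[1.0, 0, 0], [0.8, 0, 0], [1.2, 1, 0],
                    [9.0, 1, 1], [12.0, 1, 1], [7.5, 0, 1]])
y_train = np.array([0, 0, 0, 1, 1, 1])  # 1 = confirmed fraud (toy labels)

model = LogisticRegression().fit(X_train, y_train)

new_txn = np.array([[10.0, 1, 1]])         # large, new merchant, foreign
print(model.predict_proba(new_txn)[0, 1])  # high fraud probability
```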


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, Apple, ASML, Coupang, Datadog, Mastercard, Meta Platforms, Microsoft, Netflix, Paycom Software, PayPal, Shopify, TSMC, Tesla, The Trade Desk and Visa. Holdings are subject to change at any time.

Insights From Berkshire Hathaway’s 2025 Annual General Meeting

Warren Buffett and his team shared plenty of wisdom at the recent Berkshire Hathaway AGM.

Warren Buffett is one of my investment heroes. On 3 May 2025, he held court at the 2025 Berkshire Hathaway AGM (annual general meeting).

For many years, I’ve anticipated the AGM to hear his latest thoughts. This year’s session holds special significance because it may well be his last – during the AGM, he announced that he would be stepping down as CEO of Berkshire Hathaway by the end of this year, ending an amazing 60-year run since becoming the company’s leader in 1965. Greg Abel is slated to be Berkshire Hathaway’s next CEO.

The most recent Berkshire meeting contained great insights from Buffett and other senior Berkshire executives that I wish to share and document. Before I get to them, I would like to thank my friend Thomas Chua for performing a great act of public service. Shortly after the AGM ended, Thomas posted a transcript of the session at his excellent investing website Steady Compounding.

Without further ado, the italicised passages between the two horizontal lines below are my favourite takeaways after I went through Thomas’ transcript.


Buffett thinks his idea on import certificates is different from tariffs and that it’s important to have more balanced trade between countries; he also thinks that trade should not be wielded as a weapon, and that the more prosperous the world becomes, the better the USA would be

Becky Quick: Thanks Warren. This first question comes from Bill Mitchell. I received more questions about this than any other question. He writes, “Warren, in a 2003 Fortune article, you argued for import certificates to limit trade deficits and said these import certificates basically amounted to a tariff, but recently you called tariffs an act of economic war. Has your view on trade barriers changed or do you see import certificates as somehow distinct from tariffs?”

Warren Buffett: Well, the import certificates were distinct, but their goal was to balance imports against exports so that the trade deficit would not grow in an enormous way. It had various provisions to help third world countries catch up a little bit. They were designed to balance trade, and I think you can make very good arguments that balanced trade is good for the world. It makes sense for cocoa to be raised in Ghana and coffee in Colombia and a few other things…

…There’s no question that trade can be an act of war, and I think it’s led to bad things like the attitudes it’s brought out in the United States. We should be looking to trade with the rest of the world. We should do what we do best, and they should do what they do best…

…The main thing is that trade should not be a weapon. The United States has become an incredibly important country starting from nothing 250 years ago – there’s never been anything like it. And it’s a big mistake when you have 7.5 billion people who don’t like you very well and you have 300 million people crowing about how well they’ve done. I don’t think it’s right and I don’t think it’s wise. The more prosperous the rest of the world becomes, it won’t be at our expense – the more prosperous we’ll become and the safer we’ll feel and your children will feel someday.

Buffett did not look at macroeconomic factors in Japan when making the huge investments he did in five Japanese trading houses; Berkshire won’t be selling the Japanese investments for a long, long time, if at all; Berkshire would be happy to invest a lot more in Japan if there was capacity to do so; the fact that Berkshire could borrow in Japanese Yen to hedge the Japanese investments’ currency risk is merely a lucky coincidence

Question: Mr. Buffett and Mr. Munger did a very good and successful investment in Japan in the past five or six years. The recent CPI in Japan is currently above 3%, not far away from its 2% target. Bank of Japan seems very determined in raising rates while Fed, ECB, and other central banks are considering cutting them. Do you think BOJ makes sense to proceed with the rate hike? Will its planned rate hike deter you from further investing in the Japanese stock market or even considering realizing your current profits?

Warren Buffett: Well, I’m going to extend the same goodwill to Japan that you’ve just extended to me. I’ll let the people of Japan determine their best course of action in terms of economics. It’s an incredible story. It’s been about six years now since our Japanese investments. I was just going through a little handbook that probably had two or three thousand Japanese companies in it. One problem I have is that I can’t read that handbook anymore – the print’s too small. But there were these five trading companies selling at ridiculously low prices. So I spent about a year acquiring them. And then we got to know the people better, and everything that Greg and I saw, we liked better as we went along…

Greg Abel: When you think of the five companies, there’s definitely a couple meetings a year, Warren. The thing we’re building with the five companies is, one, it’s been a very good investment, but we really envision holding the investment for 50 years or forever…

Warren Buffett: We will not be selling any stock. That will not happen in decades, if then…

…It’s too bad that Berkshire has gotten as big as it is because we love that position and I’d like it to be a lot larger. Even with the five companies being very large in Japan, we’ve got at market in the range of $20 billion invested, but I’d rather have $100 billion than $20 billion…

…The Japanese situation is different because we intend to stay so long with that position and the funding situation is so cheap that we’ve attempted to some degree to match purchases against yen-denominated funding. But that’s not a policy of ours…

Greg Abel: There’s no question we were fundamentally very comfortable with investing in the five Japanese companies and recognizing we’re investing in yen. The fact we could then borrow in yen was almost just a nice incremental opportunity. But we were very comfortable both with the Japanese companies and with the currency we would ultimately realize in yen.

Just the simple act of reading about companies can lead to great investment opportunities

Warren Buffett: It’s been about six years now since our Japanese investments. I was just going through a little handbook that probably had two or three thousand Japanese companies in it…

…I never dreamt of that when I picked up that handbook. It’s amazing what you can find when you just turn the page. We showed a movie last year about “turn every page,” and I would say that turning every page is one important ingredient to bring to the investment field. Very few people do turn every page, and the ones who turn every page aren’t going to tell you what they’re finding. So you’ve got to do a little of it yourself.

Berkshire’s current huge cash position is the result of Buffett not being able to find sufficiently attractive investment opportunities; Buffett thinks that great investment opportunities appear infrequently

Becky Quick: This next question comes from Advate Prasad in New York. He writes, “Today, Berkshire holds over $300 billion in cash and short-term investments, representing about 27% of total assets, a historically high figure compared to the 13% average over the last 25 years. This has also led Berkshire to effectively own nearly 5% of the entire US Treasury market. Beyond the need for liquidity to meet insurance obligations, is the decision to raise cash primarily a de-risking strategy in response to high market valuations?…

Warren Buffett: Well, I wouldn’t do anything nearly so noble as to withhold investing myself just so that Greg could look good later on. If he gets any edge of what I leave behind, I’ll resent it. The amount of cash we have – we would spend $100 billion if something is offered that makes sense to us, that we understand, offers good value, and where we don’t worry about losing money. The problem with the investment business is that things don’t come along in an orderly fashion, and they never will. I’ve had about 16,000 trading days in my career. It would be nice if every day you got four opportunities or something like that with equal attractiveness. If I was running a numbers racket, every day would have the same expectancy that I would keep 40% of whatever the handle was, and the only question would be how much we transacted. But we’re not running that kind of business. We’re running a business which is very opportunistic.

Investing in stocks is a much better bet than investing in real estate

Warren Buffett: Well, in respect to real estate, it’s so much harder than stocks in terms of negotiation of deals, time spent, and the involvement of multiple parties in the ownership. Usually when real estate gets in trouble, you find out you’re dealing with more than just the equity holder. There have been times when large amounts of real estate have changed hands at bargain prices, but usually stocks were cheaper and they were a lot easier to do.

Charlie did more real estate. Charlie enjoyed real estate transactions, and he actually did a fair number of them in the last 5 years of his life. But he was playing a game that was interesting to him. I think if you’d asked him to make a choice when he was 21 – either be in stocks exclusively for the rest of his life or real estate for the rest of his life – he would have chosen stocks. There’s just so much more opportunity, at least in the United States, that presents itself in the security market than in real estate…

…When you walk down to the New York Stock Exchange, you can do billions of dollars worth of business, totally anonymous, and you can do it in 5 minutes. The trades are complete when they’re complete. In real estate, when you make a deal with a distressed lender, when you sign the deal, that’s just the beginning. Then people start negotiating more things, and it’s a whole different game with a different type of person who enjoys the game.

Berkshire’s leaders think AI will have a massive impact on the insurance business, but they are not in a hurry to pour money into AI as they think there’s plenty of faddish capital in the space

Ajit Jain: There is no question in my mind that AI is going to be a real game-changer. It’s going to change the way we assess risk, we price risk, we sell risk, and then the way we end up paying claims. Having said that, I certainly also feel that people end up spending enormous amounts of money trying to chase the next fashionable thing…

…Right now the individual insurance operations do dabble in AI and try to figure out the best way to exploit it. But we have not yet made a conscious big-time effort in terms of pouring a lot of money into this opportunity.

Buffett prefers Ajit Jain to any sophisticated AI system when pricing insurance risks

Warren Buffett: I wouldn’t trade everything that’s developed in AI in the next 10 years for Ajit. If you gave me a choice of having a hundred billion dollars available to participate in the property casualty insurance business for the next 10 years and a choice of getting the top AI product from whoever’s developing it or having Ajit making the decisions, I would take Ajit anytime – and I’m not kidding about that.

Despite the political upheaval happening in the USA right now, Buffett still thinks the long-term future of the country is incredibly bright; in Buffett’s eyes, the USA has been through plenty of tumultuous periods and emerged stronger

Warren Buffett: America has been undergoing significant and revolutionary change ever since it was developed. I mentioned that we started out as an agricultural society with high promises that we didn’t deliver on very well. We said all men were created equal, and then we wrote a constitution that counted blacks as three-fifths of a person. In Article 2, you’ll find male pronouns used 20 times and no female pronouns. So it took until 1920, with the 19th amendment, to finally give women the vote that we had promised back in 1776.

We’re always in the process of change, and we’ll always find all kinds of things to criticize in the country. But the luckiest day in my life is the day I was born, because I was born in the United States. At that time, about 3% of all births in the world were taking place in the United States. I was just lucky, and I was lucky to be born white, among other things…

…We’ve gone through all kinds of things – great recessions, world wars, the development of the atomic bomb that we never dreamt of when I was born. So I would not get discouraged about the fact that we haven’t solved every problem that’s come along. If I were being born today, I would just keep negotiating in the womb until they said I could be in the United States.

It’s important to be patient while waiting for opportunities, but equally important to pounce when the opportunity appears

Warren Buffett: The trick when you get in business with somebody who wants to sell you something for $6 million that’s got $2 million of cash, a couple million of real estate, and is making $2 million a year, is you don’t want to be patient at that moment. You want to be patient in waiting to get the occasional call. My phone will ring sometime with something that wakes me up. You just never know when it’ll happen. That’s what makes it fun. So patience is a combination of patience and a willingness to do something that afternoon if it comes to you.

It does not pay to invest in a way that depends on the appearance of a greater fool

Warren Buffett: If people are making more money because they’re borrowing money or participating in securities that are pieces of junk but they hope to find a bigger sucker later on, you have to forget that.

Buffett does not think it’s important to manage currency risk with Berkshire’s international investments, but he avoids investments denominated in currencies that are at risk of depreciating wildly

Warren Buffett: We’ve owned lots of securities in foreign currencies. We don’t do anything based on the impact on quarterly and annual earnings. There’s never been a board meeting I can remember where I’ve said, “If we do this, our annual earnings will be this, therefore we ought to do it.” The number will turn out to be what it’ll be. What counts is where we are five or 10 or 20 years from now…

…Obviously, we wouldn’t want to own anything in a currency that we thought was really going to hell.

Buffett is worried about the tendency for governments to want to devalue their currencies, the USA included, but there’s nothing much that can be done about it; Buffett thinks the USA is running a fiscal deficit that is unsustainable over a long period of time; Buffett thinks a 3% fiscal deficit appears sustainable

Warren Buffett: That’s the big thing we worry about with the United States currency. The tendency of a government to want to debase its currency over time – there’s no system that beats that. You can pick dictators, you can pick representatives, you can do anything, but there will be a push toward weaker currencies. I mentioned very briefly in the annual report that fiscal policy is what scares me in the United States because of the way it’s made, and all the motivations are toward doing things that can cause trouble with money. But that’s not limited to the United States – it’s all over the world, and in some places, it gets out of control regularly. They devalue at rates that are breathtaking, and that’s continued…

…So currency value is a scary thing, and we don’t have any great system for beating that…

…We’re operating at a fiscal deficit now that is unsustainable over a very long period of time. We don’t know whether that means two years or 20 years because there’s never been a country like the United States. But as Herbert Stein, the famous economist, said, “If something can’t go on forever, it will end.” We are doing something that is unsustainable, and it has the aspect to it that it gets uncontrollable to a certain point….

…I wouldn’t want the job of trying to correct what’s going on in revenue and expenditures of the United States with roughly a 7% gap when probably a 3% gap is sustainable…

…We’ve got a lot of problems always as a country, but this is one we bring on ourselves. We have a revenue stream, a capital-producing stream, a brains-producing machine like the world has never seen. And if you picked a way to screw it up, it would involve the currency. That’s happened a lot of places.

Buffett thinks the key factors for a developing economy to attract investors are having a solid currency, and being business-friendly

Audience member: What advice would you give to government and business leaders of emerging markets like Mongolia to attract institutional investors like yourself?

Warren Buffett: If you’re looking for advice to give the government over there, it’s to develop a reputation for having a solid currency over time. We don’t really want to go into any country where we think there’s a significant probability of runaway inflation. That’s too hard to figure…

…If the country develops a reputation for being business-friendly and currency-conscious, that bodes very well for the residents of that country, particularly if it has some natural assets that it can build around.

Private equity firms are flooding the life insurance market, but they are doing so by taking on lots of leverage and credit risk

Ajit Jain: There’s no question the private equity firms have come into the space, and we are no longer competitive in the space. We used to do a fair amount in this space, but in the last 3-4 years, I don’t think we’ve done a single deal.

You should separate this whole segment into two parts: the property casualty end of the business and the life end of the business. The private equity firms you mentioned are all very active in the life end of the business, not the property casualty end.

You are right in identifying the risks these private equity firms are taking on both in terms of leverage and credit risk. While the economy is doing great and credit spreads are low, these firms have taken the assets from very conservative investments to ones where they get a lot more return. As long as the economy is good and credit spreads are low, they will make money – they’ll make a lot of money because of leverage.

However, there is always the danger that at some point the regulators might get cranky and say they’re taking too much risk on behalf of their policyholders, and that could end in tears. We do not like the risk-reward that these situations offer, and therefore we put up the white flag and said we can’t compete in this segment right now.

Buffett thinks Berkshire’s insurance operation is effectively unreplicable

Warren Buffett: I think there are people that want to copy Berkshire’s model, but usually they don’t want to copy it by also copying the model of the CEO having all of his money in the company forever. They have a different equation – they’re interested in something else. That’s capitalism, but they have a whole different situation and probably a somewhat different fiduciary feeling about what they’re doing. Sometimes it works and sometimes it doesn’t work. If it doesn’t work, they go on to other things. If what we do at Berkshire doesn’t work, I spend the end of my life regretting what I’ve created. So it’s just a whole different personal equation.

There is no property casualty company that can basically replicate Berkshire. That wasn’t the case at the start – at the start we just had National Indemnity a few miles from here, and anybody could have duplicated what we had. But that was before Ajit came with us in 1986, and at that point the other fellows should have given up.

Buffett thinks recent market volatility is not noteworthy at all; it’s nearly certain that significant downward moves in stocks will happen sometime in the next 20 years

Warren Buffett: What has happened in the last 30-45 days, 100 days, whatever this period has been, is really nothing. There have been three times since we acquired Berkshire that Berkshire has gone down 50% in a fairly short period of time – three different times. Nothing was fundamentally wrong with the company at any time. This is not a huge move. The Dow Jones average was at 381 in September of 1929 and got down to 42. That’s going from 100 to 11. This has not been a dramatic bear market or anything of the sort. I’ve had about 17,000 or 18,000 trading days. There have been plenty of periods that are dramatically different than this…

…You will see a period in the next 20 years that will be a “hair curler” compared to anything you’ve seen before. That just happens periodically. The world makes big mistakes, and surprises happen in dramatic ways. The more sophisticated the system gets, the more the surprises can come out of left field. That’s part of the stock market, and that’s what makes it a good place to focus your efforts if you’ve got the proper temperament for it and a terrible place to get involved if you get frightened by markets that decline and get excited when stock markets go up.

Berkshire’s leaders think the biggest change autonomous vehicles will bring to the automotive insurance industry is substitution of operator error policies by product liability policies; Berkshire’s leaders also think that the cost per repair in the event of an accident will rise significantly; the total cost of providing insurance for autonomous vehicles is still unclear; from the 1950s to today, cars have gotten 6x safer but auto insurance has become 50x pricier

Ajit Jain: There’s no question that insurance for automobiles is going to change dramatically once self-driving cars become a reality. The big change will be what you identified. Most of the insurance that is sold and bought revolves around operator errors – how often they happen, how severe they are, and therefore what premium we ought to charge. To the extent these new self-driving cars are safer and involved in fewer accidents, that insurance will be less required. Instead, it’ll be substituted by product liability. So we at GEICO and elsewhere are certainly trying to get ready for that switch, where we move from providing insurance for operator errors to being more ready to provide protection for product errors and errors and omissions in the construction of these automobiles…

…We talked about the shift to product liability and protection for accidents that take place because of an error in product design or supply. In addition to that shift, I think what we’ll see is a major shift where the number of accidents will drop dramatically because of automatic driving. But on the other hand, the cost per repair every time there’s an accident will go up very significantly because of the amount of technology in the car. How those two variables interact with each other in terms of the total cost of providing insurance, I think, is still an open issue…

Warren Buffett: When I walked into GEICO’s office in 1951, the average price of a policy was around $40 a year. Now it’s easy to get up to $2,000 depending on location and other factors. During that same time, the number of people killed in auto accidents has fallen from roughly six per 100 million miles driven to a little over one. So the car has become incredibly safer, and it costs 50 times as much to buy an insurance policy.

There’s a tax now when American companies conduct share buybacks

Warren Buffett: I don’t think people generally know that, but there is a tax that was introduced a year or so ago where we pay 1%. That not only hurts us because we pay more for it than you do – it’s a better deal for you than for us – but it actually hurts some of our investee companies quite substantially. Tim Cook has done a wonderful job running Apple, but he spent about $100 billion in a year repurchasing shares, and there’s a 1% charge attached to that now. So that’s a billion dollars a year that he pays when he buys Apple stock compared to what you pay.

Buffett is very careful with the risks that come with derivative contracts on a company’s balance sheet

Greg Abel: I’ll maybe go back to the very first meeting with Warren because it still stands out in my mind. Warren was thinking about acquiring MidAmerican Energy Holdings Company at that time, and we had the opportunity with my partners to go over there on a Saturday morning. We were discussing the business and Warren had the financial statements in front of him. Like anybody, I was sort of expecting a few questions on how the business was performing, but Warren locked in immediately to what was on the balance sheet and the fact we had some derivative contracts, the “weapons of mass destruction.”

In the utility business, we do have derivatives because they’re used to match certain positions. They’re never matched perfectly, but we have them and they’re required in the regulated business. I remember Warren going to it immediately and asking about the composition and what was the underlying risk, wanting to thoroughly understand. It wasn’t that big of a position, but it was absolutely one of the risks he was concerned about as he was acquiring MidAmerican, especially in light of Enron and everything that had gone on.

The followup to that was a year or 18 months later. There was an energy crisis in the US around electricity and natural gas, and various companies were making significant sums of money. Warren’s follow-up question to me was, “How much money are we making during this energy crisis? Are we making a lot? Do we have speculative positions in place?” The answer was we weren’t making any more than we would have been six months ago because all those derivatives were truly to support our business and weren’t speculative. That focus on understanding the business and the risks around it still stands out in my mind.

Buffett spends more time analysing a company’s balance sheet than other financial statements

Warren Buffett: I spend more time looking at balance sheets than I do income statements. Wall Street doesn’t pay much attention to balance sheets, but I like to look at balance sheets over an 8 or 10 year period before I even look at the income account because there are certain things it’s harder to hide or play games with on the balance sheet than with the income statement.

Buffett thinks America’s electric grid needs a massive overhaul and it can only be done via a partnership between the private sector and the government – unfortunately, nobody has figured out the partnership model yet

Warren Buffett: It’s very obvious that the country needs an incredible improvement, rethinking, redirection to some extent in the electric grid. We’ve outgrown what would be the model that America should have. In a sense, it’s a problem something akin to the interstate highway system where you needed the power of the government really to get things done because it doesn’t work so well when you get 48 or 50 jurisdictions that each has their own way of thinking about things…

…There are certain really major investment situations where we have capital like nobody else has in the private system. We have particular knowhow in the whole generation and transmission arena. The country is going to need it. But we have to figure out a way that makes sense from the standpoint of the government, from the standpoint of the public, and from the standpoint of Berkshire, and we haven’t figured that out yet. It’s a clear and present use of hundreds of billions of dollars. You have people that set up funds and they’re getting paid for just assembling stuff, but that’s not the way to handle it. The way to handle it is to have some kind of government-private industry cooperation similar to what you do in a war.

The risk of wildfires to electric utilities is not going to go away, and in fact, will increase over time

Greg Abel: The reality is that the risk around wildfires – whether wildfires occur – is not going away, and we know that. The risk probably goes up each year.

Berkshire’s leaders think it’s important for utilities to de-energise when wildfires occur to minimise societal damage; Berkshire is the only utility operator so far that’s willing to de-energise; but de-energising also has its drawbacks; Berkshire may not be able to solve the conundrum of de-energising

Greg Abel: The one thing we hadn’t tackled – this is very relevant to the significant event we had back in 2020 in PacifiCorp – is we didn’t de-energize the system as the fire was approaching. Our employees and the whole management team have been trained all their lives to keep the lights on, and the last thing they want to do is turn those lights off and have a system de-energized. After those events and as we looked at how we’re going to move forward in managing the assets and reducing risk, we recognized as a team that we have to de-energize those assets. Now as we get fires encroaching within a certain number of miles, we de-energize because we do not want to contribute to the fire nor harm any of our consumers or contribute to a death. We had to take our team to managing a different risk now. It’s not around keeping the lights on, it’s around protecting the general public and ensuring the fire does not spread further. We’re probably the only utility – across our utilities – that does that today, and we strongly believe in that approach.

Becky Quick: Doesn’t that open you up to other risk if you shut down your system, a hospital gets shut down, somebody dies?

Greg Abel: That’s something we do deal with a lot because we have power outages that occur by accident. When we look at critical infrastructure, that’s an excellent point and we’re constantly re-evaluating it. We do receive a lot of feedback from our customer groups as to how to manage that…

Warren Buffett: There’s some problems that can’t be solved, and we shouldn’t be in the business of taking investors’ money and tackling things that we don’t know the solution for. You can present the arguments, but it’s a political decision when you are dealing with states or the federal government. If you’re in something where you’re going to lose, the big thing to do is quit.

Buffett thinks the value of electric utility companies has fallen a lot over the past two years because of societal trends, and his enthusiasm for investing in electric utilities has waned considerably

Becky Quick: Ricardo Bri, a longtime shareholder based in Panama, says that he was very happy to see Berkshire acquire 100% of BHE. It was done in two steps: one in late 2022 – 1% was purchased from Greg Abel for $870 million implying a valuation of BHE of $87 billion, and then in 2024 the remaining 8% was purchased from the family of Walter Scott Jr. for $3.9 billion implying a valuation of $48.8 billion for the enterprise. That second larger transaction represented a 44% reduction in valuation in just two years. Ricardo writes that PacifiCorp liabilities seem too small to explain this. Therefore, what factors contributed to the difference in value for BHE between those two moments in time?

Warren Buffett: Well, we don’t know how much we’ll lose out of PacifiCorp and decisions that are made, but we also know that certain of the attitudes demonstrated by that particular example have analogues throughout the utility system. There are a lot of states that so far have been very good to operate in, and there are some now that are rat poison, as Charlie would say, to operate in. That knowledge was accentuated when we saw what happened in the Pacific Northwest, and it’s eventuated by what we’ve seen as to how utilities have been treated in certain other situations. So it wasn’t just a direct question of what was involved at PacifiCorp. It was an extrapolation of a societal trend…

…We’re not in the mood to sell any business. But Berkshire Hathaway Energy is worth considerably less money than it was two years ago based on societal factors. And that happens in some of our businesses. It certainly happened to our textile business. The public utility business is not as good a business as it was a couple of years ago. If anybody doesn’t believe that, they can look at Hawaiian Electric and look at Edison in the current wildfires situation in California. There are societal trends that are changing things…

…I would say that our enthusiasm for buying public utility companies is different now than it would have been a couple years ago. That happens in other industries, too, but it’s pretty dramatic in public utilities. And it’s particularly dramatic in public utilities because they are going to need lots of money. So, if you’re going to need lots of money, you probably ought to behave in a way that encourages people to give you lots of money.

Buffett thinks the future capital intensity of the USA’s large technology companies remains to be seen

Warren Buffett: It’ll be interesting to see how much capital intensity there is now with the Magnificent 7 compared to a few years ago. Basically, Apple has not really needed any capital over the years, and it has repurchased shares, dramatically reducing its share count. Whether that world is the same in the future or not is something yet to be seen.

Buffett thinks there’s no better system than capitalism that has been discovered so far

Warren Buffett: Capitalism in the United States has succeeded like nothing you’ve ever seen. But what it is is a combination of this magnificent cathedral which has produced an economy like nothing the world’s ever seen, and then it’s got this massive casino attached…

…In the cathedral, they’re designing things that will be producing goods and services for 300 and some million people like it’s never been done before in history. It’s an interesting system we developed, but it’s worked. It dispenses rewards in what seems like a terribly capricious manner. The idea that people get what they deserve in life – it’s hard to make that argument. But if you argue with it that any other system works better, the answer is we haven’t found one.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have a vested interest in Alphabet, Amazon, Apple, Meta Platforms, Microsoft, and Tesla (they are all part of the Magnificent 7). Holdings are subject to change at any time.

What We’re Reading (Week Ending 04 May 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 04 May 2025:

1. Everyone Says They’ll Pay More for “Made in the USA.” So We Ran an A/B Test – Ramon van Meer

We make filtered showerheads. Clean, sleek design. But more importantly, with the best shower filters on the market. 

Our bestselling model—manufactured in Asia (China and Vietnam)—sells for $129. But this year, as tariffs jumped from 25% to 170%, we wondered: Could we reshore manufacturing to the U.S. while maintaining margins to keep our lights on?…

…We found a U.S.-based supplier. The new unit cost us nearly 3x more to produce. To maintain our margins, we’d have to sell it for $239.

So we ran an experiment.

We created a secret landing page. The product and design were identical. The only difference? One was labeled “Made in Asia” and priced at $129. The other, “Made in the USA,” at $239…

…Add-to-carts for the U.S. version were only 24! Conversion? 0.0% (zero).

Not a single customer purchased the Made-in-USA version…

…We wanted to believe customers would back American labor with their dollars. But when faced with a real decision—not a survey or a comment section—they didn’t…

…Small brands like ours want to manufacture here. We’re willing to invest. But without serious shifts—in consumer incentives, automation, and trade policy—the math doesn’t work. Not for us. Not for our customers.

We’re still committed to exploring local manufacturing. But for now, it’s not viable.

We’re sharing this because the numbers surprised even us. And we think they’re worth talking about.

2. Perspectives from 30 business leaders on the trade war frontlines: preparing for the best, bracing for the worst – Amber Zhang

For companies with strong brand equity and pricing power, the strategy is clear: raise prices.

Anker Innovations was among the first movers, increasing prices on Amazon by roughly 20%. A person familiar with the company said its U.S. market share remains stable despite the hikes. Pop Mart hasn’t adjusted pricing yet, but told Waves that it’s “not ruling out the possibility.”

One cross-border commerce insider noted that for companies like DJI, whose supply chain advantages are hard to replicate, rising costs may squeeze margins, but competitors will struggle to gain market share in the near term. “Tariffs are a toll, not a blockade. Only great products hold long-term passports,” Anker commented…

…According to Waves, most ODM firms that rely heavily on the U.S. market are currently in a holding pattern, halting shipments and refraining from taking new orders for the coming months…

…Wei believes a resolution – or at least a temporary workaround – is likely within the next two to three months. Why? ODM manufacturers need to finalize production plans for Christmas orders by July or August. If the standoff drags on any longer, “America may be headed for a very empty Christmas.”…

…On April 2, Donald Trump signed an executive order eliminating the de minimis exemption for packages from the Chinese mainland and Hong Kong. The White House followed up on April 8, raising tariffs on these goods from 30% to 90%, effective May 2.

This presents a major setback for small Chinese sellers who rely on platforms like TikTok, Temu, and Shein to ship low-value parcels without stocking local warehouses…

…That said, some insiders point out that more than 90% of shipments to the U.S. currently rely on “Gray Customs Clearance” — unofficial customs channels where third-party brokers help exporters bypass formal declarations at minimal cost. From their perspective, the new tariffs are more likely to disrupt clearance efficiency rather than kill off the model entirely…

…Amid persistent U.S.-China trade friction, a viable strategy is to first export raw materials or semi-finished goods from China for overseas assembly and packaging. Over time, upstream production can gradually shift to local markets, supported by regional suppliers…

…In the wake of the de minimis exemption repeal, TikTok Shop recently issued a notice to U.S. sellers: starting May 2, all incoming shipments will face a 30% ad valorem tax, with an additional flat tariff of $25 per item before June 1, rising to $50 afterward. Carriers will also be required to post international bonds as part of the compliance framework. So far, Temu and Shein have yet to publicly announce specific countermeasures.

Ray Bu predicts that platform-based giants will be forced to accelerate full-scale internationalization, separating their domestic and overseas supply chains and localizing operations in major markets…

…Both Temu and Shein are ramping up local presence in multiple countries, aggressively recruiting local merchants. Shein, for example, has been building production capacity in Turkey and Brazil — jurisdictions seen as tariff-safe zones amid the current climate.

Temu, for its part, plans to launch six “native fulfillment hubs” and nine “semi-managed hubs” between April and June. The former are geared toward local legal entities, while the latter target Chinese sellers with the ability to fulfill orders domestically in target markets…

…According to Xue Yi, professor at the University of International Business and Economics, if access to the U.S. market becomes more restricted, the European Union may be China’s best substitute…

…According to Xue Feng, a partner at Fangda Partners, recent tariff hikes have dealt significant blows to sectors including furniture, toys, textiles, auto parts, chemicals, steel, and aluminum. These industries, heavily reliant on low-end supply chains, are structurally tied to the U.S. and face steep challenges when seeking alternative markets…

…At its core, America simply isn’t ready to rebuild industrial capacity at scale.

To start with, U.S. manufacturing wages are 6–8 times higher than those in emerging markets, and the country faces a chronic shortage of skilled labor. But more critical than labor costs is the disintegration of domestic supply chains. After decades of offshoring, the U.S. industrial base is severely fragmented — core components like semiconductor materials and electronic parts still depend heavily on Asian suppliers.

Even if reshoring happens — as envisioned by the Trump administration — these factories would function as isolated islands, unable to form self-sustaining industrial clusters.

3. Did ancient Rome have a stock market? – Swen Lorenz

“The New Deal in Old Rome” was published by H. J. Haskell in 1940. The original book is rare and expensive today, but reprints can easily be found. As it reports on page 11:

“Indications are there was a curiously high degree of commercial organisation in the ancient world. In the time of Cicero, in the last century before Christ, wealthy Romans were busily exploiting the eastern provinces. Companies of contractors were organised to construct public works and to collect government revenue, from which the contractors took a large cut. They sold shares in offices on the Via Sacra, the Wall Street of Rome. Everybody, says the Greek historian Polybius, meaning all the country club crowd, bought them. … We may imagine how the bottom dropped out of Asiatic stocks on the Roman market when the news came of the concerted massacre of eighty thousand Italians at the instigation of the native ruler of an adjoining kingdom.”…

…How much of Haskell’s claims were true?

A quick Google search of “did the Romans have a stock market” produces contradictory results.

The first search result is an abstract from Prof. Pellegrino Manfra, a professor at City University New York: “Ancient Rome Economy and Investment: The Origins of the Stock Market”:

“The origins of the stock market can be observed as far back as ancient Rome. The earliest example of organized market for equities can be found in the Roman Republic in second century B.C. … Back in Roman times, organizations called ‘Societates Publicanorum’ were formed that offered investments referred to as ‘partes’ or what we now know them as – shares. … The shares were tradable and had fluctuating prices based on the underlying project’s success. … The place where trading occurred was the forum, near the temple of Castor.”

Prof. Manfra’s conclusion couldn’t be clearer.

However, three search results further down, you find a complete repudiation of his claims.

In 2016, Bocconi University in Milan put out a press release summarising a scientific article that its Prof. Manuela Geranio had published: “Ancient Rome Stock Exchange Is a Myth”:

“Manuela Geranio, in a paper with Geoffrey Poitras, shows that modern claims of the existence of a market for shares of the societates publicanorum in the late Roman Republic are not supported by primary sources … Recent claims of trading in shares (partes) of tax-farming corporations (societates publicanorum) in the late Roman Republic can thus raise some skepticism. ‘Upon closer inspection there is only brief discussion of possible share trading in a few sources that, in turn, depend fundamentally on a debatable interpretation of the commercial and legal context’. The location of the proto-stock-exchange near the temple of Castor results, for example, the fruit of ‘romanticized descriptions’. … The paper … highlights the need for more careful historical and legal analyses before concluding about the existence of its peculiar institutions in ancient times.”…

…The most common source cited as evidence that modern-day share trading took place in ancient Rome is “Publicans and Sinners”, a 1972 book by Ernst Badian, an Austrian-born classical scholar who served as a professor at Harvard University from 1971-1998.

It describes how the Romans used “partes (shares) in public companies” and traded “over the counter” based on a “register” of shareholders that the companies kept. It all sounds like the ancient Romans did have an early version of a stock market!…

…Badian was elected a fellow of the American Academy of Arts and Sciences in 1974, and in 1999 his native Austria awarded him the Cross of Honor for Science and Art. His work may be dated, but it’s impossible to dismiss his writing as that of a crank.

Badian was an early member of a group that critics of this field of study today decry as the “maximalists” who allegedly were too aggressive in interpreting incomplete historical evidence in a favourable way…

…Still, another group of equally serious scientists delivered a broadside to the whole idea of the Romans operating something worthy of comparing it to modern-day stock markets.

In 2015, Bocconi University’s Manuela Geranio teamed up with Geoffrey Poitras to publish “Trading of shares in the Societates Publicanorum?”:

“The often repeated modern claim of significant trading in ‘shares of the societates publicanorum’ (partes) during the late Roman Republic cannot be supported using the available ‘primary sources’.”

As the authors go on to argue, previous analysis of this field failed to take into consideration differences in language, nuances in interpreting complex terms, the historical context, and the lack of sufficient amounts of clear evidence.

“Even where elements related to possible share trading can be identified in the primary sources, evidence is often vague or questionable. … To avoid semantic confusions, understanding the commercial and legal context for claims of share trading and other activities involving the societates publicanorum requires definition of important terms – shares, share trading, company, joint-stock company, corporation. Appropriate definition is essential to clarify various claims made in modern sources.”

Geranio and Poitras argue that previous interpretations in this field relied too much on an “artful interpretation” of key terms.

Their conclusion is supported by the widely-known notion that the day and age a scientist lives in (as well as their language and culture of origin) can weigh heavily on the conclusions of their research.

4. China is trying to create a national network of cloud computing centers – Andrew Stokols

I’ve written before about the Eastern Data Western Compute 东数西算 project, China’s effort to boost its “computing power” by constructing new data centers in the country’s West…

…The vision of the EDWC is not only about building new “nodes” of data centers, it is also about creating a “national network of computing”, 全国算力一张网 quanguo suanli yizhang wang so that computing power can be shifted between data centers depending on fluctuations in computing demand and supply in different regions, not unlike the way interconnected or smart power grids can move electricity around the grid depending on where demand is greatest. One of the premises of the EDWC is that data centers, which consume large amounts of energy, should be located in areas with ample renewable energy supply and cheaper land and energy than in the populated east coast. Networking computing resources across the country can make better use of energy in the West without having to transmit electricity from West to East…

…In a televised interview in 2022, Yu Xiaohui 余晓晖, the President of the influential think tank CAICT, which has played a leading role in developing the EDWC plan, notes that “other countries are doing similar things [to the EDWC] but only within one cloud company, but this is an advantage of China, we are doing a systematic layout.” In other words, what he alludes to is the fact that the EDWC project is a state-led effort to coordinate the data center infrastructure of the entire country, which requires going beyond simply encouraging cloud/telecom providers to invest in their own new cloud computing centers. It also means creating a unified national computing network that would allow for a more dynamic allocation of computing demand across the country.

But to do this requires creating a system that can allocate demand from computing centers of different providers. From a business and data privacy standpoint this seems difficult to do. However, China’s three state-owned telecom operators are the ones playing a significant role in building out the EDWC project…

…In various documents and news announcements there have been references to 调度中心 or “adjustment centers” in each of the data center nodes, which are supposed to function as traffic centers for nationwide computing resources, balancing supply (western data centers) and demand (eastern applications). Such “adjustment centers” allow computing tasks to be dynamically allocated to different data centers, such as allocating workloads between 8 national computing hubs and 10 data center clusters, optimizing latency, and improving energy efficiency by redirecting “non-urgent” data tasks to western renewable-powered hubs during off-peak hours…

…There are technical challenges to developing the national network. But there are also obvious financial/proprietary hurdles as well, namely how to interconnect cloud networks of separate companies who have their own proprietary businesses and systems. This may be easier to do with the three national operators (China Mobile, China Telecom, and China Unicom) than it is with private cloud operators, which are still the leading cloud platform companies (Alibaba, Tencent, Huawei, Baidu)…

…In the U.S., there are some examples of so-called “multi-cloud” interconnections, such as agreements between cloud operators Microsoft and Oracle that allow clients to connect certain cloud databases stored on different cloud platforms. But China’s ambition to create a national computing network would require a much greater interconnection and coordination to carry out…

…The degree to which China is able to build out the national network of cloud computing will have implications for its digital innovation, particularly in AI. While the U.S. leads China in the number of data centers by a wide margin, China’s system could develop in ways that diverge from the proprietary open-cloud model in the U.S. in which large enterprise cloud platforms dominate the market (AWS, Microsoft Azure, Google Cloud). Whether and to what degree China’s existing cloud providers continue to dominate the market will depend on their ability to innovate and maintain their edge in the face of increasing entry by state-owned cloud providers into cloud markets.
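
The “adjustment centers” described above are, at their core, scheduling heuristics. As a way to make the idea concrete, here is a toy Python sketch of such a router – entirely hypothetical, not based on any published EDWC design, with hub names and numbers invented for illustration – that keeps latency-sensitive jobs on low-latency eastern nodes and defers non-urgent work to western, renewable-powered hubs during off-peak hours:

```python
# Hypothetical toy scheduler illustrating the "adjustment center" idea:
# urgent jobs stay on low-latency eastern hubs, while non-urgent batch
# jobs are deferred to western, renewable-powered hubs off-peak.
from dataclasses import dataclass

@dataclass
class Hub:
    name: str
    region: str          # "east" or "west"
    latency_ms: int      # typical latency to east-coast users
    free_capacity: int   # available job slots

def route(urgent: bool, hour: int, hubs: list[Hub]) -> Hub:
    available = [h for h in hubs if h.free_capacity > 0]
    if urgent:
        # Latency-sensitive work: pick the lowest-latency hub with capacity.
        return min(available, key=lambda h: h.latency_ms)
    # Non-urgent work: prefer western hubs during off-peak hours (22:00-06:00),
    # when renewable supply is cheap and eastern demand is low.
    west = [h for h in available if h.region == "west"]
    if (hour >= 22 or hour < 6) and west:
        return max(west, key=lambda h: h.free_capacity)
    return max(available, key=lambda h: h.free_capacity)

hubs = [
    Hub("Guizhou", "west", latency_ms=45, free_capacity=80),
    Hub("Ningxia", "west", latency_ms=50, free_capacity=100),
    Hub("Yangtze-Delta", "east", latency_ms=8, free_capacity=10),
]
print(route(urgent=True, hour=23, hubs=hubs).name)   # Yangtze-Delta
print(route(urgent=False, hour=23, hubs=hubs).name)  # Ningxia
```

The hard part, as the article notes, is not this routing logic but interconnecting the proprietary clouds of separate companies so that such a dispatcher has anything to dispatch across.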

5. AI Horseless Carriages – Pete Koomen

I noticed something interesting the other day: I enjoy using AI to build software more than I enjoy using most AI applications – software built with AI.

When I use AI to build software I feel like I can create almost anything I can imagine very quickly. AI feels like a power tool. It’s a lot of fun.

Many AI apps don’t feel like that. Their AI features feel tacked-on and useless, even counter-productive.

I am beginning to suspect that these apps are the “horseless carriages” of the AI era. They’re bad because they mimic old ways of building software that unnecessarily constrain the AI models they’re built with…

…Up until very recently, if you wanted a computer to do something you had two options for making that happen:

  1. Write a program
  2. Use a program written by someone else

Programming is hard, so most of us choose option 2 most of the time. It’s why I’d rather pay a few dollars for an off-the-shelf app than build it myself, and why big companies would rather pay millions of dollars to Salesforce than build their own CRM.

The modern software industry is built on the assumption that we need developers to act as middlemen between us and computers. They translate our desires into code and abstract it away from us behind simple, one-size-fits-all interfaces we can understand.

The division of labor is clear: developers decide how software behaves in the general case, and users provide input that determines how it behaves in the specific case.

By splitting the prompt into System and User components, we’ve created analogs that map cleanly onto these old world domains. The System Prompt governs how the LLM behaves in the general case and the User Prompt is the input that determines how the LLM behaves in the specific case.

With this framing, it’s only natural to assume that it’s the developer’s job to write the System Prompt and the user’s job to write the User Prompt. That’s how we’ve always built software.

But in Gmail’s case, this AI assistant is supposed to represent me. These are my emails and I want them written in my voice, not the one-size-fits-all voice designed by a committee of Google product managers and lawyers.

In the old world I’d have to accept the one-size-fits-all version because the only alternative was to write my own program, and writing programs is hard.

In the new world I don’t need a middleman to tell a computer what to do anymore. I just need to be able to write my own System Prompt, and writing System Prompts is easy!…

…In most AI apps, System Prompts should be written and maintained by users, not software developers or even domain experts hired by developers.

Most AI apps should be agent builders, not agents…

…AI-native software should maximize a user’s leverage in a specific domain. An AI-native email client should minimize the time I have to spend on email. AI-native accounting software should minimize the time an accountant spends keeping the books.
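
Koomen’s System Prompt/User Prompt argument is easy to make concrete. Below is a minimal sketch using the OpenAI Python SDK’s chat-completions interface (the model name and prompts are placeholders of mine; any chat API with system and user roles would work the same way), where the only departure from the “horseless carriage” design is that the System Prompt is treated as user-owned configuration rather than developer-owned code:

```python
# Minimal sketch of a user-owned System Prompt, per Koomen's argument.
# Assumes the OpenAI Python SDK; model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_reply(system_prompt: str, email: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            # The System Prompt governs behaviour in the general case...
            {"role": "system", "content": system_prompt},
            # ...while the User Prompt supplies the specific case.
            {"role": "user", "content": f"Draft a reply to this email:\n\n{email}"},
        ],
    )
    return response.choices[0].message.content

# Old world: a one-size-fits-all System Prompt baked in by the developer.
developer_voice = "You are a helpful assistant. Write polite, professional replies."

# Koomen's new world: the user edits the System Prompt, not the developer.
my_voice = "Write as me: terse, informal, no sign-offs, never apologetic."

print(draft_reply(my_voice, "Hi, any update on the Q3 report?"))
```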


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google Cloud), Amazon (parent of AWS), Microsoft (parent of Azure), Salesforce, and Tencent. Holdings are subject to change at any time.

This Book Explains The Economic Problems Facing The USA and China Today (Including Tariffs!)

The book mentioned in the title of this article is The Other Half of Macroeconomics and the Fate of Globalization written by economist Richard C. Koo (Gu Chao Ming) and published in 2018.

I first came across Koo’s book in March 2020 when I chanced upon a review of it in Mandarin, written by investor Li Lu. I can read Mandarin and I found myself agreeing with the ideas from the book that Li shared, so much so that I made a self-directed attempt at translating the review into English. But I only began reading the actual book near the start of this year and finished it about a month ago. There was even more richness in the book’s ideas about how economies should operate than what was shared in Li’s already-wonderful review.

Earlier this month, the US government, under the Trump administration, made sweeping changes to the global trading system by introducing the Reciprocal Tariff Policy, which raised tariffs, sometimes significantly so, for many of the US’s trading partners. Major driving forces behind the Reciprocal Tariff Policy ostensibly include the US’s sustained trade deficits (particularly with China) and a desire by the Trump administration to bring manufacturing jobs back to the country.

As I contemplated the Trump administration’s actions, and the Chinese government’s reactions, I realised Koo’s book explained why all these issues happened. So I’m writing my own notes and takeaways from the book for easy reference in the future, and I would like to share them in this article in the hope that they could be useful for you. I will be borrowing from my translation of Li’s Mandarin review in this article. Below the horizontal line, all content in grey font is excerpted from my translation, while all italicised content is excerpted from the book.


The three stages of economic development a country goes through

There are six important ideas that Koo discussed in his book. One of them is the concept that a country goes through three distinct stages of economic development over time. 

The first stage of development would be a country that is industrialising and has yet to reach the Lewis Turning Point (LTP). The LTP is the “point at which urban factories have finally absorbed all the surplus rural labour.” When a country starts industrialising, people are mostly living in rural areas and there are only a very few educated elite who have the knowhow to kickstart industrialisation. There is also a surplus of labour. As a result, the educated elite – the industrialists – hold the power and “most of the gains during the initial stage of industrialisation therefore go to the educated few.”  The first stage of economic development is also when income inequality widens – the gains from industrialisation continue to accumulate in the hands of the elite as they reinvest profits into their businesses because there continues to be a surplus of labour.

The second stage of development happens when an industrialising economy reaches the LTP. At this point, labour “gains the bargaining power to demand higher wages for the first time in history, which reduces the share of output accruing to business owners.” But business owners are happy to continue reinvesting their profits as they are still “achieving good returns, leading to further tightness in the labour market.” This dynamic leads to an economy’s “golden era”:

“As labor’s share increases, consumption’s share of GDP will increase at the expense of investment. At the same time, the explosive increase in the purchasing power of ordinary citizens means that most businesses are able to increase profits simply by expanding existing productive capacity. Consequently, both consumption and investment will increase rapidly…

…Inequality also diminishes as workers’ share of output increases relative to that of capital… 

…With incomes rising and inequality falling, this post-LTP maturing phase may be called the golden era of economic growth…

…Higher wages force businesses to look harder for profitable investment opportunities. On the other hand, the explosive increase in the purchasing power of ordinary workers who are paid ever-higher wages creates major investment opportunities. This prompts businesses to invest for two reasons. First, they seek to increase worker productivity so that they can pay ever-higher wages. Second, they want to expand capacity to address workers’ increasing purchasing power. Both productivity- and capacity-enhancing investments increase demand for labor and capital that add to economic growth. In this phase, business investment increases workers’ productivity even if their skill level remains unchanged…

…With rapid improvements in the living standards of most workers, the post-LTP maturing phase is characterised by broadly distributed benefits from economic growth.”

The golden era has its problems too. This is because this period is when “workers begin to utilise their newfound bargaining power” such as by organising strikes. But business owners and labour tend to be able to work out their differences.

The third stage of development is what Koo calls a “post-LTP pursued economy.” When a country is in its golden era, ever-growing wages at some point create inroads for foreign competitors with lower wages – and the country starts being chased. This is when businesses in the country find it very challenging to “find attractive investment opportunities at home because it often makes more sense for them to buy directly from the “chaser” or to invest in that country themselves.” This is also when “the return on capital is higher abroad than at home.” During the pursued stage, “real wage growth will be minimal” and “economic growth also slows.” Although a pursued country can continue to grow economically, a major problem is that inequality once again rears its ugly head:

“Japan’s emergence in the 1970s shook the U.S. and European industrial establishments. As manufacturing workers lost their jobs, ugly trade frictions ensued between Japan and the West. This marked the first time that Western countries that had already passed their LTPs had been chased by a country with much lower wages…

…While Western companies at the forefront of technology continued to do well, the disappearance of many well-paying manufacturing jobs led to worsening income inequality in these countries…

…Some of the pain Western workers felt was naturally offset by the fact that, as consumers, they benefited from cheaper imports from Asia, which is one characteristic of import-led globalisation. Businesses with advanced technology continued to do well, but it was no longer the case that everyone in society was benefiting from economic growth. Those whose jobs could be transferred to lower-cost locations abroad saw their living standards stagnate or even fall.”

Koo wrote that Western economies – the USA and Europe – entered their golden eras around the 1950s and became pursued starting in the 1970s by Japan. During the golden era of the West, “it was in an export-led globalisation phase as it exported consumer and capital goods to the world.” But as the West started getting pursued, they entered “an import-led globalisation phase as capital seeks higher returns abroad and imports flood the domestic market.”

The four states of an economy

Another important idea from Koo’s book is the concept that an economy has four distinct states, which are summarised in Table 1 below:

An economy is always in one of four possible states depending on the presence or absence of lenders (savers) and borrowers (investors). They are as follows: (1) both lenders and borrowers are present in sufficient numbers, (2) there are borrowers but not enough lenders even at high interest rates, (3) there are lenders but not enough borrowers even at low interest rates, and (4) both lenders and borrowers are absent.

Table 1: The four states of an economy

| | Lenders (savers) present | Lenders (savers) absent |
|---|---|---|
| Borrowers (investors) present | Case 1: both lenders and borrowers present in sufficient numbers | Case 2: borrowers present, but not enough lenders even at high interest rates |
| Borrowers (investors) absent | Case 3: lenders present, but not enough borrowers even at low interest rates | Case 4: both lenders and borrowers absent |

Koo’s idea that an economy has four distinct states is important because mainstream economic thought does not cater for the disappearance of borrowers:

“Of the four, only Cases 1 and 2 are discussed in traditional economics, which implicitly assumes there are always enough borrowers as long as real interest rates are low enough.”

There are two key reasons why an economy would be in Cases 3 and 4, i.e. when borrowers disappear. The first is when private-sector businesses are unable to find attractive investment opportunities (this is related to economies in the third stage of development discussed earlier in this article, when attractive domestic investment opportunities become scarce):

“The first is one in which private‐sector businesses cannot find investment opportunities that will pay for themselves. The private sector will only borrow money if it believes it can pay back the debt with interest. And there is no guarantee that such opportunities will always be available. Indeed, the emergence of such opportunities depends very much on scientific discoveries and technological innovations, both of which are highly irregular and difficult to predict.

In open economies, businesses may also find that overseas investment opportunities are more attractive than those available at home. If the return on capital is higher in emerging markets, for example, pressure from shareholders will force businesses to invest more abroad while reducing borrowings and investments at home. In modern globalized economies, this pressure from shareholders to invest where the return on capital is highest may play a greater role than any technological breakthroughs, or lack thereof, in the decision as to whether to borrow and invest at home.”

The second reason for the disappearance of borrowers is what Koo calls a “balance sheet recession”, which he describes as follows:

“In the second set of circumstances, private‐sector borrowers have sustained huge losses and are forced to rebuild savings or pay down debt to restore their financial health. Such a situation may arise following the collapse of a nationwide asset price bubble in which a substantial part of the private sector participated with borrowed money. The collapse of the bubble leaves borrowers with huge liabilities but no assets to show for the debt. Facing a huge debt overhang, these borrowers have no choice but to pay down debt or increase savings in order to restore their balance sheets, regardless of the level of interest rates.

Even when the economy is doing well, there will always be businesses that experience financial difficulties or go bankrupt because of poor business decisions. But the number of such businesses explodes after a nationwide asset bubble bursts.

For businesses, negative equity or insolvency implies the potential loss of access to all forms of financing, including trade credit. In the worst case, all transactions must be settled in cash, since no supplier or creditor wants to extend credit to an entity that may seek bankruptcy protection at any time. Many banks and other depository institutions are also prohibited by government regulations from extending or rolling over loans to insolvent borrowers in order to safeguard depositors’ money. For households, negative equity means savings they thought they had for retirement or a rainy day are no longer there. Both businesses and households will respond to these life‐threatening conditions by focusing on restoring their financial health—regardless of the level of interest rates—until their survival is no longer at stake.”

A balance sheet recession can be a huge problem for a country’s economy if left unresolved, as it can lead to a rapidly shrinking economy through the “fallacy of composition” problem:

“One person’s expenditure is another person’s income…

…The interaction between thinking and reacting households and businesses create a situation where one plus one does not necessarily equal two. For example, if A decides to buy less from B in order to set aside more savings for an uncertain future, B will have less income to buy things from A. That will lower A’s income, which in turn will reduce the amount A can save.

This interaction between expenditure and income also means that, at the national level, if one group is saving money, another group must be doing the opposite – “dis-saving” – to keep the economy running. In most cases, this dis-saving takes the form of borrowing by businesses that seek to expand their operations. If everyone is saving and no one is dis-saving or borrowing, all of those savings will leak out of the economy’s income stream, resulting in less income for all.

For example, if a person with an income of $1,000 decides to spend $900 and save $100, the $900 that is spent becomes someone else’s income and continues circulating in the economy. The $100 that is saved is typically deposited with a financial institution such as a bank, which then lends it to someone else who can make use of it. When that person borrows and spends the $100, total expenditures in the economy amount to $900 plus $100, which is equal to the original $1,000, and the economy moves forward…

…If there are no borrowers for $100 in savings in the above example, even at zero interest rates, total expenditures in the economy will drop to $900, while the saved $100 remains unborrowed in financial institutions or under mattresses. The economy has effectively shrunk by 10 percent, from $1,000 to $900. That $900 now becomes someone else’s income. If that person decides to save 10 percent, and there are still no borrowers, only $810 will be spent, causing the economy to contract to $810. This cycle will repeat, and the economy will shrink to $730, if borrowers remain on the sidelines. This process of contraction is called a “deflationary spiral.”…

…Keynes had a name for this state of affairs, in which everyone wants to save but is unable to do so because no one is borrowing. He called it the paradox of thrift. It is a paradox because if everyone tries to save, the net result is that no one can save.

The phenomenon of right behaviour at the individual level leading to a bad result collectively is known as the “fallacy of composition.””
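Koo’s $1,000 → $900 → $810 → $730 example is a simple geometric contraction, and it can be traced in a few lines of code. Below is a minimal sketch in Python; the starting income and 10% savings rate come from the quote above, while the `borrowers_present` flag (modelling a bank lending the saved $100 to someone who spends it) is my own illustrative addition:

```python
def income_stream(initial_income=1000.0, savings_rate=0.10,
                  borrowers_present=False, rounds=10):
    """Trace total expenditure per round of the circular flow of income.

    With borrowers present, the saved portion is lent out and re-spent,
    so expenditure holds at the initial level. With borrowers absent,
    savings leak out of the income stream each round and the economy
    contracts geometrically (Koo's deflationary spiral)."""
    income = initial_income
    path = []
    for _ in range(rounds):
        spent = income * (1 - savings_rate)
        saved = income * savings_rate
        # If someone borrows and spends the savings, total expenditure
        # is spent + saved, i.e. unchanged; otherwise only `spent`
        # returns as income in the next round.
        income = spent + saved if borrowers_present else spent
        path.append(round(income))
    return path

print(income_stream())                        # [900, 810, 729, 656, ...]
print(income_stream(borrowers_present=True))  # [1000, 1000, 1000, ...]
```

With borrowers present, expenditure holds at $1,000 every round; without them, the economy shrinks by the savings rate each round and decays toward zero, which is exactly the deflationary spiral described in the quote.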

Japan was the “first advanced country to experience a private-sector shift to debt minimization for balance sheet reasons since the Great Depression.” Japan’s real estate bubble burst in 1990, with real estate prices eventually falling by 87%, “devastating the balance sheets of businesses and financial institutions across the country” and leading to a disappearance of borrowers:

“Demand for funds shrank rapidly when the bubble finally burst in 1990. Noting that the economy was also slowing sharply, the BOJ took interest rates down from 8 percent at the height of the bubble to almost zero by 1995. But demand for funds not only failed to recover but actually turned negative that year. Negative demand for funds means that Japan’s entire corporate sector was paying down debt at a time of zero interest rates, a world that no economics department in university or business school had ever envisioned. The borrowers not only stopped borrowing but began moving in the opposite direction by paying down debt and continued doing so for a full ten years, until around 2005…

…While in a textbook economy the household sector saves and the corporate sector borrows, both sectors became net-savers in post-1999 Japan, with the corporate sector becoming the largest saver in the country from 2002 onward in spite of zero interest rates.”

The Western economies experienced their own balance sheet recessions starting in 2008 with the bursting of housing bubbles that year. When the “bubbles collapsed on both sides of the Atlantic in 2008, the balance sheets of millions of households and many financial institutions were devastated.” Borrowers also disappeared; the “private sectors in virtually all major advanced nations have been increasing savings or paying down debt since 2008 in spite of record low interest rates.” For example, “the U.S. private sector saved 4.1 percent of GDP at near-zero interest rates in the four quarters through Q1 2017” and the “Eurozone’s overall private sector is saving 4.6 percent of GDP in spite of negative interest rates.”

Prior to the Japanese episode, the most recent example was the Great Depression:

“Until 2008, the economics profession considered a contractionary equilibrium (the $500 economy) brought about by a lack of borrowers to be an exceptionally rare occurrence – the only recent example was the Great Depression, which was triggered by the stock market crash in October 1929 and during which the U.S. lost 46 percent of nominal GNP. Although Japan fell into a similar predicament when its asset price bubble burst in 1990, its lessons were almost completely ignored by the economics profession until the Lehman shock of 2008.”

The appropriate macroeconomic policies

The third important idea from Koo’s book is that depending on which stage (pre-LTP, golden era, or pursued) and which state (Case 1, Case 2, Case 3, or Case 4) a country’s economy is in, there are different macroeconomic policies that would be appropriate.

First, it’s important to differentiate between the two types of policy governments can wield, namely monetary policy and fiscal policy:

“The government also has two types of policy, known as monetary and fiscal policy, that it can use to help stabilise the economy by matching private-sector savings and borrowings. The more frequently used is monetary policy, which involves raising or lowering interest rates to assist the matching process. Since an excess of borrowers is usually associated with a strong economy, a higher policy rate might be appropriate to prevent overheating and inflation. Similarly, a shortage of borrowers is usually associated with a weak economy, in which case a lower policy rate might be needed to avert a recession or deflation.

With fiscal policy, the government itself borrows and spends money on such projects as highways, airports, and other social infrastructure. While monetary policy decisions can be made very quickly by the central bank governor and his or her associates, fiscal policy tends to be very cumbersome in a peacetime democracy because elected representatives must come to an agreement on how much to borrow and where to spend the money. Because of the political nature of these decisions and the time it takes to implement them, most recent economic fluctuations were dealt with by central banks using monetary policy.”

Fiscal policy is more important than monetary policy when a country’s economy is in the pre-LTP stage, but the relative importance of the two types of policies switches once the economy enters the golden era; fiscal policy once again becomes the more important type of policy when the economy is in the pursued stage: 

“In the early phases of industrialisation, economic growth will rely heavily on manufacturing, exports, and the formation of capital etc. At this juncture, the government’s fiscal policies can play a huge role. Through fiscal policies, the government can gather scarce resources and invest them into basic infrastructure, resources, and export-related services etc. These help emerging countries to industrialise rapidly. Nearly every country that was in this stage of development saw their governments implement policies that promote active governmental support.

In the second stage of development, the twin engines of economic growth are rising wages and consumer spending. The economy is already in a state of full employment, so an increase in wages in any sector or field will inevitably lead to higher wages in other areas. Rising wages lead to higher spending and savings, and companies will use these savings to invest in productivity to improve output. In turn, profits will grow, leading to companies having an even stronger ability to raise wages to attract labour. All these combine to create a positive feedback loop of economic growth. Such growth comes mainly from internal sources in the domestic economy. Entrepreneurs, personal and household investing behaviour, and consumer spending patterns are the decisive players in promoting economic growth, since they are able to nimbly grasp business opportunities in the shifting economic landscape. Monetary policies are the most effective tool in this phase, compared to fiscal policies, for a few reasons. First, fiscal policies and private-sector investing both tap on a finite pool of savings. Second, conflicts could arise between the private sector’s investing activities and the government’s if poorly thought-out fiscal policies are implemented, leading to unnecessary competition for resources and opportunities. 

When an economy reaches the third stage of development (the stage where it’s being chased), fiscal policy regains its importance. At this stage, domestic savings are high, but the private sector is unwilling to invest domestically because the investing environment has deteriorated – domestic opportunities have dwindled, and investors can get better returns from investing overseas. The government should step in at this juncture, like what Japan did, and invest heavily in infrastructure, education, basic research and more. The returns are not high. But the government-led investments can make up for the lack of private-sector investments and the lack of consumer-spending because of excessive savings. In this way, the government can protect employment in society and prevent the formation of a vicious cycle of a decline in GDP. In contrast, monetary policy is largely ineffective in the third stage.”

It’s worth noting that an economy in the pre-LTP stage is likely to be in Case 4, where borrowers and lenders are both absent. Meanwhile, an economy in its golden era is likely to be in Case 1 (where both borrowers and lenders are present in abundance) most of the time, and in Case 2 (where borrowers are present but lenders are absent) during a run-of-the-mill recession, although a Case 1 golden-era economy can also quickly fall into Case 3 or Case 4. Once an economy is in the pursued stage, it is likely to be in Case 3 (where borrowers are absent but lenders are present) because of a lack of domestic investment opportunities or a balance sheet recession, or in Case 4 (where borrowers and lenders are both absent) because of a balance sheet recession.

When a country’s economy is in Case 1 or Case 2, monetary policy is more important:

“Case 1 requires a minimum of policy intervention – such as slight adjustments to interest rates – to match savers and borrowers and keep the economy going. Case 1, therefore, is associated with ordinary interest rates and can be considered the ideal textbook case.

The causes of Case 2 (insufficient lenders) can be traced to both macro and financial factors. The most common macro factor is when the central bank tightens monetary policy to rein in inflation. The tighter credit conditions that result certainly leave lenders less willing to lend. Once inflation is under control, however, the central bank typically eases monetary policy, and the economy returns to Case 1. A country may also be too poor or underdeveloped to save. If the paradox of thrift leaves a country too poor to save, the situation would be classified as Case 3 or 4 because it is actually attributable to a lack of borrowers.

Financial factors weighing on lenders may also push the economy into Case 2. One such factor is an excess of non-performing loans (NPLs) in the banking system, which depresses banks’ capital ratios and prevents them from lending. This is what is typically called a “credit crunch.” Over-regulation of financial institutions by the authorities can also lead to a credit crunch. When many banks encounter NPL problems at the same time, mutual distrust may lead not only to a credit crunch, but also to a dysfunctional interbank market, a state of affairs typically referred to as a “financial crisis.”…

…Non-developmental causes of a shortage of lenders all have well-known remedies… For example, the government can inject capital into the banks to restore their ability to lend, or it can relax regulations preventing financial institutions from serving as financial intermediaries.

In the case of a dysfunctional interbank market, the central bank can act as lender of last resort to ensure the clearing system continues to operate. It can also relax monetary policy. The conventional emphasis on monetary policy and concerns over the crowding-out effect of fiscal policy are justified in Cases 1 and 2 where there are borrowers but (for a variety of reasons in Case 2) not enough lenders.”

When a country’s economy is in Case 3 or Case 4, fiscal policy is more important because monetary policy does not work when borrowers disappear, although the appropriate type of fiscal policy can also differ:

“It should be noted that in the immediate aftermath of a bubble collapse, the economy is usually in Case 4, characterized by a disappearance of both lenders and borrowers. The lenders stop lending because they provided money to borrowers who participated in the bubble and are now facing technical or real insolvency. Banks themselves may be facing severe solvency problems when many of their borrowers are unable to service their debts…

…In a financial crisis, therefore, the central bank must act as lender of last resort to ensure that the settlement system continues to function…

Once the bubble bursts and households and businesses are left facing debt overhangs, no amount of monetary easing by the central bank will persuade them to resume borrowing until their balance sheets are fully repaired…

…When private-sector borrowers disappear and monetary policy stops working, the correct way to prevent a deflationary spiral is for the government to borrow and spend the excess savings of the private sector… 

…In other words, the government should mobilize fiscal policy and serve as borrower of last resort when the economy is in Case 3 or 4. 

If the government borrows and spends the $100 left unborrowed by the private sector, total expenditures will amount to $900 plus $100, or $1,000, and the economy will move on. This way, the private sector will have the income it needs to pay down debt or rebuild savings…

…It has been argued that the fiscal stimulus is essential when the economy is in Case 3 or 4. But there are two kinds of fiscal stimulus: government spending and tax cuts. If the economy is in a balance sheet recession, the correct form of fiscal stimulus is government spending. If the economy is suffering from a lack of domestic investment opportunities, the proper response would be a combination of tax cuts and deregulation to encourage innovation and risk taking… augmented by government spending…

…The close relationship observed prior to 2008 between central-bank-supplied liquidity, known as the monetary base, and growth in money supply and private-sector credit broke down completely after the bubbles burst and the private sector began minimizing debt. Here money supply refers to the sum of all bank accounts plus bills and coins circulating in the economy, and credit means the amount of money lent to the private sector by financial institutions…

…In this textbook world, a 10 percent increase in central bank liquidity would increase both the money supply and credit by 10 percent. This means there were enough borrowers in the private sector to borrow all the funds supplied by the central bank, and the economies were in Case 1…

…But after the bubble burst, which forced the private sector to minimize debt in order to repair its balance sheet, no amount of central bank accommodation was able to increase private-sector borrowings. The U.S. Federal Reserve, for example, expanded the monetary base by 349 percent after Lehman Brothers went under. But the money supply grew by only 76 percent and credit by only 27 percent. A 27 percent increase in private-sector credit over a period of nearly nine years represents an average annual increase of only 2.75 percent, which is next to nothing.”
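As a quick check on the arithmetic in that last sentence (my own back-of-the-envelope working, not from the book), cumulative and annual growth rates are linked by compounding:

\[
(1 + r)^{T} = 1.27 \quad\Rightarrow\quad r = 1.27^{1/T} - 1
\]

For \(T = 9\) years this gives \(r \approx 2.7\%\) per year; Koo’s quoted 2.75 percent corresponds to his “nearly nine years” (\(T \approx 8.8\)). Either way, private-sector credit growth was indeed next to nothing despite the enormous expansion of the monetary base.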

Fiscal stimulus equates to government spending, which increases public debt. Koo suggests that (1) when an economy is in Case 3 or Case 4, rising and/or high public debt is not necessarily a problem, and (2) the limits of public debt should be determined by the bond market:

“Debt is simply the flip side of savings. Somebody has to be saving for debt to grow, and it is bound to increase as long as someone in the economy continues to save. Moreover, if someone is saving but debt levels fail to grow (i.e., if no one borrows and spends the saved funds), the economy will fall into the $1,000 – $900 – $810 – $730 deflationary spiral…

…Growth in debt (excluding debt financed by the central bank) is merely a reflection of the fact that the private sector has continued to save. 

If debt is growing faster than actual savings, it simply means there is double counting somewhere, i.e., somebody has borrowed the money but instead of using it himself, he lent it to someone else, possibly with a different maturity structure (maturity transfer) or interest rates (fixed to floating or vice versa). With the prevalence of carry trades and structured financial products involving multiple counterparties, debt numbers may grow rapidly on the surface, but the actual debt can never be greater than the actual savings. 

Furthermore, the level of debt anyone can carry also depends on the level of interest rates and the quality of projects financed with the debt. If the projects earn enough to pay back both borrowing costs and principal, then no one should care about the debt load, no matter how large, because it does not represent a future burden on anyone. Similarly, no matter how great the national debt, if the funds are invested in public works projects capable of generating returns high enough to pay back both interest and principal, the projects will be self-financing and will not increase the burden on future taxpayers…

…Whether or not fiscal policy has reached its limits should be decided by the bond market, not by some economist using arbitrarily chosen criteria. 

During the golden era, when the private sector has strong demand for funds to finance productivity- and capacity-enhancing investments, fiscal stimulus will have a minimal if not negative impact on the economy because of the crowding-out effect. The bond market during this era correctly assigns very low prices (high yields) to government bonds, indicating that such stimulus is not welcome.

During the pursued era or during balance sheet recessions, however, private-sector demand for funds is minimal if not negative. At such times, fiscal stimulus is not only essential, but it has maximum positive impact on the economy because there is no danger of crowding out. During this period, the bond market correctly sets very high prices (low yields) for government bonds, indicating they are welcome…

…Ultra-low bond yields in economies in Cases 3 and 4 are also a signal to the government to look for public works projects capable of producing a social rate of return in excess of those rates. If such projects can be found, fiscal stimulus centered on them will ultimately place no added burden on future taxpayers.” 

The experience of the US, the UK, Japan, and Europe in the aftermath of the housing bubbles that burst in 2008 and thrust them into balance sheet recessions is instructive on the importance of fiscal policy in combating such recessions:

“In November 2008, just two months after Lehman Brothers went under, the G20 countries agreed at an emergency meeting in Washington to implement fiscal stimulus. That decision kept the world economy from falling into a deflationary spiral. But in 2010, the fiscal orthodoxy of those who did not understand balance sheet recessions reasserted itself at the Toronto G20 meeting, where members agreed to cut deficits in half even though private-sector balance sheets were nowhere near a healthy state. The result was a sudden loss of forward momentum for the global economy that prolonged the recession unnecessarily in many parts of the world. After 2010, those countries that understood the danger of balance sheet recessions did well, while those that did not fell by the wayside…

…Bernanke and Yellen both understood this, and they used the expression “fiscal cliff” to warn Congress about the danger posed by fiscal consolidation, which the Republicans and many orthodox economists supported. The extent of Bernanke’s concerns about fiscal consolidation can be gleaned from a press conference on April 25, 2012, when he was asked what the Fed would do if Congress pushed the U.S. economy off the fiscal cliff. He responded, “There is . . . absolutely no chance that the Federal Reserve could or would have any ability whatsoever to offset that effect on the economy.” Bernanke clearly understood that the Fed’s monetary policy not only cannot offset the negative impact of fiscal consolidation, but would also lose its effectiveness if the government refused to act as borrower of last resort.

Even though the U.S. came frighteningly close to falling off the fiscal cliff on a number of occasions, including government shutdowns, sequesters, and debt‐ceiling debates, it ultimately managed to avoid that outcome thanks to the efforts of officials at the Fed and the Obama administration. And that is why the U.S. economy is doing so much better than Europe, where virtually every country did fall off the fiscal cliff…

…The warnings about the fiscal cliff set the Fed apart from its counterparts in Japan, the UK, and Europe. In the UK, then-BOE Governor Mervyn King publicly supported David Cameron’s rather draconian austerity measures, arguing that his bank’s QE policy would provide necessary support for the British economy. At the time, the UK private sector was saving a full 9 percent of GDP when interest rates were at their lowest levels in 300 years. That judgement led to the disastrous performance of the UK economy during the first two years of the Cameron administration…

…BOJ Governor Haruhiko Kuroda also argued strongly in favor of hiking the consumption tax rate, believing a Japanese economy supported by his quantitative easing regime would be strong enough to withstand the shock of fiscal consolidation. This was in spite of the fact that the Japanese private sector was saving 6.2 percent of GDP at a time of zero interest rates. The tax hike, which was carried out in April 2014, threw the Japanese economy back into recession…

…ECB President Mario Draghi has admonished member governments to meet the austerity target imposed by the Stability and Growth Pact at every press conference, even though his own inflation forecasts have been revised downwards almost every time they are updated. He seems to be completely oblivious to the danger posed by fiscal austerity when the Eurozone private sector has been saving an average of 5 percent of GDP since 2008 despite zero or even negative interest rates.” 

Koo also noted that when Japan’s real estate bubble burst in 1990, the government was “quick to administer fiscal stimulus to stop the implosion” and that “the economy responded positively each time fiscal stimulus was implemented, but lost momentum each time the stimulus was removed.” The Japanese government was under enormous pressure to cut fiscal stimulus in the aftermath of the bubble, but it did not completely cave, and the Japanese economy fared better than it otherwise would have:

“The orthodox fiscal hawks who dominated the press and academia also tried to stop fiscal stimulus at every step of the way, arguing that large deficits would soon lead to skyrocketing interest rates and a fiscal crisis. These hawks forced politicians to cut stimulus as soon as the economy showed signs of life, prompting another downturn. The resulting on-again, off-again fiscal stimulus did not imbue the public with confidence in the government’s handling of the economy. Fortunately, the LDP [Liberal Democratic Party] had enough pork-barrel politicians to keep a minimum level of stimulus needed in place, and as a result, Japanese GDP never once fell below its bubble peak. Nor did the Japanese unemployment rate ever exceed 5.5 percent.

That was a fantastic achievement in view of the fact that the Japanese private sector was saving an average of 8 percent of GDP from 1995 to 2005, and the Japanese lost three times as much wealth (as a share of GDP) as Americans did during the Great Depression, when nominal GNP fell 46 percent.”

The reasons for the US backlash against globalisation & the conflict between free trade and free capital

The fourth and fifth important ideas from Koo’s book are connected; they are, respectively, (a) the possible reasons behind the backlash against globalisation seen from the current US government under the Trump administration, and (b) the possible conflict between free trade and the free movement of capital. Again, Koo’s book was published in 2018, so it was discussing Donald Trump’s first term as President. But the ideas appear to me to be very applicable to today’s context.

Koo advanced that the Western economies’ entrance into the third stage of economic development – the pursued stage – is a reason for the backlash against globalisation:

“One reason for the frustration and social backlash witnessed in the advanced countries is that these countries are experiencing the post-Lewis Turning Point (LTP) pursued phase for the first time in history… 

…Many were caught off guard, having assumed that the golden era that they enjoyed into the 1970s would last forever. It comes as no surprise that those who have seen no improvement in their living standards for many years but still remember the golden age, when everyone was hopeful and living standards were steadily improving, would long for the “good old days.”…

…In the U.S. too, the Trump phenomenon, which has depended largely on the support of blue-collar white males, suggests that people are longing for the life they enjoyed during the golden era, when U.S. manufacturing was the undisputed leader of the world.

Participants in this social backlash in many of the pursued economies view globalization as the source of all evil and are trying to slow down the free movement of both goods and people. Donald Trump and others like him are openly hostile toward immigration while arguing in favour of protectionism and the scuttling of agreements such as the TPP that seek even freer trade.”

Koo described the mainstream view that free trade creates overall gains for trading partners, but cautioned that the view rests on a flawed assumption, namely that imports and exports will remain largely balanced as free trade grows; this wrong assumption has also contributed to the backlash against globalisation:

“Economists have traditionally argued that while free trade creates both winners and losers within the same country, it offers significant overall welfare gains for both trading partners because the gains of the winners are greater than the losses of the losers. In other words, there should be more winners than losers from free trade…

…This conclusion, however, is based on one key assumption: that imports and exports will be largely balanced as free trade expands. When – as in the U.S. during the past 30 years – that assumption does not hold and a nation continues to run massive trade deficits, free trade may produce far more losers than theory would suggest. With the U.S. running a trade deficit of almost [US]$740bn a year, or about four percent of GDP, there were apparently enough losers from free trade to put the protectionist Donald Trump into the White House. The fact that Hillary Clinton was also nominated to be the Democratic Party’s candidate for president in the arena full of banners saying “No to TPP” indicates that the social backlash has grown very large indeed.”

Koo clarified that free trade is important and has its benefits, but the way free trade has taken place since World War II is hugely problematic because of (1) the way free trade is structured, and (2) the free movement of capital that is happening in parallel:

“Outright protectionism is likely to benefit the working class in the short term only. In the long run, history has repeatedly shown that protected industries always fall behind on competitiveness and technological advances, which means the economy will stagnate and be overtaken by more dynamic competitors…

…This does not mean that free trade as practiced since 1945 and globalism in general have no problems. They both have major issues, but these can be addressed if properly understood. A correct understanding is important here because even though increasing imports is the most visible feature of an economy in a pursued phase, trade deficits and the plight of workers displaced by imports have been made far worse by the free movement of capital since 1980…

…Once the U.S. opened up its massive markets to the world after 1945 and the GATT-based [General Agreement on Tariffs and Trade] system of free trade was adopted, nations belonging to this system found that it was possible to achieve economic growth without territorial expansion as long as they could produce competitive products. The first countries to recognize this were the vanquished nations of Japan and West Germany, which then decided to devote their best people to developing globally competitive products…

…By the end of the 1970s, however, the West began losing its ability to compete with Japanese firms as the latter overtook the U.S. and European rivals in many sectors, including home appliances, shipbuilding, steel, and automobiles. This led to stagnant income growth and disappearing job opportunities for Western workers.

When Japan joined the GATT in 1963, it still had many tariff and non-tariff trade barriers. In other words, while Western nations had been steadily reducing their own trade barriers, they were suddenly confronted with an upstart from Asia that still had many barriers in place. But as long as Japan’s maximum tariff rates were falling as negotiated and the remaining barriers applied to all GATT members equally, GATT members who had opened their markets earlier could do little under the agreement’s framework to force Japan to open its market (the same problem resurfaced when China joined the WTO 38 years later)…

…When U.S.-Japan trade frictions began to flare up in the 1970s, however, exchange rates still responded correctly to trade imbalances. In other words, when Japanese exports to the U.S. outstripped U.S. exports to Japan, there were more Japanese exporters selling dollars and buying yen to pay employees and suppliers in Japan than there were U.S. exporters selling yen and buying dollars to pay employees and suppliers in the U.S.

Since foreign exchange market participants in those days consisted mostly of exporters and importers, excess demand for yen versus the dollar caused the yen to strengthen against the dollar. That, in turn, made Japanese products less competitive in the U.S. As a result, trade frictions between the U.S. and Japan were prevented from growing any worse than they did because the dollar fell from ¥360 in mid-1971 to less than ¥200 in 1978 in response to widening Japanese trade surpluses with the U.S.

But this arrangement, in which the foreign exchange market acted as a trade equalizer, broke down with financial liberalization, which began in the U.S. with the Monetary Control Act of 1980…

…These changes prompted huge capital outflows from Japan as local investors sought higher-yielding U.S. Treasury securities. Since Japanese investors needed dollars to buy Treasuries, their demand for dollars in the currency market outstripped the supply of dollars from Japanese exporters and pushed the yen back to ¥280 against the dollar. This rekindled the two countries’ trade problems, because few U.S. manufacturers were competitive vis-a-vis the Japanese at that exchange rate.

When calls for protectionism engulfed Washington, President Ronald Reagan, a strong supporter of free trade, responded with the September 1985 Plaza Accord, which took the dollar from ¥240 in 1985 down to ¥120 just two years later. The dollar then rose to ¥160 in 1990 but subsequently fell as low as ¥79.75 in April 1995, largely ending the trade-related hostilities that had plagued the two nations’ relationship for nearly two decades…

…Capital transactions made possible by the liberalization of cross-border capital flows also began to dominate the currency market. Consequently, capital inflows to the U.S. have led to continued strength of the dollar – and stagnant or declining incomes for U.S. workers – even as U.S. trade deficits continue to mount. In other words, the foreign exchange market lost its traditional function as an automatic stabilizer for trade balances, and the resulting demands for protectionism in deficit countries are now at least as great as they were before the Plaza Accord in 1985.”

Specifically, with regard to the belligerent relationship the US has with China today, Koo suggested that it stems from flaws in the free trade framework of the World Trade Organisation (WTO):

“…a key contradiction in the WTO framework: the fact that China levies high tariffs on imports from all WTO nations is no reason why the U.S.—which runs a huge trade deficit with China—should have to settle for lower tariffs on imports from China.

This problem arose because the developed‐world members of the WTO had already lowered tariffs among themselves before developing countries such as China, with their significantly lower wages and higher tariffs, were allowed to join. When they joined, developing countries could argue that they were still underdeveloped and needed higher tariffs to allow infant domestic industries to grow and to keep their trade deficits under control. Although that was a valid argument for developing countries at the time and their maximum tariff rates have come down as negotiated, the effective rates remained higher than those of advanced countries long after those countries became competitive enough to run trade surpluses with the developed world…

…Because the WTO system is based on the principle of multilateralism, with rules applied equally to all member nations, this framework provides no way of addressing bilateral imbalances between the U.S. and China. It is therefore not surprising that the Trump administration has decided to pursue bilateral, not multilateral, trade negotiations.

In retrospect, what the WTO should have done is to impose a macroeconomic condition stating that new members must lower their tariff and non‐tariff barriers to advanced‐country norms after they start to run significant trade surpluses with the latter. Here the term “significant” might be defined to mean running a trade surplus averaging more than, say, two percent of GDP for three years. If a country fails to reduce its tariffs to the advanced‐nation norm within say five years after reaching that threshold, the rest of the WTO community should then be allowed to raise tariffs on products from that country to the same level that country charges on its imports. The point is that if the country is competitive enough to run trade surpluses vis‐à‐vis advanced countries, then it should be treated as one.

If this requirement had existed when Japan joined the GATT in 1963 or when China joined the WTO in 2001, subsequent trade frictions would have been far more manageable. Under the above rules, Japan would have had to lower its tariffs starting in 1976, and China would have had to lower its tariffs from the day it joined the WTO in 2001! Such a requirement would also have enhanced the WTO’s reputation as an organization that supports not only free trade but also fair trade.”

Koo also noted that the term globalisation actually has two components, namely, free trade and the free movement of capital; the former is important for countries to maintain because of the benefits it brings, while the latter needs improvement:

“The term “globalization” as used today actually has two components: free trade and the free movement of capital. 

Of the two, it was argued in previous chapters that the system of free trade introduced by the U.S. after 1947 led to unprecedented global peace and prosperity. Although free trade produces winners and losers and providing a helping hand to the losers is a major issue in the pursued economies, the degree of improvement in real living standards since 1945 has been nothing short of spectacular in both pursued and pursuing countries…

…The same cannot be said for the free movement of capital, the second component of globalization. Manufacturing workers and executives in the pursued economies feel so insecure not only because imports are surging but also because exchange rates driven by portfolio capital flows of questionable value are no longer acting to equilibrate trade.

To better understand this problem, let us take a step back and consider a world in which only two countries – the U.S. and Japan – are engaged in trade, and each country buys $100 in goods from the other. The next year, both countries will have the $100 earned from exporting to its trading partner, enabling it to buy another $100 in goods from that country. The two nations’ trade accounts are in balance, and the trade relationship is sustainable. 

But if the U.S. buys $100 from Japan, and Japan only buys $50 from the U.S., Japan will have $100 to use the next year, but the U.S. will have only $50, and Japanese exports to the U.S. will fall to $50 as a result. Earning only $50 from the U.S., the Japanese may have to reduce their purchases from the U.S. the following year. This sort of negative feedback loop may push trade into a “contractionary equilibrium.”

When exchange rates are added to the equation, the Japanese manufacturer that exported $100 in goods to the U.S. must sell those dollars on the currency market to buy the yen it needs to pay domestic suppliers and employees. However, the only entity that will sell it those yen is the U.S. manufacturer that exported $50 in goods to Japan.

With $100 of dollar selling and only $50 worth of yen selling, the dollar’s value versus the yen will be cut in half. This is how a surplus country’s exchange rate is pushed higher to equilibrate trade…

…If Japanese life insurers, pension funds, or other investors who need dollars to invest in the U.S. Treasury bonds sold yen and bought the remaining $50 the Japanese exporters wanted to sell, there would then be a total of $100 in dollar-buying demand for the $100 the Japanese exporter seeks to sell, and exchange rates would not change. If Japanese investors continued buying $50-worth of dollar investments each year, exchange rates would not change, in spite of the sustained $50 trade imbalances. 

Although the above arrangement may continue for a long time, the Japanese investors would effectively be lending money to the U.S. This means that at some point the money would have to be paid back. 

Unless the U.S. sells goods to Japan, there will be no U.S. exporters to provide the Japanese investors with the yen they need when they sell their U.S. Treasury bonds to pay yen obligations to Japanese pensioners and life insurance policyholders. Unless Japan is willing to continue lending to the U.S. in perpetuity, therefore, the underlying 100:50 trade imbalance will manifest itself when the lending stops.

At that point, the value of the yen will increase, resulting in large foreign exchange losses for Japanese pensioners and life insurance policyholders. Hence this scenario is also unsustainable in the long run. The U.S., too, would prefer a healthy relationship in which it sells goods to Japan and uses the proceeds to purchase goods from Japan to an unhealthy one in which it funds its purchases via constant borrowings…

…When financial markets are liberalized, capital moves to equalize the expected return in all markets. To the extent that countries with strong domestic demand tend to have higher interest rates than those with weak demand, money will flow from the latter to the former. Such flows will strengthen the currency of the former and weaken the currency of the latter. They may also add to already strong investment activity in the former by keeping interest rates lower than they would be otherwise, while depressing already weak investment activity in the latter by pushing interest rates higher than they would be otherwise.

To the extent that countries with strong domestic demand tend to run trade deficits and those with weak domestic demand run trade surpluses, these capital flows will exacerbate trade imbalances between the two by pushing the deficit country’s currency higher and pushing the surplus country’s currency lower. In other words, these flows are not only not in the best interests of individual countries, but are also detrimental to the attainment of balanced trade between countries. The widening imbalances then increase calls for protectionism in deficit countries.”
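The 100:50 feedback loop in Koo’s two-country example can be traced numerically. Below is a minimal sketch in Python; the starting figures come from the quote, while the import propensities (the U.S. spending all of its export earnings on imports, Japan spending only half of its) are my own simplification of the behaviour Koo describes:

```python
def trade_loop(us_imports=100.0, jp_imports=50.0, jp_propensity=0.5, years=5):
    """Trace Koo's two-country example with no capital flows: each year,
    a country can only finance imports out of the previous year's
    export earnings. Japan's lower import propensity is what creates
    the 100:50 imbalance."""
    for year in range(1, years + 1):
        print(f"Year {year}: US imports ${us_imports:.0f}, "
              f"Japan imports ${jp_imports:.0f}")
        # Next year: the US spends all it earned (= Japan's imports),
        # while Japan spends only a fraction of what it earned
        # (= the US's imports).
        us_imports, jp_imports = jp_imports, jp_propensity * us_imports

trade_loop()  # prints 100/50, 50/50, 50/25, 25/25, ...
```

Under these assumptions, trade ratchets down toward zero instead of settling at a sustainable level, which is the “contractionary equilibrium” in the quote. Perpetual Japanese lending to the U.S., or a strengthening yen, are the escape valves Koo goes on to discuss.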

Prior to the liberalization of capital flows in the 1980s, “trade was free, but capital flows were regulated, so the foreign exchange market was driven largely by trade-related transactions.” This also meant that currency transactions could play their intended role of driving balanced trade:

“The currencies of trade surplus nations therefore tended to strengthen, and those of trade deficit nations to weaken. That encouraged surplus countries to import more and deficit countries to export more. In other words, the currency market acted as a natural stabilizer of trade between nations.”

As a sign of how the free movement of capital has distorted the currency market, Koo noted that when the book was published, “only about five percent of foreign exchange transactions involve trade, while the remaining 95 percent are attributable to capital flows.”

The problems with China’s economy

The sixth and last important idea from Koo’s book is a discussion of the factors that affect China’s economic growth, and why the country’s growth rate has slowed in recent years from the scorching pace seen in the 1990s and 2000s. One issue, described by Koo, is that China no longer has a demographic tailwind to drive rapid economic growth, and is now facing a “middle-income trap” after passing the LTP around 2012:

“China actually passed the LTP around 2012 and is now experiencing sharp increases in wages. This means the country is now in its golden era, or post‐LTP maturing phase. However, because the Chinese government is wary of strikes, labor disputes, or other public disturbances of any kind, it is trying to pre‐empt such conflict by administering significant wage increases each year, with businesses required to raise wages under directives issued by local governments. In some regions, wages had risen at double‐digit rates in a bid to prevent labor disputes. It remains to be seen whether such top‐down actions can substitute for a process in which employers and employees learn through confrontation what can reasonably be expected from the other party.

Just as China was passing the LTP, its working‐age population—defined as those aged 15 to 59—started shrinking in 2012. From a demographic perspective, it is highly unusual for the entire labor supply curve to begin shifting to the left just as a country reaches the LTP. Japan, Taiwan, and South Korea all enjoyed about 30 years of workforce growth after reaching their LTPs. The huge demographic bonus China enjoyed until 2012 is not only exhausted, but has now reversed… That means China will not be able to maintain the rapid pace of economic growth seen in the past, and in fact growth has already slowed sharply.

Higher wages in China are now leading both Chinese and foreign businesses to move factories to lower-wage countries such as Vietnam and Bangladesh, prompting fears that China will become stuck in the so-called “middle-income trap”. This trap arises from the fact that once a country loses its distinction as the lowest-cost producer, many factories may leave for lower-cost destinations, resulting in less investment and less growth. In effect, the laws of globalization and free trade that benefited China when it was the lowest-cost producer are now posing real challenges for the country.”

Koo proposed ideas to reinvigorate China’s growth, such as investing in productivity-enhancing measures for domestic workers.

Another important factor affecting China’s economic growth involves the appropriate type of policy the government should implement to govern the economy. Since the country passed the LTP more than a decade ago and is in its golden era, fiscal policy – the act of the government directing the economy – is no longer the most effective way to manage the economy. But is the government relinquishing control? To complicate matters, there are early signs that China may already be in the pursued stage, in which case fiscal policy will be important again. It remains to be seen what would be the most appropriate way for the government to lead China’s economy.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI (2024 Q4) – Part 1

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2024 Q4 earnings season.

The way I see it, artificial intelligence (or AI) really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

With the latest earnings season for the US stock market – for the fourth quarter of 2024 – coming to its tail-end, I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:

I’ve split the latest commentary into two parts for the sake of brevity. This is Part 1, and you can find Part 2 here. With that, I’ll let the management teams take the stand…

Airbnb (NASDAQ: ABNB)

Airbnb’s management thinks AI is still early and has yet to fundamentally change the travel market for any of the large travel platforms; most travel companies are starting with AI on trip planning, but management thinks AI-powered trip planning is not yet ready for prime time

Here’s what I think about AI. I think it’s still really early. It’s probably similar to like the mid- to late ’90s for the Internet. So I think it’s going to have a profound impact on travel, but I don’t think it’s yet fundamentally changed for any of the large travel platforms…

…So most companies, what they’re actually doing is they’re doing integrations of these other platforms on trip planning. But the trip planning, it’s still early. I don’t think it’s quite ready for prime time.

Airbnb’s management is starting with AI in customer service; Airbnb will roll out AI-powered customer support later in 2025; management thinks AI can provide great customer support partly because it can speak all languages 24/7 and read thousands of pages of documents; management will eventually graduate the customer-support AI into a travel and living concierge; management thinks AI can help improve efficiency at Airbnb in customer service

We’re actually starting with customer service. So later this year, we’re going to be rolling out, as part of our Summer Release, AI-powered customer support. As you imagine, we get millions of contacts every year. AI can do an incredible job of customer service. It can speak every language 24/7. It can read a corpus of thousands of pages of documents. And so we’re starting with customer support. And over the coming years, what we’re going to do is we’re going to take the AI-powered customer service agent, and we’re going to bring it into essentially Airbnb search to eventually graduate to be a travel and living concierge…

…[Question] With respect to the AI, I appreciate your answer with respect to outward-looking and how it might change the landscape. What do you think the potential is internally to apply AI for efficiencies inside the company and create an additional layer of potential margin efficiency and/or free cash flow conversion in the years ahead?

[Answer] There’s like a couple like efficiencies that you could imagine at Airbnb. One is obviously customer service. I think that’s like one of the biggest ones. I’ve kind of already covered that, but I think that’s like a massive change for Airbnb.

Airbnb’s management thinks that AI models are getting cheaper and are starting to be commoditised

I think it’s a really exciting time in the space because you’ve seen like with DeepSeek and more competition with models is models are getting cheaper or nearly free. They’re getting faster and they’re getting more intelligent. And they are, for all intent and purpose, starting to get commoditized.

Airbnb’s management thinks that a lot of the value from AI is going to accrue to platforms, and they want Airbnb to be the platform for travel and living that will reap most of the value from AI

What I think that means is a lot of value is going to accrue to the platform. And ultimately, I think the best platform, the best applications are going to be the ones that like most accrue the value from AI. And I think we’re going to be the one to do that with traveling and living.

Airbnb’s management thinks AI can help improve efficiency at Airbnb in engineering productivity; in the short term, the improvement in engineering productivity has not been material; over the next few years, management thinks AI can drive a 30% increase in engineering productivity at Airbnb; over the long term, management thinks there can be an order of magnitude more productivity; management thinks younger, more innovative companies could benefit from AI more than incumbent enterprises

[Question] With respect to the AI, I appreciate your answer with respect to outward-looking and how it might change the landscape. What do you think the potential is internally to apply AI for efficiencies inside the company and create an additional layer of potential margin efficiency and/or free cash flow conversion in the years ahead?

[Answer] The other, I assume, you refer to is essentially engineering productivity. We are seeing some productivity gains. I’ve talked to a lot of other tech CEOs, and here’s what I’ve heard talking to other like tech CEOs. Most of them haven’t seen a material like change in engineering productivity. Most of the engineers are using AI tools. They’re seeing some productivity. I don’t think it’s flowing to like a fundamental step-change in productivity yet. I think a lot of us believe in some kind of medium term of a few years, you could easily see like a 30% increase in technology and engineering productivity. And then, of course, beyond that, I mean, I think it could be like an order of magnitude more productivity because — but that’s going to be like down the road. And I think that’s going to be something that almost all companies benefit from. I think the kind of younger, more innovative, startup-like companies might benefit a little bit more because they’ll have engineers who are more likely to adopt the tools.

Alphabet (NASDAQ: GOOG)

AI Overviews in Search is now available in more than 100 countries; AI Overviews drive higher user satisfaction and search usage; Google’s Gemini model is being used in AI Overviews; with AI Overviews, usage growth of Search is growing over time, especially with younger users; management recently launched ads in AI Overviews and AI Overviews is currently monetising at nearly the same rate as Google Search; Google Search has continued to perform well in this AI age, as overall usage has continued to grow, with stronger growth seen in AI Overviews across all segments

In Search, AI overviews are now available in more than 100 countries. They continue to drive higher satisfaction and search usage…

…That includes Search, where Gemini is powering our AI Overviews. People use Search more with AI Overviews and usage growth increases over time as people learn that they can ask new types of questions. This behavior is even more pronounced with younger users who really appreciate the speed and efficiency of this new format…

…We’ve already started testing Gemini 2.0 in AI overviews and plan to roll it out more broadly later in the year…

…We recently launched the ads within AI Overviews on mobile in the U.S., which builds on our previous rollout of ads above and below. And as I talked about before, for the AI Overviews, overall, we actually see monetization at approximately the same rate, which I think really gives us a strong base on which we can innovate even more…

…On Search usage, overall, our metrics are healthy. We are continuing to see growth in Search on a year-on-year basis in terms of overall usage. Of course, within that, AI Overviews has seen stronger growth, particularly across all segments of users, including younger users, so it’s being well received. But overall, I think through this AI moment, I think Search is continuing to perform well.

Circle to Search is now available on more than 200 million Android devices; Circle to Search is opening new Search use cases; Circle to Search is popular with younger users; Circle to Search is used to start more than 10% of searches among users who have tried it before

Circle to Search is now available on over 200 million Android devices…

…Circle to Search is driving additional Search use and opening up even more types of questions. This feature is also popular among younger users. Those who have tried Circle to Search before now use it to start more than 10% of their searches…

…In Search, we’re seeing people increasingly ask entirely new questions using their voice, camera or in ways that were not possible before, like with Circle to search.

Alphabet’s management believes Google has a unique infrastructure advantage in AI because the company has developed each component of its technology stack; Alphabet broke ground on 11 new cloud regions and data center campuses in 2024, and announced plans for 7 new subsea cable projects; Google data centers now deliver 4x more computing power per unit of electricity compared to 5 years ago

We have a unique advantage because we develop every component of our technology stack, including hardware, compilers, models and products. This approach allows us to drive efficiencies at every level from training and serving to developer productivity. In 2024, we broke ground on 11 new cloud regions and data center campuses in places like South Carolina, Indiana, Missouri and around the world.

We also announced plans for seven new subsea cable projects, strengthening global connectivity. Our leading infrastructure is also among the world’s most efficient. Google data centers deliver nearly four times more computing power per unit of electricity compared to just five years ago.

Google Cloud customers consume 8x more compute capacity for training and inference compared to 18 months ago; first-time commitments to Google Cloud more than doubled in 2024; Google Cloud closed a few deals in 2024 worth more than $1 billion each; Google Cloud’s AI hypercomputer utilises both GPUs (graphics processing units) and TPUs (tensor processing units), and has helped Wayfair improve performance and scalability by 25%; Google saw strong uptake of Trillium, its 6th generation TPU, in 2024 Q4; Trillium is 4x better in training and has 3x higher inference throughput than the 5th generation TPU; Google Cloud is offering NVIDIA’s H200 GPUs to customers; Google Cloud is the first cloud provider to provide NVIDIA’s Blackwell GPUs; the capex for Google Cloud is mostly for Google’s own self-designed data centers and TPUs (tensor processing units)

Today, cloud customers consume more than eight times the compute capacity for training and inferencing compared to 18 months ago…

…In 2024, the number of first-time commitments more than doubled compared to 2023…

…Last year, we closed several strategic deals over $1 billion and the number of deals over $250 million doubled from the prior year…

…We continue to see strong growth across our broad portfolio of AI-powered cloud solutions. It begins with our AI hypercomputer, which delivers leading performance and cost across both GPUs and TPUs. These advantages help Citadel with modeling markets and training and enabled Wayfair to modernize its platform, improving performance and scalability by nearly 25%. 

In Q4, we saw strong uptake of Trillium, our sixth-generation TPU, which delivers four times better training performance and three times greater inference throughput compared to the previous generation. We also continue our strong relationship with NVIDIA. We recently delivered their H200 based platforms to customers. And just last week, we were the first to announce a customer running on the highly-anticipated Blackwell platform…

…Our strategy is mostly to rely on our own self-designed and built data centers. So, they’re industry-leading in terms of both cost and power efficiency at scale. We have our own customized TPUs. They’re customized for our own workload, so they do deliver superior performance and capex efficiency. So, we’re going to be looking at all that when we make decisions as to how we’re going to progress capital investments throughout the coming years.

Google launched an experimental version of its Gemini 2.0 Flash model in December 2024, and the model is now generally available for developers and customers; Google debuted its experimental Gemini 2.0 Flash Thinking model in late-2024 and it has gathered extremely positive reviews; Google is working on even better thinking models; Gemini 2.0’s advances in multimodality and native tool use help Google build a universal AI assistant; an example of this universal assistant can be seen in Deep Research; Deep Research was launched in Gemini Advanced in December and is being rolled out to Android users globally; the consumer Gemini app debuted on iOS in November 2024 and has seen great product momentum; Project Mariner and Project Astra are AI agent products currently being tested and they will appear in the Gemini app sometime in 2025; Gemini and Google’s video and image generation models consistently excel in industry leaderboards and benchmarks; 4.4 million developers are using Gemini models today, double from just six months ago; Google has 7 products with over 2 billion users each, and all 7 products use Gemini; all Google Workspace business and enterprise customers were recently given access to all of Gemini’s AI capabilities; Gemini 2.0 Flash is one of the most capable models people can access for free; management’s current intention for monetisation of Gemini is through subscriptions and improving the user experience, but they have an eye on advertising revenues

In December, we unveiled Gemini 2.0, our most capable AI model yet, built for the agent era. We launched an experimental version of Gemini 2.0 Flash, our workhorse model with low-latency and enhanced performance. Flash has already rolled-out to the Gemini app, and tomorrow we are making 2.0 Flash generally available for developers and customers, along with other model updates…

…Late last year, we also debuted our experimental Gemini 2.0 Flash Thinking model. The progress to scale thinking has been super fast and the reviews so far have been extremely positive. We are working on even better thinking models and look forward to sharing those with the developer community soon.

Gemini 2.0’s advances in multimodality and native tool use enable us to build new agents that bring us closer to our vision of a universal assistant. One early example is deep research. It uses agent capabilities to explore complex topics on your behalf and give key findings along with sources. It launched in Gemini Advanced in December and is rolling out to Android users all over the world.

We are seeing great product momentum with our consumer Gemini app, which debuted on iOS last November.

…We have opened up trusted tester access to a handful of research prototypes, including Project Mariner, which can understand and reason across information on a browser screen to complete tasks and Project Astra. We expect to bring features from both to the Gemini app later this year…

…Veo2, our state-of-the-art video generation model, and Imagen 3, our highest-quality text-to-image model. These generative media models, as well as Gemini, consistently top industry leaderboards and score top marks across industry benchmarks. That’s why more than 4.4 million developers are using our Gemini models today, double the number from just six months ago…

…We have seven products and platforms with over 2 billion users and all are using Gemini…

…We recently gave all Google Workspace business and enterprise customers access to all of our powerful Gemini AI capabilities to help boost their productivity…

…2.0 Flash. I mean, I think that’s one of the most capable models you can access at the free tier…

…[Question] How should we think about the future monetization opportunity of Gemini? Today, it’s really a premium subscription offering or a free offering. Over time, do you see an ad component?

[Answer] On the monetization side, obviously, for now, we are focused on a free tier and subscriptions. But obviously, as you’ve seen in Google over time, we always want to lead with user experience. And we do have very good ideas for native ad concepts, but you’ll see us lead with the user experience. And — but I do think we’re always committed to making the products work and reach billions of users at scale. And advertising has been a great aspect of that strategy. And so, just like you’ve seen with YouTube, we’ll give people options over time. But for this year, I think you’ll see us be focused on the subscription direction.

Google Cloud’s AI developer platform, Vertex AI, had a 5x year-on-year increase in customers in 2024 Q4; Vertex AI offers more than 200 foundation models; Vertex AI’s usage grew 20x in 2024

Our AI developer platform, Vertex AI, saw a 5x increase in customers year-over-year with brands like International and WPP building new applications and benefiting from our 200 plus foundation models. Vertex usage increased 20x during 2024 with particularly strong developer adoption of Gemini Flash, Gemini 2.0, and most recently VEO.

Alphabet’s management will be introducing Veo2, Google’s video generation model, for creators on YouTube in 2025; advertisers around the world can now promote YouTube creator videos and ad campaigns across all AI-powered campaign types and Google Ads

Expanding on our state-of-the-art video generation model, we announced Veo2, which creates incredibly high-quality video in a wide range of subjects and styles. It’s been inspiring to see how people are experimenting with it. We’ll make it available to creators on YouTube in the coming months…

…All advertisers globally can now promote YouTube creator videos and ad campaigns across all AI-powered campaign types and Google Ads, and creators can tag partners in their brand videos…

Alphabet’s management announced the first beta of Android 16 in January 2025; there will be deeper Gemini integration for the new Samsung Galaxy S25 smartphone series; Alphabet has announced Android XR, the first Android platform for the Gemini era

Last month, we announced the first beta of Android 16 plus new Android updates, including a deeper Gemini integration coming to the new Samsung Galaxy S25 series. We also recently-announced Android XR, the first Android platform built for the Gemini era. Created with Samsung and Qualcomm, Android XR is designed to power an ecosystem of next-generation extended reality devices like headsets and glasses.

Waymo is now serving more than 150,000 trips per week (was 150,000 in 2024 Q3); Waymo is expanding in new markets in the USA this year and in 2026; Waymo will soon be launched in Tokyo; Waymo is developing its 6th-gen driver, which will significantly reduce hardware costs

It’s now averaging over 150,000 trips each week and growing. Looking ahead, Waymo will be expanding its network and operations partnerships to open up new markets, including Austin and Atlanta this year and Miami next year. And in the coming weeks, Waymo One vehicles will arrive in Tokyo for their first international road trip. We are also developing the sixth-generation Waymo driver, which will significantly lower hardware costs.

Alphabet’s management introduced a new Google Shopping experience, infused with AI, in 2024 Q4, and there were roughly 13% more daily active users in Google Shopping in the US in December 2024 compared to a year ago; the new Google Shopping experience helps users speed up their shopping research

Google is already present in over half of journeys where a new brand, product or retailer are discovered by offering new ways for people to search, we’re expanding commercial opportunities for our advertisers…

…Retail was particularly strong this holiday season, especially on Black Friday and Cyber Monday, which each generated over $1 billion in ad revenue. Interestingly, despite the U.S. holiday shopping season being the shortest since 2019, retail sales began much earlier in October, causing the season to extend longer than anticipated.

People shop more than 1 billion times a day across Google. Last quarter, we introduced a reinvented Google shopping experience, rebuilt from the ground up with AI. This December saw roughly 13% more daily active users in Google shopping in the U.S., compared to the same period in 2023…

…The new Google Shopping experience, specifically to your question, uses AI to really intelligently show the most relevant products, helping to speed up and simplify your research. You get an AI-generated brief with top things to consider for your search plus maybe products that meet your needs. So, shoppers very often want low prices. So, the new page not only includes deal-finding tools like price comparison, price insights and price tracking throughout, but also a new and dedicated personalized deals page, which can browse deals for you, and all this is really built on the backbone of AI.

Shoppers can now take a photo and use Lens to quickly find information about the product; Lens is now used in over 20 billion visual searches per month (was over 20 billion in 2024 Q3); majority of Lens searches are incremental

Shoppers can now take a photo of a product and using Lens quickly find information about the product, reviews, similar products and where they can get it for a great price. Lens is used for over 20 billion visual search queries every month and the majority of these searches are incremental.

Alphabet’s management continues to infuse AI capabilities into Google’s advertising business; Petco used Demand Gen campaigns to achieve a 275% increase in return on ad spend and a 74% increase in click-through rates compared to social benchmarks; YouTube Select Creator Takeovers is now generally available in the US and will be rolled out across the world; PMax was recently strengthened with new controls and easier reporting functions; Event Ticket Center used PMax and saw a 5x increase in production of creative assets, driving a 300% increase in conversions compared to using manual assets; Meridian, Google’s marketing mix model, was recently made generally available; based on a Nielsen meta-analysis of marketing mix models, Google’s AI-powered video campaigns on YouTube deliver 17% higher return on advertising spend than manual campaigns

We continue investing in AI capabilities across media buying, creative and measurement. As I’ve said before, we believe that AI will revolutionize every part of the marketing value chain.

And over the past quarter, we’ve seen how our customers are increasingly focusing on optimizing the use of AI. As an example, Petco used Demand Gen campaigns across targeting, creative generation and bidding to find new pet parent audiences across YouTube. They achieved a 275% higher return on ad spend and a 74% higher click-through rate than their social benchmarks.

On media buying, we made YouTube Select Creator Takeovers generally available in the U.S. and will be expanding to more markets this year. Creators know their audience the best and creator takeovers help businesses connect with consumers through authentic and relevant content.

Looking at Creative, we introduced new controls and made reporting easier in PMax, helping customers better understand and reinvest into their best-performing assets. Using asset generation in PMax, Event Ticket Center achieved a 5x increase in production of creative assets saving time and effort. They also increased conversions by 300% compared to the previous period when they used manual assets…

…Last week, we made Meridian, our marketing mix model, generally available for customers, helping more business reinvest into creative and media buying strategies that they know work. Based on the Nielsen meta analysis of marketing mix models, on average, Google AI-powered video campaigns on YouTube delivered 17% higher return on advertising spend than manual campaigns.
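
Meridian is a marketing mix model (MMM): a statistical model that estimates, from historical data, how much each channel’s ad spend contributed to sales, so budgets can be shifted toward what works. Meridian itself is a Bayesian model; the sketch below is only a minimal ordinary-least-squares illustration of the underlying idea, with hypothetical channel names and synthetic data rather than Meridian’s actual methodology.

```python
# Minimal illustration of a marketing mix model (MMM): regress sales on
# per-channel ad spend to estimate each channel's contribution. Channel
# names, ROAS values and data are all made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
weeks = 104
# Hypothetical weekly spend (in $k) on three channels: video, search, social.
spend = rng.uniform(10, 100, size=(weeks, 3))
true_roas = np.array([1.7, 1.3, 0.9])   # assumed "true" return per $ spent
baseline = 500                          # sales with zero ad spend ($k)
sales = baseline + spend @ true_roas + rng.normal(0, 20, size=weeks)

# OLS fit: sales ~ intercept + sum(roas_i * spend_i)
X = np.column_stack([np.ones(weeks), spend])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print("estimated baseline:", round(coef[0], 1))
for name, roas in zip(["video", "search", "social"], coef[1:]):
    print(f"estimated ROAS for {name}: {roas:.2f}")
```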

Sephora used Demand Gen Shorts-only channel for advertising that drove an 82% increase in searches for Sephora Holiday

Sephora used a Demand Gen Shorts-only channel to boost traffic and brand searches for its holiday gift guide campaign and leveraged creator collaborations to find the best gifts. This drove an 82% relative uplift in searches for Sephora Holiday.

Citi is using Google Cloud for its generative AI initiatives across customer service, document summarisation, and search

Another expanding partnership is with Citi, who is modernizing its technology infrastructure with Google Cloud to transform employee and customer experiences. Using Google Cloud, it will improve its digital products, streamline employee workflows and use advanced high-performance computing to enable millions of daily computations. This partnership also fuels Citi’s generative AI initiatives across customer service, document summarization and search to reduce manual processing.

Google Cloud had 30% revenue growth in 2024 Q4 (was 35% in 2024 Q3) driven by growth in core GCP products, AI infrastructure, and generative AI solutions; operating margin was 17.5% (was 17% in 2024 Q3 and was 9.4% in 2023 Q4); GCP grew at a much higher rate than Google Cloud overall; Google Cloud had more AI demand than capacity in 2024 Q4; management is thinking about Google’s capital intensity, but they want to invest because they are seeing strong AI demand both internally and externally; the capex Google is making can be repurposed across its different businesses

Turning to the Google Cloud segment, which continued to deliver very strong results this quarter. Revenue increased by 30% to $12 billion in the fourth quarter, reflecting growth in GCP, across core GCP products, AI infrastructure, and generative AI solutions. Once again, GCP grew at a rate that was much higher than cloud overall. Healthy Google Workspace growth was primarily driven by increase in average revenue per seat. Google Cloud operating income increased to $2.1 billion and operating margin increased from 9.4% to 17.5%…

…We do see and have been seeing very strong demand for our AI products in the fourth quarter in 2024. And we exited the year with more demand than we had available capacity. So, we are in a tight supply demand situation, working very hard to bring more capacity online…

…[Question] How do you think about long-term capital intensity for this business?

[Answer] On the first one, certainly, we’re looking ahead, but we’re managing very responsibly. It was a very rigorous, even internal governance process, looking at how do we allocate the capacity and what would we need to support the customer demand externally, but also across the Google — the Alphabet business. And as you’ve seen in the comment I’ve just made on Cloud, we do have demand that exceeds our available capacity. So, we’ll be working hard to address that and make sure we bring more capacity online. We do have the benefit of having a very broad business, and we can repurpose capacity, whether it’s through Google Services or Google Cloud to support, as I said, whether it’s search or GDM, or Google Cloud customers, we can do that in a more efficient manner.

Alphabet’s management thinks Google’s AI models are in the lead when compared to DeepSeek’s, and this is because of Google’s full-stack development

One of the areas in which the Gemini model shines is the Pareto frontier of cost, performance, and latency. And if you look at all three attributes, I think we lead this Pareto frontier. And I would say both our 2.0 Flash models, our 2.0 Flash Thinking models, they are some of the most efficient models out there, including comparing to DeepSeek’s V3 and R1. And I think a lot of it is our strength of the full stack development, end-to-end optimization, our obsession with cost per query.
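
The “Pareto frontier” framing has a precise meaning worth unpacking: a model sits on the frontier if no other model is at least as good on every axis (cost, latency, quality) and strictly better on at least one. A minimal sketch with hypothetical model names and numbers:

```python
# A model is Pareto-optimal if nothing dominates it: no other model is at
# least as good on every axis and strictly better on one. All numbers here
# are illustrative, not benchmark results.
models = {
    "model-a": {"cost": 1.0, "latency_ms": 300, "quality": 85},
    "model-b": {"cost": 0.4, "latency_ms": 150, "quality": 80},
    "model-c": {"cost": 1.1, "latency_ms": 400, "quality": 82},  # dominated by model-a
}

def dominates(x, y):
    """True if x is no worse than y on every axis and strictly better on one."""
    no_worse = (x["cost"] <= y["cost"] and x["latency_ms"] <= y["latency_ms"]
                and x["quality"] >= y["quality"])
    strictly_better = (x["cost"] < y["cost"] or x["latency_ms"] < y["latency_ms"]
                       or x["quality"] > y["quality"])
    return no_worse and strictly_better

frontier = [name for name, m in models.items()
            if not any(dominates(other, m)
                       for other_name, other in models.items() if other_name != name)]
print(frontier)  # ['model-a', 'model-b']
```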

Alphabet’s management has seen the proportion of AI spend on inference growing over the last 3 years when compared to training; management thinks reasoning AI models will accelerate this trend

A couple of things I would say are if you look at the trajectory over the past three years, the proportion of the spend toward inference compared to training has been increasing, which is good because, obviously, inference is what supports businesses with good ROIC…

…I think the reasoning models, if anything, accelerates that trend because it’s obviously scaling upon inference dimension as well.

Alphabet’s management thinks that AI agents and Google Search are not competing in a zero-sum game

[Question] With your own project Mariner efforts and a competitor’s recent launch, it seems there’s suddenly really strong momentum on AI consumer agents and kind of catching up to that old Google Duplex Vision. I think when you look a few years ahead, where do you see consumer agents going? And really, what does it mean to Google Search outside of Lens? Is there room for both to flourish?

[Answer] Gemini 2.0 was definitely built with the view of enabling more agentic use cases. And so, I actually — we are definitely seeing progress inside. And I think we’ll be able to do more agentic experiences for our users. Look, I actually think all of this expands the opportunity space. I think it — historically, we’ve had information use cases, but now you can have — you can act on your information needs in a much deeper way. It’s always been our vision when we have talked about Google Assistant, etc. So, I think the opportunity space expands. I think there’s plenty of it, feels very far from a zero-sum game. There’s plenty of room, I think, for many new types of use cases to flourish. And I think for us, we have a clear sense of additional use cases we can start to tackle for our users in Google Search.

Alphabet’s management has been passing on cost differentiations arising from Google Cloud’s end-to-end stack approach to customers

Part of the reason we have taken the end-to-end stack approach is so that we can definitely drive a strong differentiation in end-to-end optimization, not only on a cost basis but on a latency basis, on a performance basis. Be it the Pareto frontier we mentioned, our full stack approach and our TPU efforts all give a meaningful advantage. And we plan — you already see that. I know you asked about the cost, but it’s effectively captured when we price outside; we pass on the differentiation.

Amazon (NASDAQ: AMZN)

AWS grew 19% year-on-year in 2024 Q4, and is now at a US$115 billion annualised revenue run rate; management expects lumpiness in AWS’s growth in the next few years, but is incredibly optimistic about AWS’s growth; management thinks the future will be one where (a) every app is infused with generative AI that has inference as a core building block, and (b) companies will have AI agents accomplishing tasks and interacting with each other; management believes this future will be built on the cloud, and mostly on AWS; the shift by enterprises from on-premises to the cloud, which is a non-AI activity, continues for AWS; AWS continues to innovate in non-AI areas; AWS’s growth in 2024 Q4 was driven by both generative AI and non-generative AI offerings; AWS had a massive 48% year-on-year jump in operating income in 2024 Q4, helped partly by an increase in estimated useful life of servers that started in 2024; management sees AWS being capable of faster growth today if not for supply constraints; the constraints relate to (1) chips from 3rd-party partners (most likely referring to NVIDIA), (2) AWS’s own Trainium chips, (3) power for data centers, and (4) other supply chain components; management sees the AWS constraints starting to relax in 2025 H2; AWS’s AI services come with lower margins right now, but management thinks the AI-related margin will over time be on par with the non-AI margin

In Q4, AWS grew 19% year-over-year and now has a $115 billion annualized revenue run rate. AWS is a reasonably large business by most folks’ standards. And though we expect growth will be lumpy over the next few years as enterprise adoption cycles, capacity considerations and technology advancements impact timing, it’s hard to overstate how optimistic we are about what lies ahead for AWS’ customers and business…

…While it may be hard for some to fathom a world where virtually every app has generative AI infused in it, with inference being a core building block just like compute, storage and database, and most companies having their own agents that accomplish various tasks and interact with one another, this is the world we’re thinking about all the time. And we continue to believe that this world will mostly be built on top of the cloud with the largest portion of it on AWS…

…While AI continues to be a compelling new driver in the business, we haven’t lost our focus on core modernization of companies’ technology infrastructure from on-premises to the cloud. We signed new AWS agreements with companies, including Intuit, PayPal, Norwegian Cruise Line Holdings, Northrop Grumman, The Guardian Life Insurance Company of America, Reddit, Japan Airlines, Baker Hughes, The Hertz Corporation, Redfin, Chime Financial, Asana, and many others. Consistent customer feedback from our recent AWS re:Invent gathering was appreciation that we’re still inventing rapidly in non-AI key infrastructure areas like storage, compute, database and analytics…

…During the fourth quarter, we continued to see growth in both generative AI and non-generative AI offerings as companies turn their attention to newer initiatives, bring more workloads to the cloud, restart or accelerate existing migrations from on-premise to the cloud, and tap into the power of generative AI…

…AWS reported operating income of $10.6 billion, an increase of $3.5 billion year-over-year. This is a result of strong growth, innovation in our software and infrastructure to drive efficiencies, and continued focus on cost control across the business. As we’ve said in the past, we expect AWS operating margins to fluctuate over time driven in part by the level of investments we’re making. Additionally, we increased the estimated useful life of our servers starting in 2024, which contributed approximately 200 basis points to the AWS margin increase year-over-year in Q4…

…It is true that we could be growing faster, if not for some of the constraints on capacity. And they come in the form of, I would say, chips from our third-party partners, come a little bit slower than before with a lot of midstream changes that take a little bit of time to get the hardware actually yielding the percentage-healthy and high-quality servers we expect. It comes with our own big new launch of our own hardware and our own chips and Trainium2, which we just went to general availability at re:Invent, but the majority of the volume is coming in really over the next couple of quarters, the next few months. It comes in the form of power constraints where I think the world is still constrained on power from where I think we all believe we could serve customers if we were unconstrained. There are some components in the supply chain, like motherboards, too, that are a little bit short in supply for various types of servers…

…I predict those constraints really start to relax in the second half of ’25…

…At the stage we’re in right now, AI is still early stage. It does come originally with lower margins and a heavy investment load, as we’ve talked about. And in the short term, over time, that should be a headwind on margins. But over the long term, we feel the margins will be comparable to the non-AI business as well.

Amazon’s management sees NVIDIA being an important partner of AWS for a long time; management does not see many large-scale generative AI apps existing right now; when generative AI apps reach scale, their costs to operate can rise very quickly, and management believes this will drive customers to demand better price performance from chips, which is why AWS built its custom AI chips; Trainium 2, AWS’s custom AI chip, was launched in December 2024; EC2 instances powered by Trainium 2 are 30%-40% more price performant than instances powered by other GPUs; important technology companies such as Adobe, Databricks, and Qualcomm have seen impressive results after testing Trainium 2; Anthropic is building its future frontier models on Trainium 2; AWS is collaborating with Anthropic on Project Rainier, which is a cluster of a few hundred thousand Trainium 2 chips that have 5x the exaflops Anthropic used to train its current set of models; management is already working on Trainium 3, which is expected to be previewed in late 2025, with Trainium 4 to follow

Most AI compute has been driven by NVIDIA chips, and we obviously have a deep partnership with NVIDIA and will for as long as we can see into the future. However, there aren’t that many generative AI applications of large scale yet. And when you get there, as we have with apps like Alexa and Rufus, cost can get steep quickly. Customers want better price performance and it’s why we built our own custom AI silicon. Trainium2 just launched at our AWS re:Invent Conference in December. And EC2 instances with these chips are typically 30% to 40% more price performant than other current GPU-powered instances available. That’s very compelling at scale. Several technically-capable companies like Adobe, Databricks, Poolside and Qualcomm have seen impressive results in early testing of Trainium2. It’s also why you’re seeing Anthropic build their future frontier models on Trainium2. We’re collaborating with Anthropic to build Project Rainier, a cluster of Trainium2 UltraServers containing hundreds of thousands of Trainium2 chips. This cluster is going to be 5x the number of exaflops as the cluster that Anthropic used to train their current leading set of Claude models. We’re already hard at work on Trainium3, which we expect to preview late in ’25 and defining Trainium4 thereafter.

Building outstanding performant chips that deliver leading price performance has become a core strength of AWS’, starting with our Nitro and Graviton chips in our core business and now extending to Trainium and AI and something unique to AWS relative to other competing cloud providers.

Amazon’s management has seen Amazon SageMaker AI, AWS’s fully-managed AI service, become the go-to service for AI model builders; SageMaker’s HyperPod automatically splits training workloads across many AI accelerators and prevents interruptions, saving training time by up to 40%; management recently released new features for SageMaker, such as the ability to prioritise which workloads should receive capacity when budgets are reached; the latest version of SageMaker is able to integrate all of AWS’s data analytics and AI services into one surface

I won’t spend a lot of time in these comments on Amazon SageMaker AI, which has become the go-to service for AI model builders to manage their AI data, build models, experiment and deploy these models, except to say that SageMaker’s HyperPod capability, which automatically splits training workloads across many AI accelerators, prevents interruptions by periodically saving checkpoints, and automatically repairing faulty instances from their last saved checkpoint and saving training time by up to 40%. It continues to be a differentiator, received several new compelling capabilities at re:Invent, including the ability to manage costs at a cluster level and prioritize which workloads should receive capacity when budgets are reached, and is increasingly being adopted by model builders…
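
The value of HyperPod’s checkpoint-and-repair behaviour is easy to see with a toy simulation: if training state is saved every N steps, a hardware fault costs at most N steps of redone work instead of restarting the whole run. The sketch below is illustrative only and does not use the real HyperPod API; the fault probability and checkpoint interval are assumptions.

```python
# Illustrative sketch (not the HyperPod API): with periodic checkpointing,
# a hardware fault costs at most one checkpoint interval of redone work,
# rather than restarting the entire training run from step zero.
import random

TOTAL_STEPS = 10_000
CHECKPOINT_EVERY = 200   # save state every N steps (assumption)
FAULT_PROB = 0.0005      # assumed per-step chance an instance fails

def train_with_checkpoints(seed: int) -> int:
    """Return total steps executed (including redone work) to finish training."""
    rng = random.Random(seed)
    last_checkpoint = 0
    step = 0
    executed = 0
    while step < TOTAL_STEPS:
        step += 1
        executed += 1
        if rng.random() < FAULT_PROB:
            step = last_checkpoint        # auto-repair: resume from last checkpoint
        elif step % CHECKPOINT_EVERY == 0:
            last_checkpoint = step        # persist model/optimizer state here
    return executed

wasted = train_with_checkpoints(42) - TOTAL_STEPS
print(f"steps redone due to faults: {wasted}")
```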

…There were several key launches customers were abuzz about, including Amazon Aurora DSQL, our new serverless distributed SQL database that enables applications with the highest availability, strong consistency, PostgreS compatibility and 4x faster reads and writes compared to other popular distributed SQL databases; Amazon S3 tables, which make S3 the first cloud object store with fully managed support for Apache Iceberg for faster analytics; Amazon S3 Metadata, which automatically generates queryable metadata, simplifying data discovery, business analytics, and real-time inference to help customers unlock the value of their data in S3; and the next generation of Amazon SageMaker, which brings together all of the data analytics services and AI services into one interface to do analytics and AI more easily at scale.

Amazon Bedrock is AWS’s fully-managed service for developers to build generative AI applications by leveraging frontier models; management recently introduced more than 100 popular emerging models on Bedrock, including DeepSeek’s R1 models; management recently introduced new features to Bedrock to help customers lower cost and latency in inference workloads; management is seeing Bedrock resonate strongly with customers; management recently released Amazon’s own Nova family of frontier models on Bedrock; customers are starting to experiment with DeepSeek’s models

Amazon Bedrock is our fully managed service that offers the broadest choice of high-performing foundation models with the most compelling set of features that make it easy to build a high-quality generative AI application. We continue to iterate quickly on Bedrock, announcing Luma AI, poolside, and over 100 other popular emerging models coming to Bedrock at re:Invent. In short order, we also just added DeepSeek’s R1 models to Bedrock and SageMaker…

…We delivered several compelling new Bedrock features at re:Invent, including prompt caching, intelligent prompt routing and model distillation, all of which help customers achieve lower cost and latency in their inference. Like SageMaker AI, Bedrock is growing quickly and resonating strongly with customers…
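
Of these features, intelligent prompt routing is the easiest to illustrate: send cheap, simple prompts to a small model and hard ones to a frontier model, cutting average inference cost without a quality cliff. The heuristic and model names below are hypothetical and are not Bedrock’s actual routing logic.

```python
# Sketch of the idea behind intelligent prompt routing: cheap/simple prompts
# go to a small model, hard ones to a frontier model. The heuristic and
# model names are hypothetical, for illustration only.
def route(prompt: str) -> str:
    hard_markers = ("prove", "derive", "multi-step", "analyze", "compare")
    looks_hard = len(prompt) > 500 or any(m in prompt.lower() for m in hard_markers)
    return "frontier-model" if looks_hard else "small-fast-model"

print(route("Summarize this paragraph."))                     # small-fast-model
print(route("Analyze these contracts and compare clauses."))  # frontier-model
```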

…We also just launched Amazon’s own family of frontier models in Bedrock called Nova…

…We moved so quickly to make sure that DeepSeek was available both in Bedrock and in SageMaker faster than you saw from others. And we already have customers starting to experiment with that.

The Nova family has comparable intelligence with other leading AI models, but also offers lower latency and price, and integration with important Bedrock features; many large enterprises, including Palantir, Deloitte, and SAP, are already using Nova

We also just launched Amazon’s own family of frontier models in Bedrock called Nova. These models compare favorably in intelligence against the leading models in the world but offer lower latency; lower price, about 75% lower than other models in Bedrock; and are integrated with key Bedrock features like fine-tuning, model distillation, knowledge bases of RAG and agentic capabilities. Thousands of AWS customers are already taking advantage of the capabilities and price performance of Amazon Nova models, including Palantir, Deloitte, SAP, Dentsu, Fortinet, Trellix, and Robinhood, and we’ve just gotten started.

Amazon’s management still thinks Amazon Q is the most capable AI-powered software development assistant; early testing shows that Amazon Q can now shorten a multi-year mainframe migration by 50%

Amazon Q is the most capable generative AI-powered assistant for software development and to leverage your own data…

…We obliged with our recent deliveries of Q Transformations that enable moves from .NET applications on Windows to Linux, VMware to EC2, and accelerate mainframe migrations. Early customer testing indicates that Q can turn what was going to be a multiyear effort to do a mainframe migration into a multi-quarter effort, cutting by more than 50% the time to migrate mainframes. This is a big deal and these transformations are good examples of practical AI.

Amazon’s management expects capital expenditures of around US$105 billion for the whole of 2025 (was around $75 billion in 2024); the capex in 2025 will be for AWS as well as the retail business, but will primarily be for AWS’s AI infrastructure; reminder that the faster AWS grows, the faster Amazon needs to invest capital for hardware; management will only spend on capex if they see significant signals of demand; management thinks AI is a once-in-a-lifetime business opportunity, and sees expanding capex as a good sign for AWS’s long-term growth opportunities

Capital investments were $26.3 billion in the fourth quarter, and we think that run rate will be reasonably representative of our 2025 capital investment rate. Similar to 2024, the majority of the spend will be to support the growing need for technology infrastructure. This primarily relates to AWS, including to support demand for our AI services, as well as tech infrastructure to support our North America and International segments. Additionally, we’re continuing to invest in capacity for our fulfillment and transportation network to support future growth. We’re also investing in same-day delivery facilities and our inbound network as well as robotics and automation to improve delivery speeds and to lower our cost to serve. These capital investments will support growth for many years to come…

…The vast majority of that CapEx spend is on AI for AWS. The way that AWS business works and the way the cash cycle works is that the faster we grow, the more CapEx we end up spending because we have to procure data center and hardware and chips and networking gear ahead of when we’re able to monetize it. We don’t procure it unless we see significant signals of demand. And so when AWS is expanding its CapEx, particularly in what we think is one of these once-in-a-lifetime type of business opportunities like AI represents, I think it’s actually quite a good sign, medium to long term, for the AWS business…

…We also have CapEx that we’re spending this year in our Stores business, really with an aim towards trying to continue to improve the delivery speed and our cost to serve. And so you’ll see us expanding the number of same-day facilities from where we are right now. You’ll also see us expand the number of delivery stations that we have in rural areas so we can get items to people who live in rural areas much more quickly, and then a pretty significant investment as well on robotics and automation so we can take our cost to serve down and continue to improve our productivity.

Amazon’s management completed a useful life study for its servers and network equipment in 2024 Q4 and has decreased the useful life estimate; management early retired some servers and network equipment in 2024 Q4; the decrease in useful life estimate and the early retirement will lower Amazon’s operating income, primarily in the AWS segment

In Q4, we completed a useful life study for our servers and network equipment, and observed an increased pace of technology development, particularly in the area of artificial intelligence and machine learning. As a result, we’re decreasing the useful life for a subset of our servers and network equipment from 6 years to 5 years, beginning in January 2025. We anticipate this will decrease full year 2025 operating income by approximately $700 million. In addition, we also early retired a subset of our servers and network equipment. We recorded a Q4 2024 expense of approximately $920 million from accelerated depreciation and related charges and expect this will also decrease full year 2025 operating income by approximately $600 million. Both of these server and network equipment useful life changes primarily impact our AWS segment.
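
The mechanics here are straight-line depreciation: an asset’s cost is spread evenly over its useful life, so shortening the life raises the annual expense and lowers operating income while the asset is in service. A simplified sketch with a hypothetical asset base (Amazon disclosed the roughly $700 million income impact, not the underlying figures):

```python
# Straight-line depreciation: shortening useful life from 6 to 5 years raises
# the annual expense (and lowers operating income). The asset cost below is
# hypothetical; Amazon disclosed only the income impact, not the asset base.
cost = 12_000  # $M, hypothetical cost of the affected servers and gear

annual_6yr = cost / 6
annual_5yr = cost / 5
print(f"annual depreciation over 6 years: ${annual_6yr:,.0f}M")
print(f"annual depreciation over 5 years: ${annual_5yr:,.0f}M")
print(f"incremental annual expense: ${annual_5yr - annual_6yr:,.0f}M")
```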

Amazon’s management sees AI as the biggest opportunity since cloud and the internet

From our perspective, we think virtually every application that we know of today is going to be reinvented with AI inside of it and with inference being a core building block, just like compute and storage and database. If you believe that, plus altogether new experiences that we’ve only dreamed about are going to actually be available to us with AI, AI represents, for sure, the biggest opportunity since cloud and probably the biggest technology shift and opportunity in business since the Internet.

Amazon’s management has been impressed with DeepSeek’s innovations

I think like many others, we were impressed with what DeepSeek has done, I think in part impressed with some of the training techniques, primarily in flipping the sequencing of reinforcement learning being earlier and without the human-in-the-loop. We thought that was interesting ahead of the supervised fine-tuning. We also thought some of the inference optimizations they did were also quite interesting.

Amazon’s management’s core belief remains that generative AI apps will use multiple models and different customers will use different AI models for different workloads 

You have a core belief like we do that virtually all the big generative AI apps are going to use multiple model types, and different customers are going to use different models for different types of workloads.

Amazon’s management thinks that the cheaper AI inference becomes, the more inference spending there will be; management believes that the cost of AI inference will fall substantially over time

Sometimes people make the assumptions that if you’re able to decrease the cost of any type of technology component, in this case, we’re really talking about inference, that somehow it’s going to lead to less total spend in technology. And we have never seen that to be the case. We did the same thing in the cloud where we launched AWS in 2006, where we offered S3 object storage for $0.15 a gigabyte and compute for $0.10 an hour, which, of course, is much lower now many years later, people thought that people would spend a lot less money on infrastructure technology. And what happens is companies will spend a lot less per unit of infrastructure, and that is very, very useful for their businesses, but then they get excited about what else they could build that they always thought was cost prohibitive before, and they usually end up spending a lot more in total on technology once you make the per unit cost less. And I think that is very much what’s going to happen here in AI, which is the cost of inference will substantially come down. What you heard in the last couple of weeks, DeepSeek is a piece of it, but everybody is working on this. I believe the cost of inference will meaningfully come down. I think it will make it much easier for companies to be able to infuse all their applications with inference and with generative AI.
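
Jassy’s argument is, in effect, the Jevons paradox: if demand for inference is sufficiently price-elastic, cutting the unit cost increases total spend. A small sketch under an assumed constant elasticity of demand (the 1.5 figure is purely illustrative):

```python
# If demand for inference grows faster than the unit price falls, total spend
# rises even as per-unit cost collapses. The elasticity and base figures are
# assumptions for illustration, not measured values.
def total_spend(unit_cost: float, base_cost: float, base_units: float,
                elasticity: float) -> float:
    """Constant-elasticity demand: units = base_units * (cost/base)^(-elasticity)."""
    units = base_units * (unit_cost / base_cost) ** (-elasticity)
    return unit_cost * units

base_cost, base_units = 1.00, 1_000_000   # hypothetical starting point
for new_cost in (0.50, 0.25, 0.10):
    spend = total_spend(new_cost, base_cost, base_units, elasticity=1.5)
    print(f"unit cost ${new_cost:.2f}: total spend ${spend:,.0f}")
```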

Amazon’s management currently sees 2 main ways that companies are getting value out of AI; the 1st way is through productivity and cost savings, and it is the lowest-hanging fruit; the 2nd way is by building new experiences

There’s kind of two macro buckets of how we see people, both ourselves inside Amazon as well as other companies using AWS, how we see them getting value out of AI today. The first macro bucket, I would say, is really around productivity and cost savings. And in many ways, this is the lowest-hanging fruit in AI…

…I’d say the other big macro bucket are really altogether new experiences.

Amazon has built a chatbot with generative AI and it has lifted customer satisfaction by 500 basis points; Amazon has built a generative AI application for 3rd-party sellers to easily fill up their product detail pages; Amazon has built generative AI applications for inventory management that improve inventory forecasting by 10% and regional predictions by 20%; the brains of Amazon’s robotics are infused with generative AI

If you look at customer service and you look at the chatbot that we’ve built, we completely rearchitected it with generative AI. It’s delivering. It already had pretty high satisfaction. It’s delivering 500 basis points better satisfaction from customers with the new generative AI-infused chatbot.

If you look at our millions of third-party selling partners, one of their biggest pain points is, because we put a high premium on really organizing our marketplace so that it’s easy to find things, there’s a bunch of different fields you have to fill out when you’re creating a new product detail page, but we’ve built a generative AI application for them where they can either fill in just a couple of lines of text or take a picture of an image or point to a URL, and the generative AI app will fill in most of the rest of the information they have to fill out, which speeds up getting selection on the website and easier for sellers.

If you look at how we do inventory management and trying to understand what inventory we need, at what facility, at what time, the generative AI applications we’ve built there have led to 10% better forecasting on our part and 20% better regional predictions.

In our robotics, we were just talking about the brains in a lot of those robotics are generative AI-infused that do things like tell the robotic claw what’s in a bin, what it should pick up, how it should move it, where it should place it in the other bin that it’s sitting next to. So it’s really in the brains of most of our robotics.

Amazon’s Rufus is an AI-infused shopping assistant that is growing significantly; users can take a picture of a product with Amazon Lens and have the service surface the exact item through the use of AI; Amazon is using AI to know the relative sizing of clothes and shoes from different brands so that it can recommend the right sizes to shoppers; Amazon is using AI to improve the viewing experience of sporting events; Rufus provides a significant improvement to the shopping experience for shoppers and management expects the usage of Rufus to increase throughout 2025

You see lots of those in our retail business, ranging from Rufus, which is our AI-infused shopping assistant, which continues to grow very significantly; to things like Amazon Lens, where you can take a picture of a product that’s in front of you (you can find it in the little box at the top of the app), and it uses computer vision and generative AI to pull up the exact item in search results; to things like sizing, where we basically have taken the catalogs of all these different clothing manufacturers and then compare them against one another so we know which brands tend to run big or small relative to each other. So when you come to buy a pair of shoes, for instance, it can recommend what size you need; to even what we’re doing in Thursday Night Football, where we’re using generative AI for really inventive features like Defensive Alerts, where we predict which players are going to rush the quarterback, or defensive vulnerabilities, where we were able to show viewers what area of the field is vulnerable…

…I do think that Rufus, if you look at how it impacts the customer experience and if you actually use it month-to-month, it continues to get better and better. If you’re buying something and you’re on our product detail page, our product detail pages provide so much information that sometimes it’s hard, if you’re trying to find something quickly, to scroll through and find that little piece of information. And so we have so many customers now who just use Rufus to help them find a quick fact about a product. They also use Rufus to figure out how to summarize customer reviews so they don’t have to read 100 customer reviews to get a sense of what people think about that product. If you look at the personalization, really, most prominently today, your ability to go into Rufus and ask what’s happened to an order or what did I just order or can you pull up for me this item that I ordered 2 months ago, the personalization keeps getting much better. And so we expect throughout 2025, that the number of occasions where you’re not sure what you want to buy and you want help from Rufus are going to continue to increase and be more and more helpful to customers.

Amazon has around 1,000 generative AI applications that it has built or is building

We’ve got about 1,000 different generative AI applications we’ve either built or in the process of building right now.

Apple (NASDAQ: AAPL)

Apple Intelligence was first released in the USA in October 2024, with more features and countries introduced in December 2024; Apple Intelligence will be rolled out to even more countries in April 2025; management sees Apple Intelligence as a breakthrough for privacy in AI; SAP is using Apple Intelligence in the USA to improve the employee as well as customer experience; the Apple Intelligence features that people are using include Writing Tools, Image Playground, Genmoji, Visual Intelligence, Clean Up, and more; management has found Apple Intelligence’s email summarisation feature to be very useful; management thinks that different users will find their own “killer feature” within Apple Intelligence

In October, we released the first set of Apple Intelligence features in U.S. English for iPhone, iPad and Mac, and we rolled out more features and expanded to more countries in December.

Now users can discover the benefits of these new features in the things they do every day. They can use Writing Tools to help find just the right words, create fun and unique images with Image Playground and Genmoji, handle daily tasks and seek out information with a more natural and conversational Siri, create movies of their memories with a simple prompt and touch up their photos with Clean Up. We introduced visual intelligence with Camera Control to help users instantly learn about their surroundings. Users can also seamlessly access ChatGPT across iOS, iPadOS and macOS.

And we were excited to recently begin our international expansion with Apple Intelligence now available in Australia, Canada, New Zealand, South Africa and the U.K. We’re working hard to take Apple Intelligence even further. In April, we’re bringing Apple Intelligence to more languages, including French, German, Italian, Portuguese, Spanish, Japanese, Korean and simplified Chinese as well as localized English to Singapore and India. And we’ll continue to roll out more features in the future, including an even more capable Siri.

Apple Intelligence builds on years of innovations we’ve made across hardware and software to transform how users experience our products. Apple Intelligence also empowers users by delivering personal context that’s relevant to them. And importantly, Apple Intelligence is a breakthrough for privacy in AI with innovations like Private Cloud Compute, which extends the industry-leading security and privacy of Apple devices into the cloud…

…We’re excited to see leading enterprises such as SAP leverage Apple Intelligence in the U.S. with features like Writing Tools, summarize and priority notifications to enhance both their employee and customer experiences…

…In terms of the features that people are using, they’re using all of the ones that I had referenced in my opening comments, from Writing Tools to Image Playground and Genmoji, to visual intelligence and more. And so we see all of those being used. Clean Up is another one that is popular, and people love seeing that one demoed in the stores as well…

…I know from my own personal experience, once you start using the features, you can’t imagine not using them anymore. I know I get hundreds of e-mails a day, and the summarization function is so important…

…[Question] Do you guys see the upgraded Siri expected in April as something that will, let’s say, be the killer application among the suite of features that you have announced in Apple Intelligence?

[Answer] I think the killer feature is different for different people. But I think for most, they’re going to find that they’re going to use many of the features every day. And certainly, one of those is the — is Siri, and that will be coming over the next several months.

Many customers are excited about the iPhone 16 because of Apple Intelligence; the iPhone 16’s year-on-year performance was stronger in countries where Apple Intelligence was available compared to countries where Apple Intelligence was not available

Our iPhone 16 lineup takes the smartphone experience to the next level in so many ways, and Apple Intelligence is one of many reasons why customers are excited…

…We did see that the markets where we had rolled out Apple Intelligence, that the year-over-year performance on the iPhone 16 family was stronger than those where Apple Intelligence was not available…

Apple’s management thinks the developments in the AI industry brought on by DeepSeek’s emergence is a positive for Apple

[Question] There’s a perception that you’re a big beneficiary of lower cost of compute. And I was wondering if you could give your worldly perspective here on the DeepSeek situation.

[Answer] In general, I think innovation that drives efficiency is a good thing. And that’s what you see in that model. Our tight integration of silicon and software, I think, will continue to serve us very well.

Arista Networks (NYSE: ANET)

Cloud and AI titans were a significant contributor to Arista Networks’ revenue in 2024; management considers Oracle an AI titan too

Now shifting to annual sector revenue for 2024. Our cloud and AI titans contributed significantly at approximately 48%, keeping in mind that Oracle is a new member of this category.

Arista Networks’ core cloud AI and data center products are built off its extensible OS (operating system) and go up to 800 gigabit Ethernet speeds

Our core cloud AI and data center products are built off a highly differentiated, extensible OS stack and are successfully deployed across 10, 25, 100, 200, 400 and 800 gigabit Ethernet speeds. They deliver the power efficiency, high availability, automation and agility that data centers demand, with insatiable bandwidth capacity and network speeds for both front-end and back-end storage, compute and AI zones.

Arista Networks’ management expects the company’s 800 gigabit Ethernet switches to emerge in AI back-end clusters in 2025

We expect 800 gigabit Ethernet to emerge as an AI back-end cluster in 2025.

Arista Networks’ management is still optimistic that AI revenues will reach $1.5 billion in 2025, including $750 million in AI back-end clusters; the $750 million in revenue from AI back-end clusters will have a major helping hand from 3 of the 5 major AI trials Arista Networks is working on that are rolling out a cumulative 100,000 GPUs in 2025 (see more below)

We remain optimistic about achieving our AI revenue goal of $1.5 billion in AI centers, which includes the $750 million in AI back-end clusters in 2025…

…[Question] You are reiterating $750 million AI back-end sales this year despite the stalled or the fifth customer. Can you talk about where is the upside coming from this year? Is it broad-based or 1 or 2 customers?

[Answer] We’re well on our way and 3 customers deploying a cumulative of 100,000 GPUs is going to help us with that number this year. And as we increased our guidance to $8.2 billion, I think we’re going to see momentum both in AI, cloud and enterprises. I’m not ready to break it down and tell you which where. I think we’ll see — we’ll know that much better in the second half. But Chantelle and I feel confident that we can definitely do the $8.2 billion that we historically don’t call out so early in the year. So having visibility if that helps.

Arista Networks is building some of the world’s greatest Arista AI centers at production scale and is involved with both the back-end clusters and front-end networks; Arista Networks’ management sees the data traffic flow of AI workloads as having significant differences from traditional cloud workloads, and Arista AI centers seamlessly connect front-end compute, storage, and WAN networks with Arista’s back-end Etherlink portfolio; Arista’s AI networking portfolio consists of 3 families and over 20 Etherlink switches

Networking for AI is also gaining traction as we move into 2025, building some of the world’s greatest Arista AI centers at production scale. These are constructed with both back-end clusters and front-end networks…

…The fidelity of the AI traffic differs greatly from cloud workloads in terms of diversity, duration and size of flow. Just one slow flow can slow the entire job completion time for a training workload. Therefore, Arista AI centers seamlessly connect to the front end of compute, storage, WAN and classic cloud networks with our back-end Arista Etherlink portfolio. This AI accelerated networking portfolio consists of 3 families and over 20 Etherlink switches, not just 1 point switch.
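
The “one slow flow” point is about tail latency: in synchronous training, every step waits for the slowest gradient flow to finish, so the maximum, not the mean, sets job completion time. A toy illustration with made-up flow times:

```python
# Why one slow flow hurts: a synchronous training step finishes only when
# the *slowest* flow completes, so tail latency, not average latency, sets
# job completion time. Flow times below are made up for illustration.
import random

rng = random.Random(7)
flows = [rng.uniform(9.0, 11.0) for _ in range(1023)]  # ms, healthy flows
flows.append(50.0)                                     # one congested flow

mean_ms = sum(flows) / len(flows)
step_time = max(flows)   # synchronous all-reduce waits for the straggler
print(f"mean flow time:  {mean_ms:.1f} ms")
print(f"step time (max): {step_time:.1f} ms")
```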

Arista Networks’ management’s AI for Networking strategy is doing well and it includes software that has superior AI ops

Our AI for networking strategy is also doing well, and it’s about curating the data for higher-level network functions. We instrument our customers’ networks with our publish-subscribe state foundation with our software called Network Data Lake to deliver proactive, predictive and prescriptive platforms that have superior AI ops with A-Care support and product functions.

Arista Networks’ management is still committed to 4 of the 5 major AI trials that they have been discussing in recent earnings calls; the remaining AI trial is still stalled and the customer is not a Cloud Titan and is waiting for funding; 3 of the 4 trials that are active are expected to roll out a cumulative 100,000 GPUs in 2025 and they are all waiting for the next-generation NVIDIA GPU; Arista Networks’ management expects to do very well on the back-end with those 3 trials; the remaining trial of the 4 active trials is migrating from Infiniband to Ethernet to test the viability of Ethernet, and Arista Networks’ management expects to enter production in 2026

I want to say Arista is still committed to 4 out of our 5 AI clusters that I mentioned in prior calls, but just one is a little bit stalled. It is not a Cloud Titan. They are awaiting GPUs and some funding too, I think. So I hope they’ll come back next year, but for this year, we won’t talk about them. But the remaining 4, let me give you some color. 3 out of the 4 customers are expected to roll out a cumulative of 100,000 GPUs this year. So we’re going to do very well with 3 of them on the back end. And you can imagine, they’re all pretty much one major NVIDIA class of GPU — they will be waiting for the next generation of GPUs. But independent of that, we’ll be rolling out fairly large numbers. On the fourth one, we are migrating right now from InfiniBand to proving that Ethernet is a viable solution — they’ve historically been InfiniBand. And so we’re still in pilot and we expect to go into production next year. We’re doing very well in 4 out of 4, the fifth one stalled, and 3 out of the 4 expected to be 100,000 GPUs this year.

Arista Networks thinks the market for AI networking is large enough that there will be enough room for both the company and other whitebox networking manufacturers; management also thinks Arista Networks’ products have significant differentiation from whitebox products, especially in the AI spine in a typical leaf-spine network, because Arista Networks’ products can automatically provide an alternate connection when a GPU in the network is in trouble

[Question] Can you maybe share your perspective that when it comes to AI network especially the back-end networks, how do you see the mix evolving white box versus OEM solution?

[Answer] This TAM is so huge and so large. We will always coexist with white boxes and operating systems that are non-EOS, much like Apple coexists on the iPhone with other phones of different types. When you look at the back end of an AI cluster, there are typically 2 components, the AI leaf and the AI spine. The AI leaf connects to the GPUs and therefore, is the first, if you will, point of connection. And the AI spine aggregates all of these AI leaves. Almost in all the back-end examples we’ve seen, the AI spine is generally 100% Arista-branded EOS. You’ve got to do an awful lot of routing, scale, features, capabilities that are very rich that would be difficult to do in any other environment. The AI leaves can vary. So for example, let’s take the example of the 5 customers I mentioned a lot, 3 out of the 5 are all EOS in the [indiscernible] spine. 2 out of the 5 are kind of hybrids. Some of them have some form of SONiC or FBOSS. And as you know, we co-develop with them and coexist in a number of use cases where it’s a real hybrid combination of EOS and an open OS. So for the most part, I’d just like to say that white box and Arista will coexist and will provide different strokes for different folks…

…A lot of our deployments right now are 400 and 800 gig, and you see a tremendous amount of differentiation, not only, like I explained to you, in scale and routing features, but in cost and load balancing, AI visibility and analytics in real time, personal queuing, congestion control, visibility and, most importantly, smart system upgrade, because you sort of don’t want your GPUs to come down just because you don’t have the right software; the network provides the ideal foundation so that if the GPU is in trouble, we can automatically give it an alternate connection. So tremendous amount of differentiation there, and even more so with a GPU, which typically costs 5x as much as a CPU…

…When you’re buying these expensive GPUs that cost $25,000, they’re like diamonds, right? You’re not going to string a diamond on a piece of thread. So first thing I want to say is you need a mission-critical network, whether you want to call it white box, blue box, EOS or some other software, you’ve got to have mission-critical functions, analytics, visibility, high availability, et cetera. As I mentioned, and I want to reiterate, these are also typically leaf-spine networks. And I have yet to see an AI spine deployment that is not EOS-based. I’m not saying it can’t happen or won’t happen. But in all 5 major installations, the benefit of our EOS features for high availability, for routing, for VXLAN, for telemetry, our customers really see that. And the 7800 is the flagship AI spine product that we have been deploying last year, this year and in the future. Coming soon, of course, is also the product we jointly engineered with Meta, which is the distributed Etherlink switch. And that is also an example of a product that provides that kind of leaf-spine combination, both with FBOSS and EOS options in it. So in my view, it’s difficult to imagine a highly resilient system without Arista EOS in AI or non-AI use cases.

On the leaf, you can cut corners. You can go with smaller buffers, you may have a smaller installation. So I can imagine that some people will want to experiment, and do experiment, in smaller configurations with non-EOS. But again, to do that, you have to have a fairly large staff to build the operations for it. So that’s also a critical element. So unless you’re a large Cloud Titan customer, you’re less likely to take that chance because you don’t have the staff.
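
For a rough sense of the scale behind these leaf-spine deployments, here is a back-of-the-envelope sizing sketch. It assumes a hypothetical 64-port switch radix, one 800G NIC per GPU, and non-blocking (1:1) leaf uplinks; a real 100,000-GPU fabric would typically be multi-tier and far more nuanced than this two-tier arithmetic:

```python
# A minimal sketch (not Arista's design tool): sizing a 2-tier leaf-spine
# back-end fabric for a GPU cluster. Port counts are illustrative only.
import math

def size_fabric(num_gpus: int, ports_per_switch: int = 64) -> dict:
    # Non-blocking leaf: half the ports face GPUs, half face the spine.
    gpus_per_leaf = ports_per_switch // 2
    num_leaves = math.ceil(num_gpus / gpus_per_leaf)
    # Each leaf needs gpus_per_leaf uplinks; each spine port terminates one
    # leaf uplink, so spines = total uplinks / ports per spine switch.
    num_spines = math.ceil(num_leaves * gpus_per_leaf / ports_per_switch)
    return {"leaves": num_leaves, "spines": num_spines,
            "total_switch_ports": (num_leaves + num_spines) * ports_per_switch}

print(size_fabric(100_000))  # the cumulative GPU count cited for 2025
# {'leaves': 3125, 'spines': 1563, 'total_switch_ports': 300032}
```

Even this toy version shows why the spine layer concentrates so much aggregation and routing scale, which is where the EOS differentiation argument above is aimed.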

Arista Networks’ management is seeing strong demand from its Cloud Titan customers

Speaking specifically to Meta, we are obviously in a number of use cases in Meta. Keep in mind that our 2024 Meta numbers are influenced more by their 2023 CapEx, and that was Meta’s year of efficiency where their CapEx was down 15% to 20%. So you’re probably seeing some correlation between their CapEx being down and our revenue numbers being slightly lower in ’24. In general, I would just say all our cloud titans are performing well in demand, and we shouldn’t confuse that with timing of our shipments. And I fully expect Microsoft and Meta to be greater than 10% customers in a strong manner in 2025 as well. Specific to the others we added in, they’re not 10% customers, but they’re doing very well, and we’re happy with their cloud and AI use cases.

Arista Networks’ management thinks the emergence of DeepSeek will lead to AI development evolving from back-end training that’s concentrated in a handful of users, to being distributed more widely across CPUs and GPUs; management also thinks DeepSeek’s emergence is a positive for Arista Networks because DeepSeek’s innovations can drive the AI industry towards a new class of CPUs, GPUs, and AI accelerators, and Arista Networks is able to scale up networks for all kinds of XPUs

DeepSeek certainly deep-sixed many stocks, but I actually see this as a positive because I think you’re now going to see a new class of CPUs, GPUs, AI accelerators where you can have substantial efficiency gains that go beyond training. So that could be some sort of inference or mixture of experts or reasoning, which lowers the token count and therefore, the cost. So what I like about all these different options is Arista can scale up networks for all kinds of XPUs and accelerators. And I think the eye-opening thing here for all of our experts who are building all these engineering models is there are many different types and training isn’t the only one. So I think this is a nice evolution of how AI will not just be a back-end-training-only phenomenon limited to 5 customers, but will become more and more distributed across a range of CPUs and GPUs.

Arista Networks’ management thinks hyper-scale GPU clusters, such as Project Stargate, will drive the development of vertical rack integration in the next few years and Andy Bechtolsheim, an Arista Networks co-founder, is personally involved in these projects

If you look at how we have classically approached GPUs and connected libraries, we’ve largely looked at it as 2 separate building blocks. There’s the vendor who provides the GPUs and then there’s us who provides the scale-out networking. But when you look at Stargate and projects like this, I think you’ll start to see more of a vertical rack integration where the processor, the scale up, the scale out and all of the software to provide a single point of control and visibility start to come more and more together. This is not a 2025 phenomenon, but definitely in ’26 and ’27, you’re going to see a new class of AI accelerators and a new class of training and inference, which is extremely different than the current, more pluggable LEGO type of version. So we’re very optimistic about it.

Andy Bechtolsheim is personally involved in the design of a number of these next-generation projects, and the need for this type of, shall we say, pushing of Moore’s Law improvements in density and performance that we saw in the 2000s is coming back, and you can boost more and more performance per XPU, which means you have to boost the network scale from 800 gig to 1.6T.

Arista Networks’ management sees a $70 billion total addressable market in 2028, of which roughly a third is related to AI

[Question] If you can talk to the $70 billion TAM number for 2028, how much is AI?

[Answer] On the $70 billion TAM in 2028, I would roughly say 1/3 is AI, 1/3 is data center and cloud and 1/3 is campus and enterprise. And obviously, absorbed into that is routing and security and observability. I’m not calling them out separately for the purpose of this discussion.

Arista Networks’ management sees co-packaged optics (CPO) as having weak adoption compared to co-packaged copper (CPC) because CPO has been experiencing field failures

Co-packaged optics is not a new idea. It’s been around 10 to 20 years. So let’s go through the fundamental reason why co-packaged optics has had a relatively weak adoption so far: it is because of field failures, and most of it is still in proof of concept today. So going back to networking, the most important attribute of a network switch is reliability and troubleshooting. And once you solder a co-packaged optic on a PCB, you lose some of that flexibility and you don’t get the serviceability and manufacturability. That’s been the problem. Now a number of alternatives are emerging, and we’re a big fan of co-packaged copper as well as pluggable optics that can complement this, like linear-drive pluggable optics, or LPO as we call it.

Now, we also see that co-packaged optics could improve some of the metrics it has right now. For example, it has a higher channel count than the industry standard of 8-channel pluggable optics, but we can do higher channel pluggable optics as well. So if some of these things improve, we can see that both CPC and CPO will be important technologies at 224 gig or even 448 gig. But so far, our customers have preferred a LEGO approach where they can mix and match pluggable switches and pluggable optics and haven’t committed to soldering them on the PCB. And we feel that will change only if CPO gets better and more reliable. And I think CPC can be a nice alternative to that.

Arista Networks’ management is seeing customers start moving towards actual use-cases for AI, but the customers are saying that these AI projects take time to implement

For the AI perspective, speaking with the customers, it’s great to move from kind of a theory to more specific conversation, and you’re seeing that in the banks and some of the higher-tier Global 2000, Fortune 500 companies. And so they’re moving from theory to actual use cases they’re speaking to. And the way they describe it is it takes a bit of time. They’re working mostly with cloud service providers at the beginning, kind of doing some training, and then they’re deciding whether to bring that on-prem for inference. So they’re making those decisions.

Arista Networks’ management is seeing a new class of Tier 2 specialty AI cloud providers emerge

We are seeing a new class of Tier 2 specialty cloud providers emerge that want to provide AI as a service and want to be differentiated there. And there’s a whole lot of funding, grant money, real money going in there. So service providers, too early to call. But Neo clouds and specialty providers, yes, we’re seeing lots of examples of that.

The advent of AI has accelerated speed transitions in data center networking switches, but there’s still going to be a long runway for Arista Networks’ 400 gig and 800 gig products, with 1.6 tera products being deployed in a measured way

The speed transitions because of AI are certainly getting faster. When we went from 200 gig, for example, at Meta, or 100 gig in some of our Cloud Titans, to 400 gig, that speed transition typically took 3 to 4, maybe even 5 years, right? In AI, we see that cycle being almost every 2 years…

…2024 was the year of real 400 gig. ’25 and ’26, I would say, is more 800 gig. And I really see 1.6T coming into the picture because we don’t have chips yet, maybe in what do you say, John, late ’26 and real production maybe in ’27. So there’s a lot of talk and hype on it, just like I remember talk and hype on 400 gig 5 years ago. But I think realistically, you’re going to see a long runway for 400 and 800 gig. Now as we get into 1.6T, part of the reason I think it’s going to be measured and thoughtful is many of our customers are still awaiting their own AI accelerators or NVIDIA GPUs, which with liquid cooling that would actually push that kind of bandwidth. So new GPUs will require new bandwidth, and that’s going to push it out a year or 2.

Arista Networks’ management sees a future where the market share between NVIDIA GPUs and custom AI accelerators (ASICs) is roughly evenly-split, but Arista Networks’ products will be GPU-agnostic

[Question] There’s been a lot of discussion over the last few months between the general purpose GPU clusters from NVIDIA and then the custom ASIC solutions from some of your popular customers. I guess just in your view, over the longer term, does Arista’s opportunity differ across these 2 chip types?

[Answer] I think I’ve always said this, you guys often spoke about NVIDIA as a competitor. And I don’t see it that way. I see that — thank you, NVIDIA. Thank you, Jensen, for the GPUs because that gives us an opportunity to connect to them, and that’s been a predominant market for us. As we move forward, we see not only that we connect to them, but we can connect to AMD GPUs and in-house AI accelerators. So a lot of them are in active development or in early stages. NVIDIA is the dominant market share holder with probably 80%, 90%. But if you ask me to guess what it would look like 2, 3 years from now, I think it could be 50-50. So Arista could be the scale-out network for all types of accelerators. We’ll be GPU-agnostic. And I think there’ll be less opportunity to bundle by specific vendors and more opportunity for customers to choose best-of-breed.

ASML (NASDAQ: ASML)

AI will be the biggest driver of ASML’s growth and management sees customers benefiting very strongly from it; management thinks ASML will hit the upper end of its 2025 revenue guidance range if customers succeed in bringing additional AI-related capacity online during the year, but there are also customer-related and geopolitical risks that could push results towards the lower end of the range

We see total revenue for 2025 between €30 billion and €35 billion and the gross margin between 51% and 53%. AI is the clear driver. I think we started to see that last year. In fact, at this point, we really believe that AI is creating a shift in the market and we have seen customers benefiting from it very strongly…

…If AI demand continues to be strong and customers are successful in bringing on additional capacity online to support that demand, there is potential opportunity towards the upper end of our range. On the other hand, there are also risks related to customers and geopolitics that could drive results towards the lower end of the range.

ASML’s management is still very positive on the long-term outlook for ASML, with AI being a driver for growth; management expects AI to create a shift in ASML’s end-market products towards more HPC (high performance computing) and HBM (high bandwidth memory), which requires more advanced logic and DRAM, which in turn needs more critical lithography exposures

I think our view on the long term is still, I would say, very positive…

…Looking longer term, overall the semiconductor market remains strong with artificial intelligence creating growth but also a shift in market dynamics as I highlighted earlier. These dynamics will lead to a shift in the mix of end market products towards more HPC and HBM which requires more advanced logic and DRAM. For ASML, we anticipate that an increased number of critical lithography exposures for these advanced logic and memory processes will drive increasing demand for ASML products and services. As a result, we see a 2030 revenue opportunity between €44 billion and €60 billion with gross margins expected between 56% and 60%, as we presented at Investor Day 2024.

ASML’s management is seeing aggressive capacity addition among some DRAM memory customers because of demand for high bandwidth memory (HBM), but apart from HBM, other DRAM memory customers have a slower recovery

I think that high-bandwidth memory is driving today, I would say, also an aggressive capacity addition, at least for some of the customers. I think on the normal DRAM, I would say, my comment is similar to the one on mobile [photology] before. I think there is also nothing spectacular, but there is some recovery, which also calls for more capacity. So that’s why we still see DRAM pretty strong in 2025.

Datadog (NASDAQ: DDOG)

Datadog launched LLM Observability in 2024; management continues to see increased interest in next-gen AI; 3,500 Datadog customers at the end of 2024 Q4 used 1 or more Datadog AI integrations; when it comes to AI inference, management is seeing most customers using a 3rd-party AI model through an API or a 3rd-party inference platform, and these customers want to observe whether the model is doing the right thing, and this need is what LLM Observability is serving; management is seeing very few customers running the full AI inference stack currently, but they think this could happen soon and it would be an exciting development

We launched LLM Observability in general availability to help customers evaluate, safely deploy and manage their models in production, and we continue to see increased interest in next-gen AI. At the end of Q4, about 3,500 customers used 1 or more Datadog AI integrations to send us data about their machine learning, AI, and LLM usage…

…On the inference side, mostly what customers still do is use a third-party model, either through an API or through a third-party inference platform. And what they’re interested in is measuring whether that model is doing the right thing. And that’s what we serve right now with LLM Observability, for example, where we see quite a bit of adoption that does not come largely from the AI-native companies. So that’s what we see today.

In terms of operating the inference stack fully, where we see relatively few customers doing that yet, we think that’s something that’s going to come next. And by the way, we’re very excited by the developments we see in the space. So it looks like there are many, many different options that are going to be viable for running your AI inference. There’s a very healthy set of commercial API-gated services. There are models that you can install in the open source. There are models in the open source today that rival the quality of the best closed API models. So we think the ecosystem is developing into a rich diversification that will allow customers to have a diversity of modalities for using AI, which is exciting.
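
As a rough illustration of what “measuring whether that model is doing the right thing” involves, here is a generic sketch of LLM call instrumentation. To be clear, this is not Datadog’s LLM Observability SDK; the function names, the span fields and the token-count proxies are my own stand-ins:

```python
# A generic sketch of LLM observability -- illustrative, not Datadog's SDK.
# Every model call is wrapped so latency, token usage and a quality signal
# are captured as a "span" that an observability backend could aggregate.
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class LLMSpan:
    model: str
    latency_s: float
    prompt_tokens: int
    completion_tokens: int
    flagged: bool  # e.g., failed a hallucination or safety evaluation

def observed_call(model: str,
                  generate: Callable[[str], str],
                  evaluate: Callable[[str, str], bool],
                  prompt: str) -> tuple[str, LLMSpan]:
    start = time.monotonic()
    output = generate(prompt)
    span = LLMSpan(model=model,
                   latency_s=time.monotonic() - start,
                   prompt_tokens=len(prompt.split()),      # crude proxy
                   completion_tokens=len(output.split()),  # crude proxy
                   flagged=not evaluate(prompt, output))
    print(span)  # stand-in for exporting to an observability backend
    return output, span

# Usage with stand-ins for a real model API and a real evaluation:
text, _ = observed_call("demo-model",
                        generate=lambda p: "Paris is the capital of France.",
                        evaluate=lambda p, o: "Paris" in o,
                        prompt="What is the capital of France?")
```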

AI-native customers accounted for 6% of Datadog’s ARR in 2024 Q4 (was 6% in 2024 Q3); AI-native customers contributed 5 percentage points to Datadog’s year-on-year growth in 2024 Q4, compared to 3 percentage points in 2023 Q4; among customers in the AI-native cohort, management has seen optimisation of usage and volume discounts related to contract renewals in 2024 Q4, and management thinks these customers will continue to optimise cloud and observability usage in the future; the dynamics with the AI-native cohort that happened in 2024 Q4 were in line with management’s expectations

We continue to see robust contribution from AI-native customers, who represented about 6% of Q4 ARR, roughly the same as last quarter, and up from about 3% of ARR in the year-ago quarter. AI-native customers contributed about 5 percentage points of year-over-year revenue growth in Q4 versus 4 points in the last quarter and about 3 points in the year-ago quarter. So we saw strong growth from AI-native customers in Q4. We believe that adoption of AI will continue to benefit Datadog in the long term. Meanwhile, we did see some optimization and volume discounts related to contract renewals in Q4. We remain mindful that we may see volatility in our revenue growth on the backdrop of long-term volume growth from this cohort as customers renew with us on different terms, and as they may choose to optimize cloud and observability usage…

… [Question] I’m trying to understand if the AI usage and commits are kind of on the same trajectory that they were on or whether you feel that there are some oscillations there.

[Answer] What happened during the quarter is pretty much what we thought would happen when we discussed it in the last earnings call. When you look at the AI cohort, we definitely saw some renewals with higher commit, better terms and usage optimization all at the same time, which is fairly typical. What typically happens with larger end customers in particular is, at the time of renewal, customers are going to try and optimize what they can. They’re going to get better prices from us, up their commitments, and we might see a flat or down month or quarter after that, with still sharp growth from the year before and growth to come in the year ahead. So that’s what we typically see. When you look at the cohort as a whole, even with that significant renewal optimization and better unit economics this quarter, the cohort as a whole is stable quarter-to-quarter in its revenue and it’s growing a lot from the year before, even with all that.

Datadog’s management sees some emerging opportunities in Infrastructure Monitoring that are related to the usage of GPUs (Graphics Processing Units), but the opportunities will only make sense if there is broad usage of GPUs by a large number of customers, which is not happening today

There’s a number of new use cases that are emerging that are related to infrastructure that we might want to cover. Again, we — when I say they’re emerging, they’re actually emerging, like we still have to see what the actual need is from a large number of customers. I’m talking in particular about infrastructure concerns around GPU management, GPU optimization, like there’s quite a lot going on there that we can potentially do. But we — for that, we need to see broad usage of the raw GPUs by a large number of customers as opposed to usage by a smaller number of native customers, which is mostly what we still see today.
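
For context on what monitoring “raw GPUs” would actually collect, here is a minimal sampling sketch using nvidia-smi’s standard query interface. It requires an NVIDIA driver on the host, and the metric selection is my own illustration, not Datadog’s product:

```python
# A minimal sketch of raw GPU telemetry collection via nvidia-smi's
# query interface; the queried fields are standard nvidia-smi properties.
import subprocess

def sample_gpu_metrics() -> list[dict]:
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,utilization.gpu,memory.used,memory.total,power.draw",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True).stdout
    rows = []
    for line in out.strip().splitlines():
        idx, util, mem_used, mem_total, power = [f.strip() for f in line.split(",")]
        rows.append({"gpu": int(idx), "util_pct": float(util),
                     "mem_used_mib": float(mem_used),
                     "mem_total_mib": float(mem_total),
                     "power_w": float(power)})
    return rows  # one dict per GPU, ready to ship to a metrics backend
```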

Datadog’s management thinks it’s hard to tell where AI agents can be the most beneficial for Datadog’s observability platform because it’s still a very nascent field and management has observed that things change really quickly; when management built LLM Observability, the initial use cases were for AI chatbots and RAG (retrieval augmented generation), but now the use cases are starting to shift towards AI agents

[Question] Just curious, when we think about agents, which parts of the core observability platform that you think are most relevant or going to be most beneficial to your business as you start to monitor those?

[Answer] It’s a bit hard to tell because it’s a very nascent field. So my guess is, in a year, it will probably look different from what it looks like today, just like this year it looks very different from what it looked like last year. What we do see, though, is that when we started building our LLM Observability product, most of the use cases we saw there from customers were chatbot in nature or RAG in nature, trying to access information and return the information. Now we see more and more customers building agents on top of that and sending the data from their agents. So we definitely see a growing trend there of adoption, and the LLM Observability product is a good level of abstraction, at least for the current iteration of these agents. So that’s what we can see today.

Datadog’s management sees AI touching many different areas of Datadog, such as how software is being written and deployed, how customer-support is improved, and more

What’s fascinating about the current evolution of AI, in particular, is that it touches a lot of the different areas of the business. For a company like ours, the first area to be transformed is really the way software is being built: what engineers use, how they write software, how they debug software, how they also operate systems. And part of that is outside tooling we’re using for writing software. Part of that is dogfooding our new products for incident resolution and that sort of thing. So that’s the first area. There’s a number of other areas that are going to see large improvements in productivity. Typically, everything that has to do with supporting customers, helping with onboarding and helping troubleshoot issues; all of that is accelerating. In the end, we expect to see improvements everywhere, from front office to back office.

Fiverr (NYSE: FVRR)

Fiverr’s management launched Fiverr Go in February 2025, an open platform for personalised AI tools designed to give creators full control over their creative processes and rights; Fiverr Go enables freelancers to build personalised AI models (there was a presentation on this recently) without having to know AI engineering; Fiverr Go is designed to be personalised for the creator, so the creator becomes more important compared to the AI technology; Fiverr Go is generative AI technology with human accountability (will be interesting to see if Fiverr Go is popular; people can create designs/images with other AI models, so customers who use Fiverr Go are those who need the special features that Fiverr Go offers); Fiverr Go generates content that is good enough for mission critical business tasks, unlike what’s commonly happening with other AI-generated content; Fiverr Go is no different from a direct order from the freelancer themself, except it is faster and easier for buyers; Fiverr Go has personalised AI assistants for freelancers; Fiverr Go has an open developer platform for 3rd-party developers to build generative AI apps

Fiverr Go is an open platform for personalized AI tools that include the personalized AI assistant and the AI creation model. Different from other AI platforms that often exploit human creativity without proper attribution or compensation, Fiverr Go is uniquely designed to reshape this power dynamic by giving creators full control over their creative process and rights. It also enables freelancers to build personalized AI models without the need to collect training data sets or understand AI engineering, thanks to Fiverr’s unparalleled foundation of over 6.5 billion interactions and nearly 150 million transactions on the marketplace and most importantly, it allows freelancers to become a one-person production house, making more money while focusing on the things that matter. By giving freelancers control over configuration, pricing and creative rights and leveling the playing field of implementing AI technology, Fiverr Go ensures that creators remain at the center of the creative economy. It decisively turned the power dynamic between humans and AI towards the human side…

For customers, Fiverr Go is also fundamentally different from other AI platforms. It is GenAI with human accountability. AI results often feel unreliable, generic and very hard to edit. What is good enough for a simple question and answer on ChatGPT does not cut it for business mission-critical tasks. In fact, many customers come to Fiverr today with AI-generated content because they miss the confidence that comes from another human eye and talent, helping them perfect the results for their needs. Fiverr Go eliminates all of this friction and frustration. Every delivery on Fiverr Go is backed by the full faith of the creator behind it with an included revision as the freelancer defines. This means that the quality and service you get from Fiverr Go is no different from a direct order from the freelancers themselves, except for a faster, easier and more delightful experience. The personalized AI assistant on Fiverr Go can communicate with potential clients when the freelancer is away or busy, handle routine tasks and provide actionable business insights, effectively becoming the creator’s business partner. It often feels smarter than an average human assistant because it’s equipped with all the history of how the freelancer works as well as knowledge of trends and best practices on the Fiverr marketplace…

…We’ve also announced an open developer platform on Fiverr Go to allow AI specialists and model developers to build generative AI applications across any discipline. We provide them with an ecosystem to collaborate with domain experts on Fiverr and the ability to leverage Fiverr’s massive data assets so that these applications can solve real-world problems and most important of all, Fiverr provides them an avenue to generate revenue from those applications through our marketplace…

…So from our experience with AI, what we’ve come to learn is that a lot of the creation process using AI is very random and takes you through figuring out what the best tools are, because there are thousands of different options around AI. And each one operates slightly differently. And you need to master each one of them. And you need to become a prompt engineer. And then editing is extremely, extremely hard. Plus you don’t get the feedback that comes from working with a human being who can actually look at the creation with a human eye and give you a sense of whether this is actually capturing what you’re trying to do. It allows freelancers to design their own model in a way that rewards them but remains extremely accurate to their style, allowing customers to get the results they expect to get because they see the portfolio of their freelancer, like the style of writing or design or singing or narration, and they can get exactly this. So we think it comes down to that combination and that confidence that comes from the fact that the creator itself is always there.

The AI personal assistant in Fiverr Go can help to respond to customer questions based on individual freelancers; the first 3 minutes after a buyer writes to a freelancer is the most crucial time for conversion and this is where the AI assistant can help; there are already thousands of AI assistants running on Fiverr Go, converting customers

Fiverr Go is actually a tool for conversion. That’s the entire idea because we know that customers these days expect instant responses and instant results. And as a result of that, we designed those 2 tools, the AI personal assistant, which is able to answer customer questions immediately even if the freelancer is away or busy. We know that the first 3 minutes after a customer writes to a freelancer are the most crucial time for conversion and this is why we designed this tool. And this tool is essentially encapsulating the entire knowledge of the freelancer and basing itself on it, being able to address any possible question and bring it to conversion…

…It’s fresh from yesterday, but we have many thousands of assistants running on the system, converting customers already, which is an amazing sign.

Fiverr Go is a creator tool that can create works based off freelancers’ style and allows customers to get highly-accurate samples of a freelancers’ work to lower friction in selecting freelancers

When we think about the creation model, the creation model allows customers to get the confidence that this is the freelancer, this is the style that they’re looking for, because now instead of asking a freelancer for samples, waiting for it, causing the freelancer to essentially work for free, they can get those samples right away. Now the quality of these samples is just mind-blowing. The level of accuracy these samples produce is an exact match with the style of the freelancer, which gives the customer the confidence that if they played with it and liked it, this is the type of freelancer that they should engage with.

The Fiverr Go open developer platform is essentially an app store for AI apps; the open developer platform allows developers to train AI models on Fiverr’s transactional data set, which is probably the largest dataset of its kind in existence

Now what we’re doing with this is actually we’re opening up the Go platform to outside developers. Think about it as an app store in essence. So what we’re doing is we’re allowing them to develop models, APIs, workflows, but then train those models on probably the biggest transactional data set in existence today that we hold, so that they can actually help us build models that freelancers can benefit from. And we believe that by doing so we’re giving those developers incentives, because every time their app is going to be used for a transaction, they’re going to make money out of it.

Fiverr Go’s take rate will be the same for now and management will learn as they go

[Question] Would your take rate be different in Fiverr Go?

[Answer] For now, the take rate remains the same for Go. And as we roll it out and as we see usage, we will figure out what to do or what’s the right thing to do. For now, we treat it as a normal transaction with the same take rate.

Mastercard (NYSE: MA)

Mastercard closed the Recorded Future acquisition in 2024 Q4 (Recorded Future provides AI-powered solutions for real-time visibility into potential threats related to fraud); Recorded Future has been deploying AI for over a decade, just like Mastercard has; Recorded Future uses AI to analyse threat data across the entire Internet; the acquisition of Recorded Future improves Mastercard’s cybersecurity solutions

Our diverse capabilities in payments and services and solutions, including the acquisition of Recorded Future this quarter set us apart…

…Recorded Future is the world’s largest threat intelligence company with more than 1,900 customers across 75 countries. Customers include over 50% of the Fortune 100 and government agencies in 45 countries, including more than half of the G20. We’ve been deploying AI at scale for well over a decade, so has Recorded Future. They leverage AI-powered insights to analyze threat data from every corner of the Internet and customers gain real-time visibility and actionable insights to proactively reduce risks. We now have an even more robust set of powerful intelligence, identity, dispute, fraud and scam prevention solutions. Together, these uniquely differentiated technologies will enable us to create smarter models, distribute these capabilities more broadly and help our customers anticipate threats before cyber-attacks can take place. That means better protection for governments, businesses, banks and consumers, across the entire ecosystem and well beyond payment transactions. We’re also leveraging our distribution at scale to deepen market penetration of our services and solutions.

Meta Platforms (NASDAQ: META)

Meta’s management expects Meta AI to be the leading AI assistant in 2025, reaching more than 1 billion people; Meta AI is already the most-used AI assistant in the world with more than 700 million monthly actives; management believes Meta AI is at a scale that allows it to develop a durable long-term advantage; management has an exciting road map for Meta AI in 2025 that focuses on personalisation; management does not believe that there’s going to be only one big AI that is the same for everyone; there are some fun surprises for Meta AI in 2025 that management has up their sleeves; Meta AI can now remember certain details of people’s prior queries; management sees a few possible paths for Meta AI’s monetisation, but their focus right now is just on building a great user experience; WhatsApp has the strongest Meta AI usage, followed by Facebook; people are using Meta AI on WhatsApp for informational, educational, and emotional purposes 

 In AI, I expect that this is going to be the year when a highly intelligent and personalized AI assistant reaches more than 1 billion people, and I expect Meta AI to be that leading AI assistant. Meta AI is already used by more people than any other assistant. And once a service reaches that kind of scale, it usually develops a durable long-term advantage.

We have a really exciting road map for this year with a unique vision focused on personalization. We believe that people don’t all want to use the same AI. People want their AI to be personalized to their context, their interests, their personality, their culture, and how they think about the world. I don’t think that there’s just going to be one big AI that everyone uses that does the same thing. People are going to get to choose how their AI works and what it looks like for them. I continue to think that this is going to be one of the most transformative products that we’ve made, and we have some fun surprises that I think people are going to like this year…

… Meta AI usage continues to scale with more than 700 million monthly actives. We’re now introducing updates that will enable Meta AI to deliver more personalized and relevant responses by remembering certain details from people’s prior queries and considering what they engage with on Facebook and Instagram to develop better intuition for their interests and preferences…

…Our initial focus for Meta AI is really about building a great consumer experience, and that’s frankly, where all of our energies are kind of directed to right now. There will, I think, be pretty clear monetization opportunities over time, including paid recommendations and including a premium offering, but really not where we are focused in terms of the development of Meta AI today…

…WhatsApp continues to see the strongest Meta AI usage across our family of apps. People there are using it most frequently for information seeking and educational queries along with emotional support use cases. Most of the WhatsApp engagement is in one-on-one threads, though we see some usage in group messaging. And on Facebook, which is the second largest driver of Meta AI engagement, we’re seeing strong engagement from our feed deep dives integration that lets people ask Meta AI questions about the content that is recommended to them. So across, I would say, all query types, we continue to see signs that Meta AI is helping people leverage our apps for new use cases. We talked about information gathering, social interaction and communication. Lots of people use it for humor and casual conversation. They use it for writing and editing, research, recommendations.

Meta’s management thinks Llama will become the most advanced and widely-used AI model in 2025; Llama 4 is making great progress; Meta has a reasoning Llama model in the works; management’s goal for Llama 4 is for it be the leading AI model; Llama 4 is built to be multi-modal and agentic; management expects Llama 4 to unlock a lot of new use cases

I think this will very well be the year when Llama and open-source become the most advanced and widely used AI models as well. Llama 4 is making great progress in training. Llama 4 Mini is done with pretraining, and our reasoning models and larger model are looking good too.

Our goal with Llama 3 was to make open source competitive with closed models. And our goal for Llama 4 is to lead. Llama 4 will be natively multimodal. It’s an omni model, and it will have agentic capabilities. So it’s going to be novel, and it’s going to unlock a lot of new use cases.

Meta’s management thinks it will be possible in 2025 to build an AI engineering agent that is as capable as a human mid-level software engineer; management believes that the company that builds this AI engineering agent first will have a meaningful advantage in advancing AI research; Meta already has internal AI coding assistants, powered by Llama; management has no plan to release the AI engineer as an external product anytime soon, but sees the potential for it in the longer-term; management does not expect the AI engineer to be extremely widely deployed in 2025, with the dramatic changes happening in 2026 and beyond

I also expect that 2025 will be the year when it becomes possible to build an AI engineering agent that has coding and problem-solving abilities of around a good mid-level engineer. And this is going to be a profound milestone and potentially one of the most important innovations in history, as well as, over time, potentially a very large market. Whichever company builds this first, I think it’s going to have a meaningful advantage in deploying it to advance their AI research and shape the field…

…As part of our efficiency focus over the past 2 years, we’ve made significant improvements in our internal processes and developer tools and introduced new tools like our AI-powered coding assistant, which is helping our engineers write code more quickly. Looking forward, we expect that the continuous advancements in Llama’s coding capabilities will provide even greater leverage to our engineers, and we are focused on expanding its capabilities to not only assist our engineers in writing and reviewing our code but to also begin generating code changes to automate tool updates and improve the quality of our code base…

…And then the AI engineer piece, I’m really excited about it. I mean, I don’t know that that’s going to be an external product anytime soon. But I think for what we’re working on, our goal is to advance AI research and advance our own development internally. And I think it’s just going to be a very profound thing. So I mean that’s something that I think will show up through making our products better over time. But — and then as that works, there will potentially be a market opportunity down the road. But I mean, for now and this year, we’re really — I think this is — I don’t think you’re going to see this year like an AI engineer that is extremely widely deployed, changing all of development. I think this is going to be the year where that really starts to become possible and lays the groundwork for a much more dramatic change in ’26 and beyond.

The Ray-Ban Meta AI glasses are a big hit so far but management thinks 2025 will be the pivotal year to determine if the AI glasses can be on a path towards being the next computing platform and selling hundreds of millions, or more, units; management continues to think that glasses are the perfect form factor for AI; management is optimistic about AI glasses, but there’s still uncertainty about the long-term trajectory

Our Ray-Ban Meta AI glasses are a real hit. And this will be the year when we understand the trajectory for AI glasses as a category. Many breakout products in the history of consumer electronics have sold 5 million to 10 million units in their third generation. This will be a defining year that determines if we’re on a path towards many hundreds of millions and eventually billions of AI glasses and glasses being the next computing platform like we’ve been talking about for some time or if this is just going to be a longer grind. But it’s great overall to see people recognizing that these glasses are the perfect form factor for AI as well as just great stylish glasses…

…There are a lot of people in the world who have glasses. It’s kind of hard for me to imagine that a decade or more from now, all the glasses aren’t going to basically be AI glasses as well as a lot of people who don’t wear glasses today, finding that to be a useful thing. So I’m incredibly optimistic about this…

…But look, the Ray-Ban Metas were hit. We still don’t know what the long-term trajectory for this is going to be. And I think we’re going to learn a lot this year. 

Meta will bring ~1 gigawatt of AI data center capacity online in 2025 and is building an AI data center that is at least 2 gigawatts in capacity; management intends to fund the investments through revenue growth that is driven by its AI advances; most of Meta’s new headcount growth will go towards its AI infrastructure and AI advances; management expects compute will be very important for the opportunities they want Meta to pursue; management is simultaneously growing Meta’s capacity and increasing the efficiency of its workloads; Meta is increasing the useful lives of its non-AI and AI servers to 5.5 years (from 4-5 years previously), which will lead to lower depreciation expenses per year; Meta started deploying its own MTIA (Meta Training and Inference Accelerator) AI chips in 2024 for inference workloads; management expects to ramp up MTIA usage for inference in 2025 and for training workloads in 2026; management will continue to buy third-party AI chips (likely referring to NVIDIA), but wants to use in-house chips for unique workloads; management thinks MTIA helps Meta achieve greater compute efficiency and performance per cost and power; management has been thinking about the balance of compute used in pre-training versus inference, but this does not mean that Meta will need less compute; management thinks that inference-time compute (or test-time compute) scaling will help Meta deliver a higher quality of service and that Meta has a strong business model to support the delivery of inference-time compute scaling; management believes that investing heavily in AI infrastructure is still going to be a strategic advantage over time, but it’s possible the reverse may be true in the future; management thinks it’s too early to tell what the long-run capital intensity will look like

I announced last week that we expect to bring online almost a gigawatt of capacity this year. And we’re building a 2 gigawatt and potentially bigger AI data center that is so big that it would cover a significant part of Manhattan if it were placed there. We’re planning to fund all of this by, at the same time, investing aggressively in initiatives that use these AI advances to increase revenue growth…

…That’s what a lot of our new headcount growth is going towards and how well we execute on this will also determine our financial trajectory over the next few years…

…We expect compute will be central to many of the opportunities we’re pursuing as we advance the capabilities of Llama, drive increased usage of generative AI products and features across our platform and fuel core ads and organic engagement initiatives. We’re working to meet the growing capacity needs for these services by both scaling our infrastructure footprint and increasing the efficiency of our workloads…

…Our expectation going forward is that we’ll be able to use both our non-AI and AI [indiscernible] servers for a longer period of time before replacing them, which we estimate will be approximately 5.5 years. This will deliver savings in annual CapEx and resulting depreciation expense, which is already included in our guidance.

Finally, we’re pursuing cost efficiencies by deploying our custom MTIA silicon in areas where we can achieve a lower cost of compute by optimizing the chip to our unique workloads. In 2024, we started deploying MTIA to our ranking and recommendation inference workloads for ads and organic content. We expect to further ramp adoption of MTIA for these use cases throughout 2025, before extending our custom silicon efforts to training workloads for ranking and recommendations next year…

…We expect that we are continuing to purchase third-party silicon from leading providers in the industry. And we are certainly committed to those long-standing partnerships, but we’re also very invested in developing our own custom silicon for unique workloads, where off-the-shelf silicon isn’t necessarily optimal and specifically because we’re able to optimize the full stack to achieve greater compute efficiency and performance per cost and power because our workloads might require a different mix of memory versus network, bandwidth versus compute and so we can optimize that really to the specific needs of our different types of workloads.

Right now, the in-house MTIA program is focused on supporting our core ranking and recommendation inference workloads. We started adopting MTIA in the first half of 2024 for core ranking and recommendations in [indiscernible]. We’ll continue ramping adoption for those workloads over the course of 2025 as we use it for both incremental capacity and to replace some GPU-based servers when they reach the end of their useful lives. Next year, we’re hoping to expand MTIA to support some of our core AI training workloads and over time, some of our Gen AI use cases…

…There’s already sort of a debate around how much of the compute infrastructure that we’re using is going to go towards pretraining versus, as you get more of these reasoning models where you get more of the intelligence by putting more of the compute into inference, whether that will shift the mix of how we use our compute infrastructure. That was already something that I think a lot of the other labs and ourselves were starting to think more about, and it already seemed pretty likely, even before this, that of all the compute that we’re using, the largest pieces aren’t necessarily going to go towards pre-training. But that doesn’t mean that you need less compute because one of the new properties that’s emerged is the ability to apply more compute at inference time in order to generate a higher level of intelligence and a higher quality of service, which means that as a company that has a strong business model to support this, I think that’s generally an advantage that we’re now going to be able to provide a higher quality of service than others who don’t necessarily have the business model to support it on a sustainable basis…

…I continue to think that investing very heavily in CapEx and infra is going to be a strategic advantage over time. It’s possible that we’ll learn otherwise at some point, but I just think it’s way too early to call that…

…I think it is really too early to determine what long-run capital intensity is going to look like. There are so many different factors. The pace of advancement in underlying models, how efficient can they be? What is the adoption and use case of our Gen AI products, what performance gains come from next-generation hardware innovations, both our own and third party and then ultimately, what monetization or other efficiency gains our AI investments unlock. 
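
On the inference-time compute point above, a simple way to see how spending more compute per query can buy a higher quality of service is best-of-N sampling: make N model calls and keep the answer a scorer ranks highest. Quality tends to rise with N, and so does the compute bill, which is the trade-off management describes. A minimal sketch with stand-in functions (an illustration of the general technique, not Meta’s method):

```python
# Best-of-N sampling: trade N x inference compute for answer quality.
# "generate" and "score" are stand-ins for a real model and evaluator.
import random
from typing import Callable

def best_of_n(generate: Callable[[str], str],
              score: Callable[[str, str], float],
              prompt: str, n: int) -> str:
    candidates = [generate(prompt) for _ in range(n)]  # n x inference cost
    return max(candidates, key=lambda c: score(prompt, c))

# Demo with toy stand-ins: higher trailing digit pretends to mean "better".
answer = best_of_n(
    generate=lambda p: f"draft-{random.randint(0, 9)}",
    score=lambda p, c: int(c.split("-")[1]),
    prompt="example query",
    n=8)
print(answer)
```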

In 2024 H2, Meta introduced a new machine learning system for ads ranking, in partnership with Nvidia, named Andromeda; Andromeda has enabled a 10,000x increase in the complexity of AI models Meta uses for ads retrieval, driving an 8% increase in quality of ads that people see; Andromeda can process large volumes of ads and positions Meta well for a future where advertisers use the company’s generative AI tools to create and test more ads

In the second half of 2024, we introduced an innovative new machine learning system in partnership with NVIDIA called Andromeda. This more efficient system enabled a 10,000x increase in the complexity of models we use for ads retrieval, which is the part of the ranking process where we narrow down a pool of tens of millions of ads to the few thousand we consider showing someone. The increase in model complexity is enabling us to run far more sophisticated prediction models to better personalize which ads we show someone. This has driven an 8% increase in the quality of ads that people see on objectives we’ve tested. Andromeda’s ability to efficiently process larger volumes of ads also positions us well for the future as advertisers use our generative AI tools to create and test more ads.
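
The retrieval stage described here, narrowing tens of millions of ads to a few thousand before heavier scoring, is a classic two-stage funnel. The sketch below is schematic only, not Andromeda itself; the corpus size, embeddings and the “heavy” scorer are all stand-ins:

```python
# Schematic two-stage ads funnel (illustrative, not Meta's Andromeda):
# a cheap similarity model scores every ad, then a heavier model
# re-scores only the surviving candidates.
import numpy as np

rng = np.random.default_rng(0)
NUM_ADS, DIM = 200_000, 64              # stand-in for "tens of millions"
ad_embeddings = rng.standard_normal((NUM_ADS, DIM)).astype(np.float32)

def retrieve(user_emb: np.ndarray, k: int = 2_000) -> np.ndarray:
    # Stage 1: one matrix-vector product cheaply scores every ad.
    scores = ad_embeddings @ user_emb
    return np.argpartition(scores, -k)[-k:]        # top-k candidate ids

def rank(candidates: np.ndarray, user_emb: np.ndarray, k: int = 10) -> np.ndarray:
    # Stage 2: a (pretend) heavier model re-scores only the candidates.
    heavy = np.tanh(ad_embeddings[candidates] @ user_emb)
    return candidates[np.argsort(heavy)[-k:][::-1]]

user = rng.standard_normal(DIM).astype(np.float32)
print(rank(retrieve(user), user))                  # the few ads to show
```

The economics follow from the funnel’s shape: making stage 1 far more capable (the 10,000x model-complexity jump cited above) changes which few thousand ads the expensive ranker ever sees.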

Advantage+ has surpassed a $20 billion annual revenue run rate and grew 70% year-on-year in 2024 Q4; Advantage+ will now be turned on by default for all campaigns that optimise for sales, app, or lead objectives; more than 4 million advertisers are now using at least one of Advantage+’s generative AI ad creative tools, up from 1 million six months ago; Meta’s first video generation tool, released in October, already has hundreds of thousands of advertisers using it monthly

 Adoption of Advantage+ shopping campaigns continues to scale with revenues surpassing a $20 billion annual run rate and growing 70% year-over-year in Q4. Given the strong performance and interest we’re seeing in Advantage+ shopping and our other end-to-end solutions, we’re testing a new streamlined campaign creation flow. So advertisers no longer need to choose between running a manual or Advantage+ sales or app campaign. In this new setup, all campaigns optimizing for sales, app or lead objectives will have Advantage+ turned on from the beginning. This will allow more advertisers to take advantage of the performance Advantage+ offers while still having the ability to further customize aspects of their campaigns when they need to. We plan to expand to more advertisers in the coming months before fully rolling it out later in the year.

Advantage+ Creative is another area where we’re seeing momentum. More than 4 million advertisers are now using at least one of our generative AI ad creative tools, up from 1 million six months ago. There has been significant early adoption of our first video generation tool that we rolled out in October, Image Animation, with hundreds of thousands of advertisers already using it monthly.

Meta’s management thinks the emergence of DeepSeek makes it even more likely for a global open source standard for AI models to develop; the presence of DeepSeek also makes management think it’s important that the open source standard be made in America and that it’s even more important for Meta to focus on building open source AI models; Meta is learning from DeepSeek’s innovations in building AI models; management currently does not have a strong opinion on how Meta’s capex plans for AI infrastructure will change because of the recent news with DeepSeek

I also just think in light of some of the recent news, the new competitor DeepSeek from China, I think it also just puts — it’s one of the things that we’re talking about is there’s going to be an open source standard globally. And I think for our kind of national advantage, it’s important that it’s an American standard. So we take that seriously, and we want to build the AI system that people around the world are using and I think that if anything, some of the recent news has only strengthened our conviction that this is the right thing for us to be focused on…

…I can start on the DeepSeek question. I think there’s a number of novel things that they did that I think we’re still digesting. And there are a number of things that they have advances that we will hope to implement in our systems. And that’s part of the nature of how this works, whether it’s a Chinese competitor or not…

…It’s probably too early to really have a strong opinion on what this means for the trajectory around infrastructure and CapEx and things like that. There are a bunch of trends that are happening here all at once.

Meta’s capex in 2025 is going to grow across servers, data centers, and networking; within each of servers, data centers, and networking, management expects growth in both AI and non-AI capex; management expects most of the AI-related capex in 2025 to be directed specifically towards Meta’s core AI infrastructure, but the infrastructure Meta is building can support both AI and non-AI workloads, and the GPU servers purchased can be used for both generative AI and core AI purposes

[Question] As we think about the $60 billion to $65 billion CapEx this year, does the composition change much from last year when you talked about servers as the largest part followed by data centers and networking equipment? And how should we think about that mix between, like, training and inference?

[Answer] We certainly expect that 2025 CapEx is going to grow across all 3 of those components you described.

Servers will be the biggest growth driver that remains the largest portion of our overall CapEx budget. We expect both growth in AI capacity as we support our gen AI efforts and continue to invest meaningfully in core AI, but we are also expecting growth in non-AI capacity as we invest in the core business, including to support a higher base of engagement and to refresh our existing servers.

On the data center side, we’re anticipating higher data center spend in 2025 to be driven by build-outs of our large training clusters and our higher power density data centers that are entering the core construction phase. We’re expecting to use that capacity primarily for core AI and non-AI use cases.

On the networking side, we expect networking spend to grow in ’25 as we build higher-capacity networks to accommodate the growth in non-AI and core AI-related traffic along with our large Gen AI training clusters. We’re also investing in fiber to handle future cross-region training traffic.

And then in terms of the breakdown for core versus Gen AI use cases, we’re expecting total infrastructure spend within each of Gen AI, non-AI and core AI to increase in ’25, with the majority of our CapEx directed to our core business, with the caveat that that’s not easy to measure perfectly, as the data centers we’re building can support AI or non-AI workloads and the GPU-based servers we procure for Gen AI can be repurposed for core AI use cases, and so on and so forth.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, Apple, ASML, Coupang, Datadog, Fiverr, Mastercard, and Meta Platforms. Holdings are subject to change at any time.