Last month, I published The Latest Thoughts From American Technology Companies On AI (2025 Q1). In it, I shared commentary from the first-quarter 2025 earnings conference calls of the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large.
A few more technology companies I’m watching hosted earnings conference calls for 2025’s first quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:
Here they are, in no particular order:
Adobe (NASDAQ: ADBE)
Adobe’s management sees the Firefly App as a place for creative professionals to generate images, video, audio and vectors from a single place with unmatched creative control; the Firefly app also supports 3rd-party models from Google, OpenAI, and others, with more coming soon; Firefly is attracting new customers for Adobe, with first-time subscribers up 30% sequentially in 2025 Q1 (FY2025 Q2); management recently rolled out new Firefly offerings such as (1) the Firefly Image Model 4 for life-like images, (2) the Firefly Image Model 4 Ultra for impeccable detail, and (3) the Firefly Video Model; users of Firefly can now collaborate with other users through Firefly Boards; management is monetising Firefly through new Firefly App subscription plans (ranging from US$10 per month to US$200 per month) and the Creative Cloud Pro plan; traffic to the Firefly App was up 30% sequentially in 2025 Q1 (FY2025 Q2); paid subscriptions to the Firefly App nearly doubled sequentially in 2025 Q1 (FY2025 Q2); Firefly has powered 24 billion generations (20 billion in 2024 Q4) since its launch in March 2023; management believes that the only commercially safe way to build AI models is to do so with content whose creators are willing participants in the process, and this is how Firefly was trained; companies are choosing Firefly because of its commercial safety; management thinks Firefly will be the ultimate creative destination because even if it’s used only for ideation, users will want intellectual property that is safe for production; management sees Creative Cloud Pro (CC Pro) as the place where Adobe’s AI and generative capabilities will increasingly be best available
The Firefly App is a new destination for AI-assisted content ideation, creation and production with Adobe’s comprehensive family of commercially safe Firefly creative models and an expansive ecosystem of third-party models. Firefly empowers creative professionals to generate images, video, audio and vectors from a single place with unmatched creative control, iterate on their creations through Adobe’s creative apps and seamlessly deliver them into production. Our support for third-party models, including from Google, OpenAI and Black Forest Labs gives creators the flexibility to choose the AI that works best for them, with Firefly upholding our standards for IP safety and transparency…
…The Firefly App is attracting new users to the Adobe franchise with first-time subscribers growing 30% quarter-over-quarter…
…Earlier this quarter, we launched the new Firefly Image Model 4 for life-like images and the Firefly Image Model 4 Ultra for impeccable detail in complex visuals. We also made the Firefly Video Model generally available for the first time, empowering creators to generate 4K footage from text prompts and images with unprecedented creative control and extend video clips in our tools like Premiere Pro…
…In addition to supporting our own Firefly Models, the Firefly App now supports a growing family of third-party models for creative ideation. Firefly offers the flexibility to explore the diverse aesthetic styles of Google’s Imagen and Veo models, OpenAI’s GPT-image model and Black Forest Labs’ Flux image model, with Runway, Ideogram, Fal.ai, Luma and Pika coming soon. With the release of the Firefly Boards public beta earlier this quarter, creators can now ideate and collaborate when generating content with Firefly and our third-party models.
To monetize this incredible innovation, we have introduced a comprehensive set of offerings aimed at new and existing creators and creative professionals across all routes to market. The new Firefly App subscription plans are ideal for creators starting their creative journey and are now globally available. Creative Cloud Pro, which combines Creative Cloud All Apps and the Firefly App represents the best value for content creation and is now available in North America. Creative Cloud Pro will be released in other geographies over the next few months…
…Traffic to the Firefly App grew over 30 percent quarter over quarter and paid subscriptions nearly doubled in the same period…
…Excitement for and adoption of generative AI innovation, such as Generative Fill in Photoshop, Generative Remove in Lightroom, Generative Expand in Illustrator, Generative Extend in Premiere Pro, video generation in the Firefly App and production workflows in Firefly Services, continues to accelerate with over 24 billion cumulative generations exiting Q2…
…One of the core things that we believe from the very beginning is that the right transparent and really the only commercially safe way to build these models is to do it on a set of content that — where the contributors are themselves excited and willing participants in the process. And so we have trained our Firefly models, as many of you know, on Stock and other content that we have access to. We do have a contributor fund that pays out to those individuals. And as a result, we feel like we’re in a very advantaged position when it comes to people choosing models. I’ll say, especially in enterprises, we see a lot of companies selecting Firefly partially because of the quality, partially because of the controllability of it but also very, very strongly because of the commercial safety of it…
…I think Firefly, with the support for all of those models, will be the ultimate creative destination. And to, I think, punctuate what David said, in the enterprise, the value proposition that we have resonates because even if you use it for ideation, you’re not going to use something that isn’t designed so that the intellectual property is correct for production…
…We can meet the needs of creators with the new Firefly plans that we’ve released recently, whether it’s Firefly Standard for, in the U.S., $10; Firefly Pro for $20; or Firefly Premium, which is unlimited access to video generation as well, for $200 a month…
…All of the AI and generative capability will increasingly be best available for our customers through the CC Pro application.
Adobe’s management sees marketing professionals being required to create huge amounts of personalised content and this is where Adobe’s AI-powered vertical solutions can help; management is seeing increasing demand from customers for personalisation capabilities in Adobe’s Digital Experience suite of products
Marketing professionals need to create an unprecedented volume of compelling content and optimize it to deliver personalized digital experiences across channels, including mobile apps, e-mail, websites, social media and advertising platforms. They’re looking for agility and self-service as well as integrated workflows with their creative teams and agencies. To achieve this, enterprises require custom commercially safe models and purpose-built agents tailored to address the inefficiencies of the content supply chain. Marketing practitioners, Chief Marketing Officers and Chief Digital Officers need solutions that enable them to acquire, engage and delight customers across a variety of channels and geographies.
Adobe’s strategy is to deliver a comprehensive marketing technology platform leveraging AI to offer vertical solutions that integrate content, customer data and profiles across journeys in both B2B and B2C industries. Adobe GenStudio and Firefly Services are revolutionizing the content supply chain across enterprises, empowering marketers to activate personalized on-brand content across millions of touch points. For marketing professionals, Adobe Experience Platform and apps and purpose-built agents are redefining the future of customer connection by enabling real-time orchestration of content, data and journeys…
… At our scale, the bigger metric that we track is in DX. Let’s talk about DX. And in DX, how much of this technology that we have been delivering is being adopted? What is the scale at which we’re driving, whether it’s campaigns, whether it’s engagement through e-mail or SMS, the amount of transactions that are going through AEP and apps? And all of that is because the agility of marketing and the ability to personalize these experiences with customers is dramatically increasing. So that’s one underlying trend that we clearly see. And the demand for that is only increasing and not decreasing.
Adobe’s AI-influenced ARR (annual recurring revenue) is in the billions; the book of business from Adobe’s AI-first products is tracking ahead of management’s target of $250 million in ending ARR by end-FY2025; management thinks Adobe is still very early in AI monetisation and feels good about it
While our AI influenced ARR is already contributing billions of dollars, our AI book of business from AI-first products, such as Acrobat AI assistant, Firefly App and Services and GenStudio for Performance Marketing is tracking ahead of the $250 million ending ARR target by the end of fiscal 2025…
…It’s very early in terms of the AI monetization, but we’re very advanced in terms of how much innovation we’ve delivered. And so it feels really good right now.
Adobe’s management thinks the infusion of conversational experiences in Adobe Acrobat and generative AI models in Express is allowing users to combine the 2 products in novel ways; Adobe’s Acrobat and Express products have combined monthly active users of more than 700 million, up more than 25% year-on-year; Express capabilities within Acrobat saw adoption grow 3x sequentially and 11x year-on-year in 2025 Q1 (FY2025 Q2); the number of students gaining access to Acrobat AI Assistant and/or Express premium plans grew over 75% year-on-year; Acrobat AI Assistant and Express added over 35,000 new businesses in 2025 Q1 (FY2025 Q2), with Express adding around 8,000; monthly active users (MAUs) of Acrobat’s AI Assistant and Express’s generative AI features grew 3x year-on-year in 2025 Q1 (FY2025 Q2); Acrobat AI Assistant saw the number of questions asked nearly double sequentially in 2025 Q1 (FY2025 Q2)
Our investments in conversational experiences in Acrobat and generative AI models in Express allow users to combine the 2 products in novel ways that empower users to accelerate their time to insight and ability to create compelling presentations. Sales professionals can gather industry reports on a prospect, use AI Assistant to quickly identify effective sales conversations and automatically generate a pitch deck with Express. A social media marketer can ask AI Assistant for help identifying buying behaviors in market research documents and use that information to create better TikTok videos in Express…
…We’re seeing steady growth across our family of Acrobat and Express products with combined monthly active user growth accelerating to over 25% year-over-year and crossing 700 million monthly active users as Acrobat users increasingly rely on Acrobat AI Assistant to enhance content consumption and Express to create richer PDFs, customized presentations and animated designs. Due to increasing customer demand for creative functionality through Acrobat, we saw an approximately 3x quarter-over-quarter and approximately 11x year-over-year increase in the adoption of Express capabilities within Acrobat…
…With students, we’re driving over 75% year-over-year increase in students gaining access to Acrobat AI Assistant and/or Express premium plans. These products are also seeing strong adoption by businesses with over 35,000 new businesses added in Q2. Express alone added around 8,000 new businesses this quarter, approximately 6x growth year-over-year including companies such as Microsoft, ServiceNow, Workday, Intuit and top sports leagues like MLB, the NFL and Premier League…
…Use of generative AI features continues to grow quickly with AI Assistant MAU in Acrobat and generative AI MAU in Express growing over 3x year over year; Acrobat AI Assistant engagement continues to accelerate with the number of questions asked nearly doubling quarter over quarter…
Adobe’s management has launched GenStudio Foundation to provide visibility and actionable insights into campaign plans, projects and assets; Adobe has GenStudio for Performance Marketing for users to create on-brand content for websites and social media; GenStudio for Performance Marketing grew over 45% sequentially in 2025 Q1 (FY2025 Q2); management thinks Adobe can work well with Meta even though Meta is increasing usage of AI to automate advertising creation
We launched GenStudio Foundation, a unified interface to bring together data from our full suite of content supply chain applications providing visibility and actionable insights into campaign plans, projects and assets. GenStudio for Performance Marketing empowers teams to create their own on-brand content, supporting ad creation and activation for Google, LinkedIn, Meta, Microsoft, Snap and TikTok…
…Momentum for GenStudio for Performance Marketing with growth of over 45 percent quarter over quarter…
…[Question] With respect to outside of your traditional competitive environment, maybe just coopetition with vendors like Meta where it — at least it’s a little harder for some investors to understand given their increasing usage of AI to automate kind of ad creation and campaign optimization. To what extent does that overlap versus partner with some of the GenStudio offerings?
[Answer] In terms of the ad platforms, obviously, their primary goal is to grow the ad revenue. The best way to do that is to make sure that the creative is optimized and the ROI from the advertisers’ perspective, is clear to the advertisers, which is where our marketing stack and everything that we’re doing around GenStudio for Performance Marketing comes together really well.
Adobe’s management is seeing high enterprise demand for and adoption of Firefly Services and Custom Models for marketing use cases; solutions that customers desire from Firefly Services and Custom Models include video reframe and support for 3rd-party models; Adobe collaborated with Coca-Cola to develop the AI-powered Project Fizzion on Firefly Services and Custom Models; Project Fizzion can scale creative output up to 10x faster while reducing misinterpretation of brand guidelines in AI content; Firefly Services and Custom Models within the GenStudio solution had 4x year-on-year growth in ARR (annual recurring revenue) in 2025 Q1 (FY2025 Q2)
We’re seeing high enterprise demand for and adoption of Firefly Services and Custom Models to automate and scale on-brand content production for marketing use cases…
…We are building on the momentum behind Firefly Services and Custom Models, addressing additional highly desired solutions, including video reframe and support of third-party models for automation and cost efficiency.
With The Coca-Cola Company, we co-developed a new AI-powered design intelligence system called Project Fizzion, built on Firefly Services and Custom Models. Project Fizzion is designed to scale creative output up to 10x faster while tackling the common challenge of misinterpreting brand guidelines in AI-powered content…
…Continued demand for Firefly Services and Custom Models as part of the GenStudio solution, resulting in 4x year-over-year ARR growth.
The Adobe Experience Platform (AEP) has the AEP AI Assistant that allows users to interact with data through natural language; management has introduced native AI agents into AEP that can orchestrate customer journeys in real time; the NFL (National Football League) in the USA is using AEP to enable all 32 clubs in the league to scale personalized fan touch points across different channels; management recently introduced 11 AI agents, including the most recent Product Support Agent, that help Adobe’s customers improve their own customers’ experience; the AI agents leverage the Adobe Experience Platform; companies such as Wegmans Food Markets and dentsu Merkle are already using Product Support Agent; AEP’s subscription revenue grew over 40% year-on-year in 2025 Q1 (FY2025 Q2)
Adobe Experience Platform and native applications are central to delivering unified, personalized customer experiences. With the introduction of AEP AI Assistant, we’ve extended the platform’s value by enabling teams across the business to interact with data through natural language, streamlining ingestion, insight generation, audience segmentation and experience delivery. Building on this momentum, we are now expanding AEP with native AI agents that intelligently orchestrate customer journeys in real time. These innovations empower our customers to leverage their first-party customer data and deliver more relevant high-impact advertising experiences rooted in direct customer relationships.
The National Football League expanded our global partnership combining content data and journeys to deliver a new level of AI-powered fan experiences. Adobe will enable all 32 clubs to scale personalized fan touch points across NFL channels through project management, audience and campaign development, creative production and performance optimization.
At Adobe Summit in March, we introduced the Adobe AI platform with an agentic layer to scale Customer Experience Orchestration. We unveiled 10 agents purpose built for creative, marketing and technology teams that leverage Adobe Experience Platform to act intelligently and in alignment with business goals. These agents coordinate across systems to accelerate the delivery of exceptional experiences. We recently launched a Product Support Agent to help enterprises anticipate, troubleshoot and resolve operational issues.
Customers like Wegmans Food Markets and dentsu Merkle are already using it to streamline onboarding and feature deployment and drive faster resolutions and greater efficiency…
…Strong demand for AEP and native apps, with Q2 subscription revenue growing over 40 percent year over year.
Adobe’s core creative business subscription revenue has been accelerating over the past few quarters, driven by AI features
In terms of the pricing part of that equation, we talked about the increased value that we have in Creative Cloud Pro. That gives us some opportunity to match the value we’re providing with the pricing, and then in terms of the value is around Firefly Services and GenStudio. So that’s really the growth algorithm. The thing to note is that as we go down this path, some of this will take some time to play out because we have — for the quantity side, we have premium and lower-priced offers. But we’re starting to see the early signs of that. And if you do the math — and I’ll maybe turn it over to Dan. If you do the math, our core creative business subscription revenue has been accelerating over the past few quarters…
…If you take a look at the supplemental disclosure that we provided between the subscription revenue for creative and marketing professionals, the subscription revenue for DX, you can pretty quickly derive what the subscription revenue is for the Creative and Creative Pro audience that we serve. And I think what you’ll see is, in the current quarter, it grew 10.1% year-over-year, which is up from 10% in Q1. And when you think about the acceleration over the last 4 or 5 quarters, in the year ago period, that same 10.1% would have been about 7.9%, so just over 2% acceleration over the last 4 quarters.
Adobe’s management is not looking to increase Adobe’s headcount dramatically because employees are using AI to become more efficient
We’re not really looking to grow our head count very dramatically. We are finding a lot more efficiency. People are using AI to be more efficient within the enterprise.
Meituan (OTC: MPNGY)
Meituan’s management sees 3 layers in AI, which are infrastructure, products, and work; Meituan has made good progress in 2025 Q1 in all 3 AI layers
When we talk about AI, I think there are at least 3 layers: the AI infrastructure, AI in products and AI at work. So that’s how we view AI. And this quarter, we iterated our foundation large language model, and we have launched new AI applications and services for external users. At the same time, we also enhanced the suite of employee productivity tools to boost our own efficiency and improve the work experience. So it’s fair to say we have made good progress on all 3 fronts.
Meituan continued tweaking its foundational LLM (large language model) in 2025 Q1; Meituan launched new AI applications and services for external users in 2025 Q1; Meituan’s in-house large language model, named LongCat, can now seamlessly switch between reasoning and non-reasoning modes; LongCat’s performance in both reasoning and non-reasoning modes is on par with China’s leading models; Meituan updated its voice interaction model, LongCat F, in 2025 Q1 and its performance now closely approaches OpenAI’s GPT-4o
On AI infrastructure, we continue to increase our investment in large language models, allocating resources not only to infrastructure CapEx but also to recruiting top-tier AI talent, to ensure our foundational large language model is among the best tier in China. During this quarter, we made continuous upgrades to our LongCat large language model. The enhanced model can now seamlessly switch between reasoning and non-reasoning modes, with the performance in both modes reaching the caliber of China’s leading models. We have also updated our end-to-end voice interaction model, LongCat F. This updated model demonstrates advanced capabilities in understanding nuanced information, including emotion and contextual environments, and engaging in natural voice conversation. Its performance closely approaches that of GPT-4o.
Meituan will soon launch an AI-powered business assistant for the food service industry; the food service industry’s AI business assistant will help with dish selection, new store location selection, menu development, and store operations
Next month in June, we plan to launch Kangaroo [foreign language]. It will be an AI-powered business decision assistant for the food service industry, acting as an intelligent operational assistant for food service merchants and industry professionals and covering 4 key scenarios: dish selection, new store location selection, menu development and store operations.
A key priority in Meituan’s AI initiatives is to use AI to enhance employee productivity and the workplace experience; about 52% of new code in Meituan is generated by AI, with over 90% of team members in some teams using AI coding tools intensively; management’s goal is to gradually achieve 100% adoption of AI coding tools across all engineers; Meituan has a no-code platform that is widely adopted internally, with 62% of product managers and 28% of business analysts using it; management has launched the no-code platform for public users free of charge; public users have created 9,410 applications with the no-code platform, with more than 1,600 of them published and used actively
We view developing internal AI tools as AI at work. We want to use AI to enhance employee productivity and the workplace experience, and that remains a key priority in our AI initiatives. In the last quarter, we continued to improve AI coding capabilities for engineers and actively promoted internal adoption of AI coding. Currently, about 52% of new code in our company is generated by AI, and in some R&D teams, over 90% of the team members use AI coding tools intensively. Our goal is to gradually achieve 100% adoption across all engineers.
And we have our own no-code platform for all employees, and it has been widely adopted internally. The no-code platform allows users to quickly generate applications through natural language dialogue without requiring prior coding experience. No-code is now used by all professional roles within our company, including product managers, user experience designers, business analysts, HR and finance staff. They leverage no-code for creating product prototypes, interactive pages and efficiency tools, with 62% of product managers and 28% of business analysts using the no-code platform internally. Last week, we launched the no-code platform for public users free of charge. The URL is nocode.cn, and users can bring various creative ideas to life without any coding skill…
…On nocode.cn, users have created 9,410 applications, with more than 1,600 of them published and in active use.
MongoDB (NASDAQ: MDB)
MongoDB’s management thinks the company’s document model database more accurately reflects the messiness of real-world data and provides customers with greater flexibility, faster time to market and the ability to scale without re-architecting; management thinks MongoDB is exceptionally well-positioned as AI changes application-development and business operations, because AI applications require unstructured data; management sees MongoDB as having 3 things that modern AI applications need – which are (1) real-time data, (2) powerful search, and (3) smart retrieval – all in 1 platform; management thinks MongoDB’s integration of embeddings, text search, vector search, and operational data is a unique differentiator for developers when building AI applications
MongoDB’s document model and the associated platform enables developers to more easily represent the messiness of real-world data, which includes understanding relationships between structured and unstructured data and managing data that is constantly evolving and changing. This fundamental architectural advantage provides customers greater flexibility, faster time to market and the ability to scale without re-architecting…
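To make the document-model argument concrete, here is a small sketch of my own (not from MongoDB or the call, and the field names are invented for illustration): a single nested JSON-style document holds data that a relational schema would typically split across several joined tables, and the schema can evolve without re-architecting.

```python
# One nested document captures a customer, their contacts and their orders,
# which a relational design would spread across three or more tables.
customer = {
    "name": "Acme Corp",
    "contacts": [  # a one-to-many relationship, embedded in place
        {"name": "Ada", "role": "billing"},
        {"name": "Lin", "role": "technical"},
    ],
    "orders": [
        {"id": 1001, "items": [{"sku": "A-1", "qty": 3}], "status": "shipped"},
    ],
}

# The schema evolves by simply adding fields to new documents,
# with no migration of existing data.
customer["preferences"] = {"marketing_emails": False}

def order_statuses(doc):
    # Everything about the customer is available in one read, no joins needed.
    return [order["status"] for order in doc["orders"]]
```

The point of the sketch is the shape of the data, not any particular database API: constantly changing, deeply nested real-world records fit a document natively, whereas a tabular design must anticipate every relationship up front.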
…As AI redefines how applications are built and how businesses operate, MongoDB is exceptionally well positioned. Real-world AI applications require high-quality, context-rich and often unstructured data to deliver trustworthy outputs…
…MongoDB now brings together 3 things that modern AI-powered applications need: Real-time data, powerful search and smart retrieval. By combining these into one platform, we make it dramatically easier for developers to build intelligent, responsive apps without stitching together multiple systems…
…We have best-in-class Voyage embeddings to improve the accuracy of these results to help people get comfortable with using AI. And by integrating text search, vector search and embeddings and operational data, that’s a unique differentiator. It makes the developer’s life easy, reduces cost and complexity. And so we feel we’re well positioned for this, but it’s still early as most enterprises are still early in the adoption of AI.
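The "one platform instead of stitched-together systems" claim can be illustrated with a toy, self-contained sketch (mine, not MongoDB code; the documents and 2-dimensional vectors are made up): a single query filters on live operational fields and then ranks candidates by embedding similarity, the pattern that would otherwise require a database plus a separate vector store.

```python
import math

# Each record carries both operational fields ("status") and an embedding
# vector, so one store can serve both filtering and semantic ranking.
docs = [
    {"text": "refund policy", "status": "published", "vec": [1.0, 0.0]},
    {"text": "shipping times", "status": "published", "vec": [0.0, 1.0]},
    {"text": "draft notes", "status": "draft", "vec": [0.9, 0.1]},
]

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query_vec, status):
    # Operational filter first (the "real-time data"), then vector ranking
    # (the "smart retrieval") in one pass over one dataset.
    live = [d for d in docs if d["status"] == status]
    return sorted(live, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)

top = search([1.0, 0.1], "published")[0]["text"]
```

In production this corresponds to running the filter and the vector ranking inside the database itself rather than shuttling candidates between separate systems, which is the cost-and-complexity reduction management describes.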
MongoDB’s management sees competitors retrofitting JSON and vector support on existing relational (or tabular) databases, but the retrofits fail in production for AI, unlike MongoDB’s approach of being a native JSON and document-model database; management thinks that the fact that the retrofitting is happening indicates that tabular architecture databases do not suit AI applications; management thinks that recent Postgres acquisitions made by Databricks and Snowflake show that OLTP (online transaction processing) or operational data stores are the strategic high ground for AI applications, and they are where AI inference happens; management thinks inference is the big market for AI applications; management thinks the acquisitions by Databricks and Snowflake show that it is really hard to build an OLTP datastore; management thinks the acquisitions by Databricks and Snowflake are not a big deal; management thinks that both relational databases and document databases can win; management sees the popularity of Postgres as a function of the consolidation of the SQL database market; management thinks that comparing MongoDB purely with Postgres is incomplete; the fairer comparison is MongoDB against Postgres plus many other services
In their desire to keep up with evolving customer needs, some vendors are retrofitting their products such as adding JSON or Vector support as afterthoughts, which are superficial and brittle. This is a passive admission that MongoDB’s approach of using JSON and the document model is the best way to model real-world data. These features may check the box, but they fall apart in production, leading to performance bottlenecks, operational headaches and spiraling infrastructure costs. Fundamentally, these vendors are constrained by their relational underpinnings. It’s important to understand that superficial compatibility with modern data types is not the same as deeply integrated production-grade functionality. MongoDB, by contrast, was purpose-built to address these needs natively…
…[Question] If you look this week at — we saw Snowflake kind of moved and — make the move towards Postgres. We saw Databricks kind of doing something there. Can you kind of frame that?
[Answer] I think the moves by both Databricks and Snowflake, I think, validate one thing that OLTP or the operational data store is the strategic high ground, especially for AI. That’s where inference happens. Inference is the big market. That’s where everyone wants to go, and you need to have an operational data store to do that. And I think the other thing it points out is building organically an OLTP store is really hard, especially when you need to meet the requirements of enterprise scale, availability, resiliency and security. And both organizations had signaled that they were working on organic approaches. Snowflake talked about Unistore, Databricks have talked about their own organic efforts, and it’s clear that they couldn’t make it happen. So this is not an easy task.
The second point I’d make is that they’re just buying small Postgres companies. Neon, I would say, was in the vibe coding space, and I would say Crunchy Data is a small relational company based in South Carolina. I would say that it’s not clear to me why the world needs a 15th or 16th Postgres derivative database. I think we’ll find that out. And I think there’s also some noise about how 80% of Neon’s instances are provisioned via code. I should point out that nearly 80% of MongoDB instances on Atlas are provisioned via code. And so we do that to help our customers provision and scale clusters very, very quickly…
…We believe that the fact that Postgres and other relational platforms are now adding JSON is a tacit admission that the core tabular architecture just doesn’t get the job done in the world of AI. Developers need to be able to model real-world data, which is complex, messy and nested, which means it has highly interdependent relationships and is constantly evolving and changing. And then when you look at the fact that they’ve bolted on these capabilities, if you have a document size greater than 2 kilobytes, it’s going to deliver very poor performance…
…[Question] A key part of the bull narrative for Mongo has been that document databases would steadily take share from relational and then Mongo would become the default general-purpose database for modern apps. I guess my question is, does the rising popularity of Postgres among developers and a strong ecosystem it has, as we see from stuff like what Databricks did and what the cloud guys were doing. Does that suggest that relational just may have greater long-term relevance than initially anticipated?
[Answer] One is that this is a big market. It’s a $100 billion-plus market, so there can be multiple winners, right? Second, the Postgres popularity is really a function of the consolidation of the SQL market. People are leaving Oracle, leaving SQL Server, leaving MySQL and going to Postgres…
…A lot of people compare MongoDB to Postgres, and that’s actually a false comparison. By us embedding keyword search, by us embedding a native vector search, by us embedding embedding models, you’re really comparing MongoDB to Postgres plus Elastic plus Pinecone plus something like Cohere…
…I would tell you that Postgres is a tabular database, much like all relational databases.
MongoDB’s management is hearing from customers that high accuracy is important in AI adoption; MongoDB’s acquisition of Voyage helps MongoDB meet customers’ need for accuracy in AI applications; Voyage has leading embedding and reranking models that allow users to feed their data into AI models; Voyage’s latest release, Voyage 3.5, outperforms the next best embedding model and reduces storage costs by more than 80%; management will soon enable MongoDB users to seamlessly generate embeddings from data sitting within MongoDB in a private preview
We continually hear from large enterprises that high accuracy is a critical requirement to drive wide-scale adoption of AI. Our recent acquisition of Voyage AI enhances our ability to serve this need. Embeddings are the bridge between a large language model and a customer’s private data. Voyage’s leading embedding and reranking models allow customers to feed precise and relevant context into LLMs, significantly improving the accuracy and reliability of the output of AI applications…
…With the release of Voyage 3.5, we’ve taken another step forward, meaningfully outperforming the next best embedding models while reducing storage costs by more than 80%…
…We acquired Voyage. That’s going to be natively part of the platform. We’re going to — later this month, we will enable people to seamlessly generate embeddings from data sitting inside MongoDB, and that will be in private preview. So that’s within 4 months of the acquisition.
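The embed-retrieve-rerank pattern the passage describes can be sketched in a few lines. This is purely illustrative: the hashed bag-of-words "embedding" and word-overlap "reranker" below are toy stand-ins, not Voyage's actual models or API.

```python
# Illustrative sketch of the embed -> retrieve -> rerank pattern described
# above. The embedding here is a toy hashed bag-of-words vector, and the
# reranker is a crude word-overlap score; real systems would use learned
# embedding and reranking models (e.g. Voyage's) in their place.
import hashlib
import math

DIM = 256

def embed(text: str) -> list[float]:
    """Toy embedding: hash each word into a bucket of a fixed-size vector."""
    vec = [0.0] * DIM
    for word in text.lower().split():
        idx = int(hashlib.md5(word.encode()).hexdigest(), 16) % DIM
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Stage 1: coarse retrieval by embedding similarity."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def rerank(query: str, candidates: list[str]) -> list[str]:
    """Stage 2: rescore the shortlist with a (here, toy) reranker."""
    q_words = set(query.lower().split())
    return sorted(candidates,
                  key=lambda d: len(q_words & set(d.lower().split())),
                  reverse=True)

docs = [
    "MongoDB stores data as flexible JSON-like documents",
    "Relational databases store data in fixed tables",
    "Embeddings map text into vectors for similarity search",
]
query = "how are documents stored in MongoDB"
top = rerank(query, retrieve(query, docs))
```

The two-stage design is why reranking matters for accuracy: cheap embedding similarity narrows the corpus, then a more precise (and more expensive) reranker orders the shortlist that actually gets fed to the LLM.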
Startups and enterprises are using MongoDB for their AI applications; MongoDB has some high-profile AI customers using its platform
Start-ups and mature companies are using MongoDB to help deliver the next wave of AI-powered applications to their customers, including Cursor, Haleon, Vonage, the Financial Times and LG Uplus…
…We have some high-profile AI customers already on our platform and lots of other smaller customers.
MongoDB’s management continues to see enterprises being early in the adoption of AI; management thinks that the barriers to adoption of AI are limited skills with AI and lack of trust in AI because of the risk of hallucination; some early use cases that management has seen with AI are around operating efficiency, chatbots, and domain-specific software; management thinks that the real enduring value will come when enterprises build custom AI apps, because there is no competitive advantage for an enterprise in using an AI application that the enterprise’s competitors can also use
We see thousands of customers building thousands of apps on MongoDB, and that’s growing quarter-over-quarter. We are seeing some high-profile, well-known AI companies. I mentioned Cursor on the call, and there’s some — a few other high-profile companies who are building on top of MongoDB. And obviously, those businesses are really taking off. But what we see is that enterprises are still early in the adoption of AI. The barriers include there’s a limited set of skills and experience with AI, trust with AI systems that are probabilistic, which is another way of saying the risk of hallucinations. And so we see obviously some early use cases around operating efficiency, chatbots, codegen and domain-specific ISVs like Harvey, that customers are using…
…But the real enduring value will come when people start building custom AI apps. And the point I want to make is that anyone can use an ISV to run their business, but that doesn’t give them a competitive advantage because their competitors could use the same ISV. What really gives them a competitive advantage is building custom solutions around using AI to transform their business, whether it is to seize new opportunities, to respond to new threats, or to drive more operating efficiency.
Examples of messy real-world data that are really difficult to work with in relational databases but that are easy with MongoDB
If you want to model a message that has attachments or reactions or is part of a threaded conversation, how do you do that in a structured table? If you want to deal with adding new fields or new values, how do you do that? For example, if you have a user who has multiple phone numbers, how do you model that quickly? How do you deal with nested structures, where a customer record could include past orders, each with their own line items and order history? It’s much more difficult in a relational database, whereas you can model that so much more easily in MongoDB. And how do you deal with messy, inconsistent data that has no uniformity to it?
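The nested customer record in the quote can be made concrete. A minimal sketch, shown as a plain Python dict rather than an actual MongoDB insert, of how the variable-length and nested fields live in one document instead of being spread across joined tables:

```python
# A customer record with multiple phone numbers and nested orders,
# modeled as a single document (a plain Python dict here for illustration)
# rather than split across several relational tables with join keys.
customer = {
    "name": "Ada Lovelace",
    "phones": ["+1-555-0100", "+1-555-0101"],  # variable-length list, no junction table
    "orders": [                                 # nested one-to-many, no separate orders table
        {
            "order_id": 1,
            "status": "shipped",
            "line_items": [                     # nested one-to-many inside an order
                {"sku": "A-100", "qty": 2},
                {"sku": "B-205", "qty": 1},
            ],
        },
    ],
}

# Adding a new field to one record requires no schema migration:
customer["loyalty_tier"] = "gold"

# Reading the nested structure is direct traversal, not a three-table join:
first_order_skus = [li["sku"] for li in customer["orders"][0]["line_items"]]
```

In a normalized relational schema the same read would join customers, orders, and line_items tables, and adding `loyalty_tier` would require an `ALTER TABLE` across every row.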
NVIDIA (NASDAQ: NVDA)
NVIDIA’s Data Center revenue again had incredibly strong growth in 2025 Q1, driven by AI factory build outs and the ramp of the Blackwell family of chips
Data Center revenue of $39 billion grew 73% year-on-year…
…AI factory build-outs are driving significant revenue…
…Our Blackwell ramp, the fastest in our company’s history, drove a 73% year-on-year increase in Data Center revenue. Blackwell contributed nearly 70% of Data Center compute revenue in the quarter with the transition from Hopper nearly complete.
AI workloads on NVIDIA’s chips have now transitioned strongly to inference; NVIDIA’s management is seeing a huge jump in inference demand; major NVIDIA customers, such as OpenAI, Microsoft, and Google, are seeing huge leaps in AI token generation; Microsoft processed 100 trillion tokens in 2025 Q1, up 5x year-on-year; inference-serving startups have tripled their token generation rate and revenues
AI workloads have transitioned strongly to inference…
…We are witnessing a sharp jump in inference demand. OpenAI, Microsoft and Google are seeing a step-function leap in token generation. Microsoft processed over 100 trillion tokens in Q1, a fivefold increase on a year-over-year basis…
…Inference serving startups are now serving models using B200, tripling their token generation rate and corresponding revenues for high-value reasoning models such as DeepSeek-R1 as reported by artificial analysis.
The US government recently issued export controls to China on NVIDIA’s H20 chips, which caused the company to write-off the value of the chips the company can no longer sell; NVIDIA’s management believes China’s AI accelerator market will exceed US$50 billion; management thinks NVIDIA’s loss of access to the Chinese market will harm the company’s business, and benefit the company’s competitors in China and elsewhere; as a percentage of total Data Center revenue, NVIDIA’s Data Center revenue in China was below management’s expectations in 2025 Q1 and was down sequentially; management expects a large decline in China data center revenue in 2025 Q2; Singapore is used by many of NVIDIA’s large customers for centralized invoicing and the NVIDIA products billed under Singapore are shipped elsewhere; nearly all of NVIDIA’s H100, H200, and Blackwell Data Center revenue billed to Singapore was for orders from US customers; management sees that half of the world’s AI researchers are based in China; management thinks that the AI platform that wins China will lead globally; because of the US government’s latest export controls, the Chinese AI market is effectively closed to the US; management sees China moving on with AI with or without the US, and the export controls weakening the US’s position; management thinks the US government’s assumption that China cannot make AI chips is clearly wrong; management sees China’s DeepSeek and Qwen as among the best open-source AI models, and these models have gained traction outside of China; management thinks the US wins when top open-source models, even those from China, are built on American infrastructure
On April 9, the U.S. government issued new export controls on H20, our data center GPU designed specifically for the China market. We sold H20 with the approval of the previous administration. Although our H20 has been in the market for over a year and does not have a market outside of China, the new export controls on H20 did not provide a grace period to allow us to sell through our inventory. In Q1, we recognized $4.6 billion in H20 revenue, which occurred prior to April 9, but also recognized a $4.5 billion charge as we wrote down inventory and purchase obligations tied to orders we had received prior to April 9. We were unable to ship $2.5 billion in H20 revenue in the first quarter due to the new export controls. Losing access to the China AI accelerator market, which we believe will grow to nearly $50 billion, would have a material adverse impact on our business going forward and benefit our foreign competitors in China and worldwide…
…China as a percentage of our Data Center revenue was slightly below our expectations and down sequentially due to H20 export licensing controls. For Q2, we expect a meaningful decrease in China data center revenue. As a reminder, while Singapore represented nearly 20% of our Q1 billed revenue as many of our large customers use Singapore for centralized invoicing, our products are almost always shipped elsewhere. Note that over 99% of H100, H200, and Blackwell Data Center compute revenue billed to Singapore was for orders from U.S.-based customers…
…With half of the world’s AI researchers based there, the platform that wins China is positioned to lead globally. Today, however, the $50 billion China market is effectively closed to U.S. industry…
…China’s AI moves on with or without U.S. chips. It has the compute to train and deploy advanced models. The question is not whether China will have AI, it already does. The question is whether one of the world’s largest AI markets will run on American platforms. Shielding Chinese chip makers from U.S. competition only strengthens them abroad and weakens America’s position. Export restrictions have spurred China’s innovation and scale…
…The U.S. has based its policy on the assumption that China cannot make AI chips. That assumption was always questionable, and now it’s clearly wrong. China has enormous manufacturing capability. In the end, the platform that wins the AI developers wins AI. Export controls should strengthen U.S. platforms, not drive half of the world’s AI talent to rivals…
…, DeepSeek and Qwen from China are among the most — among the best open source AI models. Released freely, they’ve gained traction across the U.S., Europe and beyond…
…DeepSeek also underscores the strategic value of open source AI. When popular models are trained and optimized on U.S. platforms, it drives usage, feedback and continuous improvement, reinforcing American leadership across the stack. U.S. platforms must remain the preferred platform for open source AI. That means supporting collaboration with top developers globally, including in China. America wins when models like DeepSeek and Qwen run best on American infrastructure.
Blackwell’s ramp is the fastest product ramp in NVIDIA’s history; management believes the introduction of the GB200 NVL architecture within the Blackwell family allows users to achieve the lowest cost per inference token; management has seen a significant improvement in manufacturing yields for the GB200 NVL; GB200 NVL is now generally available; major hyperscalers are each deploying nearly 1,000 NVL72 racks, or 72,000 Blackwell GPUs, per week, and are on track to increase their deployment pace in 2025 Q2; Microsoft has already deployed tens of thousands of Blackwell GPUs for OpenAI, and Microsoft is ramping up to hundreds of thousands of Blackwell GPUs; major CSPs (cloud services providers) are already sampling GB300 systems, with production expected later in 2025 Q2; the GB300’s design will allow CSPs to seamlessly transition their systems and manufacturing used for GB200; software optimisations have already improved the performance of the Blackwell family by 1.5x in May 2025; NVIDIA has brought the Blackwell family of chips to mainstream gaming; compared to the Hopper family, the Blackwell family of chips has 40x higher speed and throughput, which is critical in driving down the cost of inference
Our Blackwell ramp, the fastest in our company’s history, drove a 73% year-on-year increase in Data Center revenue. Blackwell contributed nearly 70% of Data Center compute revenue in the quarter with the transition from Hopper nearly complete. The introduction of GB200 NVL was a fundamental architectural change to enable data center-scale workloads and to achieve the lowest cost per inference token. While these systems are complex to build, we have seen a significant improvement in manufacturing yields, and rack shipments are moving to strong rates to end customers. GB200 NVL racks are now generally available for model builders, enterprises and sovereign customers to develop and deploy AI. On average, major hyperscalers are each deploying nearly 1,000 NVL72 racks or 72,000 Blackwell GPUs per week and are on track to further ramp output this quarter. Microsoft, for example, has already deployed tens of thousands of Blackwell GPUs and is expected to ramp to hundreds of thousands of GB200s with OpenAI as one of its key customers…
…Sampling of GB300 systems began earlier this month at the major CSPs, and we expect production shipments to commence later this quarter. GB300 will leverage the same architecture, same physical footprint and the same electrical and mechanical specifications as GB200. The GB300 drop-in design will allow CSPs to seamlessly transition their systems and manufacturing used for GB200 while maintaining high yields…
…While Blackwell is still early in its life cycle, software optimizations have already improved its performance by 1.5x in the last month alone…
…This past quarter, we brought Blackwell architecture to mainstream gaming with the launch of the GeForce RTX 5060 and 5060 Ti, starting at just $299. The RTX 5060 also debuted in laptops, starting at $1,099. These systems doubled the frame rate and slashed latency. These GeForce RTX 5060 and 5060 Ti desktop GPUs and laptops are now available…
…Compared to Hopper, Grace Blackwell delivers some 40x higher speed and throughput. And so this is going to be a huge, huge benefit in driving down the cost while improving the quality of response with excellent quality of service at the same time.
NVIDIA Dynamo can increase the AI inference throughput of Blackwell NVL72 by 30x for AI reasoning models; Capital One reduced its AI chatbot’s latency by 5x with Dynamo
NVIDIA Dynamo on Blackwell NVL72 turbocharges AI inference throughput by 30x for the new reasoning models sweeping the industry. Developer engagements increased, with adoption ranging from LLM providers such as Perplexity to financial services institutions such as Capital One, who reduced agentic chatbot latency by 5x with Dynamo.
In the latest MLPerf inference results, we submitted our first results using GB200 NVL72, delivering up to 30x higher inference throughput compared to our 8-GPU H200 submission on the challenging Llama 3.1 benchmark. This feat was achieved through a combination of tripling the performance per GPU as well as 9x more GPUs all connected on a single NVLink domain.
NVIDIA’s CUDA software ecosystem has improved the inference performance of the Hopper family of chips by 4x over 2 years
We increased the inference performance of Hopper by 4x over 2 years. This is the benefit of NVIDIA’s programmable CUDA architecture and rich ecosystem.
There were nearly 100 NVIDIA-powered AI factories in flight in 2025 Q1, up 2-fold year-on-year; the number of GPUs in each AI factory also doubled from a year ago; management has line of sight to tens of gigawatts of AI data center projects requiring NVIDIA AI infrastructure; there are many more AI factories that have yet to be announced
The pace and scale of AI factory deployments are accelerating with nearly 100 NVIDIA-powered AI factories in flight this quarter, a twofold increase year-over-year, with the average number of GPUs powering each factory also doubling in the same period…
…We have a line of sight to projects requiring tens of gigawatts of NVIDIA AI infrastructure in the not-too-distant future…
…In the remarks, Colette mentioned there’s some 100 AI factories being built. There’s a whole bunch that haven’t been announced.
NVIDIA’s management sees AI agents as a new digital workforce that can handle simple as well as very complex tasks; management has used the Llama model architecture to build the Llama Nemotron family of open reasoning models for agentic AI; the Nemotron models are available as NVIDIA inference microservices (NIMs); management has improved the accuracy and inference speed of Nemotron by 20% and 5x, respectively; large enterprises including Accenture and Microsoft are using Nemotron
We envision AI agents as a new digital workforce capable of handling tasks ranging from customer service to complex decision-making processes. We introduced the Llama Nemotron family of open reasoning models designed to supercharge agentic AI platforms for enterprises. Built on the Llama architecture, these models are available as NIMs, or NVIDIA inference microservices, with multiple sizes to meet diverse deployment needs. Our post-training enhancements have yielded a 20% accuracy boost and a 5x increase in inference speed. Leading platform companies, including Accenture, Cadence, Deloitte, and Microsoft are transforming work with our reasoning models.
Cisco used NVIDIA NeMo microservices to improve its code assistant’s accuracy by 40% and improve response time by 10x; NASDAQ used NVIDIA NeMo to improve the accuracy and response time of its AI platform’s search capabilities by 30% each; Shell used NVIDIA NeMo to reduce the training time of its custom LLM by 20% and improved its accuracy by 30%
NVIDIA NeMo microservices are generally available across industries that are being leveraged by leading enterprises to build, optimize and scale AI applications. With NeMo, Cisco increased model accuracy by 40% and improved response time by 10x in its code assistant. NASDAQ realized a 30% improvement in accuracy and response time in its AI platform’s search capabilities. And Shell’s custom LLM achieved a 30% increase in accuracy when trained with NVIDIA NeMo. NeMo’s parallelism techniques accelerated model training time by 20% when compared to other frameworks.
Yum! Brands will use NVIDIA AI in 500 of its restaurants this year, before expanding to 61,000 restaurants over time, to improve operations; cybersecurity companies such as CrowdStrike are using NVIDIA AI for agentic workflows; CrowdStrike achieved 2x faster detection triage with 50% less compute cost through NVIDIA AI
We also announced a partnership with Yum! Brands, the world’s largest restaurant company to bring NVIDIA AI to 500 of its restaurants this year and expanding to 61,000 restaurants over time to streamline order-taking, optimize operations and enhance service across its restaurants. For AI-powered cybersecurity, leading companies like Check Point, CrowdStrike and Palo Alto Networks are using NVIDIA’s AI security and software stack to build, optimize and secure agentic workflows, with CrowdStrike realizing 2x faster detection triage with 50% less compute cost.
NVIDIA’s networking revenue increased sequentially in 2025 Q1; NVLink 72 offers 14x the bandwidth of PCIe Gen 5; NVLink 72 can carry 130 terabytes per second of bandwidth in a single rack (the world’s peak internet traffic is also around 130 terabytes per second); NVLink shipments in 2025 Q1 exceeded $1 billion; NVIDIA recently announced NVLink Fusion, which (1) allows hyperscalers to connect semi-custom CPUs and accelerators to NVIDIA racks, (2) allows ASIC and CPU providers to connect to NVIDIA racks; management thinks Spectrum-X (NVIDIA’s Ethernet networking solution) offers the highest throughput and lowest latency networking solution for AI; Spectrum-X had strong sequential and year-on-year growth; Spectrum-X is widely adopted by major CSPs and consumer internet companies; Google Cloud and Meta became Spectrum-X customers in 2025 Q1; NVIDIA has introduced silicon photonic switches to Spectrum-X and Quantum-X, which increases an AI factory’s power efficiency by 3.5x, network resiliency by 10x, and time-to-market by 1.3x; management sees NVIDIA as having 3, maybe 4, networking platforms right now; latency matters a lot in AI, so achieving low latency in AI networking is important; Spectrum-X has improved the utilisation of Ethernet in AI clusters from as low as 50% to as high as 85%-90%
Sequential growth in networking resumed in Q1 with revenue up 64% quarter-over-quarter to $5 billion. Our customers continue to leverage our platform to efficiently scale up and scale out AI factory workloads.
We created the world’s fastest switch, NVLink, for scale up. Our NVLink compute fabric in its fifth generation offers 14x the bandwidth of PCIe Gen 5. NVLink 72 carries 130 terabytes per second of bandwidth in a single rack, equivalent to the entirety of the world’s peak Internet traffic. NVLink is a new growth vector and is off to a great start with Q1 shipments exceeding $1 billion.
At COMPUTEX, we announced NVLink Fusion. Hyperscale customers can now build semi-custom CPUs and accelerators that connect directly to the NVIDIA platform with NVLink. We are now enabling key partners, including ASIC providers such as MediaTek, Marvell, Alchip Technologies and Astera Labs as well as CPU suppliers such as Fujitsu and Qualcomm, to leverage NVLink Fusion to connect our respective ecosystems.
For scale out, our enhanced Ethernet offerings deliver the highest throughput, lowest latency networking for AI. Spectrum-X posted strong sequential and year-on-year growth and is now annualizing over $8 billion in revenue. Adoption is widespread across major CSPs and consumer Internet companies, including CoreWeave, Microsoft Azure and Oracle Cloud and xAI. This quarter, we added Google Cloud and Meta to the growing list of Spectrum-X customers. We introduced Spectrum-X and Quantum-X silicon photonics switches, featuring the world’s most advanced co-packaged optics. These platforms will enable next-level AI factory scaling to millions of GPUs through the increasing power efficiency by 3.5x and network resiliency by 10x, while accelerating customer time to market by 1.3x…
…We now have 3 networking platforms, maybe 4. The first one is the scale-up platform to turn a computer into a much larger computer. Scaling up is incredibly hard to do. Scaling out is easier to do, but scaling up is hard to do. And that platform is called NVLink… In addition to InfiniBand, we also have Spectrum-X… the last one is BlueField, which is our control plane…
…In the case of AI, you have a lot of computers working together. And the traffic of AI is insanely bursty. Latency matters a lot because the AI is thinking and it wants to get work done as quickly as possible, and you’ve got a whole bunch of nodes working together…
…We enhanced Ethernet, added capabilities like extremely low latency, congestion control, adaptive routing, the type of technologies that were available only in InfiniBand to Ethernet. And as a result, we improved the utilization of Ethernet in these clusters, these clusters are gigantic, from as low as 50% to as high as 85%, 90%. And so the difference is, if you had a cluster that’s $10 billion, and you improved its effectiveness by 40%, that’s worth $4 billion. It’s incredible. And so Spectrum-X has been really, quite frankly, a home run.
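The dollar figure in the quote follows directly from the utilization numbers. A back-of-envelope check:

```python
# Back-of-envelope check of the utilization math quoted above:
# effective capacity = cluster cost x utilization, so the value recovered
# by improving utilization is cost x (new_util - old_util).
cluster_cost = 10_000_000_000  # $10 billion cluster, per the quote
old_util = 0.50                # ~50% effective utilization on plain Ethernet
new_util = 0.90                # up to ~90% with the enhanced Ethernet stack

gain = cluster_cost * (new_util - old_util)  # ~$4 billion of recovered capacity
```

A 40-percentage-point utilization gain on a $10 billion cluster is worth roughly $4 billion of effective compute, which is the "home run" Huang is pointing at.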
NVIDIA’s GeForce is the largest AI personal computing footprint for developers; NVIDIA added AI laptop models in 2025 Q1 that can run Microsoft’s Copilot+; NVIDIA’s DGX Spark and DGX Station delivers 1 petaflop and 20 petaflops, respectively, of AI compute in a desktop form factor; DGX Spark and DGX Station will be available later in 2025
With a 100 million user installed base, GeForce represents the largest footprint for PC developers. This quarter, we added to our AI PC laptop offerings, including models capable of running Microsoft’s Copilot+…
…DGX Spark delivers up to 1 petaflop of AI compute while DGX Station offers an incredible 20 petaflops and is powered by the GB300 Superchip. DGX Spark will be available in calendar Q3 and DGX Station later this year.
NVIDIA’s Omniverse is being adopted even more widely by leading software companies; TSMC used Omniverse to save months of work by designing fabs virtually; Foxconn used Omniverse to accelerate thermal simulations by 150x; Pegatron used Omniverse to reduce assembly line defect rates by 67%; GE Healthcare is using Omniverse to develop robotic imaging and surgery systems.
We have deepened Omniverse’s integration and adoption into some of the world’s leading software platforms, including Databricks, SAP and Schneider Electric. New Omniverse Blueprint such as Mega for at-scale robotic fleet management are being leveraged in KION Group, Pegatron, Accenture and other leading companies to enhance industrial operations. At COMPUTEX, we showcased Omniverse’s great traction with technology manufacturing leaders, including TSMC, Quanta, Foxconn, Pegatron. Using Omniverse, TSMC saves months in work by designing fabs virtually, Foxconn accelerates thermal simulations by 150x, and Pegatron reduced assembly line defect rates by 67%…
…GE Healthcare is using the new NVIDIA Isaac platform for health care simulation built on NVIDIA Omniverse and using NVIDIA Cosmos for platform speed, development of robotic imaging and surgery systems.
NVIDIA’s automotive revenue had strong growth in 2025 Q1, driven partly by the ramp of self-driving technologies; NVIDIA is partnering with GM (General Motors) to build next-gen vehicles with NVIDIA AI, simulation, and accelerated computing; NVIDIA is now in production with its full-stack solution for Mercedes-Benz
In our Automotive group, revenue was $567 million, down 1% sequentially but up 72% year-on-year. Year-on-year growth was driven by the ramp of self-driving across a number of customers and robust end demand for NEVs. We are partnering with GM to build the next-gen vehicles, factories and robots using NVIDIA AI, simulation and accelerated computing. And we are now in production with our full-stack solution for Mercedes-Benz starting with the new CLA, hitting roads in the next few months.
NVIDIA recently announced Isaac GROOT N1, the world’s first open fully customizable foundation model for humanoid robots; NVIDIA recently launched Cosmos World Foundation models; leading robotics companies have begun using Isaac and Cosmos; management is very bullish on the development of robotics and thinks future manufacturing plants in the US will deeply incorporate robotics
We announced Isaac GR00T N1, the world’s first open fully customizable foundation model for humanoid robots, enabling generalized reasoning and skill development. We also launched new open NVIDIA Cosmos World Foundation models. Leading companies including 1X, Agility Robotics, Figure AI, Uber and Waabi have begun integrating Cosmos into their operations for synthetic data generation, while Agility Robotics, Boston Dynamics, and XPENG Robotics are harnessing Isaac’s simulation to advance their humanoid efforts…
…The era of robotics is here, billions of robots, hundreds of millions of autonomous vehicles and hundreds of thousands of robotic factories and warehouses will be developed…
…Regarding onshore manufacturing, President Trump has outlined a bold vision to reshore advanced manufacturing, create jobs and strengthen national security. Future plants will be highly computerized in robotics. We share this vision.
NVIDIA’s management sees reasoning models as being compute-intensive and requiring hundreds to thousands times more tokens per task than one-shot inference models; management thinks reasoning models are driving a step-function surge in inference demand
Reasoning AI enables step-by-step problem-solving, planning and tool use, turning models into intelligent agents. Reasoning is compute-intensive, requiring hundreds to thousands of times more tokens per task than previous one-shot inference. Reasoning models are driving a step-function surge in inference demand.
NVIDIA’s management sees AI scaling laws as being firmly intact, with inference now being a new driver
AI scaling laws remain firmly intact, not only for training, but now inference too requires massive scale compute.
TSMC’s new plants in the USA are for manufacturing of NVIDIA’s chips; other important chip-manufacturing partners of NVIDIA, besides TSMC, are also investing in US manufacturing; NVIDIA has made substantial long-term purchase commitments for US-made chips; management’s goal for US-manufacturing of AI chips is “From chip to supercomputer, built in America, within a year”; management sees the USA as always being NVIDIA’s largest market and home to the largest installed base of NVIDIA’s infrastructure
TSMC is building 6 fabs and 2 advanced packaging plants in Arizona to make chips for NVIDIA. Process qualification is underway with volume production expected by year-end. SPIL and Amkor are also investing in Arizona, constructing packaging, assembly and test facilities. In Houston, we’re partnering with Foxconn to construct a 1 million square foot factory to build AI supercomputers. Wistron is building a similar plant in Fort Worth, Texas. To encourage and support these investments, we’ve made substantial long-term purchase commitments, a deep investment in America’s AI manufacturing future. Our goal: From chip to supercomputer built in America within a year. Each GB200 NVL72 rack contains 1.2 million components and weighs nearly 2 tons. No one has produced supercomputers on this scale. Our partners are doing an extraordinary job…
…The U.S. will always be NVIDIA’s largest market and home to the largest installed base of our infrastructure.
NVIDIA’s management is seeing the US government, under the Trump administration, changing its tune on AI diffusion rules; the US government now has a new policy to promote US AI technology with trusted partners; NVIDIA’s management is seeing the US government as wanting US AI technology to lead
On AI Diffusion Rule, President Trump rescinded the AI Diffusion Rule, calling it counterproductive, and proposed a new policy to promote U.S. AI tech with trusted partners. On his Middle East tour, he announced historic investments. I was honored to join him in announcing a 500-megawatt AI infrastructure project in Saudi Arabia and a 5-gigawatt AI campus in the U.A.E. President Trump wants U.S. tech to lead. The deals he announced are wins for America, creating jobs, advancing infrastructure, generating tax revenue and reducing the U.S. trade deficit.
NVIDIA’s management thinks every country now sees AI as a core technology for the next industrial revolution
Every nation now sees AI as core to the next industrial revolution, a new industry that produces intelligence and essential infrastructure for every economy. Countries are racing to build national AI platforms to elevate their digital capabilities. At COMPUTEX, we announced Taiwan’s first AI factory in partnership with Foxconn and the Taiwan government. Last week, I was in Sweden to launch its first national AI infrastructure. Japan, Korea, India, Canada, France, the U.K., Germany, Italy, Spain and more are now building national AI factories to empower start-ups, industries and societies.
NVIDIA’s management is seeing plenty of enterprise-data living on-premises, so NVIDIA is moving AI into enterprises instead of waiting for enterprises to shift to the cloud
We’re going to see AI go into enterprise, which is on-prem. Because so much of the data is still on-prem, access control is really important, it’s really hard to move all of — every company’s data into the cloud. And so we’re going to move AI into the enterprise. And you saw that we announced a couple of really exciting new products: our RTX Pro enterprise AI server that runs everything enterprise and AI; our DGX Spark and DGX Station, which is designed for developers who want to work on-prem. And so enterprise AI is just taking off.
NVIDIA’s management thinks 6G technology will be built on AI
Telcos, today, a lot of the telco infrastructure will be, in the future, software-defined and built on AI. And so 6G is going to be built on AI.
NVIDIA’s management thinks agentic AI has really dispelled a lot of worries people had over AI hallucinations
AI really busted through. Concerns about hallucination or its ability to really solve problems, I think a lot of people are crossing that barrier and realizing how incredibly effective agentic AI and reasoning AI are.
Okta (NASDAQ: OKTA)
Okta’s new products, including Identity Threat Protection with Okta AI, made strong contributions in 2025 Q1
New products such as Okta Identity Governance, Okta Privileged Access, Okta Device Access, Fine Grained Authorization, Identity Security Posture Management and Identity Threat Protection with Okta AI had another quarter of strong contribution.
Okta’s latest advancements help organisations protect AI systems; Okta has been protecting nonhuman identities, or NHIs, for a long time, but NHIs have boomed recently with the rise of AI agents; in 2024, only 15% of organisations were confident in their ability to secure NHIs, and Okta has products that help solve this problem; Okta’s products to secure NHIs also help secure human identities; Okta’s products to secure NHIs ensure AI interactions remain governed under Zero Trust policies; Okta’s Auth0 platform now has Auth for GenAI, which solves the problem of AI agents creating unsecured NHIs; Auth for GenAI has had a successful developer preview, and general availability (GA) is expected in the coming months; Auth for GenAI currently uses a usage-based pricing model; Auth for GenAI is useful for both large and small companies; management is seeing a lot of interest in the Auth for GenAI developer preview from small companies; management thinks the problem of NHIs will become even more prominent as more and more AI projects enter production mode; management thinks Okta will win with NHIs in the AI age because it is the only company with a complete solution
Our newest advancements help organizations protect their employees, customers and AI systems. The key themes at Showcase this year were: one, how Okta is protecting nonhuman identities or NHIs; and two, how Auth0 is helping developers build secure AI agents. NHIs have been around for a long time. What’s new is how the recent boom in AI agents has resulted in exponential growth in NHIs. NHIs include service accounts, shared accounts, machines and tokens. NHIs often operate outside traditional identity governance frameworks and can leave organizations vulnerable to security risks. In fact, last year, only 15% of organizations said they are confident in their ability to secure NHIs. Okta addresses this problem with Identity Security Posture Management and Okta Privileged Access. By combining these 2 products, customers can discover, secure and manage NHIs with an end-to-end secure identity fabric to secure both human identities and NHIs across a single system. This integrated approach protects non-federated and privileged identities, ensuring AI-driven automation and machine-to-machine interactions remain governed under Zero Trust policies while continuously monitoring NHI risks and vulnerabilities across the enterprise…
…Auth for GenAI addresses the problem of AI agents creating unsecured NHIs by enabling developers to integrate secure identity into their Gen AI applications. This helps ensure that AI agents have built-in authentication, fine grained authorization, async workflows and secure API access. Auth for GenAI secures AI agents at every step without slowing them down, providing developers with the trusted tools and flexibility they need. The product has had a successful developer preview, and we expect the GA launch this summer…
…Auth for GenAI is a usage-based pricing model. So it’s the number of requests to Auth0. So it’s monetized in a similar way to the way Auth0 is now…
…I think that space is — there are big companies building things that could be taking advantage of Auth for GenAI, but it’s also a lot of smaller companies, too. Every small company start-ups trying to innovate around AI agents. And I know a lot of the interest in the developer preview around Auth for GenAI has been from small companies…
…When you look at our Identity Security Posture Management, its ability to detect these NHI and you look at our privileged solution and our general access management solution, which allows companies to secure those nonhuman identities, it’s very relevant for a company even if they’re just POC in these agents. And they’re in a proof of concept. They’re not really in production. It just puts us — shines a light on this problem as they think about moving to production. So that’s a very important aspect of this dynamic in the market. Now we do think as more of these projects move into production, it’s really, really going to force this issue even more. And so I think we’re going to see further acceleration as more and more companies move into production…
…[Question] Follow up on, as you say, the nonhuman side of the business. And the broader question is why do you think Okta will win in that environment? And I think a lot of investors assume it is going to be a big market. Pricing may be different. But why does Okta win versus when we were at RSA talking to CyberArk or SailPoint or Saviynt, whoever it is, all think that they’re in a position to win, particularly since our take, it sounds like governance will be part of identity with agents, more so than, say, just access.
[Answer] I think today, it’s because we’re the only one with a complete solution. And we have this breadth of products that can help solve this problem from detection to vaulting to governance workflows. And I’m talking specifically about NHIs. And I think — but that’s — I mean, that’s only kind of entry to the race. Now we have to execute well, and we have to keep innovating.
Adversaries are now conducting IT contracting scams with AI, and Okta has recommendations to counter these threats
I encourage you to check out a blog post we shared that highlighted Okta threat intelligence’s in-depth research on how adversaries are conducting IT contracting scams using AI and our recommendations to help mitigate these threats.
Okta’s management has been having conversations with customers that are moving AI projects from POC (proof-of-concept) to production, and how Okta can help them; management is seeing that only the most advanced enterprises are in production with AI projects right now
There’s just the conversations we’re having with customers about how important what we do is to them and how much they’re investing in everything from the traditional things we’ve helped them with cloud transformation and of course, security. But now with what’s going on with all these AI projects and moving from POCs to production and how we can help with that and how we can help them build Auth for GenAI applications…
…Only the most advanced forward-leaning enterprises are actually doing production AI right now and use cases at scale where they’re seeing tangible business benefit at scale in production.
Okta’s management thinks that MCP (model context protocol) is a big deal for AI, but also recognises that it’s still very early; management sees MCP as a way for AI agents to use technology resources; management is very excited about the possibility of adding OAuth to MCP; the pricing model for OAuth within MCP is to-be-determined (TBD)
The MCP is a big deal, as you all know. And the way I think about it is it’s basically a way to — it’s almost like a new Internet. It’s a new way to communicate with tools and technology in a way that these LLMs and these emerging set of browsers and user agents on the AI Internet can use all these resources. And that’s very exciting. People don’t — people forget that if you look at the internals of the web, HTTP, the tag for a browser is actually called a user agent and it uses HTTP to connect to web resources. Well, MCP could be a new kind of Internet where the clients are actually AI agents, not user agents and they can talk to these MCP servers. So it’s very exciting from a shifting of the industry and a shifting the capabilities of what these kinds of software systems could do. But it’s also very early. We’re talking about a protocol that was announced, I think, 6 weeks ago. And everyone’s running around, adding MCP servers to their capabilities and developers are experimenting with what this means. We’re very excited about the ability to work with the standards bodies and the community to add actual OAuth to the MCP, so authentication and OAuth protocol to the MCP protocol and handshake there…
…The way MCP will be monetized and how — if we add product capabilities to extend what an authentication handshake is to an MCP server, that’s — we haven’t built that yet, and we haven’t released that yet. So that will be TBD there.
A lot of large global companies are still using on-premises identity technologies and this is an opportunity for Okta, especially when these companies want to take advantage of AI, as cloud migration is necessary for AI
We still have tons of room to grow inside the Global 2000 and really the top 5,000 biggest companies and organizations in the world is a tremendous opportunity for us. A lot of those organizations are invested a lot in on-premise technology and a lot in on-premise identity with big identity teams that they spend a lot of money on, a lot of cost there. And those companies are with all the change around cloud migration, which has been going on for years and years and years and the focus on security. And now with all of them trying to take advantage of the AI revolution, there’s another catalyst for them to change and upgrade their identity system.
Okta’s management does not see AI-agent apps as being a big accelerant for Okta’s Customer Identity business, but the overall trend is still towards buying instead of building when it comes to customer identity solutions
[Question] When you first started talking about the customer identity opportunity, I think to us, it kind of made a lot of sense why your customers would choose to buy this stuff instead of building it out of the box. That was, I guess, more for the traditional SaaS world. So what I’m trying to understand is there seems to be a lot of newfound excitement on the customer identity side as we head into this agentic world. Is there anything about a future of agent-based apps that is going to make it even more of a no-brainer to go with buying this out of the box from you guys on the customer identity side instead of trying to develop it themselves compared to maybe the old school SaaS world?
[Answer] In general, the trend is toward more buy, less build. And I think AI probably is — I’m not sure it’s a huge accelerant of that. I think it’s probably on trend just because I think it’s mostly like the solutions are getting better. If you go 10 years ago, there wasn’t really good customer identity solutions that were easy to use, reliable, scalable. And now with Auth0 had an amazing developer experience and were easy to start using and then upsell over time. And that continues. And I think the moving to the world of AI and agents and embedding customer identity inside of those apps, I don’t know if it’s material different, but it’s on a trend line that’s toward buying these solutions versus building.
Okta’s management is building a whole set of capabilities and products for AI agents that are not released or announced yet
This whole agentic revolution and agents working on your behalf, I think that’s a whole other set of capabilities and products that we’re thinking about and building, and we haven’t released and announced them yet. But there’s a whole layer on top of what we talk about service accounts and tokens and API access. That’s actually tracking the agent and knowing what that means and knowing what security posture you want and what governance, life cycles, et cetera, et cetera.
Salesforce (NYSE: CRM)
Salesforce’s management thinks Informatica will enhance Salesforce’s data advantage in AI; Informatica is important in helping Salesforce customers harmonise their data for AI applications
If you can imagine this idea that you want to deploy all of this incredible agentic data, well, you’ve got to get your data right. And Informatica combined with Salesforce’s Data Cloud, combined with Tableau, combined with other key assets that we’re going to bring to bear, this is what is creating this incredible data business…
…Today, for our customers, they all want to get there. They all have the hunger to do that. They all want to have this great success, but it takes some time for them to start to build their data sets. And that is why the Informatica acquisition is so important because they all need to not only translate their data to build their master data management. They need to harmonize their data. They need to do all these things. And we see that and we go into these customers and like, “Let’s go.” And they’re like, “We can do some, but we can’t do all.” And the reason they can’t do all is because their whole enterprise data set is not fully harmonized, which is why…
In enterprise AI, especially agentic AI, preparation of data sets is very important; management sees the existence of data silos in enterprises as a key obstacle to enterprises adopting AI more widely
I think everyone who is going through an AI transformation, every business, including mine, we’re going to talk about some great businesses that are going through transformations whether it’s Pepsi or Falabella or OpenTable, et cetera, but every AI transformation is a data transformation. And you don’t see it on the consumer side because when you’re using a consumer AI, you have to remember that the data set has kind of been prefabricated for you. That is the training data and everything is put together. It’s an amalgamated data set applied to this consumer AI model. That’s not how an enterprise AI really works. You have to have your enterprise data together to get the result that you want…
…If you can imagine this idea that you want to deploy all of this incredible agentic data, well, you’ve got to get your data right…
…The enterprise has data sets that are highly controlled, highly governed and highly secured. And these data sets are everything from your customer data set to your financial data set to your HR data set, and the reality is that not all enterprise data is available to all users. Like, for example, you work, Kash, at Goldman Sachs. You can’t see all the Goldman Sachs customer information. There’s regulations around that. You can’t see all the employees’ salary information. You don’t have access to all the Goldman Sachs financial data. So when you’re using these models, they’re not just giving you access to all of this stuff. Are they, Kash? No, they have to be tightly controlled. But if I’m a Goldman Sachs customer, and I want to come in and I want to ask about my account balance or information about my — who I am and what my portfolio looks like or what my opportunities are or even if I’m a Goldman Sachs employee and I want information on — the general information on benefits or how to enable myself or how to sell products more efficiently to customers, all of those things could easily happen right now with the agentic platform. However, there’s a lot of things that could not happen as I kind of just amplified, and that is kind of the constraint.
Salesforce has closed over 8,000 Agentforce deals since launch, of which half are paid; Agentforce has handled over 750,000 requests on Salesforce’s help site, lowering cases by 7% year-on-year; 800 customers are already in production with Agentforce; management has launched hundreds of prebuilt Agentforce templates; management has introduced the new Flex Credits consumption-based pricing model for Agentforce after customer feedback; management will add FedRAMP High authorisation for Agentforce in June 2025; Agentforce is delivering AI agents to both employees and consumers; management thinks Salesforce is already delivering more agents than any other company in the world; Agentforce reached $100 million in AOV (annual order value) in only a few months and it’s the fastest product to do so in Salesforce’s history even without being fully deployed; 30% of Agentforce’s bookings in 2025 Q1 (FY2026 Q1) came from customers increasing consumption; Salesforce’s internal use of Agentforce has already reduced its hiring needs, driving $50 million in savings; Agentforce is growing faster than any product management has seen before; Agentforce helps pull customers into other Salesforce products; all Agentforce deals in 2025 Q1 (FY2026 Q1) included 4 other clouds on average; Salesforce’s top 6 deals in 2025 Q1 (FY2026 Q1), which have average TCV (total contract value) of $34 million, mostly have Agentforce and Data Cloud as anchors
Salesforce has closed over 8,000 deals since launching Agentforce, of which half are paid. On help.salesforce.com, Agentforce has handled over 750,000 requests, cutting case volume by 7% Y/Y…
…We’ve got 800 customers already in production with Agentforce, including amazing companies like ENGIE, and that has been a success — incredible success story and with incredible velocity and conversations in OpenTable, Finnair, Grupo Globo, Falabella…
…We have launched hundreds of prebuilt Agentforce templates for different industries, roles, tasks, making it faster and easier for customers to deploy Agentforce…
…Earlier this month, we introduced our Flex Credits. It’s a new consumption-based pricing model. That’s how we’ve tuned our pricing after a huge amount of customer feedback…
…Next month, we’re going to add FedRAMP High authorization for Agentforce, so the U.S. public sector can also experience this incredible success…
…Agentforce does agentic augmentation for employees. Agentforce is also doing it directly to consumers. I think that we are really delivering at this point probably more agents and more conversations and more capability to more enterprises than any other vendor in the world. I really see us as the #1 agent platform already…
…It’s only been a few months. In fact, Agentforce reached more than $100 million in AOV. It’s much faster than any product in our history, and we’re not even fully deployed on all geographies, currencies or languages…
…Even though Agentforce is only in its second quarter, 30% of its bookings also came from customers increasing their consumption…
…In customer support, Agentforce has handled 750,000 cases and is on track to surpass 1 million help portal requests this quarter, cutting case volume by 7% year-over-year. As a result, we have reduced some of our hiring needs, enabling us to rebalance and redeploy 500 customer support employees to higher impact data plus AI roles by year-end, driving $50 million in savings…
…I don’t think the word agent was even on our earnings call a year ago. Maybe it wasn’t even on our earnings call 9 months ago. But it started to appear, and when we released the product end of October, it’s November, December, January, February, March, April, here we are in May. So just think about in a relatively short period of time, I’ve never seen in my career over 45 years in enterprise software this idea that we now have 8,000 customers, 4,000 of whom are paying, many of them who are at scale deployments where this is working in months. It just makes no sense actually to me…
…When we sell an Agentforce, we’re not just dropping some box off and saying, okay, we sold an Agentforce. We’re pulling all of our clouds in. And I’m sure that you heard like, for example, in the example I think of Pepsi, they have 11 of our clouds. So when we’re pulling in Agentforce, where all the other products are coming along with it…
…We took all the deals, all the Agentforce deals for the quarter. On average, there were 4 other clouds on those deals…
…I look at the top 6, the top 6, which on average, $34 million of TCV on average on each of them. On those 6, 5 of them have Data Cloud as an anchor and also Agentforce as an anchor. The 1 customer that didn’t buy, the top 6 on Data Cloud is because they bought in Q4 a multimillion-dollar deal Data Cloud. They set the data foundation before they went to adding more clouds and Agentforce. On the top 6 on Agentforce, on the top 6 deals, 5 bought Agentforce. The one that didn’t buy is the one that, Srini, you know very well. We are negotiating now the extension to Agentforce.
Data Cloud surpassed 22 trillion records in 2025 Q1 (FY2026 Q1), up 175% year-on-year (was 50 trillion records in 2024, or FY2025); 60% of Salesforce’s top 100 deals in 2025 Q1 (FY2026 Q1) included Data Cloud; 50% of Data Cloud’s new bookings in 2025 Q1 (FY2026 Q1) came from existing customers; Salesforce’s Data Cloud and AI ARR (annual recurring revenue) exceeded $1 billion in 2025 Q1 (FY2026 Q1), up 120% year-on-year; Salesforce closed 30 net new bookings exceeding $1 million that included Data Cloud and AI; Salesforce’s top 6 deals in 2025 Q1 (FY2026 Q1), which have average TCV (total contract value) of $34 million, mostly have Agentforce and Data Cloud as anchors; Salesforce had 3x more Data Cloud deals in 2025 Q1 (FY2026 Q1) compared to a year ago
In this quarter, our Data Cloud, just our Data Cloud surpassed 22 trillion records, up 175% year-over-year. Nearly 60% of our top 100 deals included investments in both Data Cloud and AI…
…50% of Data Cloud’s Q1 new bookings came from existing customers. I think that’s really important because it really speaks to the adoption of the product and the incredible usage by the customers who have it…
…Data Cloud and AI ARR grew more than 120% year-over-year, and it’s a more than $1 billion part of our business…
…In Q1, we closed more than 30 net new annual bookings over $1 million that include both data and AI…
…I look at the top 6, the top 6, which on average, $34 million of TCV on average on each of them. On those 6, 5 of them have Data Cloud as an anchor and also Agentforce as an anchor. The 1 customer that didn’t buy, the top 6 on Data Cloud is because they bought in Q4 a multimillion-dollar deal Data Cloud. They set the data foundation before they went to adding more clouds and Agentforce. On the top 6 on Agentforce, on the top 6 deals, 5 bought Agentforce. The one that didn’t buy is the one that, Srini, you know very well. We are negotiating now the extension to Agentforce…
…We had 3x more Data Cloud deals in Q1 than we had the year before.
Salesforce’s management has the ADAM framework for thinking about agents, data, apps, and metadata for AI; management thinks the ADAM framework is necessary in order for companies to achieve success with agentic AI; a new Tableau product, named Tableau Next, is an example of Salesforce’s ADAM framework
When I talk about agents and data and apps and metadata, that’s what we really call our ADAM framework. It’s in our experience to see now these 4 elements, the app, the data, the agents and the metadata, that make Salesforce unique, that companies need to achieve the real promise of agentic AI…
…If you were in San Diego, you saw Tableau Next. And what you saw was the DataFam. That’s the Tableau community kind of fully inspired because not only were they looking at Tableau Next, this incredible new product, but what they saw was Tableau, the Tableaus they love. And they also saw an agentic layer, and they saw it deeply integrated into our data cloud and all running on our metadata platform. That’s our ADAM framework, the agents, the data, the apps, the metadata all together…
…In this new agentic AI era, every company is going to say that they have agents. Well, I think every company does say that they have agents. But without these 4 parts of what we call ADAM, the — really the agents, the data, the apps, the metadata framework, you’re just not really able to deliver this complete experience for the enterprise, including delivering digital labor.
Salesforce’s management continues to see Slack as the interface for users to converse with Salesforce’s AI agents; every Slack user gets a digital teammate when Agentforce is deployed in Slack; Salesforce’s own sales agent within Slack is improving the efficiency of Salesforce’s sales teams by saving 44,000 hours of work annually; pairing Data Cloud with the sales agent has led to a significant reduction in lead-routing time from 20 minutes to 19 seconds
Slack is, of course, where I believe you’re going to really begin and end every Agentforce conversation. It’s the conversational interface for managing all of your work across apps, systems, teams. And Service Cloud, Sales Cloud, Tableau Next, any Salesforce app can live inside Slack…
…With Agentforce in Slack, every employee has a digital teammate that can make notes for your meeting, summarize your Slack channels. And you really see like AI taking place on Slack when you look at Slack recap or you look at agents just coming right into your channels to talk to you in real time…
…Our sales agent in Slack is transforming how our teams sell. Our AEs have already logged over 21,000 interactions, simplifying everyday sales activity, saving our teams over 44,000 hours annually. Further, Data Cloud is amplifying that impact, cutting lead routing from 20 minutes to 19 seconds in Slack.
Finnair is using Agentforce for customer service; Agentforce is in thousands of conversations a week with Finnair customers; using Agentforce, Finnair aims to automate 80% of customer service queries and reduce rep onboarding time by 25%; management sees the airline industry as a big opportunity for Agentforce
Finnair is using Agentforce to help manage customer service for 12 million passengers. Agentforce is already having thousands of conversations a week with Finnair customers, and the airline is aiming to automate 80% of customer service queries and reduce new rep onboarding time by 25% with Agentforce…
…We’re talking to so many airlines about how they not only can use all our Customer 360 apps, not just the Data Cloud, not just our metadata platform but build this agentic capability around the airline. This is going to be a huge opportunity for that entire industry, which is so customer service obsessed.
Latin American retailer Falabella started using Agentforce in Colombia a few months ago and has since deployed it through WhatsApp; Falabella’s Agentforce experience was very successful and a six-figure Agentforce deal has now become a $1 million deal
Here’s this company that’s pioneering Agentforce just a couple of months ago in their Colombia business. And then it’s so successful, they’re actually deploying it on WhatsApp, which we hadn’t really seen before. And they’re using WhatsApp. The customers are coming in. They’re coming in and, “Hey, what’s my order? What’s going on?” And this what’s my order use case is the main thing that’s driving Falabella, and boom, all of a sudden, they go, “You know what, this is working so well. We’re going all over Latin America,” and what was kind of, I think, a low 6-figure deal. I mean, Miguel is going to have to come in here and tell me, turned into like a $1 million deal overnight…
…Yes, it was $300,000, right, from just Colombia.
OpenTable is using Agentforce and started with restaurants, before deploying to employees, and now consumers
OpenTable, we’ve been talking about this story for a while, which is [ Glenn ] is doing a great job deploying Agentforce. And he started with the restaurants. Then, he did employees. And now he’s like doing the consumers, and this is an incredible thing that OpenTable has been so successful.
Brazilian media conglomerate Grupo Globo bought Agentforce in 2024 Q4 (FY2025 Q4); Agentforce has since increased Grupo Globo’s customer retention rate by 22%
Another Latin American success is Grupo Globo. The Brazilian media conglomerate purchased Agentforce in Q4. In less than 3 months, Agentforce basically boosted Globo’s retention rate by 22%, driving revenue upgrades, cross selling, converting nonsubscribers.
Large Japanese enterprises are very excited about Agentforce and are using it to build agentic layers around their businesses
We’ve talked about the speed of which Agentforce is gone, but it’s not just a U.S. phenomenon. It’s an international phenomenon. And as I mentioned last week, I was in Japan, and one of our customers in Japan, Fujitsu, is really doing some amazing things. But when I heard at the rate and scale and speed that they want to deploy the product, and their vision in terms of how it can be all encompassing for an agentic layer around the entire company, I really just could not believe it. I really sat with 5 of the largest Japanese companies. And I think somehow every company’s imagination has been captured that they have this idea that they can build an agentic layer around their company.
Salesforce’s management is seeing that the rate of innovation in AI is far exceeding customer adoption
This idea that agents are kind of starting to provision to become digital labor, this is exceeding my expectation that it crosses industries. It’s crossing geographies. And as I said, all of this is really just happening in only 6 months. By the time we get to Dreamforce, which is still another 6 months ahead, I expect another huge massive transformation. We’re starting to cut the code right now on what will be one of the main releases of Dreamforce. And when we look at what will come as the release after Dreamforce, our technology, our product doesn’t look at all like what it looked like just a few months ago. So we’re moving very, very fast. And I think that I really would say this hasn’t really happened too many times in the last 30, 40 years. The rate of innovation far exceeds the rate of customer adoption.
Salesforce’s management thinks that most of the AI models are within 3-6 months of innovation of each other; management thinks the models have not improved a lot in accuracy because they are all trained on the same datasets
When we all are using ChatGPT or Gemini, or You.com or Perplexity or Anthropic or any of these models or an open source model or DeepSeek, okay, all of these models are mostly the same. They’re within 3 to 6 months of innovation of each other. We all know that. And then all these models are trained on mostly the same datasets because there’s only so much data that they can be trained on. Now there’s some synthetic data, but it doesn’t mean very much to a lot of these models. That’s why, by the way, that these models still have not improved a lot of their accuracy in the consumer side.
Salesforce’s management thinks Salesforce has been the best technology company in the world at building an agentic layer around itself; Salesforce’s AI agents are on track to surpass 1 million customer support conversations in the current quarter, and this has led to a dramatic reduction in the number of people needed to handle customer issues; Salesforce is Agentforce’s Customer Zero
What is it going to take to get this transformation to happen, where we have a much bigger agentic wrapper around Goldman Sachs, your company, or around all companies? We’ll look at my company to start. I think we’ve probably done the best of maybe any tech company. We’ve done now — this quarter, we’ll pass through 1 million conversations in customer support. It’s a dramatic reduction in the amount of human beings who have had to get involved to answer customers’ issues. I don’t think any other tech company at scale has delivered this capability. It is a proof point without any doubt that Salesforce has been able to deliver on its vision of digital labor, and Agentforce’s #1, Customer Zero, Salesforce. So we eat our own dog food, and this is amazing.
Salesforce’s management thinks the proclamations from some AI experts that AI will very soon cause massive job losses in white-collar work are alarmist given the current state of AI
[Question] The CEO of Anthropic recently commented that AI could wipe out 50% of entry-level white-collar jobs and drive unemployment a lot higher, unfortunately. And since you’ve been very astute and very ahead of the curve on commoditization of LLMs and you’ve been very outspoken on the topic of digital labor, I’m curious just to get your thoughts on that concept.
[Answer] In terms of the amount of white-collar jobs that are going to disappear, you’re all experts at this point in the current generation of AI. You’re using it every day. We’re all using it. It doesn’t matter who I speak to. Probably all of your children, all of your family members are using it, and you can see how it’s impacted. Like people are smarter. They get their medical labs. They ask, “Well, what do you think about this?” But then when you call your doctor, sometimes the doctor goes, “Well, actually, that’s not completely true.” And we’re kind of at this point where it’s very good on some things but not for everything. And because of that, even in the enterprise, while there’s a lot of things that we can do, edit this press release or write me this speech or whatever, but the reality is, oh, you’re probably still going to want to get in there and work on it. And I think we all know that. So look, we’re at an exciting moment in AI, and maybe we’re moving into this world where there’s going to be like these AI prophets and obviously, I’m a huge fan of Dario’s. He’s great, amazing person, incredible company, wonderful. But some of these comments, I think, are alarmist and get a little aggressive in the current form of AI today.
Sea Ltd (NYSE: SE)
Sea’s management thinks AI will help Sea’s business on the consumer-facing side and internal product improvement; on the consumer-facing side, Sea has used AI to improve search recommendations and advertising efficiency, help sellers create better product descriptions, and help sellers create videos based on images or descriptions of products; management measures the returns of Sea’s AI-related investments through click-through rates and conversion rates; management is seeing that most of Sea’s large AI-related investments on the consumer-facing side have delivered a positive return on investment (ROI); for internal product improvements, management is using AI to filter counterfeit products and detect fraud, among other areas; management measures the ROI of AI investments for internal product improvements through cost savings and most AI investments have positive ROI
For the AI investment, we believe that AI will make a big change to our industry, both from a consumer-facing side and also from our internal product improvement…
…One of the big improvement that we did is on our search recommendations and our ad. So we’re deploying AI solution to help us to target our user a lot more efficient when users search us and when people come to our app, so we can recommend more accurate products to them and also help us to have better efficiency on the ad product. That’s why we can improve the ad take rate over time. Another example is the AIGC production that we can help our seller to create for their product descriptions. We have been increasing the video coverage for our product description a lot over time, and part of that is driven by the — we are enabling the seller to create videos based on the images or based on some of the descriptions. And typically, for this investment, we always have a very clear ROI measurement for any of the investment, as I shared before, whether we are spending our AI resources on better our ads, we’re spending our AI resources on better the product descriptions, we measure the return on investment on this through our click-through rate, measure our investment through our conversion rate. And most of our investments so far, anything in meaningful size, has been positive return for any investment with AI resources…
…We are also investing quite a lot on improving our internal productivities, for example, that we’re using AI to help our internal listing team to filter the product in our marketplace a lot more efficient so we can discover the counterfeit, the fraud, et cetera, in a lot cheaper way. And again, those — for all those things we measure based on our AI investments versus the savings that we have typically bring a positive return.
Tencent (OTC: TCEHY)
Tencent’s management is seeing AI having tangible contributions to Tencent’s businesses, such as performance advertising and evergreen games
During the first quarter of 2025, our high-quality revenue streams sustained their solid growth trajectory. AI capabilities already contribute tangibly to business such as the performance advertising and evergreen games.
Tencent’s management has stepped up Tencent’s spending on AI opportunities, such as the Yuanbao app and AI in Weixin; management believes the operating leverage from Tencent’s existing revenues will absorb the costs associated with AI investments and contribute to the company’s growth; Tencent’s AI investments are in the form of both capital expenditures and operating expenses; some of Tencent’s AI investments are already generating revenue, such as through (1) improved advertising targeting, (2) improved content recommendation which increases user time-spent, (3) more time spent in games from usage of AI, and (4) cloud revenue from the deployment of GPUs, or graphics processing units; other AI investments will need more time to deliver a return on investment (ROI) and these investments will lead to lower-margin growth in the short-term compared to recent quarters
We also stepped up our spending on new AI opportunities such as Yuanbao application and AI in Weixin. We believe the operating leverage from our existing high-quality revenue streams will help absorb the additional costs associated with these AI-related investments and contribute to healthy financial performance during this investment phase. We expect this strategic AI investment will create value for users and society and generate substantial incremental returns for us over the longer term…
…As we have highlighted in the prior quarter earnings call, we are stepping up investments in AI in the form of capital expenditures as well as operating expenses. Some of these GPU and AI investments already generate revenue for us, such as improved ad targeting, which boosts ad revenue; improved content recommendation, which boosts user time spent and thus ad revenue; usage of AI within evergreen games, which boosts user engagement and thus game revenue; and deployment of GPUs and AI across our computing infrastructure, APIs and platform solutions, which generates cloud revenue.
For our other GPU and AI investments, which are more long cycle in nature, there’s a natural time lag between making the investments and those investments starting to generate significant revenue for us. During this time lag period, we expect the costs of those GPU and AI investments to offset our underlying operating leverage, resulting in a temporary smaller gap between our revenue and operating profit growth rate than we have achieved in recent quarters. That said, we’re confident that our stepped-up investment in longer-cycle AI projects will create substantial long-term value for our users, business and shareholders.
Tencent is in the early stages of rolling out AI features for Weixin, such as (1) Yuanbao (Tencent’s AI chatbot) within Weixin chat, (2) AI answers within Weixin Search, (3) AI tools for content creators for easier content production, and (4) an AI coding assistant to make it easier to create Mini Programs in Weixin
We’re in the early stages of rolling out AI features within Weixin. Users can now add Yuanbao as a Weixin context for seamless AI interaction within Weixin Chat, providing context-aware responses and facilitating content discovery while leveraging the Weixin ecosystem and the worldwide web. Weixin Search is now starting to include results powered by large language models, including the fast thinking model Hunyuan Turbo S, and the chain of thoughts reasoning models Hunyuan T1 and DeepSeek R1. We provide AI tools so that content creators can generate images matching the text of their official accounts articles and generate video effects for video accounts videos utilizing preset templates. We reduced the Mini Programs development time via an AI coding assistant for creating Mini Programs that supports natural language prompts and image inputs.
The Marketing Services segment’s revenue was up 20% year-on-year in 2025 Q1 because of higher user engagement and AI upgrades of the advertising platform; Marketing Services revenue grew across all major advertising categories; management has upgraded the Marketing Services segment’s advertising platform with enhanced generative AI capabilities to accelerate advertising creation and live-streaming content; management is using LLMs (large language models) to deliver better advertising recommendations
For Marketing Services, our revenue grew 20% year-on-year to RMB 32 billion, benefiting from higher user engagement, ongoing AI upgrades to our ad platform and a strengthening transaction ecosystem within Weixin…
…On the ad tech front, we upgraded our advertising platform with enhanced generative AI capabilities such as ad generation and video editing tools to accelerate ad creation and digital human solutions to facilitate live streaming activities for content creators and merchants. We’re using large language models to deepen our systems, understanding of merchandise and of user interests across our apps and so deliver better ad recommendations.
AI-related revenue within Tencent Cloud grew quickly in 2025 Q1, driven by demand for GPUs (graphics processing units), APIs (application programming interfaces), and platform solutions; Tencent Cloud’s growth was constrained by GPU availability
AI-related revenue within Tencent Cloud grew quickly year-on-year, driven by increased customer demand for GPUs, APIs and platform solutions, although constrained by limited GPU availability.
Tencent’s management thinks there’s room for both a general AI agent, and a Weixin-specific AI agent that sits within the Weixin ecosystem; management believes that as Tencent’s AI chatbots Yuanbao and iMA improve and evolve over time, they can answer questions better and be able to interact with other apps and external APIs (application programming interfaces); management thinks that Yuanbao and iMA are similar to AI agents developed by peers; management believes that Tencent can create a unique AI agent that connects with users within Weixin’s ecosystem
So on Agentic AI, it’s a very hot concept, right? And the idea is actually, oh, the AI can actually help you to complete a very complicated tasks that involve many different steps as well as the use of tools and maybe in connection with other apps. So if we look at that concept, then there is a general Agentic AI, which everybody can do. Essentially, you create this agent and you go out to the world and try to complete tasks for your user. But at the same time, there’s also an Agentic AI that can sit within Weixin and the unique ecosystem of Weixin. And I think those are two different products…
…I think we are creating that capability within some of our AI native products such as Yuanbao and iMA, over time, as these AIs continue to evolve to increase in terms of their capability. So in the very beginning, these AIs actually answer questions very quickly. So those are the sort of quick response. And then over time, they include — they start including the chain of thoughts, a long thinking reasoning model and you can answer complicated questions. And over time, the capability can actually allow them to start doing more complicated tasks. So they start evolving to have Agentic capability, and they will be interacting with all other apps and programs and external APIs to help the users. So that would continue to evolve. And it’s not that much different from other Agentic AIs provided by our peers.
But on the other hand, right, within the Weixin ecosystem, I think there is the opportunity for us to create a pretty unique Agentic AI that connects with the unique components of the Weixin ecosystem, including the social graph, including the communications and community capability, including the content ecosystem, such as our Official Accounts and Video Accounts and all the millions of Mini Programs that exist within Weixin, which actually sort of gets into all kinds of information as well as transactional and operative capabilities across many different verticals of applications. So I think that would be extremely unique compared to other more general Agentic AIs, and that’s sort of a very differentiated product for us.
Tencent’s management thinks AI business models include (1) increasing advertising revenue through AI targeting, and (2) GPU rentals; management sees GPU rentals as a low priority form of business; management thinks the subscription model for AI services will not be an important business model within China
In terms of your question on AI business models, I think if you look at advertising, it’s directly augmented by AI because AI can actually help to improve the targeting capability of our ads. And when we deliver better results, then it translates directly into additional advertising revenue. And I think that is a big opportunity that we are already realizing in our performance ads, but there’s more opportunity to develop over time. Now I think transaction is actually very closely tied to advertising, right? When you have advertising that leads to direct transactions and then advertising value actually goes up significantly. And I think that’s the way we are actually also trying to increase our advertising revenue. That’s another component and pillar of our advertising revenue growth driver.
GPU rental is sort of directly related to cloud business, and that’s more like a reselling business mostly. And to a large extent, right now, we are putting it on a lower priority because — especially when there’s a short supply of GPUs, right, then GPU rental is a lower priority for us.
And subscriptions, I think it’s not the most likely business model for AIs in China, right? Now everybody is actually providing AIs for free. So the subscription model, which exists outside of China, I think it’s not going to be mainstream business model for AI in China.
Tencent’s management sees long runway for growth in both the Domestic and International Games businesses; one driver for growth is the use of AI, in ways such as deploying an AI coach for new players, and to help prevent cheating
We do believe we have a long runway for our domestic and indeed international game revenue growth looking forward. And there’s many reasons, but just to pick on three for now. First of all, we talked extensively this time last year about some of the changes we were making to how we envisage and therefore, how we operate and therefore, who operates our biggest Domestic Games. And you can see that we have made those changes and they’re bearing the fruit that we hoped they would bear and we see them bearing more fruit going forward. A second driver or enabler of that long runway, is the utilization of AI, which we think is particularly beneficial to the big competitive multiplayer games that we’ve talked about extensively and that represent the majority of our domestic game revenue. And that’s the case because while there’s many ways that we can and we’re starting to deploy AI within games some of the most interesting include using AI to help coach new players, to help accompany existing players, to help prevent cheating and hacking and so forth. And all of those are particularly important within competitive multiplayer games.
Tencent’s management is seeing users of Tencent’s new AI services use them for asking questions, following up with more questions, and analysing photos
At this stage, I think we’re trying to create functionalities and user experiences that would leverage AI and try to see what may or may not stick with the users. So as I said, right, the users sort of like to ask questions, like to interact with the AI with further follow-up questions. And when we put in various functionalities such as allowing photos to be analyzed and sort of people use it. So there are a lot of functionalities, which right now we have put in, and we’re starting to get to see people like them a lot or are not using it that much
The NVIDIA H20 chip was banned in 2025 Q1 and there are now new BIS guidelines (effectively new chip controls), but Tencent has a good stockpile of AI chips; management will use the stockpile of AI chips to generate immediate returns and also train Tencent’s models; management believes that Tencent can achieve very good training results even with small chip-clusters, so the company’s current stockpile of chips will be sufficient to train models for a few more generations; management thinks that the concept of scaling laws embraced by American technology companies, where AI models need to be trained on ever-larger chip-clusters, is outdated; management sees Tencent having a larger need for GPUs around inference, especially if the company moves toward agentic AI; to improve inference efficiency and reduce GPU-reliance, management thinks Tencent can leverage software optimisations, customise AI models depending on the use cases, and use other chips (such as ASICs, or application-specific integrated circuits) that are available in China
On the GPU front, it’s actually a very dynamic situation, right? So there — since the last earnings call, we have seen an H20 ban. And then after that, there was the BIS new guidelines that just came in overnight. So it’s a very dynamic situation, and we just sort of have to manage the situation, on one end, sort of in a completely compliant way, and on the other end, sort of we try to figure out the right solution for us to make sure that our AI strategy can still be executed. So the good thing that we are in is that, number one, I think we have a pretty strong stockpile of chips that we acquired previously, and that would be very useful for us in executing our AI strategy. And if you look at the allocation of the usage of these chips, obviously, they will be used for the applications that will generate immediate return for us. So for example, in the advertising business as well as content recommendation product, right? We actually would be using a lot of these GPUs to generate results and generate return for us. Secondly, in terms of the training of our large language models, they will be of the next priority. And the training actually requires higher-end chips. And the good thing on that front is that over the past few months, right, we start to move off the concept or the belief of the American tech companies, which they call the scaling law, which require continuous expansion of the training cluster. And now we can see even with a smaller cluster, you can actually achieve very good training results. And there’s a lot of potential that we can get on the post-training side, which do not necessarily meet very large clusters. So that actually sort of help us to look at our existing inventory of high-end chips and say, we should have enough high-end chips to continue our training of models for a few more generations going forward.
And then the larger need for GPUs are actually sort of around inferences and especially sort of when you see a growth in demand for inference on the user side as well as when we move into the chain of thoughts reasoning model, it actually requires many more tokens to answer a complicated question. And if we move into Agentic AI, right, it requires even more tokens, there’s actually a lot of need on the inference side. But on the inference side, there’s actually a lot of work that could be done for us to manage the need.
One is just sort of leveraging software optimization. I think there’s still quite a bit of room for us to keep on improving the inference efficiency, right? So if you can improve inference efficiency 2x, then basically, that means the amount of GPUs get doubled in terms of capacity. So that’s actually a very good way of investing our resources to improve on the inference efficiency. And the other approach is we can customize different sizes of models, especially some applications do not require very large models, right, and we can tailor-made models and distill models so that they can be used for different use cases, and that can actually save on the inference usage of GPUs. And finally, we actually sort of can potentially make use of other chips, compliant chips available in China or available for us to be imported as well as ASICs and GPUs in some cases for smaller models inferences. So I think there are a lot of ways to which we can fulfill the expanding and growing inference needs, and we just need to sort of keep exploring these venues and spend probably more time on the software side rather than just force buying GPUs.
Tencent’s management is unsure of when some of Tencent’s investments in AI will pay off because they see the whole world as being in uncharted territory when it comes to AI investments, but Tencent has historically experienced a pay-off within a 1-2 year timeframe for investments in new areas; management expects a narrowing of the difference in growth rate between Tencent’s revenue growth and operating profit growth, but operating leverage is still expected
[Question] You guys mentioned earlier in your opening remarks, smaller gap between revenue and operating profit. Can you kind of elaborate a bit more on this, the magnitude and what kind of extensive period that we’re talking about?
[Answer] We’re at uncharted territory, not only for Tencent, but for the whole world in terms of the deployment of artificial intelligence. So I don’t have necessarily a very high degree of confidence in these statements. But if you’re thinking about measuring the duration, then the past may be the best guide to the future in that Tencent has been through many time periods where we have cultivated a new product toward critical mass and substantial popularity ahead of monetizing that product. And typically, the duration of those gaps between investment to cultivate versus monetization and revenue generation would be in the sort of 1-year to 2-year time range. So obviously, it will depend on what our peer companies do in China, obviously, will depend on consumer habits, on advertiser habits. But I think that’s a reasonable time frame to think about. In terms of magnitude, I won’t go beyond what we said earlier, which is referring to a narrowing. So we don’t expect the delta between revenue growth and operating profit growth that we experienced this quarter to continue. There will be a narrowing. But on the other hand, we don’t expect our operating leverage to turn negative either.
Tencent’s AI investments are mostly in the form of capital expenditures, but there’s also incremental marketing expenses for Yuanbao and salaries for AI engineers
In terms of what costs other than CapEx or really depreciation could cause that narrowing, then CapEx depreciation is, by far, the most important. We do have some incremental marketing expenses for Yuanbao, although not so much for AI within Weixin. And then we referenced the fact that engineers with expertise in AI are expensive, but that’s more of a sort of mix comment rather than an aggregate headcount comment. We don’t see a step-up in headcount. We’ll continue to manage headcount closely, but we observe that engineers with that AI expertise are rightly well paid.
Historically, Tencent’s banner ads had 0.1% click-through rates while feed ads had 1%, but with AI, management has seen that certain ad inventories have reached a 3% click-through rate; management thinks no one knows the upper limit of AI-powered advertising click-through rates; AI can benefit Tencent’s advertising revenue by showing more appealing content to consumers to increase their time-spent, but the increase in click-through rates is still the most important improvement from AI
A big part of the uplift that AI is providing to advertising revenue today can be quantified in the form of the click-through rate on ads. And historically, banner ads achieved a roughly 0.1% click-through rate. Feed ads achieved roughly 1.0% click-through rate. With the benefit of AI, we have seen that the click-through rate on certain ad inventories can improve toward 3.0%, for example. And then the question is, what’s the upper limit on that click-through rate. And at this point, no one knows the answer because it almost becomes philosophical if you had complete information or insight into a consumer if you had the ability to infer what the consumer wants or the consumer given their prior behaviors should want and then deliver an ultra targeted ad to that consumer, then it’s very hard to say that the upper limit should be X percent rather than Y percent…
…We can use AI to target more appealing content to the consumer, which means they spend more time in the feed, which means they then view more ads, but I think that ad click-through rate is perhaps the most important.
Veeva Systems (NYSE: VEEV)
Veeva’s management announced the Veeva AI initiative in April 2025; Veeva AI will see the company build AI into its applications across clinical and commercial; management thinks the addition of AI will significantly improve productivity for customers; Veeva AI agents will have application-specific context and direct access to Veeva data; the first release of Veeva AI will happen in December 2025; the first 2 of Veeva’s AI Agent solutions, CRM Bot and MLR Bot, are planned for December 2025; Veeva AI is part of Veeva’s overall AI strategy, which includes the Veeva Direct Data API and the Veeva AI Partner Program; management sees the Vault CRM product as a fast path to AI productivity for many customers; management thinks Veeva AI can improve the efficiency of the life sciences industry by 15% in the next few years; management thinks Veeva AI’s efficiency gains will manifest because the AI technology will be deeply embedded in core applications; early reception from customers to Veeva AI has been very positive; management will charge an appropriate license fee for Veeva AI that balances revenue growth for the company, and broad customer adoption; customer response to a demo of the CRM Bot was great
Announced in April, Veeva AI is a major initiative for us with a clear vision that’s focused on delivering tangible value. We’re building AI into Vault Platform and Veeva applications across all major areas from clinical to commercial. Adding AI – through AI Agents and AI Shortcuts – to our core applications can significantly improve productivity for customers and the industry. Veeva AI Agents have application-specific context and direct, secure access to Veeva application data, documents, and workflows. AI Shortcuts enable end users to set up personal AI-powered automations for their most frequent user-specific tasks. The first release of Veeva AI is planned for December 2025. Our first two AI Agent solutions, CRM Bot and MLR Bot in commercial, are also planned for year end. Veeva AI is part of our overall AI strategy which also includes the Veeva Direct Data API and the Veeva AI Partner Program, which are both available and operating well today…
…We showed our AI Agents – CRM Bot, Voice Control, Compliant Free Text, and MLR Bot. Vault CRM will be a fast path to highly productive AI for many of our customers…
…I think if you look over the next 3, 4, 5 years out to 2030, I think Veeva can help increase life sciences efficiency by 15% or so with Veeva AI…
…Why I’m so bullish on it? Because Veeva has the core applications, and we’re building the AI very deeply embedded in the core applications. So when we build AI, we’re not building a generic AI. We’re building a medical legal regulatory approval agent, a CRM agent that does pre-call planning, a safety AI agent that can transcribe pretext into a safety case, so deep AI applications. And you need the deep core applications and the AI working together. And that’s where the magic will happen. It’s just very, very, very clear to me…
…[Question] Understanding it’s Veeva AI just recently kind of rolled out. Any initial feedback from customers and how you think in the coming years, how it might impact your overall business?
[Answer] The reception from customers is very positive because it just makes sense. It’s not a lot of hype. They need the AI working with the core applications…
…I think Veeva AI is something that we will charge an appropriate license fee for. So I think it will be a net positive for Veeva. We don’t have that packaging worked out yet. We do want to price it so that it can be very reasonable and broadly adopted, help the industry move forward. And yes, that certainly help our revenue all the time…
…When I showed the demo of Veeva AI, one example of what Veeva AI can do with CRM Bot, you can just see the aha moment go with the customers because what they want is they want AI to help them with the engagement planning, right, do all that work. And then all the data entry afterwards, do that work so they can focus on the engagement in their field. And you just see the light bulbs going on.
Veeva’s management sees the core AI technology as settling down a little; management thinks it’s clear that AI is a new computing paradigm that can produce new kinds of automation; management thinks that core applications will still be very relevant despite the automation that AI can deliver
I think we — the core technology has settled down a little bit in terms of large language models, what’s going on there. And then it’s very clear that this AI is a new computing paradigm. It’s something that can automate certain things that humans can do, which basic software, traditional software couldn’t do that. It doesn’t work like a human. This is nondeterministic computing. It can automate some things that a human can do, but it doesn’t obviate the need for a core application.
Veeva’s management thinks that AI can deliver the biggest positive productivity impacts in the pharma industry within the sales function
Where is AI likely to be and across sales, marketing or service. Yes, it’s a good question. Overall, the way to think about the pharma industry is the human relationships, the sales organizations, the spend on the sales force is very significant and very meaningful. And if you can provide productivity gains and effectiveness gains for the field team, you have a very significant impact. So I think there’s a — we’re seeing a lot of focus in the sales side, which is why one of our first agents will be in the core CRM space, the CRM bot. We think we can make customers significantly more productive from a field team perspective. So it’s not to say that we’re not seeing investment in other areas, certainly, customer service. Case intake as an example. There’s a lot of examples on the marketing side. But I would prioritize sales higher given the size, the importance, the relationships and the potential impact for it to have.
Veeva’s management thinks the pharma industry has problems with fragmented data (which hampers the use of AI), and has not been able to produce deep industry-specific AI yet; management thinks Veeva can help with both problem areas
The industry still has fragmented data and getting the data to work together, getting the data into the software so you can make decisions and you can get insights fast about that. That is still a challenge for the industry. It’s not a solved problem yet…
…We talk a lot about the excitement around AI, but there’s also a lot of unsolved problems in the AI space. And part of that is bringing together or the industry hasn’t yet been able to bring together very industry-specific processes with deep industry-specific AI. That’s a problem that they’ve made investments. They haven’t often seen the full return on their investment in some of the AI projects. And I think that’s another area where they’re excited about our ability to help them over time.
Wix (NASDAQ: WIX)
Wix’s management recently introduced a new AI-powered product, Wixel, which is a stand-alone visual design platform for things other than websites; Wixel constantly chooses and optimizes the best AI model for each task behind the scenes, which is a unique feature among similar offerings; Wixel helps make image and video editing more accessible; Wix has partnered with Microsoft to integrate Wixel’s capabilities into Microsoft Copilot; it’s still very early days for Wixel and management expects Wixel to evolve meaningfully throughout 2025; management is currently treating Wixel as a separate subscription, priced at $79 per year for now, with pricing still being tested; management believes legacy players will find it hard to change their user interface and experience because they already have a big customer base, creating differentiation for Wixel
Earlier this month, we introduced Wixel, our new stand-alone visual design platform that extends Wix’s vast design expertise beyond websites for the first time. Wixel marks the beginning of our next-generation approach to visual design, combining Wix’s intuitive creation tools and user-friendly interface with the power of generative AI. This platform combines the best AI models on the market today, tailored for specific image needs, including object and background editing and much more, with a constant pipeline of new AI enhancements. This makes Wixel unique from everything else available on the market. It handles the complexity of today’s high-end AI technology behind the scenes, choosing and continuously optimizing the best models for each task. This allows our users to always have access to the most advanced and up-to-date tools for image generation and editing…
…Our goal is to give total control over photo and video editing to everyone, the same way we did for website creation. Wixel is for Wix users, for entrepreneurs, freelancers and business owners who already rely on Wix to build and grow online. It’s for the millions of DeviantArt artists who want to add an easy-to-use yet powerful editing tool to their toolkit, without sacrificing the quality of their art…
… Excitingly, we partnered with Microsoft to integrate Wixel’s capabilities into Microsoft Copilot. This collaboration allows Microsoft 365 users, small business owners, students and everyday creators, to design in a smarter, more intuitive way with Wixel. Though this launch is a cornerstone of our product road map, we are still very early in the journey with plenty of work ahead in order to achieve our vision for Wixel. In the coming year, you can expect the platform to evolve meaningfully with breakthrough capabilities. As we continue to innovate, I’m excited to see how Wixel reshapes the digital creation space…
…[Question] Some thoughts on pricing, how you landed at $79 a year, and how you’re trying to strike the balance between monetization and adoption?…
…When it comes to Wixel, we don’t try to build another drag-and-drop editing environment, which I think all the tools that you’re referring to are. What we’re trying to do is really ask: if you think about 5 years from today, how you could edit image content with AI, what would that look like? And we’re trying to build that into Wixel. So I think the way that the tool itself behaves is very different from the traditional editing environment. Now, I’m not saying that they cannot do that. I’m sure they can; there are a lot of smart people there. I’m just saying that if you try to rebuild your tools around this thinking about how the universe will look in 5 years, or how AI will look in 5 years, you’ll find that you have to change a lot of the user interface, a lot of the experience, a lot of the underlying technologies in those existing tools, which I believe is a bit of a challenge when you have a lot of users.
Wix’s management recently launched Astro, a new AI assistant embedded within the Wix dashboard; management expects Astro to improve user engagement, product upgrades, and reduce churn; management plans to launch more AI agents
We also introduced Astro, our new AI assistant embedded within the Wix dashboard. Astro simplifies the user journey by guiding users, surfacing relevant tools and insights and helping them complete key tasks. We expect Astro to improve user engagement, boost package upgrades and reduce churn over the long term. And it’s only the first in a series of AI agents we plan to roll out.
Wix’s management recently launched new AI-powered tools for website automations and customisations; the tools include (1) the creation of dynamic content based on site-visitor characteristics, (2) no-code interface for users to drive business outcomes, and (3) automating advanced business workflows
Additionally, we launched new AI-powered tools for website automations and real-time site customization, including adaptive content application, Wix Functions and Wix Automations. These features are designed to make our platform smarter and more efficient while delivering highly personalized experiences to site visitors…
…This suite includes:
- Adaptive content application: a tool designed to personalize website experiences for site visitors by generating dynamic content based on visitor characteristics and instructions, ultimately enhancing engagement and user experience
- Wix Functions: a no-code interface that allows users to customize outcomes for various business scenarios, enabling businesses to operate more smoothly and effectively
- Wix Automations: a builder designed to support advanced business workflows with a highly intuitive, fully customizable automation engine
These tools help businesses effortlessly optimize their operations for enhanced efficiency, while ensuring a seamless visitor experience without performance drawbacks like increased load times.
Wix’s management recently launched Wix Model Context Protocol or MCP Server, an infrastructure advancement that lets users leverage natural-language prompts to connect Wix’s business functionality with their preferred AI tools; management demonstrated at a recent conference how Wix MCP Server can be used to generate code for fully functional payment solutions
Finally, we rolled out the Wix Model Context Protocol, or MCP, Server, a key infrastructure advancement that allows users to leverage natural-language prompts to seamlessly connect Wix’s comprehensive business functionality with their preferred compatible AI-powered tools. The Wix MCP Server enables AI-driven app development for users to build custom experiences on top of Wix or manage their Wix-based business using natural language and AI coding assistance. In a use case presented at Stripe’s recent conference, our team demonstrated how to use LLMs to generate reliable code for fully functional payment solutions. They built a complete website that accepts online payments via credit cards, Apple Pay and Google Pay through Wix Payments and Stripe.
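For readers curious what it means for an AI tool to “connect to business functionality” through MCP: the protocol is built on JSON-RPC 2.0, and a tool invocation is just a structured message sent from the AI client to the server. The sketch below constructs such a message using only Python’s standard library. The tool name and arguments are hypothetical illustrations, not Wix’s actual MCP API.

```python
import json

def build_mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request, the message shape MCP
    clients use to invoke a tool exposed by an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool and arguments, for illustration only:
msg = build_mcp_tool_call(1, "create-payment-link", {"amount": 2500, "currency": "usd"})
print(msg)
```

The point of the standardised shape is that an LLM agent only needs to learn one calling convention; the server advertises its available tools (via a `tools/list` request in the same protocol), and the agent can then drive any of them with messages like the one above.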
Wix’s management thinks that agencies, which are the customers of Wix’s Partners business, still have a big role to play today and in the future even as AI agents proliferate; management thinks AI agents today still need to evolve significantly in order to achieve big goals; management sees agencies picking up AI technologies faster than consumers and small businesses
Well, in theory, right, in theory, if we look at the far future, then why would you need an agency, right? Because in theory, you can just tell the AI, hey, build this website for me, change those things, now make it successful. But practically, we’re not there yet. I think there’s a big distance those AI agents have to cover in order to be able to help you actually achieve all of those goals. Even when we are trying to build these exact agents to do each one of those, there’s still a lot of human interaction and, I think, a lot of expertise that the human can bring to help it. So I think there is a lot of room for agencies even in the years to come. Currently, when we look at the AI data, I would say that agencies probably pick up technologies faster than consumers and small businesses. So we’ve actually kind of given them a bit of a shift in terms of what they can do.
Wix’s management is optimistic about vibe coding but it’s still a young technology that produces code that tends to break over time; management thinks vibe coding will help to expand Wix’s market reach
I think vibe coding is a super exciting concept. It’s still very early, and so things tend to break. After a while, they’re not stable. They’re not good at SEO, or search engine optimization. There are a lot of things that need to mature in order for it to be a viable product for our customers. The simplest one is that if you edit something, it takes 4 minutes for any small change to happen, right? In the best-case scenario, it’s 4 minutes. So moving a button will take you a few minutes. So there’s a lot of super exciting potential in vibe coding…
…We’re going to start by — with, of course, a few things including the ability to code components into the Wix Editor, which is one of the obvious things that we’re going to be doing. I do think that this will allow us and companies like us to expand our market reach because things that you could not have done traditionally on website building platforms, right, now you’ll be able to do because you are able to write this custom-code without coding. So I do — I’m very optimistic. I think it’s going to present to us a lot of really interesting opportunities. But I want to emphasize again, it’s really a young technology, it’s still not stable.
Wix’s management thinks websites will be structurally different in the AI age; management is using Google less when searching for information; management believes that the complexity of building websites in the age of AI will increase, which will benefit website builders such as Wix
[Question] Help us expand our mind, so to speak, on whether websites somehow kind of need to be like structurally different in the AI era, particularly from like a utility and discoverability perspective, and kind of how do you position for that?
[Answer] I do believe that there is a big change coming. I know that, for myself, I’m using ChatGPT more than Google when I search for things now. ChatGPT digests a lot of content from the Internet and tries to give you this distilled version, and there are advantages and there are disadvantages, right?…
…LLMs working today by just scrolling the Internet is, of course, not good enough. It’s not going to provide you any knowledge about whether my hairdresser will have an appointment in 2 days, right? And so we’re starting to see the first layer of protocols. Microsoft just announced one; Anthropic announced MCP, which is a way for an LLM to query complicated services in a way that the agent knows how to learn and how to ask an API. We just announced that we support it, and everything that we have is now available for MCP…
…I do also believe that, in many ways, that will help platforms like Wix, because the complexity of building a website that knows how to offer its services via APIs and MCP to LLMs, and how to do the equivalent of SEO for LLMs, is just going to make building a website 10x harder, right? So if today you can take somebody who knows how to write HTML and CSS and, in theory, build a decent website, then in a year, that will be impossible. I think about the complexity that will be created by those tools, and the speed of innovation, right? MCP was announced 1.5 months ago, already released to [indiscernible] I think about 1.5 months ago. And so the complexity and the need to support and to accelerate, I think that is something that will actually help all the website and content-building platforms, because it’s going to be much harder to do it with your own internal team.
Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet (parent of Google), Meituan, Meta Platforms, Microsoft, MongoDB, Okta, Salesforce, Sea Ltd, Tencent, Veeva Systems, and Wix. Holdings are subject to change at any time.