Last month, I published The Latest Thoughts From American Technology Companies On AI (2025 Q2). In it, I shared commentary from the second-quarter 2025 earnings conference calls of technology companies that I follow or have a vested interest in, focusing on what their leaders said about AI and how the technology could impact their industries and the business world writ large.
A few more technology companies I’m watching hosted earnings conference calls for 2025’s second quarter after I prepared that article. The leaders of these companies also had insights on AI that I think are useful to share. This is an ongoing series; for the older commentary, see the earlier articles in the series.
Here they are, in no particular order:
Adobe (NASDAQ: ADBE)
Adobe’s management is infusing AI across Adobe’s flagship Creative Cloud applications; there is strong adoption of the Creative Cloud Pro offering, which includes Firefly; recent new AI features in Creative Cloud applications include (1) Harmonize in Photoshop that blends composited objects with the image, and (2) Project Turntable, which rotates 2D artwork to accurately visualize different angles; Creative Cloud had strong new user acquisition, particularly in emerging markets; management will soon be unveiling new AI innovations within Adobe’s Creative Cloud applications
We’re infusing AI across our flagship Creative Cloud applications including Photoshop, Illustrator, Premiere Pro and After Effects and delivering new offerings for next generation creators with Adobe Firefly across web and mobile…
…We’re seeing strong adoption of the Creative Cloud Pro offering which includes Firefly, reflecting the value professionals see in having AI integrated with power and precision creative tools…
… Recent examples include the addition of a new Harmonize feature in Photoshop that blends composited objects with the image by automatically adjusting lighting, colors and shadows. Harmonize has quickly become one of the most used features in Photoshop. We released Project Turntable, a popular sneak from MAX last year, into Illustrator, which helps users rotate their 2D artwork to accurately visualize different angles, eliminating a frequent and time-consuming task. Innovations like these directly translate into measurable value for customers by cutting production times, enabling more content output, and raising the overall quality of creative work and have driven strong migration to our new Creative Cloud Pro offer…
…Continued new user acquisition of Creative Cloud with particular strength in emerging markets like India which grew ending units 50 percent year over year…
…We’re excited to welcome our community at Adobe MAX next month. We’ll showcase incredible innovations that highlight amazing productivity features in our flagship Creative Cloud applications, breakthrough AI capabilities leveraging Firefly and third-party models, new agentic experiences for conversational editing, and significant strides in content production automation for enterprises.
Adobe’s management is making the new Firefly application the single destination for creators’ workflows; the Firefly application includes Adobe’s own AI models as well as 3rd-party models; there is strong adoption of the standalone Firefly subscription; the 3rd-party models in Firefly include Google’s Gemini, Veo, and Imagen models, along with models from OpenAI and more; new capabilities for Firefly that were added in 2025 Q2 (FY2025 Q3) include avatar generation and sound effects generation; Firefly Services are agentic services that use custom models to automate and personalize image, video, and 3D content for many types of use cases; management recently delivered a no-code interface for Firefly Services; usage of Firefly Services and Custom Models grew 32% and 68% sequentially in 2025 Q2 (FY2025 Q3); the Firefly App for mobile has been downloaded millions of times since launch; the Firefly App’s MAU (monthly active users) was up 30% sequentially in 2025 Q2 (FY2025 Q3); first-time subscribers to Adobe from the Firefly app were up 20% sequentially in 2025 Q2 (FY2025 Q3); Firefly has powered 29 billion generations (24 billion in 2025 Q1) since its launch in March 2023, with video generations up 40% sequentially in 2025 Q2 (FY2025 Q3); Nano Banana from Google was integrated with Firefly on the day it was released, and the integration of Nano Banana led to a better product than the standalone Nano Banana; management sees the real strength of Adobe in the company’s ability to deeply integrate 3rd-party generative AI models into the workflows of the company’s existing applications; the integration of 3rd-party models into Adobe’s applications is not a trivial project; the majority of AI credit usage in Adobe goes to the company’s Firefly models, but 3rd-party models are seeing a nice uptick in usage, and management is happy with the current mix
We’re delivering an end-to-end, ideation-to-creation solution in the new Firefly application to make it the single destination for creators’ workflows. It includes our own first-party, commercially safe models and leading third-party models. We are seeing strong adoption of the standalone Firefly subscription offering. We recently added Google Gemini Flash 2.5 alongside Google’s Veo and Imagen models to the roster of partner models from OpenAI, Black Forest Labs, Runway, Pika, Ideogram and others. In the rapidly evolving AI landscape, where each generative AI model has its own aesthetic style, we’re offering customers choice and flexibility to use the right model within Adobe applications, without the friction of switching between workflows and platforms…
…The Firefly app is a powerful, yet accessible AI production studio that helps creators deliver original content faster than ever before. In Q3, we added a slew of new capabilities, including avatar generation, sound effects generation and updates to the growing list of integrated generative models…
…We are delivering incredibly powerful automated content production capabilities through Firefly Services to enterprises of all sizes and across all verticals. These agentic services leverage Custom Models to automate and personalize image, video and 3D content for marketing campaigns, ad creation and postproduction video work, all while maintaining brand consistency. Additionally, we delivered a no-code interface that extends the power of Firefly Services to studio and design teams. Firefly Services are available through GenStudio as well as to individuals through Firefly app subscriptions. Consumption of Firefly Services and Custom Models grew 32 percent and 68 percent quarter over quarter, respectively…
… Millions of downloads of the Firefly App for mobile since launch; Firefly app MAU grew 30 percent quarter over quarter; Firefly app continues to attract next gen creators, with first time Adobe subscribers through the app growing 20 percent quarter over quarter; Generative AI consumption accelerated, with 29 billion generations, and video generations growing nearly 40 percent quarter over quarter…
…[Question] on sort of the demo that you guys gave on that video at the beginning. Really, again, highlighting the Adobe magic with kind of what you’re doing with Nano Banana, and — being able to manipulate images like that.
…Regarding choice, we want to make sure that all third-party models are available, you saw our announcement with Google and Nano Banana, OpenAI, Flux, Runway, Luma, Ideogram, the list continues to grow. And you call out the example of Nano Banana. We actually launched Nano Banana in the first — on the day that it was released as part of the Firefly application, and now we’re integrating it into Creative Cloud Pro. So the core of the choice of whatever model has the most interesting thing for the thing you want to do, you know you can turn to Adobe, and it will be there.
The second part is the integration as you talked about, right? We have a lot of workflows that we have — that we pulled into the model. You noticed that in the demo you saw, and all the demos that are out there people are using Nano Banana with Photoshop. They’re doing it in a way that they’re blending the precision and the control you get with Photoshop and combining it with the generative capabilities of Nano Banana…
… The magic is clearly in our applications because we can take all of the models that exist and integrate that within our interface. And that’s a completely nontrivial task of what we have done to build. That was actually the rationale for building Firefly because we understand whether they’re diffusion or transformer models better than I think anybody can in the Creative Application. So I wouldn’t underestimate the amount of magic that we have to make it look as seamless as it has…
…[Question] On the mix of AI credit usage between your own Firefly-based solutions and third party, whether you’re seeing any pickup from the third-party models and how users are responding?
[Answer] The majority of generation continues to be Firefly given the commercial safety and the underpinnings of what that is. But we are seeing a nice uptick in usage of the other models. Especially for things like ideation and sort of edit capabilities that are integrated into Firefly. So that mix feels right to us, and we’re going to continue to optimize and drive that discovery in our applications going forward.
Adobe’s management sees Adobe GenStudio as the most comprehensive solution for AI-driven marketing automation; Adobe GenStudio now exceeds $1 billion in ARR, growing 25% year-on-year; there is accelerating adoption and usage of Adobe GenStudio; new capabilities in Adobe GenStudio for Performance Marketing are accelerating video and display ad campaign creation; marketers can produce engaging short-form video ads in Adobe GenStudio with commercially-safe Firefly models; management recently released new capabilities for display ad campaigns for Adobe GenStudio, including on-brand image generation with Firefly
Adobe GenStudio is the most comprehensive solution that brings together workflow and planning, creation and production, asset management, activation and delivery and reporting and insights to enable marketing automation with AI in the enterprise. Our Workfront, Frame, AEM Assets, Firefly Services, and GenStudio for Performance Marketing products – which are key components of the integrated GenStudio solution – now exceed $1 billion in ARR growing over 25 percent year over year…
…We’re seeing accelerating adoption and usage of Adobe GenStudio, the most comprehensive content supply chain solution, as enterprises drive content velocity with AI. New capabilities in Adobe GenStudio for Performance Marketing are accelerating video and display ad campaign creation. Marketers will be able to produce engaging short-form video ads using the commercially safe Firefly Video Model. We released new capabilities for display ad campaigns, including on-brand image generation with Firefly, as well as offerings with Amazon Ads, Google Campaign Manager 360, LinkedIn and Meta to power seamless campaign workflows.
70% of eligible AEP (Adobe Experience Platform) customers are using AEP AI Assistant; management sees AI becoming the new UI (user interface) for brand discovery by consumers; management thinks brands must deliver hyperpersonalized, immersive experiences on owned channels to drive engagement and loyalty, and this is where Adobe shines; management sees new marketing needs such as LLM (large language model) optimization and LLM advertising as massive opportunities for Adobe; management is infusing agentic capabilities into Adobe Experience Manager; management saw LLM traffic grow 4,700% year-on-year in July 2025; management thinks Adobe has AI-first and AI-infused solutions that can orchestrate the customer experience in the era of agentic AI; AEP has agentic capabilities and management launched the 1st phase of the AEP Agent Orchestrator in 2025 Q2 (FY2025 Q3), so that users can build, manage and orchestrate AI agents from Adobe and 3rd parties; Adobe LLM Optimizer is currently available in early access and will be generally available later in 2025 Q3 (FY2025 Q4); Adobe LLM Optimizer helps brands shape how they show up in LLM results; subscription revenue for AEP and native apps was up 40% year-on-year in 2025 Q2 (FY2025 Q3)
Customers are leveraging the rich data and customer knowledge in Adobe Experience Platform to enable agentic workflows to scale the capabilities of Adobe’s category-leading customer experience orchestration applications. We’re seeing continued adoption and momentum for Adobe Experience Platform (AEP) AI Assistant with 70 percent of eligible AEP customers leveraging this functionality.
As AI transforms consumer behavior, it’s reinventing marketing and customer experience. Brand discovery is shifting from primarily search to include generative engine optimization. AI becomes the new UI, guided by conversations rather than menu clicks. Brands must deliver hyperpersonalized, immersive experiences on owned channels to drive engagement and loyalty. In this new reality, Adobe uniquely offers an integrated customer experience platform that delivers automation, agility and scale.
The explosion of content creation and automation in the enterprise and the beginning of new marketing needs such as LLM optimization and LLM advertising are a massive opportunity for Adobe. We’re infusing AI into Adobe Experience Manager with our upcoming LLM Optimizer release, a powerful agentic app to improve brand visibility, drive acquisition and maintain engagement with customers across LLM platforms…
…Our most recent Adobe Digital Index data, which is based on online transactions across over 1 trillion visits to U.S. retail sites, shows that LLM traffic grew 4,700 percent year over year in July 2025…
…Our AI-first and AI-infused solutions spanning GenStudio for content supply chain; AEP and Apps for customer engagement and loyalty; and Adobe Experience Manager and LLM Optimizer for brand visibility and discovery, enable us to power customer experience orchestration in the era of agentic AI…
…We are innovating on our leading AEP marketing and customer experience platform with built-in agentic functionality, empowering marketers to deliver digital experiences with greater agility and efficiency. Our intelligent agents understand intent, reason and recommend actions to drive outcomes across content, data, and journeys. Purpose-built agents are embedded in our core apps and new AI-first applications, helping brands unlock greater efficiency and precision, automate workflows and personalize experiences at scale. We launched the first phase of AEP Agent Orchestrator in Q3, empowering businesses to build, manage and orchestrate AI agents from Adobe and third parties. These capabilities power the Data Insights Agent and Product Support Agent, which are generally available now and add to our growing portfolio of agents.
Our newest innovation is Adobe LLM Optimizer, available in early access. As customers and prospects increasingly turn to generative AI search and assistants for brand discovery, LLM Optimizer helps shape how brands show up in results which is driving influence, visibility and qualified traffic…
…Strong demand for AEP and native apps with Q3 subscription revenue growing over 40 percent year over year…
…We are excited that the product will be generally available later this quarter.
Adobe’s management recently launched Acrobat Studio, which combines Acrobat and Express; Acrobat Studio has PDF Spaces, which uses AI Assistant to derive insights for users from a collection of PDFs and other content; combined monthly active users of Acrobat and Express are up 25% year-on-year; Acrobat AI Assistant brings a new conversational interface to PDF consumption; management is seeing accelerated use of AI Assistant across desktop, web and mobile; users can easily create AI agents in PDF Spaces to perform document tasks on their behalf; Acrobat Studio has encouraging early adoption and usage trends; there is rapid adoption of Adobe Express; dentsu is using Adobe Express for its global marketing strategy across its 68,000 employees worldwide, and is seeing measurable impact; there was 40% sequential growth in units for Acrobat AI Assistant in 2025 Q2 (FY2025 Q3), and 50% sequential growth in conversations and summarisations; 14,000 organisations added Adobe Express in 2025 Q2 (FY2025 Q3), up 4x from a year ago; Express usage in Acrobat doubled sequentially.
We’re integrating creativity with productivity for billions of users with the recent launch of Acrobat Studio, which brings together Acrobat and Express…
… The new Acrobat Studio includes PDF Spaces, which transforms collections of PDFs, web pages and other files into dynamic knowledge hubs that help people work smarter and faster using AI Assistant to derive insights. We’re seeing steady growth across our family of Acrobat and Express products with combined monthly active users growing approximately 25 percent year over year…
…The introduction of Acrobat AI Assistant brought a new conversational interface that enhances the experience of customers consuming PDFs. This unlocks increased comprehension across the trillions of PDFs in the world. We continue to see accelerated use of AI Assistant across desktop, web and mobile…
…Users can leverage PDF Spaces to organize documents and links, discover insights faster through conversational experiences and enable editing and remixing of PDF content into new formats like emails and presentations. I’m particularly excited that anyone can easily create agents to perform document tasks on their behalf. Customers can use PDF Spaces with team members for more impactful knowledge sharing and collaboration. The combination of PDF Spaces, AI Assistant and an integrated Express experience is available through Acrobat Studio, a new, premium offer in our Acrobat line-up. Early reception of Acrobat Studio has been strong, with encouraging adoption and usage trends that highlight the significant customer demand and opportunity ahead…
…We’re seeing rapid adoption of Adobe Express. In the enterprise, Express is helping organizations scale content creation while maintaining brand consistency and quality. A great example is dentsu, which has made Express a core part of its global marketing strategy. Adobe’s platform is being rolled out to all 68,000 employees worldwide and scaled across brands including Carat, iProspect, dentsu X, Dentsu Creative, Tag and Merkle. By enabling creative teams to build content in Creative Cloud and share that content through Express within an overall GenStudio solution, dentsu ensures brand alignment across global teams while empowering marketers to create and remix their own content. This is driving measurable impact at dentsu…
…Ending units for Acrobat AI Assistant grew more than 40 percent quarter over quarter and AI Assistant engagement, with conversations and summarizations grew nearly 50 percent quarter over quarter…
…Over 14,000 organizations added Express in Q3 alone, a 4x increase in the quarter versus a year ago; Express usage within Acrobat nearly doubled quarter over quarter.
Adobe’s AI-influenced ARR is now more than $5 billion (was in the “billions” in 2025 Q1); management expects AI-influenced ARR to continue to rise as a percent of Adobe’s business; Adobe’s AI-first products have already achieved management’s target of $250 million in ending ARR by end-FY2025
Our AI-influenced ARR has now surpassed $5 billion, up from over $3.5 billion exiting fiscal year 2024 and we have already surpassed our full year AI-first ending ARR target…
…Adobe AI influenced ARR surpassed $5 billion and we expect it to continue to rise as a percent of our business. Notably, ARR from our new AI-first products, including Firefly, Acrobat AI Assistant, and GenStudio for Performance Marketing, has already achieved our end-of-year target of over $250 million.
Adobe’s management thinks that larger advertisers will still prefer to retain control over their advertising campaigns, and not hand nearly all or total control over to digital advertising platforms such as Google and Meta Platforms that are providing near- or fully-automated AI-powered solutions; management sees the large digital advertising platforms as being excited to be supported by Adobe’s performance-marketing solutions
As it relates to how people are going to create and run campaigns and ad placements in all of these different platforms. I think you’re going to see some smaller medium businesses use it. I think all of the larger companies, what we continue to hear in the enterprises, they want the ability to create campaigns, run it across multiple channels, see the attribution, as well as see — what we can do in terms of the analysis.
But in addition to that, I mean, all those advertising channels that you talked about are really excited about Adobe making it seamless which is why you’ve seen in the GenStudio for performance marketing the support of third-party channels, whether that’s TikTok, Meta, Google, Amazon, all of that, we’re just going to continue to do.
Adobe’s management created LLM Optimizer after realising that Adobe has a lot of content that matches the questions users were asking AI chatbots regarding PDFs; management thinks LLM Optimizer is a great opportunity for Adobe to drive traffic to itself from AI chatbots, and for other companies to drive traffic to their properties
I was actually working internally with our team, our adobe.com team, which obviously runs a big digital business. That’s how we got going on the LLM Optimizer. We noticed that in terms of some of the traffic, it’s not only the search traffic, but a lot of our customers, our prospects were starting to ask questions within ChatGPT and Perplexity and so on. How do I edit this PDF? I have a large PDF? How do I compress it? Those kinds of questions. And we realized that we had a lot of content available that, if we made it available in the right channels, would get picked up by the LLMs and that would give us — our Acrobat brand a lot more visibility through the LLMs. So that’s how the idea for the product came about…
…I noticed in a lot of the preview reports folks look at web traffic, and it’s coming from different sources. That’s a really new movement. And so as people ask about just search traffic and what was happening in search, you really have to start to factor in (and we’re, I think, one of the leaders in that space) how to really take advantage of what’s happening, not just across search but also what happens across social and now increasingly what happens across LLMs. So as Anil mentioned, this is not just an opportunity for us to use ourselves but I think a massive opportunity for us to help every single company deal with this new reality.
Adobe’s management is seeing a new movement of web traffic coming from AI chatbots; management thinks consumers will adopt LLMs for the entire process involving e-commerce transactions
I noticed in a lot of the preview reports folks look at web traffic, and it’s coming from different sources. That’s a really new movement…
…With the LLM, the new LLMs, the discovery to actual consideration, to purchase, maybe even the post purchase, that entire funnel is starting to consolidate and you’re going to be seeing consumers actually adopt LLMs for the entire process.
Even in the AI era, management thinks Creative Cloud has growth opportunities with seat expansion
[Question] There’s a thesis out there for software in general that AI is a headwind to seats and that seats will need to shift to consumption; the issue is then whether you can capture more consumption revenue than seat revenue. How do you think about the relationship between seats and consumption in the Creative Cloud?
[Answer] On Creative Cloud specifically, we definitely view this as both seat expansion as well as marketing automation. And that’s part of the reason, as you know, why we — this customer grouping that we talk about, which is Creative Professionals and Marketing Professionals. And in the enterprise, that’s playing out exactly the way it is. It is actually still continuing to play out with seat expansion in the enterprise.
Adobe has continued to post healthy margins despite investing in AI capabilities because management has put a lot of effort into controlling training and inferencing costs, and using AI to drive internal productivity
[Question] You’ve been speaking to mid-40s margin profile, you’re still operating a bit above that this quarter. It looks like gross margins are actually up a touch versus last year. Why aren’t you seeing degradation from AI adoption, given some of the metrics you’re providing?
[Answer] I think there’s 2 vectors of productivity that the company is driving to underpin margin delivery. First one, how we drive GPU training fleets to support training, the utilization, the algorithms we use to efficiently get at model construction as well as continually loading that GPU fleet to make sure there’s high utilization over time. The second piece is inferencing. Constantly tuning the algorithms and cost per inference. We watch this maniacally, how we fill these fleets of GPUs to make sure that the reserved instances, which come in at very different price points than on demand, that we constantly balance and optimize the cost structure that underpins the usage of that compute power. And then obviously from an internal working standpoint, adoption of these technologies how we drive productivity gains in the company, how we augment individual employees from a productivity standpoint as well as ways of working inside of the company, to continue to drive more and more productivity out of the world’s best employees.
Adobe’s management is seeing users of Adobe’s AI solutions having better retention
The thing that we have seen is a direct correlation between increased use of AI and retention, and we feel very good about that.
Adyen (OTC: ADYEY)
Adyen had been applying AI to payments well before AI became a hot topic; Adyen’s AI-powered Adyen Uplift technology, launched in 2024, improves conversion, strengthens fraud prevention, and reduces payment costs; Adyen Uplift has a full-funnel approach that is superior to legacy systems’ approach; Adyen Uplift uses Adyen’s access to trillions of dollars in global transaction data from 1 billion shoppers to provide the necessary recommendations; Adyen Uplift is modular in design and has 4 components: Optimize, Protect, Tokenize, and Authenticate; Optimize uses Adyen’s IPR (Intelligent Payment Routing) to maximise payment authorisations and reduce transaction costs; each component of Adyen Uplift can be used separately, but they work best when used together; merchants have control to test and adjust performance settings within Adyen Uplift; nearly all users of Adyen Uplift are using Optimize, while 68% of Adyen Uplift users are using Protect; in markets such as Australia and the USA, where debit cards can route payments through global or domestic networks, IPR uses machine learning to analyze real-time signals to determine the optimal route for each transaction, since domestic networks can offer lower fees but more variable performance; IPR can reduce cost while maintaining or even improving approval rates; adoption of IPR was up 8x in 2025 H1 compared to 2024 H2; US customers of IPR saw an average cost reduction of 20% on debit transactions and an 89 basis point improvement in authorisations; Australian customers of IPR generated average cost savings of 47%; Adyen Uplift is fully embraced across the Digital pillar; Adyen is only partly charging for Adyen Uplift’s services currently
We’ve been applying machine learning to optimize payment flows well before AI rose to the top of the industry agenda…
…Adyen Uplift was developed around three recurring needs: improving conversion, strengthening fraud prevention, and reducing payment costs…
…While legacy systems often address these issues in isolation, Adyen Uplift takes a full-funnel approach. It uses risk-based intelligence and automation to optimize decisions across the entire payment flow. With access to trillions of dollars in global transaction data from over a billion shoppers across online and in-store channels, we can detect high-risk behavior and reliably recognize trusted shoppers. This combination provides the depth of insight needed to deliver tailored recommendations that customers can test and validate in real-time…
…Adyen Uplift is modular by design so enterprise customers can adopt the capabilities most relevant to their business. Optimize is the decision engine that maximizes payment authorizations and reduces transaction costs. It uses IPR to find the optimal balance between conversion and cost for any transaction with multiple route possibilities. Protect delivers advanced fraud detection, while Tokenize ensures payment credentials remain valid and secure. Authenticate helps businesses meet local compliance requirements without adding unnecessary friction to the shopper experience. Each module can stand alone, but the product suite delivers the most value when its components work together. What seems optimal at one step of the payments flow often isn’t when viewed in full context…
…Merchants now have more control to test and adjust performance settings dynamically. Each recommendation includes clear activation instructions, the ability to test before adoption, and a projected outcome, helping them to assess potential impact and move with confidence. Examples include enabling a local payment method, fine-tuning authentication logic, or activating IPR for US debit payments…
…Optimize is available to all customers, with nearly all utilizing the module. Additionally, 68% of enterprise merchants in our 2025 cohort have adopted the Protect module from day one…
…Intelligent Payment Routing (IPR) within Adyen Uplift is a prime example. This product dynamically selects the optimal route for each transaction based on conversion and cost. We invested in direct connections with local debit networks early on. This enabled us to build a solution that not only ensures compliance but consistently enhances performance. In markets like the U.S. and Australia, dual-branded debit cards can be routed through either global or domestic networks. While local rails often offer lower fees, their performance can vary. IPR uses machine learning to analyze realtime signals, such as scheme performance, issuer behavior, and cost structures, to determine the optimal route for each transaction. The result is a product that reduces cost while maintaining or even improving approval rates.
Adoption grew 8x in H1 2025 compared to the pilot group announced in H2 2024, with major U.S. brands such as Adobe, Microsoft, 24 Hour Fitness, and Indeed using the solution. In the U.S., customers saw an average cost reduction of 20% on debit transactions and a +89 basis point improvement in authorization rates. In Australia, the launch of local routing over Eftpos supported 55 merchants, with average cost savings of 47%…
…Adyen Uplift is now fully embraced across Digital, becoming a core part of how customers optimize for performance, reduce cost, and navigate growing complexity…
…Uplift is a product that we launched in the second half of last year. We are partly charging for it. So it depends a bit on the module that you’re exactly using and some of the parts are free. Of course, ultimately, what we’re building for is that we charge for the products that we offer to our customers. So it’s currently a mix.
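To make the routing trade-off that IPR manages a little more concrete, here is a deliberately simplified sketch in Python. This is not Adyen’s actual system; the route names, fee levels, and approval probabilities are invented for illustration. The idea is simply that a cheap domestic rail is only worth choosing if its expected cost per approved transaction still beats the global network’s.

```python
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    fee_bps: float        # processing fee, in basis points of the transaction amount
    approval_prob: float  # model-estimated probability the issuer approves on this route

def expected_cost_per_approval(route: Route, amount: float) -> float:
    """Expected fee paid per successfully approved transaction.

    Dividing by the approval probability penalises routes that are cheap
    but decline more often (declines cost conversion, retries, and goodwill).
    """
    fee = amount * route.fee_bps / 10_000
    return fee / route.approval_prob

def pick_route(routes: list[Route], amount: float) -> Route:
    # Choose the route with the lowest expected cost per approved transaction.
    return min(routes, key=lambda r: expected_cost_per_approval(r, amount))

# Hypothetical numbers for an $80 US debit transaction
routes = [
    Route("global_network", fee_bps=120, approval_prob=0.93),
    Route("domestic_network", fee_bps=60, approval_prob=0.90),
]
best = pick_route(routes, amount=80.0)
print(best.name)  # here the cheaper domestic rail wins despite a slightly lower approval rate
```

A real router would of course learn these probabilities from live issuer and scheme signals rather than hard-coding them, which is the part Adyen’s scale in transaction data is meant to provide.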
Adyen’s management sees significant potential in agentic commerce and thinks Adyen is well-positioned for the shift; management thinks agentic commerce brings new demands, in particular, a new lens for looking at fraud prevention, because traditional signals used in fraud prevention are absent in agentic transactions; management sees Adyen’s tokenization capabilities as being an important enabler of agentic commerce in being able to improve authorization, reduce fraud, and enable intelligent, context-specific execution; management sees Adyen as being at the leading edge of tokenization in the context of enabling agentic commerce; Adyen’s global risk system, built on nearly €1.3 trillion in annual volume, enables consistent fraud detection in agent-initiated flows; Adyen’s MCP (model context protocol) server enables structured agent-to-business communication; management thinks Adyen’s platform will allow whatever emerges from agentic commerce to work seamlessly with existing global payment methods
One area where we see early momentum and significant long-term potential is agentic commerce: the shift from enhanced search to autonomous, agent-led purchasing. While still emerging, the rapid adoption of large language models signals rising interest and underlying demand. We’re well positioned to support this shift, helping merchants and consumers navigate the next chapter of ecommerce.
Agentic commerce brings new demands: secure information exchange, sandboxed payment permissions, dynamic authorization, and real-time context-awareness. Crucially, it requires rethinking fraud prevention. Traditional signals are often absent when agents transact on behalf of users, making it essential to rely on scalable infrastructure and intelligent risk models that operate without direct human input. Our platform is built for this. Our tokenization suite enables secure, seamless credential sharing between agents, merchants, and shoppers. Agents can initiate payments using standardized tokens that improve authorization, reduce fraud, and enable intelligent, context-specific execution. We’re at the forefront of this space, pushing the boundaries of what tokenization can do. Our recent announcement with JCB highlights how we’re advancing global credential security — Adyen is the first to offer their advanced tokenization to reduce fraud and improve authorization.
Our authentication engine supports adaptive trust models, applying the right protocol based on transaction risk, regulation, and issuer logic. Our global risk system, trained on nearly €1.3 trillion in annual volume, adds consistent fraud detection, even in agent-initiated flows, flagging misuse, and maintaining trust at scale. And with our Model Context Protocol (MCP) server, we’re enabling structured agent-to-business communication, equipping AI agents to securely interpret and act on commerce data…
… Our infrastructure ensures that whatever emerges in this space can work seamlessly with the global payment methods, regions, and consumer journeys our customers rely on today, and in the future.
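Adyen did not describe its MCP server in detail on the call, but the Model Context Protocol itself is an open standard with a public Python SDK. As a rough, hypothetical illustration of “structured agent-to-business communication”, the sketch below exposes a made-up order-lookup tool that an AI agent could call; the server name, tool, and data are invented and are not Adyen’s API.

```python
# Hypothetical illustration only: a minimal MCP server exposing one commerce tool,
# built with the open-source `mcp` Python SDK (pip install mcp). Not Adyen's API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-commerce")  # invented server name

# Stand-in for a merchant system of record.
_ORDERS = {"ORD-1001": {"status": "shipped", "amount": 42.50, "currency": "USD"}}

@mcp.tool()
def get_order_status(order_id: str) -> dict:
    """Return an order's status so an agent can reason over structured commerce data."""
    return _ORDERS.get(order_id, {"status": "unknown"})

if __name__ == "__main__":
    mcp.run()  # serves the tool to MCP-capable agents (stdio transport by default)
```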
MongoDB (NASDAQ: MDB)
MongoDB is adding thousands of AI-native customers
We’re adding thousands of AI native customers.
MongoDB Atlas consumption growth in 2025 Q2 (FY2026 Q2) benefitted from a strong start to consumption in May 2025, as well as broad-based strength; Atlas consumption growth in 2025 Q2 (FY2026 Q2) was consistent with 2024 Q2 (FY2025 Q2); Atlas’s growth has been driven partly by an uptick of capabilities such as Search and Vector Search
We had an impressive Atlas growth quarter, which benefited in part from the strong start to consumption in May that we referenced on our last call as well as broad-based strength, especially in larger customers in the U.S…
…In Q2, Atlas consumption growth was strong and relatively consistent with last year’s growth rates. This drove the acceleration in revenue as well as the growth in absolute revenue dollars year-to-date for the first half of fiscal ’26…
…What we’re also seeing is that there’s a great uptick of some of the other capabilities we offer like search and vector search that are also adding to that growth of those workloads.
Many of MongoDB’s recently-added customers are building AI applications and this bolsters management’s confidence that MongoDB is an important part of the AI infrastructure stack; management sees MongoDB emerging as a standard for AI applications
Many of our recently added customers are building AI applications, underscoring how our value proposition is resonating for AI and why MongoDB is emerging as a key component of the AI infrastructure stack…
…MongoDB is emerging as a standard for AI applications.
MongoDB has integrated capabilities such as search, vector search, embeddings and stream processing into its database product; these integrations give MongoDB far broader capabilities than competing databases such as Postgres; management thinks AI startups tend to go with Postgres first because the founders are familiar with Postgres and do not think carefully about their database choices; what the AI startups often realise after choosing Postgres is that they run into scaling challenges, and they then turn to MongoDB; management wants to do more developer education regarding Postgres versus MongoDB
MongoDB has redefined what’s core for the database by natively including capabilities like search, vector search, embeddings and stream processing. Comparing MongoDB to another database like Postgres is not an apples-to-apples comparison. Take a global e-commerce application that manages inventory and order data while enabling product discovery through sophisticated search across millions of SKUs. The choice for this application is not between MongoDB or Postgres, it is between MongoDB or Postgres plus other offerings like Pinecone, Elastic and Cohere for embeddings…
…[Question] Why do we hear so much about Postgres adoption for AI start-ups. You talked about the success you guys are having. But if Postgres has the disadvantages that you’ve talked about multiple times, scalability, JSON support, how come we hear so much about that kind of at least in the early stages of AI?
[Answer] What’s become clear is a lot of these startup founders don’t think that hard about their database choice, they kind of go with what they know. And what we are seeing is that as some of these startups are scaling, they’re running to real scaling challenges with Postgres. And what — and we’ve talked about this in the past, like when you add a JSON — when you use JSONB on Postgres, a 2 kilobyte document or bigger starts really creating performance problems because Postgres has to do something called off-row storage, which creates enormous performance overheads. And so the developers need a platform that can handle structured, semi-structured and unstructured data, they need obviously a platform that performs well, and they need a platform that can scale as they grow. And what we’re hearing clearly from the startup community that Postgres, in many cases, is not scaling for them, and they’re now coming to us…
…We realize we need to do more developer education and do more work. And so we’re investing a lot in the startup community. We’re running a big event in October in San Francisco with a big hackathon, and we’re inviting lots of customers to participate. But that’s just the start of a meaningful investment we’re making in the Bay Area and the AI startup community to rethink their decisions around just going with what they know.
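Management’s point above about handling structured, semi-structured and unstructured data in one place is easiest to see with an actual document. Below is a minimal sketch using the pymongo driver and an invented e-commerce products collection (the connection string, database, and field names are all made up): the nested attributes and per-variant data that a relational model would spread across several tables, or stuff into a JSONB column, live in one document that can also carry an embedding for search.

```python
# Minimal sketch with pymongo; the connection string, database, and fields are invented.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<pass>@cluster.example.mongodb.net")
products = client["shop"]["products"]

products.insert_one({
    "sku": "SKU-12345",
    "name": "Trail Running Shoe",
    "price": {"amount": 129.00, "currency": "USD"},
    # Semi-structured attributes vary by product; no schema migration needed.
    "attributes": {"color": "red", "sizes": [8, 9, 10], "waterproof": True},
    # Per-variant inventory that a relational model would normalise into child tables.
    "variants": [{"size": 9, "stock": 14}, {"size": 10, "stock": 3}],
    # Optional embedding vector for semantic product discovery (truncated for brevity).
    "description_embedding": [0.12, -0.03, 0.47],
})

# Query straight into the nested structure.
in_stock_red = products.find_one({"attributes.color": "red", "variants.stock": {"$gt": 0}})
```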
MongoDB’s management is seeing enterprises adopt AI, but the process is still early; most AI use cases for enterprises are related to employee productivity tools and packaged solutions from ISVs (independent software vendors); enterprises are still very early in building custom AI applications; enterprises often fail when attempting to scale vibe-coded software built on relational databases; management is seeing enterprises start deploying AI agents, but it’s still very early; management is hearing from customers that AI is currently providing productivity gains, but it’s not transforming their businesses; management thinks the real value of AI will come when enterprises are able to build custom AI solutions; enterprises are sometimes hesitant to deploy AI for customer-facing applications because it’s not possible currently to guarantee the quality of output of AI models; management thinks there will not be an inflection point where enterprises suddenly adopt AI at a big scale; instead, adoption will simply take time to grow
In the enterprise segment, adoption is real but early. Much of the activity today centers on employee productivity tools and packaged ISV solutions. Enterprises are still in the very early stages of building their own custom AI applications that will transform their business. We consistently hear from customers that when teams try to scale from vibe-coded prototypes built on relational back ends to enterprise-grade deployments, these platforms quickly hit limits in flexibility, scalability and performance…
…Where it is being deployed is really on end user productivity, whether it’s developers with codegen tools or business users using tools to summarize documents, extract data or things like deflecting tickets from people to systems with like conversational AI. I think you are starting to see the first steps in people deploying agent-based systems, and I can talk a little bit about that, but that’s still very, very early. We’re seeing small ISVs, some of them are taking off, who are really driving most of the impact.
But the real enduring value will come. When you talk to a customer today, most of them when you ask them is AI really transforming a business, they will say no. Yes, we’re seeing some productivity gains here and there, but it’s not really transforming my business. I think the real enduring value will come when they build custom AI solutions that can truly transform the business, whether it’s to drive new revenue opportunities or dramatically reduce their existing cost structure…
…I had 2 meetings today with 2 different leaders of 2 different financial institutions here in New York, and they both talked about what they’re doing in AI. They both admitted that they’ve kind of started with low stakes use cases, but their appetite to start doing more is increasing as they get more and more comfortable with the technology, and they’re quite excited to leverage MongoDB as part of that journey. But again, I think that’s kind of a microcosm into the enterprise market where I think they’re still quite early in their AI journey…
…AI systems are probabilistic in nature, not deterministic in nature. So you can’t always guarantee the output. You can hope that you’ve trained the models well. You’ve hoped that you’ve given it the right information, but you can’t always guarantee the output. So as I mentioned, I had meetings with 2 financial services customers earlier today, and both of them are still hesitant to roll out an end user-facing AI applications for those specific reasons…
…[Question] Some of the comments you were talking about the AI slowdown, and you heard about recent MIT report about 95% AI implementation not getting any kind of return. How do you see — what’s kind of do you think the inflection point?
[Answer] It’s going to take time to be comfortable with the technology. It’s going to take time where people start with low-stakes use cases and start gravitating to higher-stakes use cases. So I don’t think there’s going to be some seminal inflection point. I think it’s just going to take time. But I think that time is coming.
A leading electric vehicle company chose MongoDB Atlas and Vector Search for its autonomous driving platform; MongoDB Atlas Vector Search had superior performance over Postgres; the electric vehicle company is using MongoDB Atlas to handle over 1 billion vectors and expects 10x growth in data usage in the next 12 months
A leading electric vehicle company chose Atlas and vector search to power its autonomous driving platform. After testing vector search against Postgres PG Vector for their in-vehicle voice assistant, they selected MongoDB for superior performance at scale and stronger ROI. They now rely on Atlas to handle over 1 billion vectors and expect 10x growth in data usage by next year.
AI-native startup DevRev used MongoDB Atlas to build its AgentOS product; AgentOS handles billions of requests per month; MongoDB Atlas helped DevRev speed up product development at lower cost and helped DevRev scale globally; DevRev is using MongoDB Atlas Vector Search
DevRev, a well-funded AI-native platform with proven founders disrupting the help desk market, built AgentOS, a complete agentic platform that autonomously handles billions of monthly requests on Atlas. DevRev accelerated development velocity, lowered cost and scaled globally with low latency by using Atlas. AgentOS also leverages Atlas Vector Search for semantic search, enriching its knowledge graph and LLMs with domain-specific content.
MongoDB’s management is very excited about Relational Migrator; Relational Migrator has a new product leader with strong skills around using AI to drive automation in the product; Relational Migrator also has a new go-to-market leader; management does not expect Relational Migrator to contribute much to MongoDB’s business in 2025 (FY2026)
[Question] I know you’ve been investing in Relational Migrator. You’re working with companies like Cognition to accelerate the code migration opportunity. And you’ve seen professional services ramp up a little bit. But where have you started to see sort of the time to migration or replatform improve a bit?
[Answer] We’re super excited about what we call app modernization or legacy app modernization. You’ll hear a lot more about this at Investor Day in September, Tyler. But what I will say to you is that the value proposition is very clear. Customers are very, very motivated to try and modernize these legacy systems for a wide variety of reasons. We are seeing a lot of progress. We’ve actually brought in a new leader — new product leader, who brings a lot of depth and scale, especially around AI, to help us build the tooling to leverage AI to really drive more automation in terms of how we analyze and refactor the code. We brought in a new leader last quarter to really help drive the delivery and the go-to-market efforts around app mod. So we’re definitely beefing up resources…
…It won’t be as pronounced in terms of this year, but we’re very, very excited about the opportunity.
MongoDB’s management is seeing OLTP (online transaction processing) be the strategic high ground for AI especially in inferencing; many database companies are struggling to develop OLTP platforms and so had to make acquisitions; management thinks MongoDB is positioned really well for the AI opportunity given its strengths as an OLTP platform
What we are seeing is that the strategic high ground for AI, especially when it comes to inference, is OLTP. So we talked about this on the last call, where some companies acquired early-stage OLTP start-ups, and those companies had spoken about their organic efforts to build an OLTP platform. And I think what it spoke to was the fact that building an OLTP platform that’s enterprise-ready, mission-critical and able to serve the most demanding requirements of enterprises is not trivial. And I think they basically threw in the towel and decided to do these acquisitions…
…If now customers are going to be choosing what OLTP platform that they want for AI, just given our architecture, just given the fact that we have a durable architectural advantage in terms of JSON support, which addresses messy, complicated and highly interdependent and constantly changing data structures. The fact that we integrated search and vector search, I think, really helps us position going after AI.
MongoDB’s management thinks real JSON is becoming more important now with AI; management is seeing the hyperscalers hold off on investing in JSON-related capabilities; management thinks JSON is the best way to handle messy and evolving data structures in the real world, and this positions MongoDB well for AI because it is a JSON database
[Question] I’m thinking about Lakebase from Databricks and then DocumentDB in the Linux Foundation. Can you just comment on both those things?
[Answer] Around the Linux Foundation, I think what this really also suggests — shows is that real JSON is much more important now with AI than ever before and the clones and bolt-ons that have traded off features and performance and developer experience have just not met customer expectations. And candidly, what I see this is that the hyperscalers are investing less and really handing off to the open source community to kind of really take on the bulk of the work in terms of product development. Our hyperscaler partnerships remain strong…
…We’re a JSON database. JSON is the best way to express and model the complicated and messy and highly interdependent and constantly evolving data structures that you have to deal with in the real world. So that’s point number one. So it’s much easier to do that in MongoDB than to do that on some kludge kind of set up on top of a relational database.
MongoDB’s management thinks a unique differentiator of MongoDB for AI startups is MongoDB’s database allows sophisticated retrieval of information to be done quickly; another unique differentiator of MongoDB is the presence of Voyage’s embedding models; embeddings act as a bridge between a company’s private data and the AI model, and reduces hallucinations
I would say the AI cohort was not a material driver of the growth. That being said, what we are seeing is a lot of customers very, very interested in our architecture…
…Second is that we integrate search and vector search. You can do very sophisticated things to what people call hybrid search and retrieval, you can do very sophisticated things in finding information quickly, which is a very unique differentiator for us. So what this means that rather than stitching together multiple systems, you can do this all on MongoDB, so it becomes less complexity and lower cost.
The third thing is that we’ve now embedded Voyage models on our platform, right? So if you control the embedding layer, you sit at the gateway of AI, right? What the embedding models do is really act as a bridge between a company’s private data and the LLM. So that becomes really important because the better the quality of the embedding model, the better the quality of the signal of your own data. So that reduces things like hallucinations or just bad outputs. And so customers are now — as people start caring more and more about higher-stakes use cases, they really want to ensure those outputs are of high quality. And the fact that it’s part of our platform, we can enable you to do auto embeddings. It becomes an incredibly compelling feature.
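For readers unfamiliar with how embeddings and retrieval fit together on MongoDB, here is a rough sketch of an Atlas Vector Search query issued through the aggregation pipeline. It assumes a pre-computed query embedding and an invented collection ("docs") with an Atlas vector index named "embedding_index" on an "embedding" field; none of these names come from the call, and in practice the query vector would be produced by an embedding model such as a Voyage model.

```python
# Sketch only: assumes an Atlas cluster with a vector search index already created.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<pass>@cluster.example.mongodb.net")
docs = client["knowledge"]["docs"]

# A short dummy vector keeps the example self-contained; real embeddings are much longer
# and must match the dimensions declared in the vector index.
query_vector = [0.02, -0.11, 0.56, 0.09]

results = docs.aggregate([
    {
        "$vectorSearch": {                 # Atlas Vector Search aggregation stage
            "index": "embedding_index",    # invented index name
            "path": "embedding",           # field holding the stored document embeddings
            "queryVector": query_vector,
            "numCandidates": 200,          # candidates considered before final ranking
            "limit": 5,                    # top matches returned
        }
    },
    # Keep only the fields the application needs, plus the similarity score.
    {"$project": {"title": 1, "text": 1, "score": {"$meta": "vectorSearchScore"}}},
])

for doc in results:
    print(doc["title"], doc["score"])
```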
MongoDB’s management thinks AI agents will be using a company’s systems much more intensely than humans, so it’s important that a company’s systems can massively scale up and down; the need for massive scaling up and down of systems positions MongoDB well; management thinks MongoDB is positioned to win in a world where AI agents dominate because of (1) the strengths of the JSON database, (2) support of vector search, (3) support of memory
Agents require — if you think about — if you’re using agents, agents will use your systems much more intensely than humans will because they can do things much more quickly. So you need platforms that can massively scale up and down, which is, again, a good sign and support indicator for MongoDB…
…[Question] If we were to fast forward 5, 10 years and we start to see a real paradigm shift where instead of agents built on kind of the traditional GUI mobile interface that we’ve been in for the past 30 years, we actually entered kind of a multi-agentic world where maybe the interaction vector may move away from what we’ve been used to into more natural language. Can you talk about why MongoDB still has a strong role and some of the investments that you might be making to position yourself well for the world, understanding that’s at the very least several years away?
[Answer] We believe that agents essentially do 3 things. One, they perceive or understand the state of things. So you need essentially a way to understand the state of what’s happening in your business, then you need to decide what to do or plan. So basically, you have to come up with the plan saying, “I want to take this action or these sets of actions.” And then you have to act. You actually have to go execute those actions, right?
So why is MongoDB good for agents? One, as I said before, the JSON document database is the best at being able to model the real world, the messiness, the complicated nature. The real world does not fit easily in rows and columns. And that’s why our document database, I think, is the best way to do that. Two, we obviously support search and vector search. So you can do very sophisticated hybrid search. So that becomes super important. And then with memory, if agents didn’t have memory, they would act like goldfish. They could only react to the last thing — last piece of information that they saw.
So memory lets agents connect the dots across time and situations. So you have different kinds of memory, things like short-term context, past experiences, knowledge, skills, et cetera, they need to be able to share quickly. You need to be able to orchestrate those agents because you may have multiple agents doing a certain task. You need to register and have governance policies around those agents. We think that the underlying platform needs to be able to support those things while there’s a lot more work needs to be done, the underlying architecture that we have in MongoDB is well suited to address those needs.
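Management’s framing of agent memory (short-term context, past experiences, knowledge) maps naturally onto documents. The sketch below, with invented collection and field names, shows one way an agent’s memories could be written and the most recent context for a task retrieved; it is an illustration of the idea, not a MongoDB product feature.

```python
# Illustrative sketch of agent memory stored as documents; all names are invented.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<pass>@cluster.example.mongodb.net")
memories = client["agents"]["memories"]

# Record one observation made by an agent while working on a task.
memories.insert_one({
    "agent_id": "support-agent-7",
    "task_id": "ticket-8841",
    "kind": "short_term",            # e.g. short_term, episodic, knowledge
    "content": "Customer confirmed the invoice number is INV-2209.",
    "created_at": datetime.now(timezone.utc),
})

# Pull the ten most recent short-term memories for the task so the agent can
# "connect the dots" before deciding on its next action.
recent_context = list(
    memories.find({"task_id": "ticket-8841", "kind": "short_term"})
            .sort("created_at", -1)
            .limit(10)
)
```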
Nu Holdings (NYSE: NU)
Nu Holdings’ management has seen significant improvement in its ability to do credit underwriting for credit cards, driven by (1) AI-powered improvements to Nu Holdings’ credit models, and (2) new data acquired by the company; Nu Holdings is now the leader in open finance consent, which helps Nu Holdings collect data; the AI-related improvements in credit models come from Nu Holdings’ 2024 acquisition of Hyperplane; the credit models that were improved by Hyperplane are largely focused on the mass market at the moment, but management expects the AI-enabled architecture from Hyperplane to be applied to more models in the future; management expects to see meaningful changes to Nu Holdings’ models across many different use cases in the future by applying Hyperplane’s technologies
We have been seeing kind of a fairly material improvements in our ability to do credit underwriting and to continue to expand the credit card portfolio. It has to do with the adoption of new models and technologies to how we do credit underwriting, going all the way to better kind of traditional machine learning models, but also neural networks and predictive AI technologies, but more and more by the adoption of new data that we acquired…
…So the more customers stay with us, the more data we accumulate, we are now the leaders in open finance consent. The combination of better modeling technique with more data has allowed us to consistently increase kind of credit underwriting, credit limits and utilizations…
…[Question] The Hyperplane expansion in the credit limit that you talked about, is there any particular segment of customer base where it is more targeted towards higher income or mass market or your super core segments?
[Answer] So far has been mostly focused on mass market, but we expect that a lot of these new AI enabled architecture will be now applied to a number of different models…
…We expect a number of new models coming in for a number of different segments for the different countries and for different applications, such as collections, fraud, cross-sell. So we’re very excited about this, and it’s early days of applying this new technology to a lot of the decisioning that we have across Nubank. But we expect to see meaningful changes across the board.
NVIDIA (NASDAQ: NVDA)
NVIDIA’s Data Center revenue again had very strong growth in 2025 Q2 (FY2026 Q2), driven by the Blackwell family of chips
Data center revenue grew 56% year-over-year. Data center revenue also grew sequentially despite the $4 billion decline in H20 revenue. NVIDIA’s Blackwell platform reached record levels, growing sequentially by 17%…
…The new Blackwell Ultra platform has also had a strong quarter, generating tens of billions in revenue.
NVIDIA’s management sees $3 trillion to $4 trillion of AI infrastructure spending by 2030; management sees $600 billion in data center capital expenditures in 2025; management expects AI infrastructure investments to continue growing, driven by (1) agentic AI’s requirement for orders of magnitude more training and inference compute, (2) sovereign AI, (3) enterprise AI adoption, and (4) robotics; NVIDIA’s management sees the market for AI inference expanding rapidly; the capital expenditures of the top CSPs (cloud services providers) have doubled over the last 2 years to $600 billion; management expects enterprises beyond the cloud hyperscalers to contribute to the expected $3 trillion to $4 trillion of AI infrastructure spend by 2030; management sees NVIDIA’s chips accounting for the majority of spend in AI data centers
We see $3 trillion to $4 trillion in AI infrastructure spend by the end of the decade…
…Capital expenditures from the cloud to enterprises, which are on track to invest $600 billion in data center infrastructure and compute this calendar year alone, nearly doubling in 2 years. We expect annual AI infrastructure investments to continue growing, driven by several factors: reasoning agentic AI requiring orders of magnitude more training and inference compute, global build-outs for sovereign AI, enterprise AI adoption, and the arrival of physical AI and robotics…
…The market for AI inference is expanding rapidly with reasoning and agentic AI gaining traction across industries…
…Over the last couple of years, you have seen that CapEx in just the top 4 CSPs has doubled and grown to about $600 billion…
…The CapEx of just the top 4 hyperscalers has doubled in 2 years. As the AI revolution went into full steam, as the AI race is now on, the CapEx spend has doubled to $600 billion per year. There’s 5 years between now and the end of the decade, and $600 billion only represents the top 4 hyperscalers. We still have the rest of the enterprise companies building on-prem. You have cloud service providers building around the world…
…Out of a gigawatt AI factory, which can cost anywhere from $50 billion to $60 billion, plus or minus 10%, we represent about $35 billion of that, plus or minus: $35 billion out of $50 billion per gigawatt data center.
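To make the per-gigawatt arithmetic above concrete, here is a minimal back-of-envelope sketch in Python. The $50 billion to $60 billion per-gigawatt cost and the roughly $35 billion of NVIDIA content are the round numbers from the quote; extrapolating that content share against the $3 trillion to $4 trillion build-out figure is my own illustrative assumption, not company guidance.

```python
# Back-of-envelope arithmetic from the quote above. The per-gigawatt cost and the
# NVIDIA content figure are management's round numbers; the extrapolation to the
# $3 trillion to $4 trillion 2030 build-out is my own illustration, not guidance.

factory_cost_per_gw_low, factory_cost_per_gw_high = 50e9, 60e9  # $ per gigawatt AI factory
nvidia_content_per_gw = 35e9                                    # ~$35B of NVIDIA content per gigawatt

share_high = nvidia_content_per_gw / factory_cost_per_gw_low    # 70% at a $50B factory
share_low = nvidia_content_per_gw / factory_cost_per_gw_high    # ~58% at a $60B factory
print(f"Implied NVIDIA content share per gigawatt: {share_low:.0%} to {share_high:.0%}")

# If the content share held across the projected 2030 build-out:
for total_buildout in (3e12, 4e12):
    print(f"Implied NVIDIA share of a ${total_buildout / 1e12:.0f}T build-out: "
          f"${total_buildout * share_low / 1e12:.2f}T to ${total_buildout * share_high / 1e12:.2f}T")
```

The point is simply that management’s claim of capturing the majority of AI data center spend follows directly from the $35-billion-out-of-$50-billion figure in the quote.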
The Blackwell family of chips is seeing widespread adoption and its users include high-profile model builders; the transition from the GB200 to the GB300 has been seamless, with the current run rate for the GB300 rack at 1,000 racks per week, with acceleration in output expected throughout 2025 Q3 (FY2026 Q3); the GB300 has a 10x higher inference performance on reasoning models compared to H100; GB300 has a 10x improvement in token per watt energy efficiency compared to the previous Hopper family of chips; management thinks Blackwell is the new standard for AI inference performance; the GB300 platform has a 50x increase in energy efficiency per token compared to Hopper; management believes a company investing in GB200 can earn 10x the amount in revenue; the performance of the Blackwell family of chips has already improved by 2x since its launch because of NVIDIA’s software innovations, including a groundbreaking numerical approach to LLM (large language model) pretraining; the new numerical approach means the GB300 can achieve 7x faster training than the H100; the AI industry’s major companies have adopted the new numerical approach
The GB200 NVL system is seeing widespread adoption with deployments at CSPs and consumer Internet companies. Lighthouse model builders, including OpenAI, Meta and Mistral, are using the GB200 NVL72 at data center scale for both training next-generation models and serving inference models in production…
…The transition to the GB300 has been seamless for major cloud service providers due to its shared architecture, software and physical footprint with the GB200, enabling them to build and deploy GB300 racks with ease. The transition to the new GB300 rack-based architecture has been seamless. Factory builds in late July and early August were successfully converted to support the GB300 ramp, and today, full production is underway. The current run rate is back at full speed, producing approximately 1,000 racks per week. This output is expected to accelerate even further throughout the third quarter as additional capacity comes online.
We expect widespread market availability in the second half of the year as CoreWeave prepares to bring their GB300 instance to market as they are already seeing 10x more inference performance on reasoning models compared to H100. Compared to the previous Hopper generation, GB300 NVL72 AI factories promise a 10x improvement in token per watt energy efficiency, which translates to revenues as data centers are power limited…
…Blackwell has set the benchmark as it is the new standard for AI inference performance…
…New NVFP4 4-bit precision and NVLink 72 on the GB300 platform delivers a 50x increase in energy efficiency per token compared to Hopper, enabling companies to monetize their compute at unprecedented scale. For instance, a $3 million investment in GB200 infrastructure can generate $30 million in token revenue, a 10x return…
…NVIDIA software innovation, combined with the strength of our developer ecosystem, has already improved Blackwell’s performance by more than 2x since its launch. Advances in CUDA, TensorRT-LLM and Dynamo are unlocking maximum efficiency. CUDA library contributions from the open source community, along with NVIDIA’s open libraries and frameworks, are now integrated into millions of workflows. This powerful flywheel of collaborative innovation between NVIDIA and global community contribution strengthens NVIDIA’s performance leadership. NVIDIA is a top contributor to open-source models, data and software.
Blackwell has introduced a groundbreaking numerical approach to large language model pretraining. Using NVFP4 computations, the GB300 can now achieve 7x faster training than the H100, which uses FP8. This innovation delivers the accuracy of 16-bit precision with the speed and efficiency of 4-bit, setting a new standard for AI factory efficiency and scalability. The AI industry is quickly adopting this revolutionary technology with major players such as AWS, Google Cloud, Microsoft Azure and OpenAI as well as Cohere, Mistral, Kimi AI, Perplexity, Reflection and Runway already embracing it. NVIDIA’s performance leadership was further validated in the latest MLPerf Training benchmarks, where the GB200 delivered a clean sweep. Be on the lookout for the upcoming MLPerf Inference results in September, which will include benchmarks based on the Blackwell Ultra.
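For readers unfamiliar with low-precision training formats, the sketch below shows the general idea behind block-scaled 4-bit quantization: store values in 4 bits plus a per-block scale, recovering much of the dynamic range of higher-precision formats at a fraction of the memory and compute. This is a generic, simplified illustration using signed integers; NVFP4 itself is NVIDIA’s own 4-bit floating-point format with its own scaling recipe, which this sketch does not attempt to reproduce.

```python
import numpy as np

# Generic block-scaled 4-bit quantization sketch (illustrative only; a simplified
# signed-integer stand-in, not NVIDIA's actual NVFP4 format or training recipe).

def quantize_block_4bit(x, block_size=16):
    """Quantize each block of `block_size` values to 4-bit integers plus a scale."""
    blocks = x.reshape(-1, block_size)
    scale = np.abs(blocks).max(axis=1, keepdims=True) / 7.0   # signed 4-bit range is -8..7
    scale = np.where(scale == 0, 1.0, scale)                  # avoid division by zero
    q = np.clip(np.round(blocks / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q * scale

w = np.random.randn(4, 16).astype(np.float32)                 # a toy weight tensor
q, s = quantize_block_4bit(w)
w_hat = dequantize(q, s).reshape(w.shape)
print("max abs quantization error:", float(np.abs(w - w_hat).max()))
```

The takeaway from the quote is that if a 4-bit format can preserve enough accuracy, the same silicon does far more useful training work per watt, which is the basis of the 7x training speed claim.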
NVIDIA’s next generation of chips, the Rubin family, is in fab now and remains on schedule for volume production in 2026; 6 different chips go into a Rubin AI supercomputer
The chips of the Rubin platform are in fab, the Vera CPU, Rubin GPU, CX9 SuperNIC, NVLink 144 scale up switch, Spectrum-X scale out and scale across switch, and the silicon photonics processor. Rubin remains on schedule for volume production next year. Rubin will be our third-generation NVLink rack scale AI supercomputer with a mature and full-scale supply chain…
…It takes 6 chips just to build — 6 different types of chips just to build a Rubin AI supercomputer.
The US government recently started reviewing licenses for sales of NVIDIA’s H20 chips to China customers; some of NVIDIA’s China customers have received licenses for H20 chips, but NVIDIA has yet to make any shipments; management sees the US government as expecting a 15% revenue-share from the sales of H20 chips to China customers, but the US government has yet to publish regulations on this; management has not included H20 sales in its 2025 Q3 (FY2026 Q3) guidance; management expects revenue of $2 billion to $5 billion in 2025 Q3 from H20 chips if they can be shipped once geopolitical uncertainty subsides; NVIDIA has capacity to fulfill more orders for H20 beyond the $5 billion expectation; management continues to advocate for the sale of Blackwell chips to China as they believe the sales will benefit the US economy; management sees the sales of Blackwell chips to China as being for commercial uses only; China revenue declined sequentially; management thinks China represents a $50 billion revenue opportunity for NVIDIA in 2025, with growth of 50% annually, if the company is able to sell chips there; management sees China as the home of AI researchers with about 50% of AI researchers being in the country; management sees China as the home of the leading open-sourced AI models, and that it’s important for American AI companies to be able to serve China because of the country’s lead in open source
In late July, the U.S. government began reviewing licenses for sales of H20 to China customers. While a select number of our China-based customers have received licenses over the past few weeks, we have not shipped any H20 based on those licenses. USG officials have expressed an expectation that the USG will receive 15% of the revenue generated from licensed H20 sales, but to date, the USG has not published a regulation codifying such requirement.
We have not included H20 in our Q3 outlook as we continue to work through geopolitical issues. If geopolitical issues subside, we should ship $2 billion to $5 billion in H20 revenue in Q3. And if we had more orders, we can build more.
We continue to advocate for the U.S. government to approve Blackwell for China. Our products are designed and sold for beneficial commercial use, and every license sale we make will benefit the U.S. economy, the U.S. leadership. In highly competitive markets, we want to win the support of every developer. America’s AI technology stack can be the world’s standard if we race and compete globally…
…China declined on a sequential basis to low single-digit percentage of data center revenue…
…The China market, I’ve estimated to be about $50 billion of opportunity for us this year if we were able to address it with competitive products. And if it’s $50 billion this year, you would expect it to grow, say, 50% per year…
…It is the second largest computing market in the world, and it is also the home of AI researchers. About 50% of the world’s AI researchers are in China.
The vast majority of the leading open source models are created in China. And so it’s fairly important, I think, for the American technology companies to be able to address that market. And open source, as you know, is created in one country, but it’s used all over the world. The open source models that have come out of China are really excellent. DeepSeek, of course, gained global notoriety. Qwen is excellent. Kimi’s excellent. There’s a whole bunch of new models that are coming out. They’re multimodal. They’re great language models. And it’s really fueled the adoption of AI in enterprises around the world because enterprises want to build their own custom proprietary software stacks. And so open source model’s really important for enterprise. It’s really important for SaaS who also would like to build proprietary systems. It has been really incredible for robotics around the world. And so open source is really important, and it’s important that the American companies are able to address it. This is — it’s going to be a very large market. We’re talking to the administration about the importance of American companies to be able to address the Chinese market.
NVIDIA saw an increase in shipments of Hopper 100 and H200 chips in 2025 Q2 (FY2026 Q2), which indicates the breadth of AI workloads that run on NVIDIA’s hardware
In the quarter, there was an increase in Hopper 100 and H200 shipments. We also sold approximately $650 million of H20 in Q2 to an unrestricted customer outside of China. The sequential increase in Hopper demand indicates the breadth of data center workloads that run on accelerated computing and the power of CUDA libraries and full stack optimizations, which continuously enhance the performance and economic value of our platform.
NVIDIA’s RTX Pro servers are now in full production with the world’s system makers; nearly 90 companies are already adopting the RTX Pro servers, including Hitachi for digital twins, Eli Lilly for drug discovery, Hyundai for factory design, and Disney for immersive storytelling; management believes RTX Pro can become a multi-billion-dollar business
NVIDIA RTX PRO servers are in full production for the world’s system makers. These are air-cooled PCIe-based systems integrated seamlessly into standard IT environments and run traditional enterprise IT applications as well as the most advanced agentic and physical AI applications. Nearly 90 companies including many global leaders are already adopting RTX PRO servers. Hitachi uses them for real-time simulation and digital twins, Lilly for drug discovery, Hyundai for factory design and AV validation, and Disney for immersive storytelling. As enterprises modernize data centers, RTX PRO servers are poised to become a multibillion-dollar product line.
NVIDIA’s management sees sovereign AI continuing to grow; NVIDIA is involved with Europe’s landmark AI initiatives; the European Union has plans to invest €20 billion to build 20 AI data centers; management sees NVIDIA being on track to earn $20 billion in sovereign AI revenue in 2025 (FY2026), up more than 100% from a year ago
Sovereign AI is on the rise as a nation’s ability to develop its own AI using domestic infrastructure, data and talent presents a significant opportunity for NVIDIA. NVIDIA is at the forefront of landmark initiatives across the U.K. and Europe. The European Union plans to invest EUR 20 billion to establish 20 AI factories across France, Germany, Italy and Spain, including 5 gigafactories to increase its AI compute infrastructure by tenfold. In the U.K., the Isambard-AI supercomputer powered by NVIDIA was unveiled as the country’s most powerful AI system, delivering 21 exaflops of AI performance to accelerate breakthroughs in fields such as drug discovery and climate modeling. We are on track to achieve over [ 20 billion ] in Sovereign AI revenue this year, more than double that of last year.
NVIDIA’s networking revenue had very strong sequential as well as year-on-year growth in 2025 Q2 (FY2026 Q2), driven by strong demand across Spectrum-X Ethernet, InfiniBand and NVLink; management thinks Spectrum-X Ethernet has the highest throughput and lowest latency network for Ethernet AI workloads; Spectrum-X grew double-digits sequentially and year-on-year in 2025 Q2 and has more than $10 billion in annualised revenue; management recently introduced Spectrum-XGS Ethernet technology that can double GPU-to-GPU communication speed; CoreWeave will be an initial adopter of Spectrum-XGS Ethernet technology; InfiniBand’s revenue was up nearly 100% sequentially, driven by XDR technology; XDR technology has nearly 100% higher bandwidth than the previous generation; management sees NVLink as the world’s fastest data switch; NVLink Fusion, which allows semi-custom AI infrastructure, has received widespread positive reception; NVLink Fusion will be used in Japan’s upcoming FugakuNEXT supercomputer; the difference between NVLink 8 and NVLink 72 is that NVLink 8 makes each node a computer, whereas NVLink 72 makes each rack a computer; NVIDIA has 3 networking technologies that address scale up (NVLink), scale out (InfiniBand), and scale across (Spectrum Ethernet); management sees NVLink 72 as being excellent at amplifying memory bandwidth
Networking delivered record revenue of $7.3 billion, as the escalating demands of AI compute clusters necessitate high-efficiency and low-latency networking. This represents a 46% sequential and 98% year-on-year increase, with strong demand across Spectrum-X Ethernet, InfiniBand and NVLink.
Our Spectrum-X enhanced Ethernet solutions provide the highest throughput and lowest latency network for Ethernet AI workloads. Spectrum-X Ethernet delivered double-digit sequential and year-over-year growth with annualized revenue exceeding $10 billion. At Hot Chips, we introduced Spectrum-XGS Ethernet, a technology designed to unify disparate data centers into giga-scale AI super factories. [ CoreWeave ] is an initial adopter of the solution, which is projected to double GPU-to-GPU communication speed.
InfiniBand revenue nearly doubled sequentially, fueled by the adoption of XDR technology, which provides double the bandwidth improvement over its predecessor, especially valuable for the model builders.
The world’s fastest switch, NVLink, with 14x the bandwidth of PCIe Gen 5 delivered strong growth as customers deployed Grace Blackwell NVLink rack scale systems. The positive reception to NVLink Fusion, which allows semi-custom AI infrastructure, has been widespread. Japan’s upcoming FugakuNEXT will integrate Fujitsu’s CPUs with our architecture via NVLink Fusion. It will run a range of workloads, including AI, supercomputing and quantum computing. FugakuNEXT joins a rapidly expanding list of leading quantum supercomputing and research centers running on NVIDIA’s CUDA-Q quantum platform, including [ ULIC ], AIST, [ NNF ] and NERSC, supported by over 300 ecosystem partners, including AWS, Google Quantum AI, Quantinuum, QuEra and PsiQuantum…
…This last year, we transitioned from NVLink 8, which is a node scale computing, each node is a computer, to now NVLink 72, where each rack is a computer…
…We now offer 3 networking technologies. One is for scale up. One is for scale out and one for scale across. Scale up is so that we could build the largest possible virtual GPU, the virtual compute node. NVLink is revolutionary. NVLink 72 is what made it possible for Blackwell to deliver such an extraordinary generational jump over Hopper’s NVLink 8. At a time when we have long thinking models, agentic AI reasoning systems, the NVLink basically amplifies the memory bandwidth, which is really critical for reasoning systems. And so NVLink 72 is fantastic.
We then scale out with networking, which we have 2. We have InfiniBand, which is unquestionably the lowest latency, the lowest jitter, the best scale-out network. It does require more expertise in managing those networks…
…For those who would like to use Ethernet because their whole data center is built with Ethernet, we have a new type of Ethernet called Spectrum Ethernet. Spectrum Ethernet is not off the shelf. It has a whole bunch of new technologies designed for low latency and low jitter and congestion control. And it has the ability to come closer, much, much closer to InfiniBand than anything that’s out there. And that is — we call that Spectrum-X Ethernet.
NVIDIA’s new robotics computing platform, Jetson Thor, is now available, and it delivers an order of magnitude higher AI performance and energy efficiency than its predecessor; NVIDIA’s full stack robotics platform is growing rapidly with more than 2 million developers and 1,000-plus hardware-software applications; leading enterprises involved with robotics, including Amazon Robotics and Boston Dynamics, have adopted Jetson Thor
Jetson Thor, our new robotics computing platform, is now available. Thor delivers an order of magnitude greater AI performance and energy efficiency than NVIDIA AGX Orin. It runs the latest generative and reasoning AI models at the edge in real time, enabling state-of-the-art robotics.
Adoption of NVIDIA’s robotics full stack platform is growing at a rapid rate, with over 2 million developers and 1,000-plus hardware, software applications and sensor partners taking our platform to market. Leading enterprises across industries have adopted Thor, including Agility Robotics, Amazon Robotics, Boston Dynamics, Caterpillar, Figure, Hexagon, Medtronic and Meta.
Robotic applications require exponentially more compute on the device and in infrastructure, representing a significant long-term demand driver for our data center platform. NVIDIA Omniverse with Cosmos is our data center physical AI digital twin platform built for development of robots and robotic systems. This quarter, we announced a major expansion of our partnership with Siemens to enable AI-automated factories. Leading European robotics companies, including Agile Robots, NEURA Robotics and Universal Robots are building their latest innovations with the Omniverse platform.
Singapore was 22% of NVIDIA’s 2025 Q2 (FY2026 Q2) revenue; Singapore is an important billing geography for NVIDIA because its US-based customers centralize their invoicing there
Singapore revenue represented 22% of second quarter’s billed revenue as customers have centralized their invoicing in Singapore. Over 99% of data center compute revenue billed to Singapore was for U.S.-based customers.
NVIDIA shipped GeForce RTX 5060 desktop GPUs in 2025 Q2 (FY2026 Q2); the RTX 5060 desktop GPU has double the performance of the previous generation; management will soon bring Blackwell to GeForce NOW; management thinks RTX GPUs bring the best on-device AI performance; NVIDIA has partnered with OpenAI to optimise their GPT models for inference on RTX-powered Windows devices
This quarter, we shipped GeForce RTX 5060 desktop GPU. It brings double the performance along with advanced ray tracing, neural rendering and AI-powered DLSS 4 gameplay to millions of gamers worldwide. Blackwell is coming to GeForce NOW in September…
…For AI enthusiasts, on-device AI performs best on RTX GPUs. We partnered with OpenAI to optimize their open source GPT models for high-quality, fast and efficient inference on millions of RTX-enabled Windows devices. With the RTX platform stack, Windows developers can create AI applications designed to run on the world’s largest AI PC user base.
AI workloads on NVIDIA’s chips have now transitioned strongly to inference; NVIDIA’s management is seeing a huge jump in inference demand; major NVIDIA customers, such as OpenAI, Microsoft, and Google, are seeing huge leaps in AI token generation; Microsoft processed 100 trillion tokens in 2025 Q1, up 5x year-on-year; inference-serving startups have tripled their token generation rate and revenues
AI workloads have transitioned strongly to inference…
…We are witnessing a sharp jump in inference demand. OpenAI, Microsoft and Google are seeing a step-function leap in token generation. Microsoft processed over 100 trillion tokens in Q1, a fivefold increase on a year-over-year basis…
…Inference serving startups are now serving models using B200, tripling their token generation rate and corresponding revenues for high-value reasoning models such as DeepSeek-R1, as reported by Artificial Analysis.
NVIDIA’s automotive revenue had strong growth in 2025 Q2, driven by self-driving technologies; NVIDIA has started shipping Thor SoC (system on a chip); management sees the self-driving automotive market shifting towards a vision language model architecture, generative AI, and higher levels of autonomy; NVIDIA’s full stack Drive AV software platform is now in production and management thinks it can produce billions in new revenue opportunities for NVIDIA
Automotive revenue, which includes only in-car compute revenue, was $586 million, up 69% year-on-year, primarily driven by self-driving solutions. We have begun shipments of NVIDIA Thor SoC, the successor to Orin. Thor’s arrival coincides with the industry’s accelerating shift to vision language model architecture, generative AI and higher levels of autonomy. Thor is the most successful robotics and AV computer we’ve ever created. Our full stack Drive AV software platform, which Thor will power, is now in production, opening up billions in new revenue opportunities for NVIDIA while improving vehicle safety and autonomy.
NVIDIA’s management sees agentic AI requiring 100-1,000x the amount of computation compared to 1-shot AI models; agentic AI is driving tremendous growth in the amount of computation; management thinks agentic AI has reduced hallucination significantly; management thinks agentic AI has helped deliver breakthroughs in robotics
Where chatbots used to be one shot, you give it a prompt and it would generate the answer, now the AI does research. It thinks and does a plan, and it might use tools. And so it’s called long thinking; and the longer it thinks, oftentimes, it produces better answers. And the amount of computation necessary for 1 shot versus reasoning agentic AI models could be 100x, 1,000x and potentially even more, depending on the amount of research and, basically, reading and comprehension that it goes off to do. And so the amount of computation required as a result of agentic AI has grown tremendously…
…Because of agentic AI, the amount of hallucination has dropped significantly. You can now use tools and perform tasks. Enterprises have been opened up. As a result of agentic AI and vision language models, we now are seeing a breakthrough in physical AI, in robotics, autonomous systems.
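The 100x-to-1,000x claim is easier to see with some toy token arithmetic. The step counts and token sizes below are my own assumptions chosen purely for illustration, not NVIDIA figures.

```python
# Illustrative only: how multi-step "long thinking" multiplies compute versus a
# one-shot answer. All counts below are assumptions for illustration, not NVIDIA data.

one_shot_tokens = 1_000            # a prompt plus a single generated answer

research_steps = 20                # assumed planning / research / tool-use iterations
tokens_per_step = 5_000            # assumed reading, tool output and intermediate reasoning
agentic_tokens = research_steps * tokens_per_step + one_shot_tokens

print(f"One-shot tokens: {one_shot_tokens:,}")
print(f"Agentic tokens:  {agentic_tokens:,}")
print(f"Multiple:        {agentic_tokens / one_shot_tokens:.0f}x")
# Roughly 100x with these assumptions; longer contexts, more steps and more tool
# calls push the multiple toward 1,000x or beyond.
```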
NVIDIA’s management sees NVIDIA’s chips as having plenty of advantages over ASICs (application-specific integrated chips); management thinks very few ASICs go into production because the problem set of delivering an accelerated computing platform, which is a full-stack design, is really complicated; management thinks building a data center with NVIDIA brings the best utility as compared to ASICs; management sees NVIDIA’s platform as the most energy efficient, with the best performance per watt; management thinks a world where data centers are limited by power is one where performance per watt is incredibly important
NVIDIA builds very different things than ASICs. So let’s talk about ASICs first. A lot of projects are started. Many start-up companies are created. Very few products go into production. And the reason for that is it’s really hard. Accelerated computing is unlike general-purpose computing. You don’t write software and just compile it into a processor. Accelerated computing is a full-stack co-design problem. And AI factories in the last several years have become so much more complex because the scale of the problems has grown so significantly…
…The models are changing incredibly fast from generative based on autoregressive to generative based on diffusion to mixed models to multi-modality. The number of different models that are coming out that are either derivatives of transformers or evolutions of transformers is just daunting…
…The diversity of our platform, both in the ability to evolve into any architecture, the fact that we’re everywhere, and also, we accelerate the entire pipeline, everything from data processing to pretraining to post training with reinforcement learning, all the way out to inference. And so when you build a data center with NVIDIA platform in it, the utility of it is best. The lifetime usefulness is much, much longer…
…People talk about the chip itself. There’s one ASIC, the GPU, that many people talk about. But in order to build Blackwell the platform and Rubin the platform, we had to build CPUs that connect fast, extremely energy-efficient memory for the large KV caching necessary for agentic AI to the GPU, to a SuperNIC, to a scale up switch we call NVLink, completely revolutionary, we’re in our fifth generation now, to a scale out switch, whether it’s Quantum or Spectrum-X Ethernet, to now scale across switches so that we could prepare for these AI super factories with multiple gigawatts of computing all connected together…
…We’re in every cloud for a good reason. Not only are we the most energy efficient, our perf per watt is the best of any computing platform. And in a world of power-limited data centers, perf per watt drives directly to revenues.
The US currently represents 60% of the world’s compute
United States represents about 60% of the world’s compute.
NVIDIA’s management thinks AI will accelerate global GDP growth
You would think that artificial intelligence would reflect GDP scale and growth and would be, of course, accelerating GDP growth.
NVIDIA’s management is seeing year-to-date AI startup funding at already $180 billion and this compares with $100 billion for the whole of 2024; AI startups’ revenues are expected to increase by 10x to $20 billion in 2025; management thinks it’s reasonable that AI startups’ revenues could 10x again in 2026
Funding for AI-native start-ups was $100 billion last year. This year, the year is not even over yet, and it’s already $180 billion funded. If you look at the top AI-native start-ups that are generating revenues, last year it was $2 billion. This year, it’s $20 billion. Next year being 10x higher than this year is not inconceivable.
NVIDIA’s AI products are sold out
The buzz is everything sold out. H100 sold out. H200s are sold out. Large CSPs are coming out renting capacity from other CSPs.
Okta (NASDAQ: OKTA)
Okta’s management’s approach to securing nonhuman identities (NHIs), which are effectively AI agents, is to give them the same level of visibility, access control, governance and remediation as human identities; management believes no other company can deliver the level of sophistication Okta can to secure AI agents; the Auth0 for AI Agents product from Okta’s Auth0 platform enables developers to build AI agents that are secure by design; management thinks AI agents will significantly amplify the identity-security problems related to machine identities that are currently faced by enterprises; management is hearing from the leaders of the largest companies in the world that they will not be able to get projects involving AI agents to work if their identity-security problems are not addressed; management is building a new product that will model the identity of an AI agent so users can have even more control in managing the security of the AI agent; the new product is still in its very early days because management is seeing very few companies putting AI agents into production despite many companies testing out these agents; management wants to eventually have Okta be the system of record for AI agents so the AI agents can choose what technologies they want to work with
Take our approach to securing nonhuman identities, or NHIs. Okta’s unified platform helps ensure they receive the same level of visibility, access control, governance and remediation as human identities. This includes the ability to detect and discover NHIs wherever they exist, provision and register them properly, authorize and protect them with appropriate policies and govern and monitor their behavior continuously. That’s the power of an identity security fabric enabled with Okta’s unparalleled breadth of modern identity security products. No other company can deliver that level of sophistication.
With our Auth0 platform, we’re enabling developers to build agents that are secure by design and identity security fabric-ready from day 1. Auth0 for AI Agents, formerly known as Auth for GenAI, delivers user authentication that works seamlessly with AI workflows, token vaults that securely manage credentials, async authorization that lets agents work autonomously while maintaining user control and fine grained authorization that permits AI agents to only access authorized data…
…Our perspective is quite simple. It’s that you have many problems today in your enterprise that are clear and present and you can get a lot of security benefit by addressing these problems. These are the problems that we talk about a lot. These are service accounts. These are machine identities. These are putting the right vaulting and governance workflows around all of these things. These are like the bread and butter of our identity platform across Governance and Privileged Access and Identity Threat Protection with Okta AI and the bread and butter of what we’re talking about. These are clear and present things today. In addition to that, every company is going to make a huge investment in AI agents. And what that’s going to do, first and foremost, is it’s going to make that problem I just described 5x worse because every agent wants to connect to 10 service accounts and is going to have its own tokens…
…The last week, I’ve had conversations with CIOs of massive companies that everyone’s heard of that say, there’s no way we’re going to be able to do this AI stuff if we don’t get our identity foundation in order…
…There are investments we are making in innovation we’re building that is going to even take it a step further, which is actually modeling the identity of an agent and giving more power to the customer to manage and secure these things because it’s a native thing inside of Okta, which is also very exciting.
But that’s very early because the number of companies that are actually playing with AI agents is 100%. The ones that are actually putting them in production at scale is very small. So the timing is right here to solve this problem they all have today, the service accounts and token vaulting, et cetera.
And then over time, be the system of record for the AI agents themselves, and give customers choice and flexibility on whether they want to use Salesforce agents, or ServiceNow agents, or build their own agents, and give them the fundamentals across all of that, which are security, control and governance.
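The "token vault" and fine-grained authorization ideas in the quotes above are easier to picture with a toy sketch. The classes and names below are hypothetical, written only to illustrate the pattern of an agent never holding long-lived credentials and every grant being auditable; they are not Okta or Auth0 APIs.

```python
import time
from dataclasses import dataclass, field

# Hypothetical toy sketch of a token vault for AI agents (not an Okta/Auth0 API).
# The agent asks for a narrowly scoped, short-lived token; the vault enforces
# policy and records every decision for governance and audit.

@dataclass
class ScopedToken:
    agent_id: str
    resource: str
    scopes: tuple
    expires_at: float

    def is_valid(self) -> bool:
        return time.time() < self.expires_at

@dataclass
class TokenVault:
    policies: dict                                   # resource -> scopes the vault may grant
    audit_log: list = field(default_factory=list)

    def issue(self, agent_id: str, resource: str, scopes: tuple, ttl: float = 300.0) -> ScopedToken:
        allowed = self.policies.get(resource, ())
        if not set(scopes) <= set(allowed):
            self.audit_log.append((time.time(), agent_id, resource, scopes, "DENIED"))
            raise PermissionError(f"{agent_id} is not authorized for {scopes} on {resource}")
        self.audit_log.append((time.time(), agent_id, resource, scopes, "GRANTED"))
        return ScopedToken(agent_id, resource, scopes, time.time() + ttl)

vault = TokenVault(policies={"crm": ("read:accounts",), "email": ("send",)})
token = vault.issue(agent_id="support-agent-7", resource="crm", scopes=("read:accounts",))
print(token.is_valid())        # True while the 5-minute TTL lasts
print(len(vault.audit_log))    # 1 entry: the GRANTED decision
```

The design point is the one Okta’s management describes: the agent requests a scoped, short-lived credential each time, and the identity layer, not the agent, is where policy, governance and audit live.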
Okta’s management recently introduced a new open standard, Cross App Access, that helps with securing AI; Cross App Access enables AI agents to safely connect with other technologies; management is seeing strong interest from Okta’s partners and ISVs (independent software vendors) for Cross App Access; management has been working on Cross App Access for 3 years; Cross App Access is an industry-wide effort that started with other SaaS companies wanting to have the ability to connect their products with their customers’ other products; management sees the emergence of AI as aggravating the problem of product-to-product connections
Securing AI is the next frontier, and our introduction of a new open standard called Cross App Access is a key part of the solution. This is an important innovation that helps control what AI agents can access, allowing us to help make our customers and ISVs more secure and providing better end-user experience. In short, Cross App Access allows for support of AI agents within the identity security fabric and the flexibility to safely connect to other technologies. Already, there is strong interest in Cross App Access from partners and ISVs, including AWS, Boomi, Box, Ryder and Zoom, and we had over 1,100 attendees at our Identity Summit on the topic earlier this month…
…Cross App Access is an industry-wide effort. It’s actually 3 years old. We’ve been working on this for 3 years. And it came out of Mike from Atlassian and Eric from Zoom and many other SaaS leaders wanted a way to standardize how when they sold their products into companies, how those products were then hooked up to everything else in the company. So Zoom wants to connect to your calendar, wants to connect to a note taking. Atlassian wants to connect to all of your other software development tools. So we invented this protocol and this concept and have published this open standard to solve a very important problem. How do you give your IT teams and your security teams visibility into all these application connections that happen between apps. Now guess what? That’s a problem that’s existed for a long time. And guess what’s happening with AI. AI is supercharging this problem. Now every agent gets what it wants to do. It wants to connect to 15 applications and guess what you need. You need an open protocol for all of those applications that are letting those agents connect, publish and share that information with the security team so they can have visibility and control and audit that. So that’s why Cross App Access is so important.
Okta’s management is not seeing any difference in terms of the Okta products that AI native companies are choosing compared to other types of customers; AI native companies are aware that they are very attractive targets for hackers, so they are really investing in identity security; management thinks Okta can help with AI native companies’ identity security needs
[Question] When we look at the AI native cohort, are there any interesting adoption trends that you’re seeing there in terms of what products they’re taking, how they’re using the platform?
[Answer] It doesn’t seem dramatically different than other cohorts in terms of the adopting workforce solutions or Auth0. It looks pretty much the same, except they’re growing very fast. I guess that’s a difference, especially, actually, the revenue metrics. It’s growing very fast, and we think we’re well positioned in that cohort. And I think similar to every company, they’re trying to figure out how they can be secure internally as they’re growing very fast. They know from a workforce identity and identity security perspective for their internal operations, they’re sitting on a lot of very valuable data and definitely hackers want to attack them like they want to attack every important company. So they’re really investing in identity security, and Okta helps them with that.
Okta’s management thinks that Okta’s 2 open standards, IPSIE and Cross App Access, will help the entire identity market become valuable; management thinks about the monetisation of the open standards from the perspective of the open standards making machine and AI agent identities more widely accepted and thus making Okta’s products more important for customers
These are 2 open standards we’re pushing out there with the ecosystem. And the effect of both of these things for Okta is going to be basically identity providers are going to be more valuable tools to the customers. So they’re going to have better control, fine-grained control, into resources, better policies, more value. So the whole identity market gets more valuable and bigger…
…The clear and present issue today, which is service accounts, nonhuman identities. We monetize that through Okta Privileged Access and Identity Security Posture Management. So Identity Security Posture Management detects the nonhuman identities and the risks in a proactive way that’s comprehensive across all platforms. And Okta Privileged Access and Okta Identity Governance can vault the credentials and rotate the credentials and have the right governance workflows…
…In a world of AI agents, our belief is strong that you are going to manage AI agents with your identity system. And so that’s how we’re going to monetize that. When you put a bunch of AI agents inside Okta, that’s going to be more valuable from an identity security perspective, and we’re going to be able to charge for that with our customers…
…But it all is kind of predicated on a vibrant, healthy growing AI agent ecosystem, which I think there’s a lot of different thoughts on how that exactly play out, but who’s the vendor going to be, who’s the platform, SaaS vendors versus custom development, whatever. I think whatever happens, you’re going to need to manage this stuff.
Salesforce (NYSE: CRM)
Salesforce has won 12,500 AgentForce deals since it was launched 3 quarters ago, of which 6,000 are paid; 40% of new AgentForce bookings in 2025 Q2 (FY2026 Q2) came from existing Salesforce customers; AgentForce had a 60% sequential increase in customers going from pilot to production in 2025 Q2; AgentForce can now support the public sector and has FedRAMP High certification, so Salesforce can sell more to the US government than before; management thinks Salesforce’s consumption model is showing strong early success; management recently announced new flexible payment options for AgentForce, with Flex Credits accounting for 80% of AgentForce new bookings in 2025 Q2 (FY2026 Q2); DIRECTV is one of Salesforce’s biggest Flex Credits customers; Falabella refilled the Flex Credits tank 3 times in just a few months
In the 3 quarters since we launched Agent Force, we have now won more than 6,000 paid deals and more than 12,500 overall…
…40% of our Agent Force new bookings this quarter came from existing customers extending their investment with Salesforce. And it’s demonstrating the value that they’re getting and how the flywheel is really working. We’ve seen a 60% increase quarter-over-quarter in customers who’ve gone from pilot to production and they’re expanding use cases and scaling consumption…
…Now with Agent Force for public sector and FedRAMP High certification, we’re able to sell more to the government than ever before because we’re bringing the power of the agentic enterprise directly to the government…
…Our consumption model is showing strong early success…
…Last month, we announced new flexible payment options for Agent Force, including pay-as-you-go, to lower the barrier to adoption and encourage experimentation. And following their launch last quarter, Flex Credits now account for 80% of Agent Force Q2 new bookings…
…Marc alluded to DIRECTV. Incredible business value. This is one of the biggest flex credit customers that we have globally…
…There is a customer that in just 3 or 4 months, they refilled the tank 3 times. I gave you the example of Falabella.
Salesforce’s management sees all of Salesforce’s customers becoming agentic enterprises; management sees AI agents as representing a complete transformation for Salesforce and its customers; management sees the end goal of agentic AI as humans and AI agents working together with trusted data; management is adding native agentic capabilities into all of Salesforce’s products; Salesforce is pairing every salesperson with an AI agent and is using AI agents in Sales Cloud to call every single person back; in customer service, agents are handling millions of conversations, with AI agents handling 1.5 million conversations in 9 months within Salesforce’s help site; in field service, AI agents are helping technicians orchestrate scheduling and logistics, and helping technicians solve problems; the new version of Salesforce’s Tableau has AI agents that surface insights and recommendations instantly; Salesforce’s marketing product will soon have AI agents that can turn every one-way email to customers into 2-way conversations; Salesforce employees are using Slack as the interface for communicating with AI agents built with AgentForce; management thinks Salesforce will lead the way in this agentic enterprise wave because it has (1) the software infrastructure, and (2) the metadata platform; management will soon unveil all of Salesforce’s agentic products at Dreamforce; management is seeing very healthy growth in the pipeline for agentic transformation among enterprises
One thing is extremely clear to me, every single one of our customers is becoming an agentic enterprise…
This isn’t simply just automating some existing business processes, these agentic enterprises. Well, for Salesforce, it’s certainly true. It’s a complete transformation. And for our customers, the agentic enterprise is a complete reinvention in many cases of who they are and what their potential is. It’s a shift from traditional hierarchies to reshaping the entire company, from busy work to orchestrating workflows, from siloed teams to seamless collaboration, from clicking and routing to natural conversations…
…But ultimately, it’s about this. It’s about humans and agents working together with every decision grounded in trusted data…
…Across our portfolio, we are adding these native agentic capabilities into every single one of our products…
…Our Sales Cloud for years has been an app that thousands or millions of salespeople use to manage their sales every single day. But now riding alongside every salesperson is an agentic salesperson. And that agentic salesperson is calling every single person back. And how that relates to Salesforce, well, let me tell you that, well, maybe somewhere between 20 million and 100 million people who have contacted Salesforce in the last 26 years, they haven’t been called back. It’s just because we didn’t have enough people. But now with our new agentic sales, everybody is getting called back…
…In service, we’ve been talking about that now for months, you can see our agents are handling millions of conversations while humans are delivering the empathy and expertise. Well, it’s a bigger story than that, where you know that we have delivered in the last 9 months about 1.5 million conversations just for our own company on help.salesforce.com…
…In field service, agents orchestrate scheduling and logistics so technicians can focus on solutions. I saw it myself at my home. I have this incredible device from Eaton, one of our large customers using our field service product. And it actually connects my Airstream trailer to my house. And when the technician comes out to work on it, well, they’re able to use the agentic capability to learn as much as possible about the product that I’m using and how to fix it and how to repair it, while also managing the traditional system of record that’s on the field service capability, managing all the field service and service operations through the field service capability…
…We’ve been showing now for a few months, starting at our Tableau conference, the new version of Tableau, where agents surface insights and make recommendations instantly and where agents and humans are working together to make smarter, faster decisions…
…We’re demonstrating to our customers, and about to release, our new e-mail platform that turns every one-way conversation into a 2-way conversation. And agents are going to turn these one-way e-mails into 2-way conversations…
…If you’ve seen anyone from Salesforce recently, have them show you how we’re using Slack as our interface to our own agentic enterprise where we have dozens of agents with people and apps and LLMs, all in one conversational agentic workspace. It’s pretty cool. And these agents are operating across apps, departments, silos, all running off of our data cloud, all running off of AgentForce…
…Salesforce is going to lead the way. There’s no question about that. We’ve built the software infrastructure for the agentic enterprise; we have our metadata platform unifying our apps, our data and agents into one powerful agentic operating system. We are rebuilding every single one of our products to be agentic. We’re delivering almost every single one of those products at Dreamforce. And at Dreamforce, you’re going to see all of these products…
…I see the pipeline into H2. Pipeline is growing in the high teens. And for big deals, it’s actually approaching 20% growth. That’s a really good sign. We haven’t seen that kind of pipeline in a long time. The agentic enterprise is really the next incredible investment cycle.
Data Cloud is a critical foundation for Salesforce’s agentic ambition because it provides the data and metadata for accurate output by AI agents; management believes Data Cloud enables Salesforce to have the most accurate AI agents in the industry, with about 90%-ish accuracy; management thinks Data Cloud will be the most strategic and important business for Salesforce; Data Cloud is now a $7 billion business; Data Cloud had 140% year-on-year growth in customers, and usage numbers are growing rapidly; more than half of Fortune 500 companies are on Data Cloud; FedEx is using Data Cloud to save a lot of costs and grow the percentage of customers who signed a contract and proceed to start shipping by double-digits; Salesforce’s Data Cloud and AI ARR (annual recurring revenue) reached $1.2 billion in 2025 Q2, or FY2026 Q2, up 120% year-on-year (it was $1 billion in 2025 Q1, also up 120% year-on-year); Salesforce closed 60 deals in 2025 Q2 (FY2026 Q2) exceeding $1 million that included Data Cloud and AI; management sees Informatica, together with Data Cloud and MuleSoft, as the 3 components for every company’s AI foundation
Data Cloud is the heart and soul of the success of these agents because it is providing the data and the metadata that you need and the context to get the accuracy. We probably have the most accurate agents in the industry, and the way that we’re achieving that is through our data cloud. It’s this Data Cloud as well as Tableau and MuleSoft and soon Informatica, all working together to really help our customers clean and harmonize their data and provide it in a way that can be consumed by our Agent Force platform to provide this level of accuracy.
I think the data business is probably the most strategic and most important business for Salesforce going forward. And already, it’s a $7 billion business. And Data Cloud is having a great year. It had 140% year-over-year growth in customers and 326% growth in row access by zero-copy integration. The usage numbers are really just off the charts. But over half of the Fortune 500 are already on Data Cloud, but it’s really just the very, very beginning…
…FedEx, and you’re going to see them at Dreamforce, their Chief Operating Officer, Richard Smith, is coming to be part of my keynote. Well, let me tell you that they’ve got unified data across all their platforms now with Data Cloud, and the numbers that they’re telling us that they’re saving, well, I’m not going to — I’m not going to take away Richard’s punchline from the Dreamforce keynote, it’s like numbers I’ve never heard in terms of what the amount that can be saved by technology. And now if a business customer [ isn’t ] actively shipping, our own marketing cloud campaign is automatically triggered and sales reps are alerted and it’s all happening through our Data Cloud. And this idea that FedEx has seen a double-digit increase in the percentage of customers who signed the contract and proceeded to start shipping, it’s dramatically surprised them what has been possible in such a short period of time…
…Data Cloud and AI ARR continues to scale, reaching $1.2 billion in Q2, growing 120% year-on-year…
…Data and AI products were in 60 deals greater than $1 million…
…Because AI, as we all know, these large language models only have a certain level of accuracy and it’s not 100%. It’s probably about in the 90s when it really gets well-architected with our data cloud and with all the different kind of capabilities and kind of really advanced techniques that we’ve come up with to make our AI as accurate as it can…
…We think that every customer is going to need an Informatica, every customer is going to need a MuleSoft and every customer is going to need a Data Cloud. And together, we think that’s called the AI foundation. And that AI foundation is the Data Cloud plus MuleSoft plus Informatica. And if you’re going to roll out Agent Force, you’re going to need an AI foundation made up of those 3 things.
DIRECTV used AgentForce to (1) save billing reps 300 hours of inquiry-handling and (2) execute 50,000 actions in a week with Employee AI Agent; enGen expects to save millions of dollars annually by cutting call times with AgentForce; PenFed expects to save millions of dollars annually by using AgentForce for loan underwriting; Under Armour used AgentForce to double its case deflection rate and increase its customer satisfaction rate by double digits, all in less than 60 days; Reddit used AgentForce to reduce average resolution times from 8.9 minutes to 1.4 minutes; Telepass used AgentForce to power 275,000 agentic conversations over 5 months, and has become one of the fastest-growing AgentForce customers; Pandora has scaled from 1 agent to 3 agents with AgentForce in a single quarter; Indeed has doubled the number of actions taken by its customer-facing agents and has added another agent for internal productivity; Williams Sonoma has deployed AgentForce for only a few weeks, but has expanded from the initial use case of customer support for 1 brand, to customer support for 8 brands and other use cases; the US Army is planning to use AgentForce to support its Human Resource Command; Salesforce has expanded 24/7 instant support to 6 new languages and agents now cover 94% of its global case volume; Salesforce recently launched many new agents for internal use cases; management is aware of the recent MIT study showing that 94% of AI projects in enterprises have failed, but Salesforce’s customers are getting great results; Falabella is using Salesforce’s AI agents to track its order locations and has seen its NPS (net promoter score) increase, its call volume drop by 25%, and 70% of its conversations shift to WhatsApp
DIRECTV saved billing reps nearly 300 hours of inquiry handling with Agent Force, and its Employee AI Agent executed 50,000 actions in a week…
…enGen, an incredible company, projecting millions in annual savings by cutting call times.
PenFed, which we’ve talked about in many of the scripts that we’ve had, is already projecting millions in annual savings by using Agent Force in its loan underwriting…
…Under Armour and Kevin Plank, well, he more than doubled his case deflection rate and boosted customer satisfaction by double digits. And they did it in under 60 days…
…A lot of our employees are excited about Reddit because they’ve reduced average resolution times from 8.9 minutes to 1.4 minutes…
…Telepass, well, they’ve powered more than 275,000 agentic conversations over 5 months. And the way they put it in the script is, “We can’t believe the speed and growth of these conversations just in the last few weeks.” In conversations at the management level, they’ve become one of our fastest-growing AgentForce customers…
…Pandora, the amazing jewelry retailer, Alex’s entire team scaled from 1 agent to 3 in a single quarter…
…Indeed have more than doubled the number of actions taken by their customer-facing agents and added another agent in Slack to drive internal productivity…
…Williams Sonoma, and we’ve only been live for a few weeks, started with Agent Force powering customer support for just one of their brands. I think you know they have like quite a few amazing brands like Pottery Barn and West Elm and others. Well, now it’s rolled out along 8 of their brands and as well as agents for other use cases, including a sous chef agent, that is helping customers choose cookware and guiding them step-by-step through recipes. They are finding incredible new ways to use the Agent Force platform. And they’re doing it side by side across their entire sales force deployment…
…The Army is already planning to launch a digital front door for its Human Resource Command, providing 24/7 AI-powered service and support to all soldiers and personnel and millions of veterans…
…In Q2, we expanded 24/7 instant support to 6 new languages, which combined with English now cover over 94% of our global case volume. Earlier this year, we launched our IT and HR agents in Slack to support our employees. And in July, we launched dozens more specialized agents in Slack…
…Over the weekend, I read that MIT study that’s becoming very popular, which really goes to show that a lot of companies have thought they were on the right path with generative AI, building their own models, doing it themselves, hooking it all up. And now they’re claiming about 94% of those projects have failed. But we’ve been saying that was going to happen for the last several years, as you know. But that’s not what our customers are saying. Our customers are saying that they’re getting phenomenal results and that they have humans and agents working together to create a new level of customer success, or we say it at Salesforce as an agentic enterprise…
…Falabella is the largest retailer in Latin America. Their main use case, they have several, but their main use case is: Where is my order? And they solved that question for customers across the web, in-app and WhatsApp. The pilot took 2 months from idea to production. They access their OMS system. They leverage the CRM data in Salesforce, knowledge articles that we put in Data Cloud. They connect Data Cloud to GCP. And the value is extraordinary. The NPS has increased by 10 points. Most of the digital interactions, 70% of them, have shifted to WhatsApp, and the call volume has dropped by 25%.
Salesforce will soon launch its agentic IT service platform; many Salesforce customers have been asking for IT services from Salesforce; the agentic IT service platform will be integrated with Slack; the agentic IT service platform will see every IT request become a conversation; management thinks the agentic IT service platform will be a huge growth driver for Salesforce; management thinks traditional ITSM (IT service management) products have served only the very high-end market, but Salesforce’s agentic IT service platform can serve a much wider demographic of customers; Salesforce itself is the first customer of its agentic IT service platform
The world of ITSM and IT service. It’s an application area that we just haven’t gone to before. But I’m very excited that next month, and you’re going to see this at Dreamforce as well, that we’re launching our own agentic IT service platform. A lot of our existing customers have been asking for this. We’re bringing a whole new level of capability. It’s agent-first and it’s Slack-first, that is right inside Slack, you’re going to be using our agentic IT service capability. It’s natively embedded where employees already work with 0 learning curve…
…With agentic IT service, well, every request is becoming a conversation where agents work hand-in-hand with IT teams proactively fixing their problems. It’s going to be an incredible growth driver for the company…
…It’s a very democratic platform. A lot of the ITSM products have only served the very highest end of the market with maybe 1,000 customers here or 1,000 customers there. But the thing about Slack is that it’s used by about 1 million customers worldwide. And I think all of them are going to be able to benefit from this IT service platform. No one else is delivering this level of agentic capability and digital labor at scale. Now we know how to do this because our own first customer for this, well, it’s us. We are Customer 0.
Salesforce’s management thinks being agent-first will expand Salesforce’s margins in the long run; Salesforce has cut its customer support workforce by 40% because of the efficiency of AI agents
We believe that being agent-first is a key driver of our own long-term margin expansion…
…[Question] We’ve heard software companies say that they have held their head count flat in their support organizations. We haven’t heard anyone saying that they reduced head count by close to 40% there like you have.
Salesforce’s management thinks AI is an extension of SaaS, and not an eliminator of SaaS, because there are still problems that AI cannot solve
There’s a lot that we can resolve automatically through these agents with the customers, but there’s also a lot that cannot be resolved. And that has to be escalated to the humans. And so it’s humans and agents working together to satisfy customer success. And this is what has been extremely important…
…So it’s not about the fundamental, I would say, elimination of SaaS. What I would say, it’s the fundamental extension of SaaS…
…Nothing lasts forever, okay? But I just look at how I’m running my own business and the business of our customers, I don’t understand what the replacement is. So I just look at this incredible next-generation transformational capability, and I’m going to lay it all out at Dreamforce. And by the way, my keynote, I kind of threw away all my slides and I said, let’s just have 12 CEOs of the largest companies on the planet just show you exactly what they’re doing with this technology, because it’s crystal clear what the value proposition is. But to hear some of this nonsense that’s out there in social media or in other places, people say the craziest things, but it’s not grounded in any customer truth.
Salesforce’s management sees Salesforce as being the only company that can bring together deterministic workflows and agentic reasoning
We are the only platform, the only software infrastructure that can bring the deterministic workflows, the data and the agentic reasoning and actioning on the same platform.
Salesforce’s management thinks AGI (artificial general intelligence) will not be coming any time soon
The idea that there is, I’ll just say, again, an AGI, that seems like a fantastical term. I know it’s coming in the next week or 2 evidently. But this idea that there’s some kind of AGI that’s about to take over the whole world. Well, let me just help everybody understand that’s not exactly what’s about to happen.
Salesforce’s management thinks Salesforce is going to see incredible growth in the next 2 years because of AI
We think we’re going to see some incredible growth over the next 6 to 8 quarters…
…My focus is accelerating bookings. I’m very happy with the execution of my team. I’m very positive about what is coming ahead, not just in H2, but also what is coming in the next fiscal year. We’re already thinking about the next fiscal year. We wouldn’t be investing at the rate that we are investing with very — a lot of intentionality in the areas that are growing, in the areas that have higher margin if we didn’t see a great opportunity.
Sea Ltd (NYSE: SE)
Sea’s management is using AI to improve Shopee’s advertising business; sellers who used Shopee’s advertising products rose 20% in 2025 Q2, and sellers who used Shopee’s advertising products grew their ad spend by more than 40% from a year ago
Since early last year, our dedicated ad-tech team has worked hard to improve algorithms, enhance traffic allocation efficiency, and deploy AI technologies to better serve our ad-paying sellers. And we have seen very encouraging results. During the second quarter, the number of sellers using our ad products rose by around 20%, and ad-paying sellers’ average quarterly ad-spend grew by more than 40% year-on-year. Our tech enhancements have allowed us to more effectively optimize Shopee’s GMV and advertising revenue at the same time. We saw an 8% uplift in Shopee purchase conversion rates and improved our ad take rate by almost 70 basis points this quarter, year-on-year.
Sea’s management has provided AI tools for Shopee sellers to produce high-quality video content; livestreaming and short-form video orders in Southeast Asia accounted for more than 20% of Shopee’s physical goods order volume from the region; there are now 7 million Youtube videos with Shopee product links embedded, up 60% sequentially (was 4 million in 2025 Q1)
Our AI tools empower Shopee sellers to produce high-quality video content, helping them improve user conversion and make more money without having to invest in their own studio set-up. In Southeast Asia, orders from livestreaming and short-form videos accounted for more than 20% of our total physical goods order volume in the second quarter. Our collaboration with YouTube has also continued its strong momentum. As of June, more than seven million YouTube videos featured Shopee product links across our Southeast Asian markets, an increase of more than 60% quarter-on-quarter.
Sea’s management sees Monee has having 3 unique advantages, namely, (1) integration with Shopee, (2) a large user base who are growing their credit records with Monee, and (3) use of AI to improve credit models;
…Three unique advantages that Monee has. First, deep and seamless integration with our Shopee ecosystem. Second, a very large base of users who are growing their credit track records with us over the years. Third, our increasing use of AI to improve our credit models. Together, these advantages uniquely enhance our underwriting capabilities in each market, enabling us to very effectively push for growth across our three credit product lines: on-Shopee SPayLater, off-Shopee SPayLater, and cash loan products.
Sea’s management has been using AI a lot in general recommendations, leading to improvement in conversion rates as the system can better understand user intention
We also use AI a lot in our general recommendations, and this improved our conversion rate quite a lot by understanding user intention better, by understanding the buyer’s query better.
Sea’s management is using AI to generate images for product descriptions
We also spent a lot of effort on the AIGC initiatives that we can generate a lot more attractive pictures for the product descriptions.
Shopee’s customer service chatbot is 80% managed by an AI agent; the use of AI in Shopee’s chatbot helps sellers both reduce cost and increase the potential for upselling when interacting with consumers
On the customer interaction side, we — our customer service chatbot is 80% managed by an AI agent. We’re also helping the sellers to interact with the buyers through the CS chat via agents as well, not only reducing the cost for the sellers, but also improving the upselling potential for the sellers while talking to the buyers.
Sea’s management is actively using AI to improve Sea’s internal operations
The second type is to improve our internal operations. For example, obviously, the product development side, but also many of our daily operations. For example, if you look at the way we run our marketing campaigns, a lot of my campaigns are very automated right now through AI tools. Many of the processes to process the payments are AI-enabled through agents, et cetera.
Sea’s management is very excited about the use of AI in the gaming industry; management thinks the gaming industry will be among the first batch of industries to benefit from advancements in AI; management has seen AI improve productivity in game development by generating art work; management thinks AI agents can improve the gaming experience for players who prefer to play solo games; management wants to explore the use of AI to generate content and have personalised gaming experiences instead of the current format where the gaming experience is preset
We are very, very excited about the AI perspective in the game industry. And personally, I believe game industry will be among the first batch of industries largely benefited by the AI advancements and the technologies.
And so far, we have seen a lot of upside on the — actually on the development and the production side. For example, to develop any new content or new map, we need to generate a lot of original art. And now a lot of the very basic art can be generated by AI, and the quality is very, very decent. In terms of the efficiency, the volumes generated and the varieties generated are, you can imagine, much, much better than what humans can do. So this has largely improved our productivity, and it’s really, really exciting.
And as you mentioned, from the gamers’ engagement perspective, there is a very, very clear opportunity we have seen in the use cases. For example, Free Fire is a very, very social game. It’s designed for team play, so it’s much, much more fun if you play with other players, and there’s a much richer combination of strategy and technique you can use than when you play as a solo gamer. But we observed in Free Fire that we still have a very, very sizable group of gamers who only play solo games. They enjoy it, but I think they haven’t really fully experienced the amazing part of the game, maybe because they’re shy and don’t know how to reach out to other players. So we think AI-enabled bots, essentially AI game agents acting as their teammates and peers, playing a brother’s role, a sister’s role or a coach’s role in the game, can give them a bit of a flavor of how this interaction feels in the game play, and encourage them to reach out and play as a team rather than as individuals. I think that largely helps retention.
And furthermore, I think we are very actively experimenting and trying to figure out how to leverage generative AI to let gamers generate the content themselves. Today, all game experiences are preset in how the experience will look. And I think with AI tools, this experience can actually be much more immersive, much more interactive and much more individualized.
Tencent (OTC: TCEHY)
Tencent’s management added AI-powered citation to content on Weixin; management is using LLMs (large language models) to help merchants with customer inquiries and personalized product recommendations; Yuanbao, Tencent’s AI chatbot, can now be added as a Weixin contact for users to interact with; management is enhancing the Yuanbao app and is pushing for growth in DAUs (daily active users)
On the AI front, we added AI-powered citation to content so that users reading official accounts articles or video accounts comments can activate contextual AI commentary on related information. We upgraded Mini Shops customer service with large language model capabilities to provide merchants with more intelligent responses to customer inquiries and personalized product recommendations. We enabled Yuanbao as a Weixin contact to interpret and summarize video accounts content. Meanwhile, we are rapidly enhancing the functionalities of our AI native app Yuanbao, and we’ll share more details about how we are growing the DAU later this year.
Tencent’s management is seeing AI becoming an increasingly important driver of growth in Tencent’s Domestic Games and International Games businesses; management is applying more AI tools to increase the speed and scale of content production in Tencent’s games; AI allows Tencent to provide more human-like virtual teammates to solo-gamers and more realistic non-player characters in games; management is using AI in marketing activities for its games for more efficient targeting
Reviewing the progress of our game business domestically and internationally in recent months, AI has become an increasingly important driver of its growth in terms of game content, game engagement and game monetization. We’re increasingly applying AI tools to boost the speed and scale of content production across our major games. AI allows us to provide more human-like virtual teammates in our competitive PvP games and to power more realistic nonplayer characters in our story-driven PvE games. And we’re using AI in our game marketing activities to more efficiently target marketing spending towards the users most likely to activate and remain in each game.
The Marketing Services segment’s revenue was up 20% year-on-year in 2025 Q2 because of AI upgrades in its advertising platform and more closed-loop advertising involving Weixin’s ecosystem; the AI upgrades included better AI capabilities in ad creation, placement, recommendation and performance analysis; the AI upgrades led to higher click-through rates, conversions and ROI for advertisers; Video Accounts’ Marketing Services revenue grew 50% year-on-year in 2025 Q2; Mini Programs’ Marketing Services revenue grew 50% year-on-year in 2025 Q2; Weixin Search revenue grew 60% year-on-year in 2025 Q2, driven by the use of Tencent’s LLM (large language model) to deepen understanding of merchandise and of user consumption intent; most of the advertising revenue growth in 2025 Q2 came from higher revenue per impression partly because of AI-driven increases in the click-through rate
For Marketing Services, revenue grew 20% year-on-year to RMB 36 billion in the quarter, benefiting from AI-powered adtech upgrades and from increased closed-loop advertising arising from Weixin’s transactional ecosystem. We expanded AI capabilities in areas including ad creation, placement, recommendation and performance analysis, which had the effect of boosting click-through rates, conversions and ROI for advertisers. Specifically, we upgraded our ad platform architecture by deploying a scaled-up foundation model, which analyzes advertisement click-through rates and transactions across multiple apps and services as well as user interactions across text, image and video to determine user interest and optimize ad performance in real time.
By property, Video Accounts marketing services revenue rose approximately 50% year-on-year due to more traffic and more transactional activity within Video Accounts. Mini Programs marketing services revenue also increased about 50% year-on-year. Activity within Mini Games and Mini Dramas created a flywheel effect, which drives more developers to use our closed-loop marketing solutions to promote their services. And Weixin Search revenue grew around 60% year-on-year due to more consumer and advertiser interest in Mini Program search results and to enhanced ad relevance as we leverage our large language model to deepen understanding of merchandise and of user consumption intent…
…In the second quarter, the majority of the advertising revenue growth of 20% year-on-year arose from higher revenue per impression. And that, in turn, was primarily due to a higher click-through rate arising from deploying AI, although also to higher revenue per click arising from more closed-loop activity with mini shops and mini games.
Within the Fintech and Business Services segment, Business Services revenue grew in the teens year-on-year in 2025 Q2; Cloud Services revenue growth accelerated in 2025 Q2 from increased revenue from providing GPUs and API tokens for customers’ AI needs; management is focused on growing Business Services at an accelerated rate without being hampered by fluctuations in GPU supply

Business services revenue grew at a teens rate year-on-year. Cloud services revenue growth accelerated versus recent quarters, benefiting from increased revenue from providing GPUs and API tokens for customers’ AI needs. Fees collected on Mini Shops transactions continue to grow at a rapid rate and business services gross margin rose year-on-year due to improved efficiency and positive mix shifts…
…We’ve put our cloud business onto a more sustainable base as well as improve the cost competitiveness of the supply chain for our cloud business, we can — we are refocusing on growing revenue at an accelerated rate versus the prior rate without depending too much on the vagaries of the GPU supply situation. So if we do have sufficient GPUs that we can rent out more in the cloud, then we’ll do so. But our cloud strategy is not dependent on the GPUs. We’re also growing in CPU, in storage, in database, in CDN and so forth. So that’s on the cloud side.
Tencent’s management has enhanced the data quality and diversity of Hunyuan, Tencent’s proprietary foundation model; Hunyuan 3D model has become the No.1 3D generative model on Hugging Face; game developers, 3D-printing companies and designers are increasingly using Hunyuan 3D; management wants to continue improving Hunyuan, and sees many dimensions for doing so; when Hunyuan improves, all of Tencent’s AI services also improve
For HunYuan, we enhanced our data quality and diversity through data augmentation and synthesis and implemented more effective pretraining and post-training scaling. HunYuan 3D model has become the top ranked 3D generative model on Hugging Face due to its geometric precision, texture fidelity and prompt 3D alignment capabilities. Game developers, 3D printing enterprises and design professionals are increasingly using the HunYuan 3D model for their digital asset generation needs…
…In terms of the model, I would say there’s actually a lot to be done, right? And I would say sort of in the broad bucket, there is the large language model itself, and we want to keep improving the LLM itself. And that actually involves improvement along a number of different dimensions, including making sort of the data sort of higher quality and more comprehensive. That includes making the pretraining more efficient and more effective and improving the pretraining model that includes improving the post-training and reinforced learning processes in basically extracting the capability of the pretrained model and that includes improving our infrastructure so that we can actually train more efficiently as well as inference more efficiently, right?…
…When we have an improved LLM, it’s actually sort of the foundation for all our AI services. And in particular, it would improve our search and productivity-related services…
…We also want to improve the multimodal capability of our model so that we can actually provide more customized functions for the users in Yuanbao, right? Within Yuanbao, it’s not — people are not just using it for search and productivity-related activities. They are using it for all kinds of different multimodal activities. They may want to speak, they may want to turn text into pictures, turning pictures into text and there are a lot of multimodal conversions within Yuanbao, which we actually need to have very strong capability for…
…I think the third broad category is actually coding and agents, right? So that if we can sort of keep improving, then basically, we can provide much better coding environment for both ourselves as well as our enterprise customers. And at the same time, that would enable a better agent and instruction follow capability for our agent. I think that’s particularly important for Weixin going forward and as we build an agent for Weixin that can be personalized assistant to the Weixin users in a personalized way.
Tencent’s management thinks Tencent’s advertising revenue growth can grow at a healthy rate for a long time; the drivers of future growth for the advertising revenue come from (1) higher click-through rate, where AI delivers better targeting and thus more clicks, (2) more traffic, including traffic within Tencent’s AI-native experiences, (3) higher revenue per click, as generative AI used for creating the ads results in more ad demand, (4) closed-loop e-commerce transactions driving higher advertising demand, and (5) higher advertising load; management does not expect any meaningful impact to Tencent’s advertising business from the new advertising law for gaming company sales and marketing because the advertising business has ample diversification, and the AI-related improvements management is making is a far more important variable; management could crank the lever for advertising-growth if the cost of deploying AI throughout Tencent suddenly spikes
On the advertising and the potential, we continue to believe that we enjoy a long and lengthening runway for continuing to grow our advertising revenue at a reasonably healthy rate. And that length of the runway reflects upside in a number of the key variables that determine our marketing services revenue, including the click-through rate where AI delivers better targeting and thus more clicks, including traffic where we see growth in video accounts traffic and search traffic over time, in traffic within our AI native experiences, including revenue per click as generative AI used for creating the ads results in more ad demand as well as e-commerce closed-loop transactions resulting in more ad demand. And then finally, in ad load, where, as you know, for short video, our ad load is currently in the low to mid-single digits versus our peers who are in the low to mid-teens…
…[Question] About the impact on the new advertising law for gaming company sales and marketing. Under the new ad regulation effective in July, sales and marketing spending in excess of 15% of revenue will need to pay an additional 25% tax. So how do you expect this to affect our advertising income, especially for mini games, which heavily rely on traffic acquisitions, i.e., the sales and marketing could easily surpass this 15% revenue threshold?
[Answer] We don’t expect a meaningful impact. Our advertising business has become quite broad-based over time. And if you look at the second quarter, there was an adverse impact from the food delivery companies and some of the e-commerce companies ramping up in food delivery, reducing their advertising spend as they invested more in subsidies. But despite that, our advertising revenue grew 20% year-on-year. So in our view, there’s always going to be individual blips up and down in terms of individual categories. But what we’re doing in terms of deploying AI within advertising is a much more important variable…
…Now of course, if the cost of deploying AI, including GPU depreciation was suddenly to step up and become very burdensome, we could accelerate the advertising monetization, but we don’t see the need to do that right now.
There are 4 broad categories of AI features across Tencent’s ecosystem, namely (1) the AI-native app Yuanbao, (2) AI-enabled search, (3) features within games, and (4) features within productivity tools; management thinks it’s still early in observing user behaviour
In terms of the AI features, right, I think there is sort of broadly speaking, a number of these features. One is obviously our Yuanbao, which is an AI native app. And then I would say it’s related to search, AI-enabled search. So that lands on our browser that also lands on WeChat search. And then there’s a whole host of different features within even games, right, when we have AI-enabled players or in our productivity tool, for example, summary of meetings in our Tencent Meeting and assistance within our Tencent docs, right, to help people to write. I would say we’re still at an early stage in observing the user behavior.
Tencent’s management has so far not seen any major negative impact on Search from the use of AI to produce search results
The one sort of negative impact that you are pointing to is when there is AI-assisted search, whether it would just show the content rather than leading people to the pages. We have not seen a very big impact on that. I think overall, people tend to be more satisfied in getting the answer directly. And if they want to explore the topic more, they would click on the different links and articles. So I think overall, it’s actually not that much of an impact.
Tencent’s management is currently providing a lot of AI features for free and they are managing the AI-related costs of these features in a granular way such as using smaller models when applicable and improving the efficiency of inference with software; management wants to eventually monetise these AI features, but they think it is really hard for the user-paid model – popular in the US now for monetising AI models – to work in China; management currently prefers monetisation through advertising; management is seeing AI being monetised in Tencent by contributing to the growth of the overall business
[Question] You guys continue to offer increasingly more AI features to consumers free of charge. The delivery of these AI features is a lot more expensive than mobile Internet services, which will potentially hurt Tencent’s cost structure. Will management consider starting to directly monetize these consumer-facing AI features in the next 1 or 2 years?
[Answer] We are actually managing the cost in a relatively granular way, right? I think there are a lot of places in which if we can use smaller models, we’ll be using smaller models and the cost will be sort of much lower than using the flagship model. And so in a lot of these use cases, the cost is manageable if we can use smaller models. And at the same time, if we continue to improve the efficiency of inference through software upgrades.
And as it relates to whether we would be monetizing eventually — I think eventually, there should be some monetization. I think in China, in reality, it’s actually very hard to use the user paid model, which now populates the U.S. AI tools. And I think over time, we’ll try to figure out whether there will be some ad-supported way of monetizing. But at the same time, I want to point out that AI is already contributing to the growth and monetization of our existing businesses in different ways, right? So somehow we could also fund part of this “subsidy” for AI usage by the users through the growth in our other businesses.
Tencent’s management does not have a definitive answer on the import of US chips for AI but Tencent has sufficient chips for model training; management thinks Tencent has many options for chip-providers for AI inference; management is using software to drive inference efficiencies
With respect to the acquisition of chips, especially the U.S. chips, right, the answer is that we don’t really have a definitive answer on the import situation yet. I think there’s a lot of discussion between the 2 governments, right, and waiting to see what exactly come out of that.
But from our own perspective, we do have enough chips for training and continuous upgrade of our existing models. And we also have many options for inference chips. And we are also executing a lot of software improvement and upgrade in order to drive efficiency gain in inference so that we can actually put more workload on the same number of chips.
Tencent’s management sees higher depreciation expenses in the future because of AI-related investments but Tencent’s business is also growing because of the use of AI; the increase in expenses and revenue may not always match up, but both are definitely growing
I would say the depreciation cost related to AI will definitely continue to go up. But at the same time, we also see that we continue to reap the benefits of AI. And the issue is that these 2 may not match each other completely, but I think both of them will be moving in the same general direction.
Tencent’s management is tracking Tencent’s progress in AI in a number of ways, namely, (1) how AI is helping Tencent’s existing businesses, (2) performance and quality of Hunyuan, (3) usage of the Yuanbao app, (4) progress in AI products within the entire Tencent ecosystem
We do track our AI development progress very closely. And I think there are a number of indicators that we use right in tracking the progress.
And the first one is that we focus on tracking how AI is actually helping our existing businesses such as ads, such as games, such as FinTech. And I think that’s one area. And when we see that AI is actually being applied in driving the efficiency gain as well as the growth of these businesses, then that’s good.
Secondly, we focus on tracking the performance and quality of our large language model, HunYuan. And I think there’s a lot of metrics that we actually have to use in order to track the capability as well as the quality of the model.
The third one is we do track how our AI app is actually growing. How many users are using our AI app. And that would include users of our Yuanbao and users of our browser and user of our AI-powered search.
And finally, I would say we do track what’s the progress in the design of other AI-related innovative products within our entire ecosystem. And that would include, for example, the AI agent for WeChat that would include agents within our productivity tools. And these are the metrics that I think we will use in terms of tracking the progress of our AI development.
Veeva Systems (NYSE: VEEV)
Veeva’s management has made great progress with Veeva AI, an initiative launched in April 2025 that will see the company build industry-specific AI agents within its applications; the first AI agents under Veeva AI, for Vault CRM and commercial content, is on track for a December 2025 launch; management plans to release new AI agents and improve existing AI agents 3 times a year; management plans to deliver a host of new AI agents in 2026 and will launch Clinical data agents in 2027; management sees Veeva Business Consulting as an important part of Veeva AI because AI enables new ways of working for Veeva’s customers; Veeva is already working on its first AI-related Business Consulting project; management thinks Veeva AI will increase the value of integration between clinical data management and clinical operations for customers; management thinks Veeva will lead in industry-specific AI agents in Life Sciences because of the deep data that resides in Veeva’s software products; management will will allow customers to create their own AI agents with Veeva AI; management thinks Veeva AI will create billions of dollars of value in the Life Sciences industry and Veeva will be able to capture its fair share of value creation; management does not expect any material revenue contribution from Veeva AI in 2026 or 2027; management thinks it’s still early for customers to all-in on AI with Veeva because the company has not released any AI agents yet; management will enable Veeva’s AI agents to communicate with AI agents from other software platforms because that is of great benefit to customers
We are making great progress on Veeva AI which adds agentic AI to the Vault Platform and industry-specific AI agents in all Veeva applications. With agentic AI in the Vault Platform, we have an integrated platform that manages data, content, and agents together in a secure and maintainable way. Customers can use and extend our application agents and create custom agents of their own. This is a very fundamental change in the Vault Platform…
…Our first agents are on track for December release in CRM and commercial content. We will release new agents and improve existing agents with our releases three times a year. In 2026 we plan to deliver agents for clinical operations, regulatory, safety, quality, medical, and commercial. Clinical data agents are planned for 2027.
Veeva Business Consulting is a critical part of Veeva AI, helping customers with change management because AI enables new ways of working. We are already working on our first Business Consulting project for AI in the commercial content area…
…We continue to see customers looking for an integrated clinical platform across clinical data management and clinical operations. The value of integration is compelling and will only increase with Veeva AI…
… Veeva Vault platform, we started that in 2010, actually, late 2010. It was around this: it had content and it had data and it could do both. And that was very unique, and users work with content and data, and so we were able to make integrated suites in clinical and quality and regulatory and safety. And that’s what we’ve been doing for the last 15 years and working very hard at it and making these deep industry applications, the business rules around all the data and the content. Now this is the next phase where we’re going to have agents. We still have our data, we have our content. We have our agents and the users are going to interact with all, and the agents also interact with the content and the data. So it’s a fundamental new thing. And what we — we’ve led really and are leading in this industry cloud area, industry-specific cloud applications. I think we’re going to lead in industry-specific agents and certainly inside life sciences…
…Customers can create their own custom agents, but mainly our industry-specific agents that they’ll get when they buy Veeva AI. With MCP, the Model Context Protocol, agent-to-agent interoperability is really easy, and also vault-to-vault interoperability. We will — in terms of monetizing that, we will create billions of dollars of value for the industry. No doubt about that. No doubt about that. Sometimes making humans much more efficient, sometimes reducing the need for certain people doing certain types of tasks. So there’s a tremendous amount of value to be captured by the industry, and we’ll get our fair share of that for sure…
…I don’t expect any material revenue contribution for ’26 or ’27, for example, but I expect it’s a significant increase in our market size. And that will play out over many years…
…I think it’s early for customers to be going all in on AI with EVA because we haven’t even released any agents yet, so we’ve got to work with our first early adopters and work that out…
…We’re architecting in that way that if you have an agent inside a Veeva, it can talk to an agent that might be inside of SAP or Workday or a different sales force one and vice versa. That’s I think that’s going to be one of the unheralded people don’t realize how much of a benefit that is when you have agents that can talk to agents across systems because they’re all following a common protocol much less brittle than you’re wiring things up with a mule soft and transferring data back and forth. I’m really excited about that potential, and it can expose from system to system communication but also for a user. I might be in my Microsoft Office, and I might say, “File this document in TMF.” Well, the Microsoft Office copilot may have that agent, the TMF filing agent from Veeva registered with it. So it says, “Any of the agents know how to do this? The TMF agent with AI sure do.” Okay. I’ll hand the document over to you and the way it goes.
Veeva’s management thinks Veeva has a structural advantage in AI in the Life Sciences industry because the company’s products are a system of record for customers, and the company has deep applications
[Question] Going back to the idea around the opportunity with AI, how you’re kind of thinking about Veeva’s platform approach, the network you’ve built, the scale you’ve built, giving you kind of that right to win as you embed more AI functionality across the platform?
[Answer] We refer to that as a structural advantage. When you have an application that’s a system of record, be it the e-mail system or the supply chain system or all the 50 or so applications that Veeva has that are deep in life sciences, from the CRM system to the drug safety system to the clinical trial management system. When you have that system of record with the users in there, you have the right to win the deep industry-specific agents because it’s in the user’s workflow. Think about it: if you use Google for your e-mail and your calendar, you would love an agent from Google that works seamlessly with that, if you could get it. So we have a right to win there. You called it right to win, I call it a structural advantage. We can knit that technology together so that it’s a seamless platform that handles the agents, the content, the data. Another thing that Veeva has is we have a platform that’s broad. We make about 50 applications with our platform. So we can touch a lot of things with our platform. We put it in the Vault platform once, and it can extend everywhere. So we have a structural advantage…
…[Question] Around AI and agents. Could you just sort of articulate what you view as the unique differentiator from an architecture perspective of Vault versus agent force or even the back end of IQVIA? Like what do you think puts you at an advantage?
[Answer] Our main advantage is that we have the deep applications. So if we just take a clinical example, again, we have the clinical trial management application. So that houses all the people that deal with clinical and all the data about clinical and all the business rules and all the content and all the security about clinical trials. So with Veeva AI, when we build an application agent, that’s built inside of the Vault platform. So it inherently knows all the security rules and how to deal with that. And it is running in the Vault application server, so it also has transaction control. So we can update the data and the content. It can act on behalf of the user inside of a workflow in a transactionally sound way. So that’s a structural advantage if you have the application.
Veeva’s management thinks AI agents will be doing some of the things humans will do, which will either free up productive-time for humans, or reduce the need for humans;
If you look at areas within safety and clinical, there are some areas where there are hundreds of millions of dollars of outsourced labor used to do processing-type things. I think agentic AI can maybe remove the need for half of that. If you look at a clinical trial master file, an agent is going to be pretty good at putting a document where it should go and telling you if you have all the documentation you need for that trial based on the protocol. And is any document blurry, is any document illegible, et cetera; it’s going to be really darn good at that stuff. So it will be different by each area, but agentic AI is going to do things — some of the things that humans can do, agentic AI is going to be able to do that. That either frees up more human time for humans to be more productive on what they need to do or reduces the need for humans.
Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Adyen, Alphabet, Amazon, Meta Platforms, Microsoft, MongoDB, Nu Holdings, Okta, Salesforce, Sea Ltd, Tencent, and Veeva Systems. Holdings are subject to change at any time.