Still More Of The Latest Thoughts From American Technology Companies On AI (2025 Q4)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q4 earnings season.

A few weeks ago, I published Even More Of The Latest Thoughts From American Technology Companies On AI (2025 Q4). In it, I shared commentary from the leaders of technology companies that I follow or have a vested interest in, given in their earnings conference calls for the fourth quarter of 2025, on the topic of AI and how the technology could impact their industry and the business world writ large.

A few more technology companies I’m watching hosted earnings conference calls for 2025’s fourth quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

With that, here is the latest commentary, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s AI-first ARR (annual recurring revenue) in 2025 Q4 (FY2026 Q1) tripled year-on-year; management thinks Adobe’s AI-first business will be the company’s next $1 billion business

Our new AI-first offerings ending ARR more than tripled year-over-year, reflecting progress against this opportunity with individuals and enterprises alike…

…What we had identified as the AI-first sort of book of business, that tripled, but that should be our next $1 billion business.

Adobe’s management thinks the company’s success in AI will be underpinned by its deep understanding of the creativity domains, its access to vast data, its delivery of complex workflows, and its great brand; enterprises are increasingly asking Adobe for help on their AI strategy in their customer experience orchestration; management thinks agentic AI will further enable outcome-focused enterprise workflows, and Adobe is uniquely able to meet the needs of enterprises in these areas; emerging new platforms have always been additive to Adobe’s market opportunity; management intends to integrate Adobe with leading AI platforms including Anthropic, Google, Microsoft, NVIDIA, and OpenAI; management is collaborating with global system integrators (GSIs) such as Accenture and Deloitte to drive technological transformation

Adobe’s continued success in AI will be underpinned by our deep understanding of creativity domains, the vast amount of data to which we have access, delivery of complex workflows driving business outcomes, and a great brand across individuals, small and medium businesses and enterprises…

…, Adobe has always been a trusted partner for enterprises and we’re increasingly being asked to help them drive their AI strategy across customer experience orchestration (CXO) globally. Enterprises are looking to the combination of employees and automation to deliver on the demands of content and marketing at scale. Agentic AI will further enable outcome-focused enterprise workflows as customers look beyond speed to elevate creative differentiation, brand governance, and personalized experiences across channels. Adobe’s end-to-end solutions are uniquely designed to meet these needs at scale…

…Emerging new platforms have always been additive to our market opportunity. In addition to Windows, Mac, iOS, Android, Chrome and Edge, we intend to integrate with leading AI platforms such as Anthropic, Google, Microsoft, NVIDIA and OpenAI, providing customers with access, choice, and flexibility. We’re jointly driving enterprise transformation at scale in collaboration with global leaders such as Accenture, Cognizant, Deloitte, dentsu, EY, IBM, Infosys, Omnicom, Publicis, PWC, Stagwell, TCS and WPP.

Adobe’s management’s approach with AI is to expand access to AI in Creative Cloud and Acrobat, reach new audiences with Firefly and Express, and automate content production in Firefly Enterprise; AI usage at Adobe is growing quickly, with record generative credit consumption; Adobe’s content automation solutions are seeing record number of API (application programming interface) calls

Our approach is to expand access to AI across our existing audiences in products like Creative Cloud and Acrobat, reach new audiences with products like Firefly and Express, and help automate content production in enterprises with Firefly Enterprise…

…AI usage continues to grow quickly, as measured through record levels of generative credit consumption…

… Our content automation solutions continue to see strong enterprise adoption, as measured through record numbers of API calls. These metrics highlight that we are executing against our strategy to empower individuals and businesses to create content in new ways in the era of AI.

Adobe’s management’s approach with AI across Business Professionals & Consumers is to deliver AI-powered applications that reinvent how users comprehend, create and share content; AI Assistant MAU doubled year-on-year in 2025 Q4 (FY2026 Q1) and Express MAU tripled; Express is now used in 99% of US Fortune 500 companies; Adobe Acrobat Studio, which was introduced recently and brings all of Adobe’s AI and creative capabilities into PDF tools, is off to a strong start

Our vision for Business Professionals & Consumers is to deliver AI-powered applications that reinvent how users comprehend, create and share content…

…PDF Spaces transforms collections of files and links into dynamic knowledge hubs that allow you to easily collaborate with others. Acrobat AI Assistant provides users conversational experiences that help them comprehend information faster and more accurately with an individual PDF or across documents in a PDF Space. Our Acrobat and Express integrations empower users to turn content they are consuming into generated presentations, infographics, audio summaries and more. It’s clear that these AI-based capabilities are resonating with users, as AI Assistant MAU doubled year over year and Express MAU tripled year over year. Express is now used in 99% of U.S. Fortune 500 companies.

In Q3, we introduced Adobe Acrobat Studio, a single offering that brings together all these AI and creative capabilities with the PDF tools users know and rely on. Subscription upgrades to offerings that include Acrobat Studio value are off to a strong start across routes to market, including Adobe.com and enterprise license renewals.

Adobe’s management is embedding Adobe products directly into chatbots; management launched Acrobat and Express for ChatGPT in 2025 Q4 (FY2026 Q1); management will soon launch similar integrations into Copilot, Claude, and Gemini; management recently launched a Photoshop conversational editing experience in ChatGPT; brands can now create ads for ChatGPT with Adobe’s tools

We are embedding Adobe’s capabilities directly into new conversational platforms. In Q1, we launched both Acrobat and Express for ChatGPT, significantly expanding the reach of our creativity and productivity workflows. You can expect to see similar integrations into Copilot, Claude and Gemini as those platforms support integrated application experiences…

…Photoshop launched a conversational editing experience in ChatGPT…

…Partnership in the OpenAI initiative to enable brands to create ads for ChatGPT

Adobe’s management’s approach with AI across Creators and Creative Professionals is to empower everyone to create, with Firefly, Adobe’s all-in-one creative AI studio, as the centerpiece; enterprises are increasingly turning to Firefly Enterprise to unlock content automation; Firefly users can access over 30 industry-leading models from both Adobe and leading AI labs; Firefly users can edit and assemble images, videos and audio with prompts and in an integrated way with Photoshop and Express; Firefly’s generative credit consumption was up 45% sequentially in 2025 Q4 (FY2026 Q1); Firefly’s generative credit consumption is skewing toward higher-value modalities, with video generative actions up 8x from a year ago and audio generative actions up 2x; Firefly subscription and credit pack ending ARR was up 75% sequentially in 2025 Q4 (FY2026 Q1); Adobe’s management has continued to add new AI capabilities into Creative Cloud applications, which has led to higher AI usage and in turn, a nice ramp in purchases of Firefly credit packs; Adobe’s Creators & Creative Professionals segment saw the traditional Stock business decline faster than management expected; the entire Firefly ecosystem’s ending ARR exceeded $250 million in 2025 Q4 (FY2026 Q1)

Our strategy for Creators & Creative Professionals is to empower everyone to create – from first-time creators to seasoned professionals to large enterprises seeking to scale content production. Firefly, an all-in-one creative AI studio, is the right tool for the next generation of creators and creative professionals…

…Enterprises are increasingly turning to Firefly Enterprise to unlock a new era of content automation.

Firefly is quickly becoming the go-to destination for content generation, ideation and assembly. Users can generate with over 30 industry-leading models, including Adobe, Google and OpenAI. They can collaboratively ideate with stakeholders in Adobe Firefly Boards. They can edit and assemble image, video and audio using Firefly’s prompt-based editing capabilities with integrated Photoshop and Express web journeys. Firefly momentum is strong, with generative credit consumption growing over 45% quarter over quarter. While that growth is broad-based, generations are skewing toward higher-value modalities, with video generative actions growing more than 8x year over year and audio generative actions doubling year over year, reflecting customers moving deeper into AI-assisted creation across the full creative process. As a result, Firefly subscription and credit pack ending ARR grew 75% quarter over quarter.

Creative Cloud applications continue to embed new AI capabilities, making users far more productive. Photoshop added new partner models and support for higher resolution image generation and editing. Illustrator expanded its generative design capabilities with models from OpenAI, Ideogram, and Google to support frequent vector workflows. Premiere added AI Object Mask, which quickly became one of the most used AI features in the application. As Creative Cloud users increase AI usage, we are seeing purchases of Firefly credit packs ramp nicely…

…While Q1 had many highlights, our traditional Stock business saw a steeper decline than we expected. This shift is playing out more quickly than we had planned for and our focus remains on giving customers meaningful choice between stock and generative AI as they build their creative and marketing workflows…

Firefly ending ARR, across Firefly App, Firefly credit packs, and Firefly Enterprise, exceeded $250 million

Firefly Enterprise combines Firefly Services and Firefly Foundry; Firefly Services provides APIs for automated content production workflows, including 3D digital twin workflows, image and video resizing across every social and digital channel, campaign variant generation, and more; Firefly Foundry allows enterprises to build private, deeply tuned AI models trained on their own IP (intellectual property), and gives enterprises a commercially safe model that is able to accurately generate their branded assets; Firefly Enterprise’s new customer acquisition was up 50% in 2025 Q4 (FY2026 Q1) from a year ago; Firefly Foundry recently signed new partnerships in the media & entertainment vertical

Firefly Enterprise, the combination of Firefly Services and Firefly Foundry, is empowering the world’s largest brands to scale content production to unprecedented levels. Firefly Services provide enterprise-grade APIs, giving businesses more than 30 content production capabilities which can be run in automated workflows. These include 3D digital twin workflows for showcasing physical products, image and video resizing across every social and digital channel, and campaign variant generation and assembly for personalized marketing content. Firefly Foundry enables the world’s largest marketing teams and media companies to build private, deeply tuned AI models trained on their own IP. Unlike generic AI models, Firefly Foundry gives enterprises a commercially safe model that understands and is able to accurately generate their branded assets. Together, these products are driving measurable business outcomes, by increasing production scale, accelerating velocity and reducing costs. Firefly Enterprise new customer acquisition grew 50% year over year…

…Firefly Foundry continues to build momentum in the media & entertainment vertical, with partnerships including B5 Studios, Cantina Creative, Creative Artists Agency, United Talent Agency and WME. 
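The kind of automated workflow described above (one master asset resized for every social and digital channel, then crossed with campaign copy variants) can be sketched in a few lines. The channel names, sizes, and payload shape below are illustrative assumptions, not Adobe's actual Firefly Services API schema:

```python
# Sketch of an automated content-production workflow of the kind Firefly
# Services exposes via APIs: one master asset is resized for every channel,
# then crossed with localized campaign copy. All names and sizes here are
# hypothetical, chosen only to illustrate the combinatorics.

CHANNEL_SIZES = {
    "instagram_feed": (1080, 1080),
    "instagram_story": (1080, 1920),
    "youtube_thumbnail": (1280, 720),
    "display_banner": (728, 90),
}

def build_resize_jobs(asset_id, channels=CHANNEL_SIZES):
    """One resize job per target channel for a single master asset."""
    return [
        {"asset": asset_id, "channel": name, "width": w, "height": h}
        for name, (w, h) in channels.items()
    ]

def build_campaign_variants(resize_jobs, headlines):
    """Cross every resized asset with every localized headline."""
    return [
        {**job, "headline": headline}
        for job in resize_jobs
        for headline in headlines
    ]

jobs = build_resize_jobs("hero_shot_001")
variants = build_campaign_variants(jobs, ["Spring Sale", "Soldes de printemps"])
print(len(jobs), len(variants))  # 4 channels -> 8 personalized variants
```

In a real deployment each job dictionary would become an API call; the point is that the multiplication of channels by copy variants is exactly the scale problem that makes automated content production valuable to enterprises.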

Adobe’s management sees Adobe as the trusted partner for AI-powered Customer Experience Orchestration (CXO) for enterprises; management recently introduced new agents in Adobe Experience Platform (AEP); management recently expanded AEP’s Agent Orchestrator capabilities; AEP now handles 35 trillion segment evaluations and 70 billion profile activations daily; subscription revenue for AEP and native apps grew 30% year-on-year in 2025 Q4 (FY2026 Q1); traffic to retail sites from LLMs (large language models) was up 7x during the 2025 holiday season; traffic from LLMs to retail sites converts 31% higher and generates 254% more revenue per visit; Adobe has products that help brands engage consumers across their owned properties, search, social media, LLMs and agentic channels; Adobe LLM Optimizer helps enterprises improve their websites’ discoverability by LLMs; Adobe Brand Concierge helps enterprises configure and manage agentic AI experiences on their websites and mobile apps; Adobe is in the process of acquiring Semrush and management expects Semrush to help Adobe provide a comprehensive solution for enterprises to shape brand-image across their own websites, LLMs, and traditional search; 650 customer trials for Adobe LLM Optimizer, Sites Optimizer, and Brand Concierge are underway; AEP AI Assistant is now used by 70% of all AEP customers

Adobe has become the trusted partner for AI-powered Customer Experience Orchestration (CXO) through our thought leadership, rapid innovation, and omnichannel capabilities, while providing the security, reliability, data governance, global scale, and partner ecosystem that enterprises require. 

Adobe’s unified CXO platform provides solutions for brand visibility, content supply chain and customer engagement. Adobe Experience Platform (AEP) is a leading platform for digital customer engagement and brings together new AI-powered apps and agents to transform how businesses build, deliver and optimize marketing campaigns and customer experiences, as well as reduce costs. In Q1, we introduced new AEP Agents along with expanded Agent Orchestrator capabilities, now available to all AEP customers, via a Try and Buy program. The scale of our platform has grown to over 35 trillion segment evaluations and more than 70 billion profile activations per day. Subscription revenue for AEP and native apps grew over 30% year over year, demonstrating continued momentum and value realization…

…According to Adobe Digital Insights, during the 2025 holiday season, traffic to retail sites from LLMs increased nearly 7x, bringing qualified referrals that convert 31% higher and generate 254% more revenue per visit. Adobe’s brand visibility solution, which includes Adobe Experience Manager, Adobe LLM Optimizer and Adobe Brand Concierge, empowers brands to engage consumers across their owned properties, search, social media, LLMs and agentic channels. Adobe LLM Optimizer enables enterprises to enhance the discoverability of their websites by LLMs and significantly increase their organic traffic. Adobe Brand Concierge is an AI-first application enabling businesses to configure and manage agentic AI experiences on their websites and mobile apps to guide consumers from exploration to purchase decisions, using immersive and conversational experiences. We expect our pending acquisition of Semrush will expand our offering to provide marketers with a comprehensive solution to shape how their brands appear across their own websites, LLMs, traditional search and the wider web…

…Strong customer demand for our agentic web offerings with over 650 customer trials underway for Adobe LLM Optimizer, Sites Optimizer, and Brand Concierge…

…Continued adoption and momentum for AEP AI Assistant with 70% of all AEP customers using the agentic capabilities;
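To make the holiday-season figures above concrete, here is the arithmetic with assumed baselines; only the +31% conversion uplift and +254% revenue-per-visit uplift come from the quote, while the baseline conversion rate and revenue per visit are made-up numbers for illustration:

```python
# Worked example of the Adobe Digital Insights figures. The uplifts (+31%
# conversion, +254% revenue per visit) are from the article; the baseline
# conversion rate and revenue per visit are illustrative assumptions.

baseline_conversion = 0.03      # assumed: 3% of retail visits convert
baseline_rev_per_visit = 1.00   # assumed: $1.00 of revenue per visit

llm_conversion = baseline_conversion * 1.31          # 31% higher
llm_rev_per_visit = baseline_rev_per_visit * (1 + 2.54)  # 254% more

print(round(llm_conversion, 4))     # 0.0393, i.e. ~3.93% conversion
print(round(llm_rev_per_visit, 2))  # 3.54, i.e. $3.54 per visit
```

The takeaway is that "254% more revenue per visit" means roughly 3.5x the baseline, which is why referral traffic from LLMs, even at 7x growth from a small base, matters to retailers.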

Adobe’s management recently delivered innovation that enabled GenStudio-created content assets to flow directly into activation workflows across Adobe’s stack and some of the largest 3rd-party advertising platforms; Adobe GenStudio’s family of products saw ending ARR grow 30% year-on-year in 2025 Q4 (FY2026 Q1)

GenStudio is our comprehensive content supply chain offering, spanning content ideation, creation, production, and activation…

… In Q1, we delivered breakthrough innovations enabling GenStudio-created assets to flow directly into activation workflows across the Adobe stack and a broad ecosystem of advertising platforms including Amazon Ads, Google, LinkedIn, and Meta. Ending ARR for the Adobe GenStudio family of products grew over 30% year over year as the world’s leading brands and agencies increasingly turn to Adobe to power their content supply chain.

Adobe’s management thinks that only 2-3 really large LLMs (large language models) will succeed because people are not interested in the model but the workflows; management thinks it’s the right strategic move for Adobe to provide a choice of models because customers can then use the right models for the right use cases; management thinks it’s a win-win for Adobe and the model providers for Adobe to be providing different models because the model providers want access to customers while Adobe wants different model-capabilities

My take on the model side would be as follows, which is they’re going to be 2 or 3 really large language models that actually succeed. All of these individual models that exist, small model companies in 1 part of a media ecosystem, I just don’t see how long term they survive because people aren’t interested in just the model, they’re interested in the workflow. And so for us, offering customers with that choice was actually very strategic because we can actually then provide for all of our creative customers the right model for the right case because these all have different brands…

…As it relates to the support of all these models, I think it’s a win-win. They would like access to customers, which Adobe has, and we would like access to these different models because they have different brand attributes. And I think if you look at the larger companies like Google, we’re actually with them and with Nano banana. It’s been a great partnership because we are providing them with a lot of customers and they’re providing us with great technology.

Okta (NASDAQ: OKTA)

Okta’s management thinks the market for securing AI agents is still early; management thinks that Okta is well positioned to help companies secure their AI agents; 91% of organisations surveyed by Okta are using AI, but only 10% have a governance strategy for their use of AI; when management speaks to customers, the customers ask how Okta can help them manage agents securely; management thinks that the surface area for threat actors increases as AI becomes embedded in more workflows and automations; management sees AI agents as a new identity type, and securing identities is Okta’s expertise; Okta can secure the entire agentic lifecycle and gives customers the freedom to deploy agents without any ecosystem lock-in; Okta’s solutions for securing AI agents, Auth0 for AI Agents and Okta for AI Agents, treat AI agents similarly to human users; management believes that AI agents are the future of software; Okta for AI Agents became available in early access only in January 2026; Okta’s solutions can enable organisations to observe, govern, and secure the entire life cycle of an AI agent; management thinks identity is even more important in the agentic world than before; management thinks Okta for AI Agents is more unique and differentiated than Auth0 for AI Agents; Okta for AI Agents can help customers understand what different agents are doing

I mentioned that our portfolio of new products now includes our AI products, Auth0 for AI Agents and Okta for AI Agents. It is still early for this developing market, but as the leading modern identity solution for workforce and customer identity, Okta is uniquely positioned to help organizations combat the growing security threat that AI agents represent. The reality is that the AI revolution has moved faster than today’s security frameworks. According to Okta’s AI at Work report, 91% of surveyed organizations are already using AI but only 10% have a governance strategy in place.

In meetings that I have had with customers and prospects over the past six months, the vast majority of the conversations revolve around their AI initiatives and how Okta can help them build and manage agents securely. As AI becomes embedded in more workflows and automations, the growing number of exploitable entry points—from nonhuman identities to unsecured integrations—expands the attack surface for threat actors. It is clear that in order to get AI right, you have to get identity right. Okta was built to meet this challenge…

…AI agents are simply a new identity type, and protecting them is a natural extension of what we do best. Okta’s neutral and independent identity solution is uniquely positioned to secure and govern the entire agentic lifecycle and gives customers the freedom to deploy on any agent without ecosystem lock-in, all while strengthening their security posture. Our two-pronged solution with Auth0 and Okta for AI Agents treats AI agents with the same importance as humans and gives customers everything they need to secure this powerful new technology. 

We are still in the early stages, but we believe that in a few years, agents and agentic systems will not be the exception to how enterprise software is built and operated. They will be the rule. We believe that AI agents represent nothing less than the future of software…

…Okta for AI Agents, which became available in early access in January…

…With our solutions, developers, administrators and IT teams can ensure that the entire life cycle of an AI agent from initial design through active deployment is observable, governable and secure…
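The idea of treating an AI agent as "just another identity type", with least-privileged scopes, short-lived credentials, and an audit trail across its life cycle, can be illustrated with a minimal sketch. This models the concept only; it is not Okta's actual data model or API:

```python
import time
import uuid

# Minimal sketch of "an AI agent is just another identity type": the agent
# gets its own identity record, a human owner who is accountable for it,
# least-privileged scopes, short-lived credentials, and an audit trail.
# Conceptual illustration only, not Okta's implementation.

class AgentIdentity:
    def __init__(self, name, owner, scopes):
        self.id = str(uuid.uuid4())
        self.name = name
        self.owner = owner           # the human or team accountable for the agent
        self.scopes = set(scopes)    # least-privileged: only what the agent needs
        self.audit_log = []          # every grant/denial is recorded

    def issue_token(self, requested_scopes, ttl_seconds=300):
        """Grant a short-lived credential, but only for scopes the agent holds."""
        granted = self.scopes & set(requested_scopes)
        denied = set(requested_scopes) - self.scopes
        self.audit_log.append({"granted": sorted(granted), "denied": sorted(denied)})
        return {
            "sub": self.id,
            "scopes": sorted(granted),
            "expires_at": time.time() + ttl_seconds,
        }

agent = AgentIdentity("invoice-bot", owner="finance-team",
                      scopes={"invoices:read", "invoices:approve"})
token = agent.issue_token({"invoices:read", "customers:delete"})
print(token["scopes"])               # only the scope the agent actually holds
print(agent.audit_log[0]["denied"])  # the out-of-bounds request is logged
```

The observable/governable/secure framing in the quote maps to the three pieces here: the identity record (observe), the scope check at token issuance (govern), and the short-lived credential plus audit log (secure).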

…Identity is at the center of — traditionally, in legacy technology, it was always at the center. And in this agentic world going forward, it’s becoming clear to everyone, it’s even a bigger deal than it was before…

…[Question] It seems like you’ve got a real competitive advantage on the Auth0 side. Could you maybe compare, and contrast initial takes for sales cycles, competitive dynamics and velocity of each? I know it’s still early stages, but is Okta for AI Agents in a more competitive market?

[Answer] I think Okta for AI Agents is more unique and more differentiated than maybe we would have expected. I think Auth0 for AI Agents is unique and differentiated as well. But I think maybe the sentiment you’re expressing is it’s different than what we’re seeing. Customers need a solution that’s pre-integrated to all these agentic systems. I mean there’s no good way for customers to even understand what all these vendors are doing in agentic. There’s no catalog of systems that says, Salesforce is doing this, ServiceNow is doing this, AgentCore is doing this, Google is doing this, Microsoft is doing this. And that’s what Okta for AI Agents does. And then on top of that, it models connections and has policies for connections that connect users to different agents, and agents to systems.

A financial services platform company is an existing Auth0 customer and it picked Auth0 for AI Agents to build AI agents; the financial services platform found Auth0 for AI Agents offered enterprise-grade identity for humans and agents, and secure access to 3rd-party MCP (model context protocol) servers

An existing Auth0 customer is building AI agents as part of their leading financial services platform. These agents will help the firm’s advisers make better and faster decisions, but to do so, the agents need access to sensitive customer information, which must be least-privileged. And they need to work with existing systems and third-party services inside the financial institution. The customer picked Auth0 for AI Agents as it met their stringent requirements for a secure, extensible platform to build and deploy agentic systems. They needed a solution that offered enterprise-grade identity for humans and agents while providing secure access to third-party MCP servers, all while acting as a single source of truth.

A global business and technology services provider is rolling out AI agents across multiple agent platforms and chose Okta for AI Agents to manage identities for its growing sprawl of agents; Okta is an independent agent-agnostic platform

Another notable deal that included Okta for AI Agents, which became available in early access in January was with a top global business and technology services provider. They chose Okta for AI Agents to help them discover, control and govern identities for their growing sprawl of agents. Rolling out AI agents across multiple agent platforms is key to their ongoing transformation and centralizing agentic identities in an independent agent-agnostic platform like Okta will strengthen their cybersecurity posture.

Okta for AI Agents and Auth0 for AI Agents contribute very little revenue at the moment because they are still very young products, but management thinks they can be a huge source of upside in the coming years; Okta for AI Agents and Auth0 for AI Agents will lead to higher growth in current RPO before it flows down to revenue

Okta for AI Agents is not even generally available yet, and Auth0 for AI Agents is — just was generally available at the beginning of the quarter. So it’s off to a huge start. Now the relative number is small compared to our $3 billion revenue run rate. But looking forward to next year, we’re very, very excited about the potential of these products…

…Because the agentic products are so new, it’s tough to pour too much into our assumptions about growth in terms of guidance. But I think those things could be a huge source of upside over and above the guidance in the years ahead…

…We’re not thinking about this as an opportunity just for FY ’27. This is an opportunity to be accretive to growth for FY ’28, ’29. And we’ll see the results, as you guys know, in current RPO first before we see it in revenue…

Okta’s customers sometimes confuse identity infrastructure with identity security; the two are separate things, and Okta is the only company that does both; management sees both identity infrastructure and identity security as being really important for the agentic market; management is not seeing any big change in the competitive landscape for Okta in the agentic market for identity infrastructure and identity security

I think the biggest confusion people have is the distinction between identity infrastructure and identity security. And they hear the word identity, and they think if you’re sitting on top of identity and detecting threats and blocking threats, you’re also identity infrastructure. So that’s one of the big confusions. And when you look at the agentic market, they’re both really important. It’s the identity security, making sure the agents are monitored and checked that they can’t go out of bounds. But just the infrastructure, just the ability for the agents to connect and just for tracking and visibility, that’s an infrastructure play. And we’re the only company that really does both. It’s at the security layer and the infrastructure layer. So I think that is maybe a little bit of a confusion and something that we’re working hard to make sure everyone understands the advantage of that position as well…

…From an Okta standpoint, we’re not seeing any material change in the competitive behavior in our transactions yet. Of course, we’re keeping our eye on the landscape.

Okta’s management has been speaking to customers, and they think there are 2 ways to charge for agents: (1) a multiplier on the price of a person who uses agents, and (2) a fee based on the number of connections made by an agent that is not coupled to a person; it’s still early days for the pricing model Okta will adopt, but management sees the pricing as a nice step up for the company

We have these conversations with our 20,000 customers, we get really rapid feedback on how we can capture value, what would be most valuable for them, easy for them to consume. So it’s really a strategic advantage. We have this feedback loop, and we’ve actually structured the go-to-market team for AI agents to capture that feedback rapidly and feed it right back into the product teams. And what we’re seeing is that there’s really 2 ways that we charge for agents. One is like a multiplier on a person. So in the model where a human identity uses a number of agents to augment their work, there’s a multiplier on that agent or on that — what they pay for a person to what they pay for agents. And then also, there’s a — if the agent is not coupled to a person, there’s a — we sell it based on the number of connections the agent makes because that’s really the value. They want to secure those connections and filter on fine-grain access to all the back-end systems and the SaaS applications and the custom applications and data warehouses the agent connects to as they get more — the agent is more valuable as it has more fine-grained access to different things and it’s more secure. So there’s a multiple based on that. The pricing we’re working with these customers on is pretty early. So we’re — it’s a nice step up.
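The two pricing shapes described in the quote can be expressed as simple arithmetic. The dollar figures and the 0.5x multiplier below are made-up assumptions for illustration, since Okta has not disclosed actual prices:

```python
# The two agent-pricing shapes management describes, as arithmetic.
# All numbers are illustrative assumptions, not Okta's actual pricing.

def human_coupled_price(per_seat_price, agent_multiplier):
    """Model 1: a human seat plus a multiplier for the agents augmenting it."""
    return per_seat_price * (1 + agent_multiplier)

def connection_based_price(num_connections, price_per_connection):
    """Model 2: a standalone agent priced by the systems it connects to."""
    return num_connections * price_per_connection

# Assumed $10/month seat whose user runs agents at a 0.5x multiplier:
print(human_coupled_price(10, 0.5))    # 15.0
# Assumed standalone agent wired into 8 back-end systems at $2 per connection:
print(connection_based_price(8, 2))    # 16
```

The second model captures management's point that an agent becomes more valuable, and more in need of securing, as it gains fine-grained access to more back-end systems, so the fee scales with connections rather than with seats.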

From a hypothetical point of view, Okta’s management thinks it’s really difficult and costly to vibe code a competing product to what Okta has built over the years because the vibe coder (1) needs to ensure there are no vulnerabilities and the product can scale, (2) is likely to incur significant inference costs, and (3) will suffer major costs if things go wrong; Okta’s management is hearing customers share similar views as what they have when it comes to vibe coding; management is paranoid about competition from vibe coding and Okta is using LLMs and coding tools to build as fast as possible; customers are telling Okta that they do not want to use startups for securing AI agents and they do not want to be locked into just one provider for agents

[Question] When you look at what you’ve built over the years and the data that you’re sitting on, can you talk about sort of the structural advantages that you see over maybe some upstarts or some vibe coding alternatives?

[Answer] I think if you want to build what any SaaS company has done or what Okta has done, it’s years and years of hardening and making sure there’s no vulnerabilities and making sure it scales and it’s reliable. And it’s — if you — I don’t know what the inference cost to build that would be, but it would be pretty significant inference cost. And then if you flip it around, you just think about what’s the price of getting it wrong. And if getting it wrong, it’s hard to validate. It’s hard to prove you have it right. And if it’s wrong, you have a major security breach or you’re down and none of your agents or none of your people can access systems. So the cost of getting it wrong hypothetically and actually just the cost to do it theoretically, if it was even possible theoretically with an LLM or a tool would be pretty high. And that cost could change over time. We don’t know… But when you talk to customers and you hear their challenges and their opportunities, they — a lot of the same things are echoed. They want to identify key infrastructure pillars, and they want to standardize on them. And they see that as the unlock to hundreds of other decisions and hundreds of other builds versus buy decisions they have to make. And they’re putting foundational security, foundational identity in this bucket of things that they want to partner with a leader and trust it and go on top of that and figure everything else out. That’s what they’re telling me. And it kind of matches up with what I would think about hypothetically…

…We are paranoid. And we’re making sure that we are using all the latest technologies, LLMs, coding tools to make sure we have not only something that’s resilient and secure but has the best features and the best capabilities. And so we’re making sure that we build things internally as fast as anyone could build them because we — make no mistakes, the prize here that the whole industry is going after, which is this agentic future where digital labor is part of the TAM is a massive prize. And everyone is at some level; big picture is going to be going after this prize. And it’s exciting because it’s greatly expanded the TAM of what Okta could be…

…They’re reticent to trust a start-up with this critical piece of foundation because they know there’s going to be M&A, and they know there’s going to be start-ups going away. There are so many start-ups playing in this space that there’s bound to be a lot of failure, and they don’t want to build their whole foundation around something and have it be pulled out from under them. And the other factor that is in their minds is that they don’t want to be locked in. Think about — what’s happening at agentic and what’s happening in this world, these foundational models are moving incredibly fast. And its Anthropic foundational model that has the leap ahead and then it’s OpenAI and then it’s an open-source model and then it’s — and that’s going to continue for many years. And they don’t want to be locked into a certain stack and a certain set of tools. So they’re reticent to trust their foundational security with one provider, one platform. And back to the start-ups, they know that a bunch of these start-ups are going to get bought by the big players, so they’re thinking, even if I go with a start-up now, it’s going to get sold and then we’d be locked into Microsoft, and they don’t really want that.

Okta’s management thinks the proliferation of AI agents could massively expand Okta’s total addressable market (TAM); management thinks the CIAM (Customer Identity and Access Management) market is changing because of AI agents

Think about identity and what it’s been in the past. It’s roughly $20 billion TAM right now in terms of what people spend on the vendor data. We talk about an $80 billion TAM. I mean this could be bigger than — this could be the biggest part of cyber in a few years for sure. And it could be even bigger than that if you really think about the infrastructure that stitches together the entire agentic enterprise and is the plumbing that makes it run…

…The CIAM market is transitioning to be not just a platform for logging in and doing authentication authorization, but it’s a platform for customers building agentic interfaces to their customers and to agents coming into their systems. So Auth0 for AI Agents, that’s what it is. It’s a token vault. It helps agentic login. It helps customers hook other AI tools up to their customer login. And so I think over time, that market is evolving into something that’s hugely impactful and value delivering for our customers.

Okta’s management is working with standards bodies in building solutions for securing AI agents, but they do not think that there will be only one set of standards that will dominate

They’re all trying to do a ton of things and make their services more agentic and more compelling and security and the ability to have them be more enterprise-ready is on their list, but we have to convince them to get it higher on their list. So it’s not like a competing standard is like a prioritization thing. But remember, we are — we want to provide this identity infrastructure and make sure that we give people this solid foundation to build upon. And that’s going to require standardization just because it’s not going to — you can’t use a standard piece of foundation if everyone is doing their own things in a different way, which is why we’re working with standards bodies in general. It’s not just Cross App Access, but it’s an important part of the equation. But I wouldn’t say like the whole war rests on one specific standards body or standards battle. I think it will be an evolutionary thing over the next several years.

Sea Ltd (NYSE: SE)

Monee’s credit business grew in 2025 because of its AI-driven improvements in risk underwriting capabilities; management is experimenting with transformer-based AI models to assess credit risks and the experiments are showing very good performance

Our credit business expansion in 2025 was made possible by improvement in our risk underwriting capabilities. This improvement tapped on our rich ecosystem data and advancement in AI. Over the year, we made good progress training our risk models to better understand and map how user behavior evolves over time. We are better able to assess individual repayment capacity alongside evolving market risk and dynamically adjust the credit limits as needed. Enhancing our models’ precision and performance enabled us to scale rapidly in 2025, while still maintaining a stable risk profile…

…We’re experimenting with the new AI — new risk model with the transformer structure as well to do a sort of a long sequence data training fit into our model to utilize many of the e-commerce data that we are not able to use in the traditional risk modeling, and it has been showing us very good performance.

Sea’s management has directed a lot of investments into AI for the Shopee business; for each AI investment in Shopee, management looks at the ROI (return on investment); Sea has used AI to improve the take rate on its advertising business; management recently rolled out multi-modal search for Shopee and the roll-out has delivered clear ROI; management is using AI to help sellers on Shopee; customers are able to talk to Shopee’s sellers with the help of AI and this helps sellers upsell and reduce manpower costs; Shopee has AI-powered tools for sellers to create pictures, videos, and descriptions of their products, and the tools have a fairly positive ROI

I think if you look at the e-commerce side, we do spend quite a lot of effort on the AI. I think you mentioned about AI investment there. For every — for the investment on the e-commerce for AI, we also look at the positive return of investment across the initiatives.

For example, if you look at one of the area we spend on AI is our search recommendation and also ad systems. The uplift on our ad take rate is a consequence of many of our AI efforts. For example, how do we actually expand the description for our products, we can understand the product better. For example, how can we expand the queries from the users, we can understand user intention better. Recently, we also rolled out a multimodal search in our platform as well. So user can search a picture plus a long description, and we are able to serve that just similar to how Gemini or ChatGPT would do. I think all those AI investment has a clear ROI.

We also spent quite a lot of effort using AI to help our sellers. For example, if you go to many of our countries, you can talk to the sellers with the help of AI already. So we built an AI chatbot for our sellers. Our sellers can customize it for their own purposes. This will help the seller to reduce their manpower and also make it not only reduce cost, but also have the better upsell for the buyers. And we also have tools for the seller to create videos and picture descriptions for their products, et cetera. All those typically come with a fairly positive return on investment for our ecosystems.

Tencent (OTC: TCEHY)

AI is benefitting Tencent’s game content development, user engagement, and marketing efficiency; management believes that Tencent’s business has a high degree of resilience in the age of AI because of (1) network effects, (2) a connection between the digital and physical world, (3) licensing requirements, (4) unique resources, (5) low take rates, and (6) proprietary data; AI can enable faster game development, but the gaming industry is already in a state of oversupply, so game quality, which depends on human creativity, will be the key success factor; management thinks games will benefit from AI as people will have more time on hand

AI contributes meaningfully to game content development, user engagement, and marketing efficiency. Video Accounts total time spent increased over 20% on upgraded recommendation algorithms and enriched content ecosystem. Our marketing services revenue growth outperforms the industry, benefiting from our upgraded ad tech model and newly introduced automatic campaign solution, AiM Plus…

…AI will affect every part of the technology industry, but some products and services are inherently more resilient than others. We believe that some of the characteristics of resilience would include network effects arising from consumer-to-consumer, consumer-to-content-creator, and consumer-to-business interactions, in descending order of strength. That’s number one. Number two, deep supply chain integration linking the world of bits with the world of atoms. Number three, stringent regulatory and licensing requirements. Number four, scarce or unique resources, including physical and intellectual properties. Number five, take rates that are low compared to value provided or cost of switching. And number six, private data that is closed and interactive in nature. Using these criteria, we look across our major existing businesses. Our conclusion, which is supported by usage trends, is that each one of them has got a high degree of inherent resilience.

In particular, for our communication services, including Weixin, QQ, and Tencent Meeting, people use them to connect and interact with other people, largely their families, friends, and colleagues, and business partners. We believe this need for human interaction, together with the network effects and closed nature of the data arising from these interactions, have resulted in communication services being extremely sticky in the face of competing non-AI services in the past and will continue to be resilient versus AI-based services in the future.

Moving on to our games. They are also very resilient as our multiplayer games, especially PVP games, also enjoy network effects. Similar to sports, they are team-based in nature, and players play with and against other players. Just as people prefer to participate themselves or watch the teams they support compete in sports rather than watching AI sports, game players continue to enjoy the interaction with other humans that our games provide…

…While AI will enable more games to be made faster, the game industry is already in a position of excess supply, with 200,000 new games on mobile and 18,000 new games released on Steam every year. The limiting factor is that new games need to be high quality and more innovative than the best existing games, which in turn requires human creativity on top of cutting-edge technology. Game is a natural beneficiary of AI proliferation, also when people have more time at hand.

Our fintech services are also resilient as they depend on difficult to secure and retain licenses which are limited in nature and also set the boundary on how innovations can be introduced in an industry. We have also invested decades building a payment network of difficult to replicate rails into partner banks, merchants, and connecting them with more than 1 billion consumers, which brings its own network effects. Our mobile payment take rates are already among the lowest in the world, which we believe makes competing with us on price highly uneconomical.

Tencent’s management is deploying AI to strengthen the company’s core businesses; management thinks Tencent is at the forefront in China and globally in strengthening its core businesses with AI; Tencent is using generative AI in its games business to speed up content production, acquire new users, retain existing users, and improve the gameplay experience; Tencent is using generative AI in its marketing services to improve ad conversions and user experiences, allow advertisers to create more ads, and provide the AiM Plus automated advertising campaign solution; Tencent is using AI to enhance content recommendation for Video Accounts; Tencent is using AI to improve content production efficiency for digital content; Tencent is providing AI agents within its enterprise software products; Tencent is using AI in the Fintech business to improve credit scoring and fraud detection; management has integrated AI into Weixin to enhance the user experience in a wide range of areas; the improved user experience in Weixin includes AI agents which autonomously interact on behalf of users within Weixin functionalities (see Point 3 for more on using Hunyuan to build AI agents in Weixin); management thinks the trend of AI agents, such as OpenClaw, being controlled through users’ existing communication apps means that Weixin and QQ will be the most efficient place for users to interact with AI agents; management thinks Tencent is already seeing very good ROI (return on investment) when applying AI to the company’s existing businesses

We believe that in each of our core businesses, we are now at the forefront of their respective industries in China and often globally in utilizing AI with positive initial results demonstrated by user engagement and revenue trends.

In games, we are deploying generative AI to accelerate in-game content production, enabling us to produce more content within our big games. We’re using generative AI to facilitate new user acquisition and existing user retention through measures such as targeted ads and personalized daily highlight reels. We’re enriching the core gameplay experience with AI features such as virtual teammates in PVP games and realistic non-player characters in PVE games. These initiatives are one reason why Tencent’s games are more and more evergreen, and our revenue growth of 22% in 2025 outperformed the 7% growth of the global games industry.

For marketing services, we scaled up our advertising foundation model to provide more relevant ads to more targeted users, boosting ad conversions for advertisers and providing better user experiences at the same time. We provide generative AI-powered ad creative solutions, enabling advertisers to create more ads which are more relevant to smaller set of users and more efficiently. We introduced our automated ad campaign solution, AiM Plus, under which advertisers can automate targeting, bidding, and placement, improving their return on marketing investments and increasing their budget allocation to us. These initiatives contributed substantially to Tencent’s marketing services revenue growth of 19% in 2025, outstripping the overall China ad industry growth of 14%.

For Video Accounts, deploying a longer sequence AI model which captures more of a user’s signals to enhance content recommendation is boosting user growth, engagement, and content distribution. Total time spent on Video Accounts increased more than 20% in 2025, and Video Accounts is now the second-largest short video service by DAU in China.

For digital contents, we utilize AI in content production, improving production workflow efficiency, and providing visually compelling special effects. AI also helps in content distribution through more intelligent content recommendations across music, videos, and literature.

We’re using AI in enterprise software to provide features such as AI agents that can take notes on and summarize concurrent meetings for users, and AI agents that generate intelligent summaries of customer service history for merchants. Our enterprise software products, WeCom and Tencent Meeting, are leaders in their categories in China in terms of usage and revenue.

For Fintech, we utilize lightweight AI models to enhance credit scoring processes and facilitate fraud detection, contributing to us sustaining better than industry non-performing loan rates…

…We have also integrated AI to enhance a range of existing user experiences within Weixin, including content consumption, information retrieval, and merchandise recommendation and customer service. We’re building AI agents which autonomously interact on behalf of users within Weixin functionalities, especially Mini Programs. The excitement around OpenClaw illustrates that people recognize AI can unlock computer use capabilities to improve their daily lives but also illustrate the risks around unleashing unsupervised AI. We want AI agents in Weixin to deliver AI productivity that’s beneficial to the general public as well as early adopters, and which will boost ecosystem activity and naturally generate revenue…

…OpenClaw is upgrading AI from thinking to doing via autonomous workflows and continuous task execution. Users control this new generation of AI tools through command line interfaces in their existing communication apps, which generally means Weixin and QQ in China, as it’s the most efficient for users to interact with digital agents in a place and format where they are already interacting with human contacts…

…We have already seen very good ROIs when we apply AI into our existing businesses, right? You know, so if you look at the breakdown of our financials, you know, if you look at the financials on a combined basis and then sort of we break it out and saying, oh, you know, these are the financials with existing businesses plus the investment into AI for supporting these businesses, right? You know, the growth is actually quite strong and if you exclude the investment in new AI products, then you know, the operating leverage is clearly there.

Tencent’s management sees substantial opportunities from configuring a strong foundational model for the company’s core customer-facing use cases; management thinks Tencent is not at the forefront when developing frontier models, but the company has revamped its AI-building capabilities; version 3 of Tencent’s foundation model, HunYuan, is now in testing and it is a step-improvement compared to version 2; management thinks Tencent’s 3D text-to-image and world models are early category leaders; management believes that users of AI agents will have access to multiple foundation models, but integrating HunYuan with Weixin will enable Weixin to have unique agentic capabilities; management spent RMB 7 billion on HunYuan and Yuanbao in 2025 Q4 alone, and RMB 18 billion in 2025, and expects to double the investment in 2026; management is confident that the investments in HunYuan and Yuanbao will lead to monetisation; management thinks the AI race is not just one race of model-building, but there are many different races taking place, so they are not worried about Tencent being relatively late; management believes that HunYuan will eventually be a SOTA (state of the art) model in the future

At the foundation model layer, we see substantial opportunities from combining a strong foundation model with configuration for core use cases such as chatbot, coding, multimodal, and agentic applications. 

Although we’re not the first mover in large language models, having already revamped our team, improved our data quality, and rebuilt our AI infrastructure for pre-training and reinforcement learning, we’re now iterating more intelligent models at a faster pace. HunYuan 3.0 is in internal testing and currently represents a bigger step in capabilities versus HunYuan 2.0 than HunYuan 2.0 was versus HunYuan 1.0.

For multimodal capabilities, our 3D text-to-image and world models are early category leaders and will increasingly benefit from leveraging our proprietary data and abundant use cases…

…AI agents are currently powered by a multiplicity of foundation models, and we expect that users at the application level will continue to have access to a range of models. However, improving the performance of HunYuan will enable us to offer new, unique to Weixin agentic capabilities. The Weixin and HunYuan teams will work increasingly closely together going forward…

…Our spending on our two biggest new AI products, HunYuan and Yuanbao, was CNY 7 billion in the Q4 of 2025 and CNY 18 billion for the full year. These figures are only for HunYuan and Yuanbao and exclude AI initiatives supporting our existing products and services, as well as exclude costs arising from providing GPUs to external customers via Tencent Cloud. We expect to more than double these investments in HunYuan, Yuanbao, and other new AI products in 2026, which we intend to fund from increasing earnings from our core businesses…

…Over time, we’re confident that monetization will follow usage for these new AI products…

…[Question] I have one question regarding the comment quite a few times that we mentioned that we are not a first mover or we are even a latecomer in AI. In the U.S., we have also observed that it’s becoming very difficult for some of the latecomers to catch up, even for those that have very high resources in terms of compute, talents, and data. How does management get comfortable and confident that we won’t be following the same path in terms of, you know, lagging behind, not able to catch up and around areas on compute modeled applications?

[Answer] If you are playing just one game, then basically it’s hard to sort of, you know, catch up on one game, right? You know, if you view AI as sort of, you know, a multiple of different games, then, you know, there are new opportunities, new frontier that’s opened all the time… All these elements can be packaged together, you know, in the new race of AI. It’s not sort of, you know, one race. It’s actually sort of, you know, a world of many, many races… I think, you know, that will, you know, increasingly manifest itself and as a result, there will be a lot of opportunities for different players to come up and innovate from behind. I’m not sort of, you know, very worried about, you know, being late, but I’d be worried about, you know, if we’re not innovating fast enough…

…Our HunYuan 3.0 is gonna be much better than HunYuan 2.0, and that’s actually just the starting point. I think, you know, over time, we’ll be able to iterate the training of our model faster and, you know, I’m very confident that, you know, if we focus on that, you know, we’ll reach SOTA at some point in time.

Tencent’s management thinks building AI chatbots is not the best way to use AI to help people; management thinks AI chatbots are competing with internet search; management is still finding product-market-fit for Tencent’s chatbot, Yuanbao; management will be deploying HunYuan 3.0 in Yuanbao in the near future and they think this will improve Yuanbao’s user experience; Tencent’s management is seeing that consumers in China are not willing to pay for AI subscriptions, unlike in the USA; management thinks Tencent’s consumer AI products, when introduced to Chinese consumers, will have to be seen as investments upfront because the company can’t charge for them at the moment, but management still thinks the AI products will generate a very attractive return over time

Some observers in Chinese tech are single-mindedly focused on AI chatbots as the only means for bringing AI to users. We believe this mindset is overly simplistic because AI can help people in a multitude of ways beyond powering an information advice app. We believe that AI chatbot applications are largely competing with search applications rather than with every other application. For Yuanbao, our own AI chatbot app, we’re focused on finding product market fit and use cases which belong in a chatbot AI app. We’re rapidly iterating Yuanbao to enhance its user experience by providing better search integration, improved speech recognition, easier access to multimodal capabilities, and exploration around group chat, which we believe will increase usage and user retention of the app. In the coming months, as we deploy HunYuan 3.0 in Yuanbao, we believe the core user experience will step up further…

…You know, we would be seeing new investments first, right? You know, there’s not that much of a revenue, especially in the context of China. Unlike in the U.S. where you can actually get consumers to pay subscriptions and you can get companies to pay for, you know, coding agents at a very high cost. In China, those are not sort of that available. I think these will present themselves as investments upfront. Over time, we believe, you know, we’ll be able to generate revenue from these new AI products and they would generate, you know, very attractive return for us over time.

Tencent’s management has introduced productivity-enhancing AI tools for OpenClaw; management sees OpenClaw as a decentralised model for how AI works, beyond just having two major chatbots; management thinks that users of OpenClaw will want OpenClaw to work with multiple models

Speaking of OpenClaw, we have introduced a number of AI tools for enhancing productivity, including WorkBuddy, QClaw, and Tencent Cloud Lighthouse. We provide downloadable skills to easily put these tools to use from our SkillHub…

…I think OpenClaw is actually a very exciting concept, right? You know, it actually sort of presents a decentralized model or a decentralized regime for, you know, how AI works in this world…

…For some time, right, AI seems to be sort of, you know, everybody is trying to fight to become the AI, AGI hegemon or monopoly. You know, there seems to be a point in it which like people said, “Oh, if there’s one model which is AGI, then, you know, it would rule over everybody,” right? You know, the reality is it’s not, right? You know, you have multiple models becoming, you know, very strong and, you know, they specialize in different kinds of activities, right? One in chatbot, the other one in coding, and the other one in multimodal. You also have open source, which are, you know, pretty good. You have a lot of other models which sort of, you know, fast followers too. Then there was a time in which, you know, in the two C world [referring to ChatGPT and Claude], there seems to be, the chatbot being sort of, you know, the single entry point. Now with Claw, you can see, you know, it opens up a completely decentralized regime where, you know, many companies can have their own Claw, and the Claw can be using all kinds of different models…

…If you use these OpenClaws, then you know you go into them, and you have a choice. Do you want to use, you know, model A, which is, you know, very high performance and high price per token, or, you know, model Z that’s medium performance and very low price per token, or models, you know, B through Y in the middle? You know, that’s part of the appeal of OpenClaw. You know, HunYuan is, you know, one of those models that is available. You know, we believe with the capabilities of the HunYuan team now in place, that going forward, HunYuan will get better faster, and therefore consumers will naturally increasingly opt to use HunYuan. I don’t think it will be a monopoly situation.

Tencent’s management thinks the company’s investments in AI will follow a similar path to its experience with Tencent Cloud; Tencent Cloud was a late entrant into cloud services in China, but management was patient and knew that Tencent Cloud had scale right from the start; Tencent Cloud focused on high-quality services starting in 2022, which pressured revenue growth for some time, but Tencent Cloud ended up achieving operating profit breakeven in 2024; Tencent Cloud faced revenue headwinds in 2025 because of GPU-supply constraints, but it still grew revenue and earnings; Tencent Cloud is facing a better pricing environment in recent months because of AI demand; management has ordered a substantially higher volume of compute for Tencent Cloud in 2026, which would facilitate revenue growth; cloud services providers in China were suffering for years because the supply of infrastructure was ample, but the supply is now constrained; management will be passing Tencent Cloud’s higher supply costs to customers

I would like to present a case study on Tencent Cloud as the latest example on how we develop our services into market leaders with economic returns over time. That would follow games, payments, and long-form video. We expect it will be the same for our new AI products. Tencent Cloud was a relative late entrant in cloud services. However, we committed to a patient and long-term investment strategy, believing that it had scale from the start due to Tencent itself being the biggest single end user for a range of technology infrastructure in China, and that it could provide differentiated services arising from Tencent’s unique insights, ecosystem, and capabilities. For example, we believe that we were the first cloud service provider in China to fully recognize the stepped-up capabilities of AMD’s recent generations of CPUs, becoming AMD’s largest partner in the country, and that our cloud video streaming service is the industry leader in terms of streaming quality. 

After a period where Tencent Cloud prioritized the revenue growth somewhat misguided by other industry participants, in 2022, we aggressively restructured Tencent Cloud to focus on high-quality services rather than chasing high revenue but low-value-added activities such as reselling and customizing projects. This pivot cost us several quarters of revenue growth, but it enabled Tencent Cloud to achieve operating profit breakeven in 2024, up from significant losses in prior years. During 2025, although Tencent Cloud continued to face revenue headwinds due to limited availability of GPU for external customers as we prioritize our internal needs, it grew revenue and sharply improved earnings, achieving CNY 5 billion adjusted operating profit. In recent months, we’re seeing a better pricing environment, especially for memory and CPU, which, along with robust AI demand and overseas expansion, allowing Tencent Cloud to grow revenue at a faster rate. Moving through the year, we have ordered a substantially higher volume of compute, which should also facilitate revenue growth…

…For years the industry has suffered because the cloud services providers in China were operating at very low margins. One of the reasons they operated at very low margins was because, you know, if there was a new entrant or if the customers wanted to source infrastructure directly, they were able to telephone the supplier and, you know, order the infrastructure that they wanted from the supplier of, you know, CPU or GPU or DRAM. You know, that’s no longer the case. You know, now, the supply is booked out months, quarters, in some cases, years in advance. You know, the supplier is prioritizing the biggest, most regular customers, which are the hyperscalers such as ourselves. Therefore, you know, the smaller cloud providers no longer have certainty that they can source supply, and they need to come to the hyperscalers. You know, the hyperscalers have been operating at low margins and so, you know, when the demand picks up, then, you know, we almost sort of as an industry have no choice but to pass through higher prices. You have seen a number of price increases in China cloud in the last 24 hours as a result…

…We seek to deliver, you know, more value through, you know, enrichment. Enrichment means that, you know, at a minimum, if you have, you know, compute, you can rent it out bare metal and you get a certain low price and low margin. You know, preferably you rent it out. You subdivide it and virtualize it into tokens, and then you get a higher price and higher margin per unit of compute. Ideally, you bundle it into a platform as a service or software as a service. Then you can get, you know, the best pricing and the best margins. That’s part of the journey that we’ve been on, and that’s part of, you know, how Tencent Cloud has moved from a very substantial losses four years ago to pretty substantial profits last year.

Tencent’s management added Tencent CodeBuddy to Weixin’s developer toolkit, enabling developers to create mini-programs using natural language; management provided developers of AI native mini-programs with free compute resources

For Mini Programs, total user time spent increased over 20% year-on-year, driven by workplace productivity tools, mini-games, and novels. We added Tencent CodeBuddy to our developer toolkit, enabling developers to create mini-programs using natural language input, and we provided developers of AI native mini-programs with free compute resources.

Tencent’s management is using AI in Delta Force to improve user engagement and development efficiency

Delta Force leverages AI coding for development efficiency and deploys AI-powered companions to enhance user engagement. 

The Marketing Services segment’s revenue was up 17% year-on-year in 2025 Q4, driven by improved ad targeting, expansion of closed-loop marketing services, and tailoring of ad formats for specific advertiser use cases; management will be deepening collaboration of Marketing Services with e-commerce platforms; management has increased the inventory for rewarded video ads and Video Accounts; Weixin Search’s overall query volume grew rapidly in 2025 Q4 because of AI enhancements to search results, driving commercial query volume

For marketing services, revenue increased 17% year-on-year to CNY 41 billion. We experienced rapid growth from the internet services and local services categories, partially offset by slower growth from the e-commerce category due to platforms temporarily shifting budget from marketing to subsidies, and also from the financial services category due to the impact of policy changes affecting online lending during the quarter. Growth drivers included improved ad targeting, expanding our closed loop marketing services, and tailoring ad formats for specific advertiser use cases, such as ads that are playable previews of the mini games being advertised.

Entering 2026, we have deepened collaboration with e-commerce platforms, facilitating their merchants advertising within Tencent, and we’ve increased the inventory for rewarded video ads and Video Accounts, which have contributed to faster year-on-year marketing services revenue growth in the Q1 to date versus in the Q4 of last year.

At a product level, Video Accounts total time spent increased due to upgrades to the content recommendation algorithm, enabling faster growth in ad impressions while our ad load remained lower than peers. Better conversion rates contributed to more marketing spending for Mini Shops merchants. For Mini Programs, consumers engaging more with mini-games and mini-dramas attracted more marketing spend from the mini-game and mini-drama studios. Weixin Search overall query volume grew at a rapid rate due to AI enhancements to search results, driving growth in commercial query volume, while search pricing also increased.

Tencent’s management has obtained additional AI compute through leasing, through purchasing imported GPUs (likely referring to NVIDIA’s GPUs), and through purchasing domestic GPUs; the priority use-cases for Tencent’s AI compute is for HunYuan and the company’s new AI products; management currently does not want Tencent to design its own AI chips; management thinks there are many options for AI inference chips in China, and this has brought down the cost of inference chips; management wants Tencent to leverage the best training chips to build models

In terms of GPU constraints then, we’ve been quite actively provisioning more compute, and that will be coming on stream progressively and increasingly quickly through this year, especially the second half of the year. You know, that additional compute comes from leasing capacity. It comes from us purchasing higher-end imported GPUs which are now becoming available again, and it comes from us purchasing the increasing quantity of domestically China-designed GPUs. In terms of utilizing that compute for different use cases, you know, the priority right now is, you know, HunYuan and our new AI products more generally…

…[Question] We’re seeing a growing number of your tech peers are prioritizing the development of in-house chip design capabilities. I’m just curious where in-house chip development fits into Tencent’s own AI priorities.

[Answer] I think at this point of time, it’s not the most critical thing that we’ll be focused on. So if you look at the chip, you know, there is, you know, a difference between training chip and inference chip, right? You know, and for training chip, it’s actually very, very difficult to design and manufacture, and you actually want to have access to the most state-of-the-art, you know, training chips to the extent possible and in the most flexible way so that, you know, you can actually sort of keep training for the best model.

And then, you know, if you’re talking about inference, right, you know, I think inference, it’s mostly for cost. I think for cost at this point in time, there’s actually a lot of different suppliers in China, which is actually very different from, let’s say, in the training space, right, where there’s essentially one player or two players who can actually command a very, very high margin, right? You know, in the inference world, people basically sort of, you know, are earning much lower margin, and there are many more solutions and, you know, options. So, I think, you know, the key for us is actually sort of leverage the best training chips to train the best model at this point in time, and there’s a lot of value in being focused.

Tencent’s management thinks it’s really difficult right now to tell which layer of the AI technology stack will be commodities

[Question] If we think about the AI stack between, you know, the models, the orchestration layer, the application layer and so on, which parts would you say are most critical for Tencent to be best in breed versus, you know, areas where we think these will be commoditized?

[Answer] I think at this point in time, it’s actually very dynamic, right? You know, you’re in a fast-moving market. I think, you know, it’s very difficult for someone to say sort of, you know, oh, you know, there will be one layer more important than the others, right? You know, I think, you know, we have the resources, we have the people, we have the team to actually invest in all these layers.

It’s currently not possible to use AI to build games completely from scratch

There is not yet the capability to create games, you know, completely from scratch using AI for a number of reasons that we can get into.

Tencent’s management is seeing AI create demand for memory chips in two ways, namely, (1) GPUs requiring high memory capacity, and (2) AI creating software that requires memory to execute

You know, when people utilize the agentic tools that we’ve been discussing, they’re using them and they create software. You know, that software, you know, then primarily, it needs to be executed. When it executes, most of it is not executing on a GPU. It’s executing on CPU, and then as it executes, it creates, you know, memory demands. It’s not just, you know, GPU, DRAM, HBM where we’re seeing demand picking up. It’s also, you know, CPU. It’s, you know, regular RAM. It’s SSD. It’s hard disk drive.

Veeva Systems (NYSE: VEEV)

Veeva’s management thinks core systems of record such as Veeva will incorporate and work seamlessly with AI and not be replaced by it; Anthropic’s recent launch of Claude for Life Sciences has Veeva as a launch partner; management thinks LLM (large language model) providers’ launches of life sciences products will not cannibalise Veeva’s products; management thinks AI is a very positive thing for Veeva because it helps Veeva create and improve its software faster; management thinks core systems of records will be used by both agents and humans; management thinks it’s still early days of AI and it will play out over 10-20 years; management thinks the LLM providers and Veeva will have a symbiotic relationship; management thinks the LLM providers will not be interested in industry-specific software

There’s a lot of hype and fear that AI will replace today’s software systems. The reality is, not all software is the same. Core systems of record like Veeva, SAP, and Workday are essential and will incorporate and work seamlessly with AI, not be replaced by…

…[Question] Anthropic made a lot of noise when they launched Claude for Life Sciences and signed up a lot of deals and maybe lost in that was Veeva is an enabling and launch partner of Claude for Life Sciences. So Peter, how should we be thinking about the opportunity for Veeva to work with Anthropic, OpenAI, all the different kind of model providers out there, provide your domain expertise, provide the workflow expertise and kind of have a rising tide lifts all boats situation rather than obviously the current market view of it being more cannibalistic?

[Answer] I certainly don’t view it being cannibalistic for Veeva, absolutely not. I mean let me state that clearly. AI is a very positive thing…

…And these core systems are going to be used by agents as well as human users. Yes, that’s new. But these systems are essential, and they’re not going away…

…So we’re really in these early days of AI and people get a lot of hype and they think it’s going to play out over 1 or 2 months. It’s not. It’s going to play out over 10 or 20 years…

…Specifically for Veeva, AI, that’s going to help us create and improve our core systems faster than before. So that’s where it will help our software development but not at the expense of quality, predictability, regulatory compliance and the real value that customers depend on…

…Anthropic or OpenAI and others, that’s an engine, and their engine will be used for a lot of things. They will be used by the Veeva applications or by custom applications that customers develop. So yes, it’s good for those large model providers. Now they have to watch their profitability, et cetera, but they’re an engine in the new wave of cloud computing. So that’s the new AWS, et cetera. So it’s a good business there. But just as AWS itself and also Microsoft Azure, Google Cloud, et cetera, that was very good business for those hyperscalers. But I think what sometimes gets lost, that actually enabled Veeva. You couldn’t have built the industry cloud for life sciences. You couldn’t have built those long tail of applications without those cloud infrastructure providers. And it’s the same way here with these large language models. Veeva could not build the AI applications that we’re going to build without these foundational LLMs. So I don’t know if I’ll use this word correctly. I think the word is symbiotic. I think so…

…I don’t think the AI vendors are really making industry-specific software applications, right? It takes a lot of dedication and effort to do that. So I think it’s a very symbiotic relationship. Just like the cloud area, yes, Amazon didn’t make industry-specific applications either. I don’t really see — why would somebody like Anthropic do that, right? They’re going to make broad applications and applications for coding itself, et cetera. That’s what I feel would happen.

Veeva’s management thinks the agentic layer will provide far broader value than LLMs (large language models); management thinks AI agents is a substantial opportunity for Veeva; Veeva has Vault CRM Free Text Agent that captures rich, compliant call notes; Veeva has PromoMats agents that deliver approved content faster; management will be introducing regulatory and safety agents in 2026 (FY2027); management thinks building industry-specific AI is difficult and requires proprietary data, sophisticated logic, domain expertise, and more; management thinks Veeva’s agents, if built well, can provide a lot of value to customers; management thinks Veeva is in a great position to lead in industry-AI for the life sciences industry; management is making great progress on Veeva’s first two AI agents for safety, and they will be launched in April 2026; management is pleased with the progress of PromoMats agents; there are early adopters who are live with PromoMats agents; management thinks their approach to data is resonating with the life sciences industry when building AI use cases; customers are excited about the PromoMats (Promotional Materials) agents because the agents really work and the customers have been burnt by failed AI experiments; management is seeing PromoMats agents delivering very clear ROI (return on investment) for customers; the two AI agents for safety that will be launched in April 2026 provides clear value for customers because they automate workflows that would require expensive labour; management thinks it’s still early to nail down the right pricing model, but Veeva will be going with a token-based pricing model; management is seeing most customers go with Veeva’s agents instead of them building their own agents with Veeva AI

While the major large language models are the catalyst for this shift, the agentic layer provides far broader and more diverse value. The agentic transformation underway represents a substantial opportunity for Veeva and life sciences. With our core systems of record spanning the industry’s most critical functions and unique datasets, we can deliver industry-specific AI deeply integrated into our core applications. 

For example, Vault CRM Free Text Agent captures rich, compliant call notes for deeper customer insights. PromoMats agents help deliver approved content faster. Regulatory and safety agents coming this year can streamline health authority interactions and safety case processing. And this is just the beginning.

Building reliable industry-specific AI across a wide range of use cases for a highly regulated industry is hard. It takes time, focus, and the right skills. It integrates proprietary datasets, sophisticated logic, validated processes, and depends on specialized domain expertise and safeguards to maintain compliance and data integrity. If done well, our agents will provide significant value for customers and Veeva.  

It’s early days for industry AI, and we are in a great position to lead. We have a well-established life sciences cloud that’s expanding to connect the industry, strong momentum with Veeva AI, and much more innovation on the way…

…We are also making great progress on our first two Veeva AI Agents in safety, Case Intake and Case Narrative coming in April. Customer interest is high as the industry looks to AI to drive efficiency in safety case processing…

…I am also pleased with the progress of Veeva AI for PromoMats. A number of early adopters are now live, more projects are underway, and the success of these agents is generating a lot of interest…

…Our unique and modern approach to data is resonating with the industry, providing a harmonized data foundation that fits seamlessly with our commercial software. High quality, standardized and connected data is critical for speed and efficiency and is a required foundation for AI…

…For example, in the promotional materials management area, and they’re pretty excited like that I can have a winning AI application that really works and is really durable and is from Veeva because they’ve been — a lot of them have been burned on a lot of experiments, but it’s not easy for customers to admit failed experiments because that’s just the dynamics. You don’t like to admit that. And failed is too hard of a word. Sometimes the experiment doesn’t work out, but it’s not a failure. You got a lot of learnings. But the experiments that can actually scale, they’re rare so far, and they know Veeva’s — we won’t do things unless we can scale them…

…[Question] Can you maybe speak to early proof points that you’re seeing on AI agents that, I guess, you’re planning to roll out over the course of the year? Are there any sort of ROI or tidbits from clients that you’re hearing that you can kind of comment on ahead of these releases?

[Answer] The one that’s farthest along, and we have multiple projects underway, is the commercial content area. And that — the ROI is just very clear. It’s faster content, lower cost to create that content, and that’s what it’s all about. Lower cost to create that content, I won’t quote specific numbers, but that’s pretty clear to quantify. Faster content just means better launches. That means that drives the top line before the patent on that product expires. So I get asked by that — by customers all the time. They know in the age of really omni-channel experience for their customers, which are patients and health care providers, omni-channel experience that includes AI doctors and large language models, the speed that you can get your content out there in a compliant way is just going to be critical. So the old way of approving content is just not going to suffice anymore…

…In terms of AI, it’s pretty clear there in — there’s a lot of human processing of case intake and case narrative generation that’s done by people. That’s not necessarily that high risk, but it has to be done well. And it’s expensive to hire those people, and it’s not easy. So in safety, it’s just very clear. It’s about replacing that type of labor with automation, with AI software…

…It is, as you said, still quite early. As we’re starting this year, we’re really expecting to be using a token-based pricing model, and so that gives us a little bit of predictability around the margin profile. But that may evolve over time…

…[Question] Within Veeva AI, what is the mix of customer adoption you’re seeing right now between prepackaged agents that you’ve built and custom agents that they’re building using Veeva AI?

[Answer] The bulk of it is with our agents that we’re designing. So part of it is our — I guess, our agents are probably a little more robust than our custom tooling right now. But if you look at our agents, there’s detailed work in the agents, right? There’s detailed data curation. There’s detailed testing pipelines. There’s a lot of logic in the agents, right? When we talk about AI agents, there’s a lot of logic, specific logic written in our Java code; that’s hard work that needs great product management. So in general, customers would rather get that solution rather than build that themselves.

Veeva’s management is not seeing AI-considerations being a major theme with the company’s customer-wins in 2025 Q4 (FY2026 Q4)

[Question] I wanted to ask if Veeva is starting to see some programs funded maybe in the name of AI readiness. I would imagine for a top 20 to commit to Veeva in any of the R&D areas, RTSM, quality, safety, it would seem you’re going eyes wide open into really viewing Veeva as a future foundation for everything AI related that is to come. And so I’m wondering if there’s an AI influence that you’re starting to see that’s contributing to the strong demand here at year-end.

[Answer] I wouldn’t say that’s a broad theme. There are cases, and it varies by area. More of the theme is, hey, we need core systems that will scale, either their existing systems are aging. So we talked about a top 20 safety win. There, their existing systems, because they were doing other things over the past years and just lots of deferred maintenance and that was going to become a critical risk for the company, so they have to get that in. There are times where it will help our data business. They’re trying to clean up their reference data because they know AI is not going to work because, okay, garbage in, garbage out. So there’s a little bit of that, but more it’s just modernizing, getting rid of legacy and looking for increased automation. AI is — really, the goal there is automation, right? That’s the goal. But AI is not the only way you do automation. Part of it is you do automation through a system to have clean workflow. So it’s a driver, but I wouldn’t say it’s a major driver.

Veeva’s management is seeing life sciences companies group AI players into 4 buckets, namely, (1) the LLM providers, (2) the point solution providers, (3) their own in-house development teams, and (4) core application providers such as Veeva; when life science companies talk to Veeva about AI, they want Veeva to provide more AI solutions that are tightly integrated with their core systems because they trust the company; Veeva’s management thinks the company’s customers really want it to win in AI applications

They bucket into 3 — maybe 4 types of people that might be able to help them. One is the infrastructure providers, the LLM providers themselves, Anthropic, OpenAI, Microsoft in that camp, Amazon, NVIDIA, those types of things, what — how can they be leveraged there? And then they would look for point solution providers. There’s a specialized group of people in the specialized department, and they can do this proof of concept or maybe you scale it for me here. And then there’s their own employees doing custom software, and then there’s system integrators. And then you get the core application people like Veeva, like Workday, like SAP…

…When they’re generally talking to us, they want us to provide more AI solutions that are tightly integrated with their core systems because they trust Veeva, and they know we deliver quality and really know when we say something is going to work, it’s going to work, right, because our reputation is on the line versus a small start-up can just say whatever they want…

…Our customers really want us to win in AI applications. And so we have a right to win, and we just have to execute.

Veeva’s management thinks the real bottlenecks in life sciences is not the pace of drug discovery, but finding patients for clinical trials, and the pace of a patient getting the right drug for treatment; management thinks these bottlenecks are where AI can play the biggest role, and where Veeva can help; management thinks AI cannot really speed up clinical trials

[Question] Given how mission-critical this is and maybe how much it can be tied not just to better revenue outcomes but more importantly, better patient and better health care outcomes and better societal outcomes, do you see an opportunity to not just automate and drive faster time to value and efficiency but even leveraging AI within the Veeva platform to allow for better drug development, safer drugs out of the market, basically better outcomes rather than just faster time to value?

[Answer] Drug discovery is one thing, and there’s a lot of focus on that. And yes, that will get faster, but that’s not the real bottleneck. The real bottleneck is the clinical trial, the experiment that’s done in the human. And we’re always going to have to do those experiments in the human, and the human biology runs at the same speed. So that always has to be done, and the bottleneck now is finding the patients around the world that can get in those trials. So that’s one.

But the biggest bottleneck by far is there’s a patient somewhere out there in the world. They’re diagnosed with something by a doctor. How long did it take them to get diagnosed? And when did they get the right medicine that will best treat them? That’s where 90% of the value in life sciences is lost, because of that impediment, the basics of is the patient informed. Can they get to the right doctor? Is the right doctor informed? Is the payer informed? It’s — that’s where 90% of the value is lost. And I said value is lost, but on the other side, there’s a lot of people who don’t get treated correctly or timely around the world. And that affects productivity. That affects their family…

…So this is really important for us, and AI can definitely, definitely, definitely bridge that gap. AI doctors and large language models can help bridge that gap between doctors and patients, so maybe that 90% inefficiency goes down to 50%, and that will be a tremendous boon. And yes, Veeva will definitely play a part in that by connecting our customers, the industry to its external ecosystem. And its external ecosystems are clinical researchers, patients and doctors and regulators. And the industry is not well connected, and AI is going to provide a better method to do that…

…About AI speeding up clinical trials, I think AI can speed up some maybe in the start-up and in the close down but not that much really. It’s still based on the clinical protocol of the medicine, which is based on the time of the human body it takes to deal with that medicine and to prove it out and then the patient recruitment, which I don’t think is actually an AI problem, the patient recruitment. So speed it up some but not so much in clinical trials.

Veeva’s broad product suite is an advantage for customers when they are trying to implement AI

Let’s say they’re doing something with us in safety and they start doing an AI solution with us in safety. And 2 years from now, they go with us in clinical data management, and a year later, they put in an AI solution for clinical data management. Well, that AI solution is going to work with their safety solution pretty much out of the box. And that’s a benefit they never planned for that they’re going to get. So I think customers start to see that it kind of fits together with Veeva.

Veeva’s management thinks customers are starting to realise that Veeva is the only company that can provide AI solutions that are also connected to all their other systems; management thinks customers are also starting to realise it’s not so easy to build and maintain their own AI solutions

But I think they’re starting to realize if you want to have a potential future where you have a great core safety system that has safety AI on top of it and is connected to your other systems in your company, Veeva is the only place you’re going to do that unless you’re going to build it yourself. I think most people are starting also to realize now that it’s not that easy to build and maintain these things themselves. So that’s kind of what’s leaning into our favor on the AI.

Wix (NASDAQ: WIX)

Wix’s management thinks AI and the acquisition of Base44 has dramatically expanded Wix’s market opportunity; the addition of Base44 has allowed users to build applications, content, and websites that are much more powerful and sophisticated than before

What started as a simple do-it-yourself website builder has grown into the leading online presence creation platform serving not just self creators, but also businesses of all sizes as well as professional designers and developers. In recent years, the web has undoubtedly become much more AI-first. That shift is redefining how and what people build online. AI has dramatically expanded the world of what is possible and created new dimensions that had not existed before. As a result, Wix’s market opportunity today is exponentially larger than in 2025, primarily driven by our expansion into the application space facilitated by our acquisition of Base44…

…With the addition of Base44 to our platform, users can now build tailored software applications, smart mobile applications, pro-level visual content, and, of course, websites, but so much more powerful and sophisticated than ever before. These are all things you can create on Wix today, which is incredible, but the possibilities ahead are much, much bigger.

Wix Harmony is a first-of-its-kind website builder that blends visual editing with vibe coding; Wix Harmony is an AI layer that spans the entire Wix experience; Wix Harmony was launched in English in January 2026 and management will expand Wix Harmony globally in other languages; management is very pleased with the early conversion and monetisation of Wix Harmony; management intends to make Wix Harmony the default Wix experience for new and existing users over time; management expects negligible AI inference costs associated with Wix Harmony in 2026; management is not seeing Wix Harmony and Base44 cannibalise each other’s customer base; management built Wix Harmony for the self-creator market; users of Wix Harmony are using it for the same purposes as the old Wix; Wix Harmony currently does not support a database, but will soon do so; early users of Wix Harmony have better conversion, faster monetization, and higher ARPU (average revenue per user)

Wix Harmony is the first-of-its-kind website creation platform that blends intuitive visual editing with the flexibility and power of Vibe coding. Wix Harmony provides a unified AI layer that spans across the full Wix experience, allowing for a real AI partner to be with you every step of the way as you create, manage, and grow an online presence or business. After launching in English in January, we are now expanding Wix Harmony globally in other languages, and I am very pleased with the early performance we are seeing, particularly across conversion and monetization metrics. We believe Wix Harmony has the potential to fundamentally reshape how individuals and small businesses build and scale online, not just on Wix, but across the Internet as it becomes increasingly AI-driven. Over time, we plan to gradually make Harmony the default experience for new and existing users, an evolution we anticipate will drive meaningful long-term impact across conversion, engagement, retention, and monetization…

…Negligible AI inference costs associated with Wix Harmony as a result of proactive infrastructure optimization completed last year…

…[Question] Just stepping back, what types of businesses or applications are you seeing users set up with Base44? And how much crossover is there with what you see on Wix’s core platform?

[Answer] We do not see any kind of competition, and you can see that they mostly have very different usage. Clearly, Harmony is accelerating, Base44 is accelerating. So, obviously, we do not think they take from each other…

…Harmony is a product we built for the self creators…

…We are pretty much seeing everybody using Harmony that was using Wix before. So it is everything from personal websites to the hair salon website to large companies and enterprises, so pretty much everybody. At this stage, Harmony does not support a database, but that will be added soon…

…[Question] On Harmony, just curious what the early cohort KPIs that you are seeing there in terms of conversion, ARPU, attach rate, churn, relative to the traditional cohorts and how durable you see these KPIs across your geos?

[Answer] We see a very good performance of the new cohorts. We actually see a better conversion, faster monetization, and also higher ARPU. So we believe, we hope that this strong trend will continue. Again, I think that it is too early, but we feel very positive about the first reaction and performance of this product.

Base44 expands Wix’s reach into vibe coding; Base44’s user base is scaling rapidly, with the number of new Base44 users today nearly 2/3 of the number of new Wix users; Base44 has reached $100 million of ARR (annualised recurring revenue) just 1 year after its founding and 9 months after Wix’s acquisition (Base44’s ARR was just a few million dollars when Wix acquired it); management is starting to see Base44 being used by enterprises from different industries to build their own software solutions; Base44’s current growth is completely organic as Base44 has no sales team; management believes the potential for vibe coding still lies ahead as the technology reaches the broader online population; 1/3 of Base44’s AI inference costs today are for free users; Base44 has positive non-GAAP gross margin today; management thinks Base44 has a tROI (time to return on investment) of less than one year; management thinks there is a great opportunity for partners to use Base44 in the future; Base44 is driving users who joined Wix 10-15 years ago to become paid users

The second new pillar of our strategy is Base44, our leading Vibe coding platform that expands our reach into the vast world of software creation and significantly grows our TAM…

…Base44’s user base is scaling rapidly. Today, the number of new users joining Base44 is nearly two-thirds of the number of new users joining Wix…

…Just one year after Maor founded the company and nine months after our acquisition, Base44 recently reached approximately $100 million of ARR, placing it among the fastest growing software platforms in history. While Base44 is already emerging as a top platform to build lightweight personal projects, we are seeing adoption from a growing community of businesses and enterprise-sized organizations too. Companies in the tech, banking, and healthcare industries, as well as government organizations and nonprofits, are using Base44 to build customized software solutions. We are seeing users develop their own CRM capabilities, product and project management tools, ERP systems, workflow automation frameworks, and financial reporting applications.

Importantly, this momentum and growth are completely organic. With no sales team at Base44 today, self-propelled adoption by enterprise-sized organizations demonstrates the strength of the platform as well as our successful marketing execution…

…I believe the real potential still lies ahead as Vibe coding permeates beyond early tech-forward adopters to the broad online population…

…Base44 finished the year with approximately $59 million of ARR, above our expectations at the time of acquisition. Excitingly, Base44 recently reached approximately $100 million in ARR, a major milestone that underscores our rapid growth and growing market leadership. Strong ARR growth was driven by product innovation that has resonated, a rapidly expanding user base, improving conversion and consistent upgrade and renewal trends…

…Approximately one third of Base44’s AI inference costs today is attributed to token consumption of free users…

…Even after incorporating AI-related costs associated with free users into cost of revenue, Base44’s non-GAAP gross margin is positive today and is expected to improve as the year progresses…

…Base44 is a very young company, a very young product. And, by the way, this is also why we are very conservative about the guidance. But right now, based on the information that we have, based on the history that we already have, we are looking at less than one year of tROI, and this is how we manage the acquisition cost…

…Base44 has a ton of interesting things for our partners that they can actually use for their customers, and it is more revenue stream for them. So we believe that although right now most of it is self creator-led, we believe that it is a great opportunity also for partners to use Base44 in the future…

…Base44 is a very young product, yet in the Wix cohorts, we are seeing people converting who joined us ten or fifteen years ago. That is amazing.

Wix’s partnership with OpenAI is not built on APIs in the standard way, but rather, it’s built on two AIs that are collaborating

[Question] In addition to the apps partnership with OpenAI, do you see potential opportunities in terms of how Wix websites are navigated and searched by OpenAI in the future, particularly ChatGPT?

[Answer] It is not APIs in the standard way; it is essentially two intelligences that are discussing and working together to give you a website. And that is a fantastic pattern that can be grown a lot.

Wix’s management has given Wix users the ability to open their websites for LLMs to crawl and read if they want to; Wix users can even give LLMs more content than what is offered on the website itself

As for how OpenAI or any other LLM can read Wix sites, we support pretty much everything. If our customers choose so, we can make the text visible, easy to crawl, and built in a way that is very easy for the LLMs to process. And we also have ways to give the LLMs more than just the content that we normally offer over the website, because LLMs like to read a lot of content, while humans tend to want to read less.
Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet (parent of Google), Amazon, Meta Platforms, Microsoft, Okta, Salesforce, Sea, Tencent, Veeva Systems, and Wix. Holdings are subject to change at any time.