Last week, I published The Latest Thoughts From American Technology Companies On AI (2025 Q3). In it, I shared commentary from the leaders of US-listed technology companies that I follow or have a vested interest in, drawn from their earnings conference calls for the third quarter of 2025, on the topic of AI and how the technology could impact their industries and the business world writ large.
A few more technology companies I’m watching hosted earnings conference calls for 2025’s third quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:
- 2023 Q1 – here and here
- 2023 Q2 – here and here
- 2023 Q3 – here and here
- 2023 Q4 – here and here
- 2024 Q1 – here and here
- 2024 Q2 – here and here
- 2024 Q3 – here and here
- 2024 Q4 – here, here, and here
- 2025 Q1 – here and here
- 2025 Q2 – here and here
With that, here is the latest commentary, in no particular order:
Airbnb (NASDAQ: ABNB)
A key focus for Airbnb’s management is integrating AI across the Airbnb app
We are driving this growth by focusing on 4 key areas: making our service better, bringing Airbnb to more parts of the world, expanding what we offer and integrating AI across our app.
Airbnb’s management has been laying the foundation for a more personalised Airbnb, powered by AI, over the past year; management’s end goal is to have the entire Airbnb app become an end-to-end AI agent for users to plan and book their trips
Over the past year, we’ve been laying the foundation for a more intelligent, more personalized Airbnb from rebuilding our tech stack to launching a series of new AI features. We now have more than a dozen AI work streams underway, and they’re all focused on really creating a more personal experience for guests and hosts and making it easier to discover what we offer…
…What we want to do is take AI search, which is conversational, and AI customer service and the messaging platform, which are conversational, and integrate them into one AI assistant or concierge. And eventually, the entire app will act like an AI agent from the top of the funnel through your trip, your reservation and leaving a review, and then bringing you back through the app, end to end.
Airbnb’s management rolled out an AI customer support assistant in 2025 Q3 that can take actions for customers and deliver personalised responses; the assistant is a custom-built AI interface designed by Airbnb; the assistant was initially launched in the USA and it has reduced customers’ need to contact a human agent by 15%; management will soon expand the AI assistant to over 50 languages in 2026; management thinks AI-powered customer support is a very difficult problem to solve for Airbnb because (1) every single accommodation option on Airbnb is unique, and (2) the stakes are very high; with AI-powered customer support, management has found that problems that used to take hours to handle can be resolved in seconds
This quarter, we rolled out smarter and faster AI customer support. Our AI customer support assistant has smarter responses. It includes answers about your reservation or listing and also provides quicker, more personalized responses. It also lets you take common actions like canceling or changing reservation dates directly from the chat. So what we did is we designed this custom user interface that’s not just text-based, but it’s got rich user interface modules. So it’s a really custom-built AI interface built right into the messaging platform. Now we initially launched this in the United States, where it’s already reduced people’s need to contact a human agent by 15%. So now we’re going to expand it to more countries and more languages, and we expect this to be in over 50 languages next year…
…Most of our homes, most of our service experiences, they’re not SKUs, they’re one of a kind. And therefore, the issue types, customer service is really challenging, right? Oftentimes, a customer service agent will hear an issue that they’ve never heard before because it’s from a host that might be a first-time host. And the guest and host might be speaking different languages. They might simply be locked out in a small town in a foreign country. You can imagine how complicated some of this stuff is. So we decided with AI to start with the hardest single problem we could think of, which was customer service. Customer service, we think, is a lot harder than, say, travel search. And the reason why is because the stakes are highest. You can’t hallucinate. You have to handle sensitive customer data. You’ve got to be fast in real time. You’ve got to escalate to the agent if there’s a trust and safety incident. And we are finding that it’s working really well. And in fact, we can go from solving a problem in hours to solving a problem in seconds.
Airbnb’s management will be rolling out AI search on the Airbnb app in 2026; the AI search function will allow customers to have a conversation with the Airbnb app to design the perfect trip; Airbnb has access to all the leading AI models that are publicly available, such as Alphabet’s Gemini and OpenAI’s GPT, to power the AI search function; an example of personalisation for Airbnb’s management is knowing what the user’s purpose of travel is and suggesting the appropriate type of accommodation option; management is testing AI search right now; the rollout of AI search will happen in 2 phases; in Phase 1, users will be able to key in searches in a free-text, natural language way, and receive responses in that manner; in Phase 2, AI search will become truly conversational
We’re also building out AI-powered search. And this is a really, really big part of our AI strategy. You’re going to see this. We’re testing it now. You’ll see this rolling out through the app next year. And this will let people have a conversation with the app, just like a chatbot about what they’re looking for, so we can help them design the perfect trip. And remember that we have access to all the same models that every other chatbot and AI application has. So we think this is going to be a really delightful product to use…
…I think what we want to do in the future, and this is like now going back to our AI strategy is knowing more about the customer. understanding what their intent is. And if people are traveling for business last minute, one night, we should probably prioritize a hotel for them. Some people do want a more hotel-like experience. Other people are hardcore about the original philosophy of Airbnb. They want to feel like a local when they’re traveling. Those people probably should not see hotels very much…
…We have access to all the same frontier models as the leading AI companies. We have access to the same models as Google, OpenAI and the other companies because they’re all available by API…
…What we’re testing now is if you go to the search box in Airbnb, there’s where (location), when (dates), who (guests), and we’re testing a what box, and what is a free text natural language input, which is similar to ChatGPT or Gemini. You’ll be able to type it in. And based on that, you’re going to see natural language results. So the search cards will not just be structured data, but will be essentially natural language generated copy and search results. That’s Phase 1. Phase 2, it’s going to become what I guess you’d call AI multi-turn. Multi-turn, I think, is just a fancy way of saying conversational. So you’ll be able to have a conversation. The information on the cards, my vision is that instead of saying 2-bedroom, 2 bath, $60, 5 reviews, a pool and hot tub, no 2 people see the same copy, just like 2 people typing in ChatGPT see different outputs based on the memory and the type of question they have. So we want Airbnb to be the same way, where the output is also natural language. It’s unique. And you’re going to start to see this iteratively happen over the course of next year. Eventually, it will become more conversational.
Airbnb’s management thinks their approach to AI is different because they want to use AI to help people connect in the real world; management thinks that people will increasingly want real-life experiences in the age of AI, and this is especially so for the younger generation; management thinks a bet on Airbnb is akin to a bet that people will yearn for real-life connections as AI proliferates
What makes our approach different is that we’re not just using AI to pull people deeper into the screens. We’re using it to get them off their phones and help them connect to the real world. Because I believe in the age of AI, more and more, what’s going to happen is what’s on a screen will be artificial. You won’t know if it’s real or not. In the age of AI, people are going to increasingly want what’s real and what’s real is in real life. They’re going to create real experiences with real people in the real world. And I think that’s especially true for younger generations who grew up on social media and are now surrounded by AI-generated content. So we think Airbnb is the best way to experience the magic of the real world. So while other companies are using AI to keep you online, we’re really trying to do the opposite, get you off your phone and into the real world…
…A bet on Airbnb is a bet on AI because it’s a bet that the more AI proliferates the content we consume on devices, the more people are going to yearn for real connection with real people in the real world.
Airbnb’s management thinks that Airbnb can benefit from AI more so than other companies, especially other travel-related companies; management thinks specialization will win in travel when it comes to AI, and Airbnb has many unique capabilities
I think that Airbnb probably more than most other companies, especially companies in travel can benefit from AI. Probably the reason why is because primarily, we don’t have SKUs. Most of our homes, most of our service experiences, they’re not SKUs, they’re one of a kind…
…We think that we’re going to be very successful at this because, number one, we have access to all the same frontier models as the leading AI companies. We have access to the same models as Google, OpenAI and the other companies because they’re all available by API. So really, you’re not going to win or lose on the model because they’re all available. You’re going to win or lose on what you do with them. And our thesis of AI is that specialization will win in travel. That’s our theory, that specialization will win. We have a lot of unique capabilities. We understand travel, we have one of the best design teams in the world, so we can design custom interfaces…
…We do think Airbnb could be a one-stop shop for travel. And then we have a lot of capabilities that no one else has built, and we don’t think AI companies will want to develop, say, a messaging platform, and the vast majority of people who book an Airbnb use the messaging platform.
Airbnb’s management thinks ChatGPT’s commerce-integration was not ready, hence Airbnb was notably absent from ChatGPT’s recent launch of app integrations; management thinks being integrated with ChatGPT will cause Airbnb to become a commodity-like data layer; management is open to integrating Airbnb with other chatbots, but there are a number of things that need to happen, namely, (1) custom integrations for Airbnb, (2) not being a commodity data layer, and (3) the right presentation of Airbnb results to highlight the unique nature of the company’s offerings
[Question] Airbnb was notably absent from ChatGPT’s app integration launch when other major travel players were there. Can you just talk about your thought process here?
[Answer] We just didn’t think the integration was ready. We care a lot about how Airbnb shows up in the world. And when I looked at the demonstration, I thought it was a great concept. It was a little bit hard to discover; at the time, you had to actually download the app, the company’s application. We didn’t want to be positioned as essentially a data layer like a commodity. There were certain tools that we had to build.
When you book an Airbnb, you want to make sure that you see results personalized to you, which means you have to have an account on Airbnb, and messaging is core to our platform. So it’s really about making sure that we had enough features. But we are not at all opposed to integrating into chatbots. And I would imagine in the future that you would see Airbnb across a large surface area of the Internet. We just have a couple of principles when we are integrating.
Number one, we want to make sure that while we like the idea of being a launch partner, we still have — we like to have custom integrations if we’re going to be a launch partner, and we want to make sure that, that integration is really well developed. Number two, we don’t want to appear as a commodity. Number three, we certainly don’t want to be a data layer. And number four, we really want to make sure that people understand the uniqueness of Airbnb when they’re seeing results. So for example, we chose not to integrate with Google Hotel Finder because Airbnbs were positioned like commodities next to hotels, and we just didn’t think that was the right presentation.
Airbnb’s management is currently holding off on building an advertising business on the Airbnb app because they think AI search is disrupting the old digital advertising paradigm, so they want to nail down AI search first before introducing advertising; it appears that Airbnb may be introducing advertising very soon after the launch of AI search
With regards to advertising, we’ve been looking at this for a long time. One of the things that’s really changed is the entire paradigm of search is changing in the age of AI. So what we didn’t want to do is design a kind of ad unit model around old search and then have to disrupt that ad model for AI search. So we really want to nail AI search so that as we think about advertising, we integrate into this new search paradigm, which we’re looking at right now. So that’s the status. I don’t have — and obviously, we don’t preannounce things. We are sharing that we are going to be launching AI search imminently. But beyond that, we’re not disclosing other pieces we’re launching, but expect more in this next year.
Arista Networks (NYSE: ANET)
Arista Networks’ management sees the company having superior AI networks that improve the performance of AI accelerators; Arista Networks’ strength in AI networking comes from a few sources: (1) superior hardware, (2) innovative fabric architecture, (3) AI-focused telemetry and provisioning automation, (4) high-quality software, (5) leadership of ethernet consortiums, and (6) partnerships with important AI players; Arista Networks’ Etherlink distributed switch fabric powers some of the largest AI fabrics
On September 11 at our Analyst Day, we showcased both networking for AI and AI for networking with our continued momentum across our data-driven network platforms. Unlike many others, our Etherlink portfolio highlights our accelerated networking approach, bringing a single point of network control for zero-touch automation, trusted security, traffic engineering and telemetry to dramatically improve compute and GPU utilization. Superior AI networks from Arista improves the performance of AI accelerators…
…Our success in AI has many sources, the sheer power and performance of our hardware platforms, our innovations in fabric architecture, our AI-focused telemetry and provisioning automation, our reputation for the highest quality software and our leadership in the Ultra Ethernet Consortium, the UEC, and our work in Ethernet Scale Up Networking or ESUN. And most importantly, the way we partner with the world’s largest AI companies…
…Our Etherlink distributed switch fabric powers some of the largest AI fabrics in the world. It’s also an excellent underlay for data centers of all sorts, providing a full line rate fabric with no hotspots at petabit scale for all workloads, including AI.
Arista Networks’ networking solutions are compatible with NVIDIA’s systems, but management is also keen to create an open ecosystem to build the AI stack, which includes compute, memory, and networking; the open ecosystems Arista Networks is participating in include the Ultra Ethernet Consortium (UEC) and Ethernet Scale Up Networking (ESUN); Arista Networks has unveiled its first ESUN specification together with 12 industry experts; UEC recently published its first specification; Arista Networks’ Etherlink portfolio is entirely compatible with the UEC; ESUN was started with 4 vendors including Arista Networks, but management expects 20-30 members over time
We interoperate with NVIDIA, the worldwide market leader in GPUs, but we also recognize our responsibility to create a broad and open ecosystem, including AMD, Anthropic, Arm, Broadcom, OpenAI, Pure Storage and VAST Data to name a few, and build that modern AI stack of the 21st century. This stack includes the trio of compute, memory storage and a solid network foundation to run training and inference models…
…Our leadership in the Ultra Ethernet Consortium, the UEC, and our work in Ethernet Scale Up Networking or ESUN…
…At the Open Compute Project (OCP) conference, Arista unveiled its first Ethernet for Scale-Up Networks, or ESUN, specification, along with 12 important industry experts. While we began with 4 co-founders, we are now expanding to more members so that we can build the right interoperable scale-up standard…
…After 2 years of lots of hard work led by Hugh Holbrook and now Tom Emmons, UEC did publish their first specification, I believe it was 1.0, in June of 2025. Arista’s Etherlink portfolio is entirely UEC-capable and compatible, and we will continue to add more and more compliance: packet trimming, packet spraying, dynamic load balancing. These are all important features that our switches support…
…We’ve been an early pioneer, 4 vendors started this together, including Broadcom, Arista and a couple of our cloud titan customers. I’m pretty sure it will be 20, 25, 30 over time. And having a standards-based OCP ESUN agreement will allow us to expand UEC into the scale up configuration as well, leveraging UEC and IEEE specs.
Arista Networks’ management is confident of hitting their previous goal of $1.5 billion in total AI-related networking revenue; management is targeting $2.75 billion in total AI-related networking revenue in 2026; management is now looking at the goal of $15 billion in revenue in the next few years, and a big chunk of the $15 billion will come from AI-related revenue
Our stated goal of $1.5 billion AI aggregate for 2025, comprising both back end and front end, is well underway. We are now committed to $2.75 billion out of our new target of $10.65 billion in revenue, representing 20% revenue growth in 2026…
…As we get now confident about exceeding our $10 billion goal next year, we’re looking at our next goal of $15 billion in the next few years. And I think AI will be a very large part of it
Arista Networks’ management is seeing unprecedented demand for AI build-outs; management sees a golden era in networking, driven by AI, and a growing total addressable market (TAM) exceeding $100 billion in the coming years
The demand and scale of AI build-outs is clearly unprecedented, as we look to move data faster across multiplanar networks…
…We find ourselves amid an undeniable and explosive AI megatrend. As AI models and tokens grow in size and complexity, Arista is driving network scale for AI XPUs, handling the power and performance. Basically, the tokens must translate to terawatts, teraflops and terabits. We are experiencing a golden era in networking with an increasing TAM now of over $100 billion in forthcoming years.
Arista Networks’ Autonomous Virtual Assist (AVA) has agentic capabilities that help customers troubleshoot issues with their networks
Arista AVA or Autonomous Virtual Assist, uses AI to help our customers design, build and operate their networks. AVA draws on both our internal knowledge base and also on the customers’ data stored in NetDL, Arista’s network data lake plus AVA has agentic capabilities to help troubleshoot proactively.
Arista Networks has a recent partnership with Oracle for Oracle Acceleron, which includes migrating Oracle’s Exadata platform from Infiniband to Ethernet
At Oracle AI World, Ken was invited to formally announce our collaboration with Oracle Acceleron. This builds upon a decade of partnership with Oracle, starting with our Exadata migration from InfiniBand to Ethernet for AI networks to RoCE, RDMA over converged Ethernet, and now multiplanar networking across cloud AI for on-time job completion in gigawatt scale AI data centers.
Arista Networks’ management continues to think that the company’s networking solutions can co-exist with white box solutions; management thinks white box solutions are suitable for companies with simple use cases, while Arista Networks’ solutions are suitable for more complex use cases; management has long seen Arista Networks as having 2 groups of competitors for AI networking, namely, NVIDIA’s bundle and white box solutions; management is seeing Arista Networks’ market share remaining stable relative to the other 2 groups of competitors; management is also seeing the entire networking market growing, benefitting all 3 groups; there was a recent case of a neocloud with non-NVIDIA GPUs that could not get a white box networking solution to work for mission-critical AI workloads, and had to adopt Arista Networks’ solution
Arista also continues to clarify our role in white box and how we will continue to coexist like we always have the past decade or more. The concept is clear. It’s all about good, better and best, where in some simple use cases, a commodity white box is good enough. Yet in other cases, customers seek the value of better Arista blue boxes with state-of-the-art hardware with built-in NetDI for signal integrity, physical, passive, active component and troubleshooting management. The best is, of course, the Arista branded EOS platform for the ultimate superiority…
…We always, as you know, coexist with 2 other types of competitors. One is the bundling strategy with NVIDIA and the other is the white box. So we have not seen any significant changes in share up or down at the moment, it’s stable. Having said that, it’s also a massive market. And we think rising tide rises all boats and this boat is feeling pretty good…
…I’ll give you one example where they were just not getting their white box to work. These are AI mission-critical workloads. And we’re seeing a neocloud come right in with, in this case, non-NVIDIA GPUs, in fact, where they’re looking to deploy Arista with its excellent hardware. And at first, they wanted to do an open NOS, but now they are adopting a hybrid strategy where it’s not only an open NOS, but Ken’s EOS is coming to shine in its full glory in this use case. So in this case, I think it’s a Blue Box to start with, but it’s quickly going into a hybrid state of blue and branded EOS box.
Arista Networks earns lower gross margins from its Cloud and AI Titans customers compared to other customer groups
We do have a mix of product margin where it’s significantly below 60% with our cloud and AI titans driving the volume and higher obviously, for the enterprise customers. The average of which, together with services is yielding that number. So when the mix tilts heavily towards the cloud and AI, you can expect some pressure on our gross margins.
Arista Networks is involved with the early designs of 5-7 AI accelerator projects (i.e. AI chip systems projects) at any point in time; management sees the possibility of 4-5 AI accelerators emerging over the next couple of years; management thinks the non-NVIDIA AI accelerators will emerge because the standards for Ethernet are getting stronger over time
I think at any given time, we have 5 to 7 projects with different accelerator options. Obviously, NVIDIA is the gold standard today, but we can see 4 or 5 accelerators emerging in the next couple of years. Arista is being sought to bring all aspects, the cabling, the co-packaging, the power, the cooling as well as the connection to different XPU cartridges, if you may, as the network platform of choice in many of these cases. So we are involved in a lot of early designs.
I think a lot of these designs will materialize as the standards for Ethernet are getting stronger and stronger. We now have a UEC spec. You heard me talk about the Scale-Up Ethernet spec, ESUN, where we can bring different work streams onto the same Ethernet headers, transport headers, data link layer, et cetera. So I think a lot of this will be underway in 2026 and really emerge in 2027 as Scale-Up Ethernet becomes a more important part of that.
In terms of deciding the networking platform of choice for GPU clusters, it’s a joint decision between the AI model builders and the cloud computing infrastructure providers, and Arista Networks works closely with both groups
[Question] You mentioned large language model providers like OpenAI, Anthropic, and they have announced partnerships with your cloud titans. Can you share with us who is driving the decision-making on networking hardware on these announcements?
[Answer] Specific to who makes the decision, it’s really a combination. We intimately work with the software and LLM players because they certainly guide the design, but we also work with the cloud titans, and it’s a shared responsibility between both of them and where the responsibility for procuring the large data centers and the power and the location and the cooling is clearly done by our cloud Titans, but the specifications on exactly what’s required on the scale up, scale out network is done by the partners like OpenAI and Anthropic. So it’s really a joint decision.
Arista Networks is progressing well with its 4 major AI customers, with 3 of them having crossed the 100,000 GPUs mark in their clusters; the remaining major AI customer will be crossing the 100,000 GPU mark soon; Arista Networks’ work with the major AI customers has mostly been scale out; management thinks Arista Networks as being seen as a very important participant in the buildout of massive AI clusters
All 4 are doing well on the 100,000 mark. 3 have already crossed it. The fourth one, I don’t know if they’ll cross it by end of the year or next year, but they’re getting there. So we’re feeling pretty good on our large GPU deployments…
…Until now, majority of how we’ve measured our AI success through our Cloud and AI Titans has been number of GPUs and how much are they installing and can we verify that the Ethernet network works. The majority of it to date has been scale out…
…How are these being built? Clearly, they’re being driven by large language models, tokens transformers, inference use cases, you name it all. So the influence is clearly coming from these players you named. But the way they are driving the infrastructure, and I can’t keep track of the gigawatts myself, it’s 10 gigawatts here, 10 there, 30 there. It’s adding up to a lot. But I can just tell you, no matter what it is, Arista has been looked at as a very important and relevant participant, especially right now in the scale out and scale across. We will participate in the scale up. It will take a little longer.
Arista Networks’ management is seeing AI demand taking longer to reach a stage where they have a sense of predictability on when the contracts land
The only other thing I’d add to this just generally as a topic is that when you think about the large AI use cases and their acceptance clauses, it really comes down to that coming together and the timing of that. That doesn’t follow a seasonality model…
…Good point. It lands when it lands. That is a very good point that Chantelle is making that in the cloud, we started having predictability of how they landed and how they got constructed. In AI, it’s taking longer.
As Arista Networks’ large customers focus on AI, the other parts of the company’s business are growing slower
It doesn’t leave the core business with a lot of opportunity. But that’s not to say it may be flattish, it may grow. It’s to say that our customers are putting more attention there and that the existing business, which is already on very large numbers, will have lesser growth. We don’t yet know if it’s flattish or single digit or whether more will go to AI. We frankly can’t predict the mix this early in the game on 2026, but we think we’re in for a great ride in 2026.
Arista Networks’ management sees 3 big use cases for the company’s networking technologies for AI, namely, (1) scale up, (2) scale out, and (3) scale across; Arista Networks is also participating in scale across; management sees Arista Networks eventually participating in scale up, but it will take time; management thinks scale up deployments will have lower margins for Arista Networks, but they will carefully balance scale up, scale out, and scale across to achieve the overall appropriate margins; management thinks Arista Networks will be meeting scale up demand mostly with blue box solutions that come with lower software content from the company
There are 3 big use cases sitting in front of us, scale up, scale out and scale across. Arista’s participation to date has largely been in scale out. So we’ve got 2 major use cases in addition to augmenting this…
…Arista has been looked at as a very important and relevant participant, especially right now in the scale out and scale across. We will participate in the scale up. It will take a little longer. Today, it is largely a set of proprietary technologies like NVLink or PCIe, and I think that will happen more in ’27…
…As we go to significant scale up volume, we expect more margin and economic capability coming together. In other words, the volume of these things will be larger, the pressure on margins will be greater. So — but we will carefully have a mix of scale up, scale out and scale across to not affect the overall margins, but definitely take our fair share in that…
…What I think the evolution of the blue box will be, I think it will be more significant in the scale up use cases where there’s a higher dependency on the strength of our hardware and our NetDI capability and a lower requirement for software.
Arista Networks’ management is seeing a sea-change happening in back-end networks where the use of Infiniband previously is now switching to the company’s Ethernet solutions; management is seeing back-end and front-end networking converging more and more; management is seeing that Arista Networks is the only networking company outside of China that is successfully selling both front-end and back-end networking; management thinks the convergence of front-end and back-end networking is really advantageous for Arista Networks
I think a year or maybe even 2 years ago, Meta, I may have told you this, we were literally outside looking in at all these back-end networks that were largely being constructed by — with InfiniBand. We’ve seen a sea change, particularly this year, where obviously, more and more times we’re being invited to construct their 800 gig, last year was more 400 gig. And I think next year will be a combination of [ 800 gig and 1.60 terabits ] on the back end. The back end is putting pressure on the front end, which is why it’s getting more and more difficult for us to say, okay, what’s the back-end number that natively connects to GPUs and what is the front end. But we know of concrete cases in our cloud titans, where not only is it putting pressure on the AI number, but they’re having to go and upgrade their cloud infrastructure to deal with it. That part is happening in a small sort of way, but what’s happening in a big sort of way is the back and front are coalescing and converging more. And it’s really becoming hard to tell, and it’s probably six of one half a dozen of the other…
…We’re seeing that Arista, I think, is the only successful vendor outside of China selling both front end and back end. And this is where our engineering alignment is so important because we can offer the customer a consistent solution across their entire infrastructure. I think this is a unique differentiator that will really help us succeed as these networks become more and more mainstream…
…In terms of the front end and back-end converging, this is truly advantageous to us because the front end requires a massive number of features. It’s incredibly mission-critical and supports a whole variety of applications, not just the straightforward but demanding communication patterns of the AI back end. So we see that our ability to tackle both of them effectively is a significant source of strength and a real differentiator and something that’s not easy for competitors to replicate. If you look at NVIDIA, for example, the sales volume is small in the front end and Cisco is small in the back end. And so I think we’ll see that kind of convergence being beneficial to us.
Arista Networks is doing well in both disaggregated scheduled fabrics and nonscheduled fabrics; management has no preference over one or the other and is happy to support whichever is best suited for customers’ needs
[Question] How should we think about your market opportunity between disaggregated scheduled fabrics versus nonscheduled fabrics, which appear to be used in the largest AI accelerator clusters at one of your largest customers?
[Answer] we’re not religious. We jointly developed the DSF architecture with one of our leading cloud titans, Meta. And we’ve been selling the nonscheduled fabric for a very long time. So we’ve never been religious about this. And both are doing very, very well at our cloud titans and specifically the one we co-developed with…
…We’ve had both architectures in massive production scale for, I think, 15 years now. And we’ll continue to offer this range of choice to our customers, offering them their choice between the highest value fabric with deep buffers, no hotspots, congestion-free, loss-free or an unscheduled fabric, which is maybe lower cost, but also can be more difficult to operate. And they both run the same software. So it gives the customer a range of options and a consistent operating model.
In the earlier days of the AI data center buildout, there were 2 of the larger neoclouds that did not even consider Arista Networks’ networking solutions because they wanted to go with NVIDIA’s GPU & networking bundle, but now, management is increasingly seeing more neoclouds wanting to work with Arista Networks
[Question] You mentioned neocloud is an area where you’re getting more momentum. I think you guys actually said at the Analyst Day as well. I’m just curious like what are you seeing with that customer set? I guess, from my perspective I’ve historically kind of thought of that customer as being more focused on the bundle, which isn’t necessarily your game, but it sounds like you’re maybe talking a bit more positively.
[Answer] In the beginning, we were looking at them bundling. I can think of 2 examples where we weren’t even invited to the party because, if you want my GPU, you’ve got to get the network from me, so we weren’t there. But leaving those 2 aside, and even those 2 might get open-minded over time, there are many more neoclouds worldwide coming up that are really looking for Arista’s help, not only on the product, but on the network design, on the software capability. They just don’t have the staff and expertise to do everything themselves, and they would rather let us satisfy their network needs. So we are taking down many neoclouds and smaller enterprises, admittedly with smaller numbers of GPU clusters as well.
Arista Networks’ management sees power as being a really important asset in AI data center buildouts
But if they start with 1,000 to a few thousand, then we’re hopeful they’ll grow because the one advantage they seem to all have is colo space and power, which, as you know, is a very prestigious asset going forward.
AI data centers are much larger than the traditional data centers that used to be Arista Networks’ bread and butter; companies’ attention is all on new buildouts for AI, and not on the refreshment of existing CPU-based data centers into AI data centers
[Question] We’ve seen a lot of the deals with the hyperscalers or the AI model companies with new data center build-outs, probably not a level since we’ve seen with the cloud build-out. So I’m just curious, is there a way to think about Arista’s opportunity with new network builds versus refreshing or upgrading existing networks?
[Answer] That’s exactly the way to think about it because in the past, with the cloud, we rarely got to talk about gigawatts and beyond. So much of them were multi-megawatts. So these are newly constructed AI build-outs as opposed to the traditional CPU or storage-driven cloud build-outs. Of course, they will have refresh too. But frankly, they’re not getting the attention. All the attention is going to the new build-outs for AI. So that’s the right way to look at it.
Coupang (NYSE: CPNG)
Coupang’s management is focused on building Coupang’s internal AI computing infrastructure; management is running small tests on opening the infrastructure to 3rd-party usage, but there are no concrete plans to do so at the moment; management’s focus with AI is to generate practical savings for Coupang; management is seeing AI deliver tangible benefits across Coupang’s operations, such as in demand forecasting, automating fulfillment processes, optimizing delivery routes, and more; management is confident that AI will deliver significant savings for Coupang; management also sees AI as an opportunity to improve Coupang’s service quality and customer satisfaction
We are focused on building our own internal AI computing infrastructure to support our operations and improve performance and cost efficiencies. We have some small efforts to test and learn on making parts of that technology available externally. But we’re not at the stage of having or discussing any real customer demand or capital plans there. I think in all that we do, we’ll focus on practical applications, practical savings for the company primarily, and remain disciplined in how we allocate resources…
…AI has always been very central to operations, and that’s only becoming more true. AI is delivering tangible benefits across our operations, including in areas that relate to demand forecasting, automating fulfillment processes, optimizing delivery routes, among many other applications. These advances are helping us reduce waste, improve productivity and enhance the customer experience. We’re confident that AI will deliver significant savings and improve our P&L over time. And we have many efforts underway that we expect to bear fruit along those lines…
…AI is also more than just about efficiency. It provides an exciting opportunity to raise the bar for service quality and customer satisfaction. And we’re just as eager to expand our investment and experimentation cycles on that front.
Datadog (NASDAQ: DDOG)
Datadog experienced strong revenue growth in AI native customers in 2025 Q3; management saw an acceleration of growth in the AI cohort in 2025 Q3 when excluding the largest customer (likely to be OpenAI); management is seeing AI native customers broaden in number and size; Datadog has more than 500 AI native companies, of which 100 are spending more than $100,000 annually (was 80 in 2025 Q2), and 15 are spending more than $1 million (was 12 in 2025 Q2); management sees the activity of the AI native customers primarily as an indication of what’s to come as companies of every size and industry incorporate AI; AI native customers accounted for 12% of Datadog’s revenue in 2025 Q3 (was 11% in 2025 Q2); management thinks the percentage of Datadog’s revenue from AI native customers will be a less relevant metric over time as AI usage in production broadens to non-AI natives; Datadog’s larger AI native customers encompass a fairly broad group of AI companies
We also experienced strong revenue growth for our AI native customers and a broadening contribution to growth among those customers. There, too, we saw an acceleration of growth in our AI cohort in Q3 when excluding our largest customer…
…We continue to help AI native customers big and small to grow and scale their businesses. And we continue to see this group broaden in number and size with more than 500 AI native companies in this group, 100 of which are spending more than $100,000 annually with Datadog and more than 15 of which are spending more than $1 million annually with us. While we know there’s a lot of attention on this cohort, we primarily see it as an indication of what’s to come as companies of every size and every single industry incorporate AI into their cloud applications…
…In Q3, this group represented 12% of our revenue, up from 11% last quarter and about 6% in the year ago quarter. I will note that over time, we think this metric will become less relevant as AI usage in production broadens beyond this group of customers…
…[Question] On the AI side, and I don’t want to talk about the customer, but more the other ones, like 15 customers over 1 million. That’s like a big number and 100 over 100,000. How do we have to think about the nature of those?
[Answer] It’s actually fairly broad. So there are model vendors: models that can be language models, that can be video, it can be sound generation, it can be all of the various parts of the stack you see as independent companies. There’s quite a few companies that do that work on the coding side. So coding assistants and vibe coders and everything in that range. Some of these are very new companies. Some of these are not very new companies; some of these started 5, 7, 8 years ago and were not necessarily AI native from day 1, but very quickly pivoted to AI, which gives them the growth they see today. So we see a little bit of that. We have companies that are other parts of the stack in AI on, say, the server side, the other components of the infrastructure. And we have other companies that are purely applications built with AI. So we have a bit of everything in there.
Datadog’s management is seeing high customer interest in Datadog’s Bits AI agents; the Bits AI SRE agent already has thousands of customers in preview-access; management is getting very enthusiastic feedback from customers on the time and cost savings Bits AI is delivering; a RUM (Real User Monitoring) product user has used Bits AI SRE Agent to significantly improve mean time to resolution; management is currently unsure if Bits AI will have a bigger impact on Datadog’s business from direct monetisation, or indirect monetisation; management thinks Bits AI has been a differentiator for Datadog; management is improving Bits AI in a very aggressive manner; Bits AI has helped Datadog land some large Cloud SIEM (security information and event management) deals
We are seeing high customer interest in our Bits AI agents, which we announced at our DASH user conference in June. We have now onboarded thousands of customers for preview access to the Bits AI SRE agent. And as we prepare for general availability, we are getting very enthusiastic feedback on the time and cost savings enabled by Bits AI. As a RUM user recently told us, with Bits AI SRE being on call 24/7 for us, mean time to resolution for our services has improved significantly. For most cases, the investigation is already taken care of well before our engineers sit down and open their laptops to assess the issue. And this is not an isolated comment. We see the potential here for our agents to radically transform observability and operations…
…In terms of the impact for next year, on the packaging side, I’m not completely sure yet whether the biggest impact will be seen from what we charge Bits AI itself or for the rest of the platform, that it gets benefits from the differentiation of Bits AI…
…But what we can tell is this is differentiating, this is good. It works significantly better than anything else we’ve seen or heard of in the market, and we are doubling down on it. We have many, many teams now working on deepening Bits AI SRE, making sure it goes further into the resolution, so it doesn’t just point to the issue but fixes the code, all these kinds of things; we’re working hard on that. We’re also working on breadth, making sure that we train it on many more types of data, many more types of sources, sometimes even observing systems that are not Datadog, so we can cut across to other systems our customers are using. So we are very, very aggressively developing Bits AI SRE…
…The Bits AI SRE agent really has a wow factor for customers. So what works really well, and we’ve seen that a number of times, is: we set it up for them, it’s running on their alerts, and they go through an outage and they still go through the motions. So they still set up a bridge and they have 20 people and they spend 2 hours, and in the end, they have an idea what went wrong. And then they go to Datadog and they see, oh, there’s an investigation that had run. And 3 minutes into the outage, it got to the same conclusion that we got 2 hours later with 20 people on the call. And that’s completely eye-opening for customers when they see this. And so that’s why we get many quotes about it…
…That’s what helped us win some large land deals for our Cloud SIEM products because the combination of the SIEM that runs extremely efficiently on top of observability data that runs very efficiently on top of Flex Log, but also saves an immense amount of time by getting 90% of the issues out of the way with automated investigation that’s extremely attractive to customers.
Datadog’s management recently launched LLM Experiments and Playgrounds for general availability, so companies can rapidly iterate on LLM applications and AI agents; management also recently launched LLM-as-a-judge evaluations for general availability for customers to assess their AI application’s quality and safety; the number of LLM spans customers are sending to Datadog has quadrupled in the past few months; management is seeing a lot of interest in Datadog MCP servers; the Datadog MCP server bridges Datadog and AI agents from 3rd parties; preview customers of Datadog MCP servers are using real-time production data for troubleshooting, root cause analysis and automation in AI agents; management sees MCP as a way to cement Datadog into customers’ workflows; management continues to see customer interest grow for next-gen AI observability; 5,000 customers are sending AI data to one or more of Datadog’s AI integrations; Datadog’s foundational model for time series forecasting, TOTO OpenWave, is one of the top downloads on Hugging Face over the past few months; Datadog currently has products getting into market for GPU monitoring, but those are not generating any significant revenue; all of the details on AI-related revenues management has shared are not for GPU monitoring
In LLM observability, we recently launched LLM Experiments and Playgrounds for general availability, helping teams to rapidly iterate on LLM applications and AI agents. We also launched custom LLM-as-a-judge evaluations for general availability, which lets customers write evaluation prompts to assess application quality and safety. As an illustration of growth and adoption, in the past few months, the number of LLM spans customers are sending to Datadog has more than quadrupled.
We are seeing a lot of interest in the Datadog MCP servers. Our MCP server acts as a bridge between Datadog and AI agents, such as Codex by OpenAI, Claude by Anthropic, Cursor, GitHub Copilot, Goose by Block and many more. Our preview customers are using real-time production data context to drive troubleshooting, root cause analysis and automation in these agents. One user told us, “The Datadog MCP server is a great tool. It enables me to get the last 5 of my app and follow the spans and traces all the way to the root cause. I have never been more hooked on Datadog.” So we see MCP adoption as a great way to cement Datadog even further into our customers’ workflows…
…We continue to see rising customer interest for next-gen AI observability with over 5,000 customers sending us AI data to one or more of our AI integrations…
…A shout out to our AI research team for the amazing work they have published. Our TOTO OpenWave time-series forecasting model has been one of the top downloads on Hugging Face over the past few months, and that is across all categories. It is very impactful as, among other things, the high quality of this work allows us to attract world-class AI researchers and engineers…
…We have products that are getting into the market now for GPU monitoring. But these don’t generate any significant revenue yet. So all the revenues we’ve shared, like the acceleration, et cetera, that’s not related to us capitalizing more on GPUs, that’s a future opportunity.
Example of a 7-figure expansion deal with a heavy equipment company; the heavy equipment company will replace its open source log solution with Datadog’s products; the heavy equipment company also plans to adopt Datadog’s LLM Observability
We signed a 7-figure annualized expansion with a Fortune 500 heavy equipment company. With this expansion, this customer will replace its open source log solution with Datadog log management and Flex logs. They plan to adopt LLM Observability and their IT team is using cloud cost management to improve cost visibility and governance.
Example of a 9-figure expansion deal with a leading AI company; the AI company has expanded its usage of multiple Datadog products and has committed to an early renewal with higher commitments to secure better terms; the AI company is Datadog’s largest AI native customer (and is likely OpenAI)
We signed a 9-figure annualized expansion with a leading AI company. This company has been a long-time Datadog customer and has expanded their usage of multiple products, securing better economics for a higher commitment with an early renewal…
…We extended the contract of our largest AI native customer.
Datadog’s management continues to believe that digital transformation, cloud migration, and AI adoption are long-term growth drivers of Datadog’s business; management sees the market opportunity in cloud and AI growing rapidly into trillions of dollars
There is no change to our overall view that digital transformation and cloud migration are long-term secular growth drivers of our business. Meanwhile, we are advancing rapidly in AI, where we are incredibly excited about our opportunities. We’re building a comprehensive set of AI Observability products to help our customers tackle the higher complexity that comes with these technologies. And we are building AI into Datadog…
…The market opportunity in cloud and AI is expected to grow rapidly into the trillions of dollars and companies of every size and industry are looking to adopt AI to deliver value to their customers and drive positive business outcome. So we’re moving fast to help our customers develop, deploy and grow into the cloud and into the AI world.
Datadog’s management’s current guidance for 2025 is significantly higher than the guidance given in the 2024 Q4 earnings call, and the biggest surprise has been the adoption of AI growing faster than expected
[Question] If we go back to the beginning of the year, Datadog was expecting 19% revenue growth. It looks like you’re tracking to something over 26% growth now, and that’s just the high end of your guidance. So I guess my question is, what surprised you the most this year?
[Answer] I think the biggest surprise for us has been that AI adoption has grown faster than we thought it would at the beginning of the year. We’ve seen that across our AI cohort. We’ve also seen some of our new products, and the changes we’re making on the go-to-market side, click perhaps earlier than we would have thought otherwise. So all in all, we saw the leading part of the business, with AI, grow faster, and the slower-growing, more traditional part of the business also accelerate, and that gets us where we are today.
Datadog’s management thinks that customers will eventually want agentic monitoring capabilities in a unified platform for observability because (1) it’s not practical for customers to manage so many integrations that each have their own management control and observability control, and (2) the AI parts and non-AI parts cannot be separated
[Question] When you look at some of the independent software vendors that are releasing Agentic solutions, Agentic portfolios. A number of them are including observability as part of their sort of value proposition. Is there any work you think Datadog has to do to sort of infiltrate that market or make sure that customers look to Datadog as that Agentic monitoring capability as some of these independent software vendors try to bundle in observability into their solutions.
[Answer] There’s absolutely no doubt to us that customers will eventually want a unified platform for observability for all of this. There’s 2 parts to that. One is, historically, every single piece of software we integrate with, whether that’s SaaS or things that customers run themselves, also has its own management control and observability control. But you’re not going to log into [ 70 ] of those, or in the case of the customers we mentioned, the 60 integrations for the smaller customers and 150 integrations for the larger ones. It’s not practical to actually go and manage that separately. So we think all of that belongs in a central place, and that’s the historical trend we’ve seen. We also think that you can’t separate the AI parts from the non-AI parts of the business. So you’re not going to look at your agents separately from your web hosting and your database and everything else you have in your stack. So all of that in the end will be attached to observability.
Datadog’s management thinks AI technology allows Datadog to build capabilities into the On-Call product that it otherwise could not; management thinks the future of On-Call, when infused with AI in areas such as incident prediction, is very exciting
We entered the field with On-Call because we wanted to own the end-to-end incident resolution. Before that, we were detecting the incidents and sending the alerts, and then we were pretty much out of the picture where the resolution happened after that. Customers were spending their time in Datadog to diagnose and understand what was going on. So we wanted to own the full cycle.
And we thought that with AI, in particular, we’d have the ability to do things, if we own the whole cycle, that we couldn’t do otherwise. So what you see right now is, I mean, this resonates with customers, they’re adopting the product. We’ve mentioned some exciting customers, say, [ one ] with 5,000 seats for On-Call, which is very exciting. But in the future, there are many more things we can do and are working on for that product.
If we both detect the incident and notify, we can do things such as even predicting the incident and notifying early, or rerouting early, or telling people before the incident actually takes place how they can potentially fix it. So these are all things we’re working on. I mean, look, if you look at the various product announcements we’ve made, whether that’s Bits AI SRE or the time series forecasting model we have released, when you assemble all that, you get to a very, very interesting picture of what we can do in the future. So we’re excited by that.
Nu Holdings (NYSE: NU)
Nu Holdings’ management’s vision for Nu Holdings is to be AI-first, where foundational models are deeply integrated into the company’s operations; management thinks Nu Holdings is uniquely positioned to be a leader in the use of AI in financial services, ahead of incumbent banks and regional fintechs; management has developed Nuformer over the past 12-15 months; Nuformer is Nu Holdings’ proprietary approach for building large generalizable models that are based on principles similar to those behind leading LLMs (large language models); Nuformer has 330 million parameters and was trained on approximately 600 billion tokens, which is already a leading scale of data for the financial services industry, but Nu Holdings’ full data set is much larger; management believes Nu Holdings’ full data set gives Nuformer a unique edge in improving its capabilities; the adoption of foundational models has delivered an improvement 3x higher than what’s typically observed in successful machine learning upgrades for credit models; the adoption of foundational models has helped Nu Holdings meaningfully increase credit limits for eligible customers while maintaining the same level of risk; management is scaling the use of foundational models to Mexico and every other part of Nu Holdings’ business; management thinks that embedding AI into Nu Holdings is a once-in-a-lifetime opportunity to further differentiate the company from traditional banks
Our vision is to become AI first, which means integrating foundation models deeply into our operations to drive an AI-native interface to banking, while creating meaningful benefits for both our customers and our business…
…We believe Nubank is uniquely positioned to become AI first and a leader in the use of AI in financial services globally, and we’re already starting to see the first breakthroughs. Since our early days, we’ve known that technology and data will be our strongest competitive advantage, being cloud native and built entirely on modern architecture enables us to simulate, experiment, train and deploy foundation models at scale. Coupled with our proven ability to attract world-class talent, this puts us ahead of incumbent banks and regional fintech competitors and places us in a unique position globally.
Over the past 12 to 15 months, we developed Nuformer, our proprietary approach for building large generalizable models based on advanced transformer architectures and self-supervised learning principles similar to those powering world-class LLMs. These models provide a deeper understanding of customer behaviors and can be deployed across our critical risk and personalization engines. To reach this level of performance, the first generation of our Nuformer model was built with 330 million parameters and trained on approximately 600 billion tokens, an unprecedented scale of data by financial industry standards. That data represents only a fraction of our full data set, which spans trillions of tokens and reflects the vast scale and diversity of Nubank’s platform. Our business model with principality at its core generates a deep repository of high-quality transactional and behavioral data, giving us a distinctive edge by enabling Nuformer to learn from richer context and continuously strengthening its predictive power.
Historically, gains in credit performance have come from 4 main fronts: incorporating more and better data sources into models; expanding training samples or reducing bias within them; optimizing positive frameworks, including the use of complementary models that evaluate different dimensions of credit risk; and finally, refining modeling techniques from definition of targets to model architecture and feature engineering. The adoption of foundation models represents a radical expansion of this last frontier. It brings a research-driven approach that moves the needle through advances in model architecture and training processes, enabling rapid and continuous improvement as AI researchers push the boundaries of what’s possible. When we applied this approach, the models were built to deliver an average improvement about 3x higher than what’s typically observed in successful machine learning model upgrades. Translating this into business outcomes, our initial models enabled a major upgrade to credit card limit policies in Brazil, allowing us to meaningfully increase limits for eligible customers while maintaining the same overall risk appetite. This successful breakthrough within an already robust underwriting model, like credit card in Brazil, underscores the significant potential of these advanced approaches. We’re now focused on scaling this innovation beyond Brazil, already in motion in Mexico, and extending it across every part of Nubank from personalization and cross-sell to fraud and collections, further reinforcing both the strength of our models and our ability to execute at scale.
That said, we’re still just scratching the surface. As always, at Nubank, it’s still day 1, but we believe that embedding AI into our business represents a once-in-a-lifetime opportunity to further differentiate Nubank from traditional banks.
Nu Holdings’ management sees AI improving Nu Holdings’ understanding of each customer; management sees AI changing the way users interact with the company; management sees significant opportunity to use agentic workflows across Nu Holdings’ products
For our customers, AI is enhancing our understanding of each individual and their financial needs, allowing us to deliver personalized recommendations, contextual offers and products and proactive insights at the right moment. It will also transform the way people interact with Nubank, be it through a simpler and seamless app or through a number of additional channels embedding conversational user interfaces. We think there is a significant opportunity to include Agentic workflows across most products and services, improving customer experiences across the board.
Nu Holdings’ management thinks AI is helping Nu Holdings improve risk management and scale efficiently; AI is helping Nu Holdings reduce credit losses and fraud losses; AI is helping Nu Holdings improve productivity
For our business, AI is strengthening how we manage risk and scale efficiently. It is helping us to design safer and more precise financial solutions, reducing credit and fraud losses and enabling tailored collection strategies that drive better recoveries. At the same time, it is enhancing productivity across the company from leaner operations to faster development cycles and higher engineering throughput.
Paycom Software (NYSE: PAYC)
Paycom’s management has now enabled IWant, Paycom’s new command-driven AI product, across the entire client base; IWant has successfully responded to millions of queries from employees, managers and executives; management is seeing a dramatic uptick in usage of IWant, especially among new users (and this includes C-suites); management is particularly encouraged by the engagement with IWant among C-suites; IWant is hosted by Paycom and draws from a single database, which minimises errors; management sees IWant as a new way of accessing and navigating Paycom’s software ecosystem; IWant is changing how new and existing Paycom users are accessing Paycom’s software ecosystem and deriving value; IWant gives C-suites access to information about their companies that they previously did not have directly; management is currently seeing sticky user behaviour with IWant, but it’s still early for IWant since it was launched in July 2025
We also executed the launch of our award-winning and industry-first command-driven AI product, IWant. Now enabled across our entire client base, IWant is transforming how our clients and their employees engage with their HR and payroll data.
IWant has already successfully responded to millions of queries from employees, managers and executives, extending the power of our full solution automation. We are seeing a dramatic uptick in usage, especially among new users, which include the C-suite and newly onboarded employees of our clients. The intuitive nature of IWant means new employees no longer need training on the system and are able to utilize the full solution upon hire. I’m particularly encouraged by the engagement we are seeing among the C-suite. Traditionally, executives have not been daily users of HCM solutions. With IWant, thousands of C-suite executives are already pulling data and insights directly from the Paycom system and the feedback has been phenomenal…
…IWant hosted by Paycom only draws from Paycom’s single database, which eliminates conflicts created by inconsistent or duplicative external data sets, significantly improving data integrity and the quality of the user experience…
…If you’re a new user being added on to our system, meaning you’re a new employee, meaning you’re just now gaining access to this system, it’s your predominant way to use our software. And so as we look into the future, I would expect we would see more and more people utilizing IWant as a way to access and navigate through our system in order to make changes and receive information than what you would — those that are actually navigating through the traditional way…
…With IWant, the more of our product that you have that you’re utilizing, the more access to the information that you have. So it becomes important in that as well as with IWant, you’re eliminating all navigation as well. So you don’t really need training on the system. Most new employees, they would come into our system and they would have some level of training on how to use the system. With IWant, we’re just not seeing that with new employees coming on to the system. You just tell it what you want, and it takes it there. And so again, sometimes usage patterns are hard to change. And I don’t think someone should change their usage pattern unless there’s an opportunity to be more efficient or get something — get there quicker. And we’re seeing that with new people that are onboarded in the system. And then we’ve also seen that with traditional users that may not have been achieving full value for all the modules that they have…
…I, as a CEO, I’m not set up on our benefit system to go run benefit information. I’m not set up on our applicant tracking or talent acquisition system. I’m not set up on our payroll to run all the payroll stuff or HR or any of it, expenses, any of it. With IWant, I can go in and I get access to everything. I don’t need to know how to use it. I don’t need to know how to do anything. I just tell it the information that I want…
…[Question] We see some AI systems out there that users may use initially, but then go back to how they’ve operated before. Can you share a little bit about the ramp and consistency of usage you’re seeing so far?
[Answer] We’re not seeing people use it a couple of times and then stop using it. I will say that when you looked at it in the early days, people didn’t know how to use it. If you ask IWant where the closest pizza restaurant is to you, it’s not going to be real successful in answering that question. And so people had to kind of learn how to use it to their benefit. And it’s been a short period of time. Again, we’ve had IWant out since July. And every client we have has it and all their employees do now.
Paycom’s management thinks that command-driven functionality will be the future for all software
I’m confident that command-driven functionality is the future for all software.
Paycom’s management has significantly expanded Paycom’s data center capabilities to support the company’s push for automation and future AI developments; management frontloaded $100 million in capex in 2025 Q3 to match the IWant rollout; the $100 million capex provides Paycom with multiyear capacity to support its AI initiatives; management has to do extensive optimisation to run IWant on Paycom’s own infrastructure; management thinks it’s more expensive to rent AI compute capacity than to build Paycom’s own capacity, especially since Paycom has been operating its own data centers for the last 27 years
To facilitate the automation experience, including IWant and future AI developments in the pipeline, we significantly expanded our data center capabilities, spending roughly $100 million of AI-focused CapEx on our Phoenix and Oklahoma City data centers. We front-loaded this CapEx to match the timing of our IWant rollout in Q3…
…More specifically, we invested approximately $100 million into our data centers, and that spend is now largely complete. This investment provides us a multiyear capacity runway to support our AI initiatives…
…[Question] Are you guys doing anything to optimize the usage of GPUs to better handle the millions of queries you’re already seeing, whether it be in the underlying LLM or just teaching users what they can and can’t do?
[Answer] There’s a lot you have to do to optimize. It matters how many times you’re hitting it. It matters how you’re filtering through. We use these things to also look at nonresponse rates and everything else. So there’s a lot that we go through to be able to analyze. And this is a daily analyzation of what’s going on within our product. So I don’t want to describe everything that we’re doing. It does matter though, how you develop something to how much capacity of the GPU you’re going to actually utilize or need…
…We also looked at utilizing public cloud type data centers, if you will, to be able to host for us and utilizing their GPUs. And with where we see ourselves going in the future and what the costs were associated with just being able to handle our current load, initial load for IWant, we felt it better for us to go ahead and just set up and buy our own plus that way we have control over it, and it’s operating just as all the rest of our business has for the last 27 years of operating our own data centers. So it’s really worked for us.
Paycom’s management does not see the need for major capex again in the next few years after the $100 million capex for AI infrastructure
We did have to make a spend in order to have that capacity for both what we’re doing now and into the future. So we’re in this business now. I don’t expect that we would have any level even close to this type of spend over the next couple of years…
…I was just saying I don’t know of any major CapEx opportunities for next year or even the year after from a CapEx perspective…
Paycom’s management is not seeing Paycom’s competitors coming up with AI-powered solutions
To the extent our competitors do have AI, we’re not running into it when we talk to their clients. And I don’t know how they’re paying for it because when we looked into it, it was pretty expensive to rent.
Sea Ltd (NYSE: SE)
Sea’s management is seeing Shopee’s AI efforts contribute meaningful monetisation gains in 2025 Q3; Shopee’s AI efforts include (1) smarter search, (2) better recommendations, (3) more personalized content, (4) enhanced product discovery for shoppers, and (5) generative AI tools for sellers to make their product listings more appealing; Shopee’s AI efforts have led to the following in 2025 Q3, (1) a 10% year-on-year increase in purchase conversion rate, (2) a 12% year-on-year increase in buyer purchase frequency, and (3) a 15% year-on-year increase in average monthly active buyers; Shopee will not be building foundational large language models or data centers like what Big Tech is doing; management wants Shopee to utilise AI technologies developed by Big Tech, and focus on applications; the majority of Shopee’s customer service is now handled by AI and customer-satisfaction is very high
Our AI efforts have already begun to bear fruit, contributing meaningfully to our monetization gains in the third quarter. Smarter search, better recommendations and more personalized content have made Shopee easier and more enjoyable to shop on. We have also used AI to enhance product discovery beyond search, helping buyers find relevant and interesting items even when they arrive without a specific purchase in mind. We empowered sellers with AI tools, enabling them to generate image, videos, text descriptions and virtual showrooms to make their product listings more appealing. These initiatives have increased buyer engagement, improving our purchase conversion rate by 10% year-on-year in the third quarter. Taken together, all these efforts have resonated with our customers. Buyer purchase frequency across our markets continued to improve, going up a further 12% year-on-year in the third quarter. Average monthly active buyers also increased 15% year-on-year in the third quarter…
…We’re not going to develop — trying to make some fundamental large language model breakthrough. We’re not going to build data centers. I think for that part, we are very much open to work with all the big tech, who are kind of — we have a lot of admiration with respect to how much effort and how much they can do to continually have the breakthrough of the technology and make technology more powerful and more useful. What we are going to focus on more is applications and how that technology built in Silicon Valley or anywhere in the world transforms a consumer’s daily life, a small business in Indonesia, in Vietnam, in Brazil. So that will be especially what we are good at…
…Now the majority of our customer service is handled by AI, like a chatbot, and the satisfaction rate is very, very high.
Shopify (NASDAQ: SHOP)
Shopify’s management sees Shopify holding the data-advantage in the AI revolution in commerce; Shopify has structured data across billions of products, which helps its AI partners surface relevant products quickly
If AI is fueled by data, then Shopify has a clear advantage. We power millions of merchants and billions of transactions. That gives us access to a world of data across a spectrum of commerce. And we’re using that data to create better shopping experiences for both merchants and shoppers…
…We’ve structured data across billions of products so our partners can surface the most relevant items in seconds.
Shopify’s management is thinking of AI in 3 ways, (1) how AI helps merchants sell better, (2) how AI helps merchants operate better, and (3) how AI helps Shopify operate better
We think about the evolution of AI in 3 ways: how AI will help our merchants sell everywhere, how AI will help our merchants operate smarter, and how we, as a company, will use AI to build better.
Shopify’s management thinks agentic commerce will fundamentally change how consumers shop; Shopify has a number of tools for merchants to thrive with agentic commerce, namely, Catalog, Universal Cart, and Checkout Kit; management thinks agentic commerce has 3 layers, (1) discovery, (2) the purchasing experience, and (3) the post-purchase journey; management is building for a seamless and intuitive shopping experience across the 3 layers; leading AI players, including ChatGPT and Perplexity, are already using Shopify’s Catalog tool to power product discovery directly inside their chat interfaces; Universal Cart and Checkout Kit are powering in-chat shopping flows within ChatGPT and Microsoft Copilot; Shopify is building tools that help AI agents keep customers engaged and informed throughout the entire post-purchase experience; management believes that Shopify is helping its merchants be primed for success in agentic commerce; management has seen AI-driven traffic to Shopify stores grow 7x, and orders attributed to AI searches grow 11x, since January 2025; a recent survey of shoppers by Shopify showed 64% of shoppers are likely to use AI for their buying in BFCM (Black Friday – Cyber Monday); management thinks agentic commerce is still really early; Shopify’s AI agent tools were built only last year
AI is helping our merchants sell everywhere, what’s known as agentic commerce. Put simply, AI is able to fundamentally change how we shop, moving from search to conversation, helping all consumers purchase more efficiently. And that’s why we built the commerce for agents tools that we introduced on our last call, Catalog, Universal Cart and Checkout Kit. These tools make it easier for agents to shop across merchant stores on a buyer’s behalf…
…Agentic commerce is so much more than just the last click. Think about it in 3 layers: product discovery, purchasing experience and the post-purchase journey. Now if you’re only looking at the payment or checkout layer, you’re missing the bigger picture of what we’re building: a seamless and intuitive shopping experience end to end.
First, let’s talk discovery. We’ve structured data across billions of products so our partners can surface the most relevant items in seconds. It’s clear where this is going. Shopping is becoming more conversational, more personalized and much more efficient. And that’s why the leading AI partners are already using Catalog to power product discovery inside their experiences. I’m sure you all saw the announcement about our partnership with ChatGPT, which is a strategic play that we’re really excited about. But let me be clear, we’re also partnered with other leaders in conversational AI like Perplexity, and our goal is to power product discovery for all agents, making us the standard across the Internet…
…On purchasing experience. Once a shopper finds what they want, Universal Cart and Checkout Kit make add to cart and checkout seamless inside the conversation. ChatGPT, along with Microsoft Copilot have already partnered with us here to make in-chat shopping flows possible.
And finally, post purchase. We’re investing in tools that help agents keep customers engaged and informed, order status, return, support, reorder prompts, so the experience stays smooth and merchants build durable relationships with their customers…
…What all this should tell you is that our merchants are primed for success in the new world of agentic commerce…
…Since January, we’ve seen AI-driven traffic to Shopify stores up like 7x. And we’ve actually seen orders attributed to AI searches up like 11x since that. So the data is showing it’s already growing. And we actually just recently did a survey for — to consumers to better understand some BFCM trends, and something like 64% of shoppers told us they’re likely to use AI to some extent in their buying…
…It’s still obviously very, very early. But what we’re really trying to do is laying the rails for agentic commerce…
…We built AI agent tools last year, now we’re partnering with everyone that matters.
Shopify’s AI assistant for merchants, Sidekick, saw 750,000 shops using it for the first time in 2025 Q3; to-date, Sidekick has had almost 100 million conversations with merchants, with 8 million in October 2025 alone; merchants’ conversations with Sidekick can go 50-100 turns deep, covering a wide range of topics; Sidekick will get better over time; Sidekick was built 2 years ago, before there was hype about AI assistants
Sidekick, our on-platform intelligent assistant, is a prime example of that commitment. And frankly, the rate of adoption speaks for itself. In Q3 alone, over 750,000 shops used Sidekick for the first time. And to date, Sidekick has had almost 100 million conversations with merchants, with 8 million in October alone. And it’s quickly becoming the default way merchants get things done. Hundreds of thousands of merchants are running core parts of their business using Sidekick. In fact, conversations can go from 50 to 100 turns deep, covering everything from analytics and building new customer segments, to automating better SEO and so much more…
…At this scale, Sidekick will only get smarter and more powerful…
…We built Sidekick 2 years ago, well before any of the hype around that.
Shopify’s management is using AI to drive Shopify to build better products; Shopify has an internal tool known as Scout, which is a voice of the customer system that indexes hundreds of millions of merchant feedback items and makes them searchable; anyone in Shopify can use Scout to get grounded answers in seconds when similar requests would have taken weeks in the past; Shopify is developing other similar tools to Scout to make faster, better, decisions
The last thing I’ll touch on with AI is how we’re using it to build better products. For years, we’ve been honing our internal capabilities in the same way we’ve been empowering our merchants: shipping fast, measuring what matters and scaling what works using AI…
…We’re turning vast amounts of raw signal into ship products and features quickly and relentlessly…
…We have a tool affectionately known as Scout. Now Scout is an internal voice of the customer system that indexes hundreds of millions of merchant feedback items, making them searchable within our tools. Any PM, designer, engineer or, frankly, anyone at the company, including myself and Jeff, can ask a question and get grounded answers in seconds. That used to take weeks. Patterns emerge by market, vertical and merchant size, allowing us to write clear specs, prioritize better and ship with confidence. And Scout is just one of many tools we’re developing to turn our own signals, whether it’s support tickets, usage data, reviews, social interactions or even Sidekick prompts, into fast informed decisions.
Shopify Campaign has seen 9x year-on-year increase in budget commitments in 2025 Q3; Shopify Campaign has seen 4x year-on-year increase in merchant adoption of Shopify Campaign in 2025 Q3; management has delivered product-improvements to Shopify Campaign, including an AI-powered ranking improvement
We’ve seen 9x year-on-year increase in budget commitments from merchants this quarter for Campaigns. In fact, if you just look at Q3 2024 to Q3 2025, we’ve actually seen a 4x year-on-year growth in merchant adoption of Campaigns…
…On the product side, this thing keeps getting better and better. We introduced Gross Sales, which is this new default high-reach objective in campaigns. We just shipped an AI-powered ranking improvement, which is showing some really good early results in terms of performance gains.
Tencent (OTC: TCEHY)
Tencent’s investments in AI are benefiting its ad targeting, game engagement, coding, and gaming and video production activities; management is currently seeing AI efficiency gains in the form of growth in revenue and gross profit; management thinks that AI enables Tencent to build more, instead of reducing costs
Our strategic investments in AI are benefiting us in business areas such as ad targeting and game engagement, as well as efficiency enhancement areas such as coding and game and video production…
… And if you look at the benefit of AI, at this stage, a lot of the efficiency gains are more on the revenue side and the gross profit side. So you see pretty good growth in those items. But in terms of the cost item, I would say we have already done a pretty big organizational optimization a few years back. And the organization that we have is actually efficient, and AI adoption actually allows our team to do more, instead of to reduce cost, which I think is the case for some other companies you are probably comparing us with.
Tencent’s management is upgrading the team and architecture of Tencent’s Hunyuan foundation model; management believes that Hunyuan’s imaging and 3D generation models are industry-leading; management is hiring more top-notch research talent for Hunyuan; management believes that Hunyuan’s capabilities will improve, and that all the models in China are currently pretty similar
We are upgrading the team and architecture of our Hunyuan foundation model, whose imaging and 3D generation models are now industry-leading…
…In AI, we enhanced Hunyuan’s large language models’ complex reasoning capabilities, especially in coding, mathematics, and science. Our Hunyuan image generation model is ranked first globally among text-to-image models on LMArena. And our Hunyuan 3D model is the top-ranked generative model on Hugging Face…
…In terms of the Hunyuan team and the Hunyuan architecture, we are actually hiring more top-notch talent, especially in the research area, in order to complement our existing strong engineering team, and they are complementary to each other. And we have also been improving the overall Hunyuan architecture across different dimensions, such as improving the hardware and software infrastructure in order to support better data preparation, better pretraining of the model, as well as reinforcement learning across different knowledge domains at scale. So these are the improvements that we are making more specifically on the Hunyuan team as well as the Hunyuan architecture…
…And I would say we are actually happy with the progress we have made already. And if you wait a little bit for our next model, you can see meaningful improvement in terms of the Hunyuan capability. And I believe with the new improvements that we have been making, we’ll continue to pick up pace on the Hunyuan capability. And at this point in time, we actually do not believe that there is a decisively better model in China, as everybody is actually locked in a pretty close range, and different models may be better in different use cases as well. So we don’t believe we are really behind.
Tencent’s management thinks Weixin’s adoption will gain further traction as Hunyuan becomes more capable, Yuanbao becomes more widely used, and more agentic AI capabilities are introduced by Tencent; Tencent’s AI assistant Yuanbao has new features to serve Weixin users better, such as Yuanbao-generated content in Tencent News Feed; management wants to add more functionalities from Yuanbao into Weixin; management thinks that as Weixin users get exposed to Yuanbao’s capabilities through Weixin, they will become Yuanbao app users too; management is currently seeing a pretty good ramp in Yuanbao engagement; management’s blue-sky scenario for agentic AI is an AI agent that can help users perform a multitude of tasks within the Weixin ecosystem; Tencent is still very, very early in building a capable AI agent; management is also starting to work on vertical agentic capabilities
As Hunyuan’s capabilities continue to improve, as we invest in growing Yuanbao adoption, and as we develop agentic AI capabilities, we think Weixin will gain further traction…
…Now in terms of how Yuanbao and Weixin complement each other. I would point to the fact that Weixin has actually introduced a number of AI features based on Yuanbao’s capability…
…And we also enriched the Tencent News Feed in Weixin with Yuanbao-generated content and allowed a lot of users to use that as a way to explore more news content, related news content, as well as ask questions on the news content. And we’re actually adding more and we are planning to add more functionalities of Yuanbao into Weixin. So those functionalities actually, one, serve the Weixin users better; and two, actually help Yuanbao to gain a larger audience and more and more of these audiences find Yuanbao’s capability through Weixin and eventually become a Yuanbao app user…
…We actually have been also seeing quite a good ramp in terms of Yuanbao engagement. So I think you see both the model capability as well as our AI products keep on improving…
…I think the blue sky scenario is that eventually, Weixin will come up with an AI agent that can actually help the user do a lot of tasks within Weixin, leveraging AI. Because if you look at the ecosystem of Weixin, it has a very strong communications and social ecosystem, and it has a lot of data that allows the agent to understand the users’ feeds as well as their intentions and interests. It has a very strong content ecosystem in the form of official accounts and video accounts. It has the mini-program ecosystem, which essentially includes most of the use cases on the Internet. It has a commerce ecosystem, which allows people to buy stuff, and the payment ecosystem, which actually allows people to pay for it almost immediately. So that is almost the ideal assistant for users: it understands the users’ needs and can actually perform all the tasks within the ecosystem. So that’s the blue sky scenario.
Now I think, how do we get there, right? At this point in time, it’s actually a very early stage of development. Weixin is doing a number of things in parallel. For example, it’s introducing Yuanbao capabilities into Weixin so that we can test out a lot of the AI features on a stand-alone basis. It’s also enhancing search with AI so that we can serve the users’ search and information-collection as well as analysis needs more efficiently.
We are also starting to work on vertical agentic capabilities. And that’s something that we are working on. We have not launched it yet. But then very likely, we’ll be sort of working on a functionality one by one.
Tencent’s management has introduced AI Marketing Plus, Tencent’s automated ad campaign solution for targeting, bidding, placement, and ad creation; AI Marketing Plus helps improve advertisers’ return on marketing spend; increases in commercial query volume and click-through rates contributed to notable revenue growth in Weixin Search; management has increased the relevance of Weixin search ads through the upgrade of LLM capabilities; AI Marketing Plus helps advertisers reach inventories and user profiles automatically rather than manually, and SMBs are the most eager to adopt AI Marketing Plus; management is also seeing large enterprises being interested in AI Marketing Plus, in a similar manner to how enterprises are adopting Meta Platforms’ Advantage+ automated ad campaign platform; when AI Marketing Plus was initially released, large advertisers had some trust issues, but they started adopting it when they tested it and saw superior ROI (return on investment); the percentage of Tencent’s advertisers and the percentage of Tencent’s advertising spending that are going through AI Marketing Plus are steadily increasing
We introduced our automated ad campaign solution, AI Marketing Plus through which advertisers can automate targeting, bidding and placement, as well as optimize ad creation, improving their return on marketing investment…
…In terms of the AI Marketing Plus automated campaign solution. We believe the automated ad campaign solution benefits all the advertisers who deploy it by enabling them to automatically reach inventories, as well as user profiles, that are more performant than the inventories and user profiles they were manually targeting. You’re right to say that small- and medium-sized businesses are the first — or the most eager to adopt this kind of product because they have the least legacy process to replace, and that’s what we’re experiencing right now. But we’re also seeing bigger advertisers adopting AI Marketing Plus too, which parallels the experience of Meta’s Advantage+ automated ad solution overseas…
…In terms of the advertising revenue, roughly half of the growth, or about 10 points, was due to a higher CPM, which was contributed primarily by AI-supported ad tech as well as closed-loop benefits. And then the other half was due to increased impression volume, which reflects increased user engagement. And in terms of the commercial payment volume trends, there is a measured improvement…
…[Question] On the AI Marketing Plus product, any early data points on the performance and ROI for merchants?
[Answer] When you introduce this automated ad campaign system, the biggest sort of leap for the advertisers is allowing us as the platform operator to actually manage the bidding process on their behalf. And of course, there’s a degree of internal conservatism within the bigger advertisers as to whether to entrust the platform to manage the price or not. And typically, the larger advertisers will run the automated and the manual processes in parallel for a period of time and compare the ROI to verify whether the automated process is delivering more performance or not. And we’ve turned on that automated bidding tool relatively recently. But the early results are positive, those advertisers who are adopting the automated solution are enjoying superior returns. And therefore, the percentage of our advertisers and the percentage of our advertising spending that is going through AIM+ are steadily increasing.
Within the Fintech and Business services segment, Business Services revenue grew in the teens year-on-year in 2025 Q3, despite supply constraints with GPUs; management thinks Tencent’s current amounts of GPUs are sufficient for the company’s internal usage, although there are some shortages for external usage; Tencent’s cloud revenue would have grown more if not for GPU constraints, because management prioritises internal consumption of GPUs
Turning to Business Services. Despite supply chain constraints on sourcing GPUs, revenue grew at a teens rate year-on-year in the third quarter, benefiting from higher cloud services revenues and increased technology service fees generated from rising Mini Shop e-commerce transaction volumes. Revenue from our Cloud Storage and Data Management products, namely Cloud Object Storage, TCHouse, and VectorDB, grew notably year-on-year due to increased demand, including from leading automotive and Internet companies. And for WeCom, we launched an AI summarization feature to generate project recaps and provide advice based on users’ e-mails and conversations, to enhance project collaboration efficiency…
…In terms of Yuanbao adoption and also the CapEx spending at this point in time: we actually believe that there’s no insufficiency of GPUs for us at this moment. Our GPUs are actually sufficient for our internal use, and there is some limiting factor for external cloud revenue…
…In terms of the cloud business, I think we have finally been increasing our revenue this year, right? In the past few years, our revenue has not grown that much, but our gross profit has grown very significantly. And this year, we’re growing both the revenue as well as the gross profit, and the business is actually profitable. One constraint on cloud business growth is the availability of AI chips, because when AI chips are in short supply, we actually prioritize internal use as opposed to renting them out externally. Another way to say it is, if there were not an AI chip supply constraint, our cloud revenue should be growing more.
Tencent’s operating capex in 2025 Q3 was down 18% year-on-year because of supply changes; non-operating capex was down 59% year-on-year; free cash flow was largely stable year-on-year and up 36% sequentially; there is a difference between capex and cash-capex because of timing differences; management has lowered the capex target for 2025 from the previous level of a low-teens percentage of revenue; the lower capex target for 2025 is because of AI chip availability
Operating CapEx was RMB 12 billion, down 18% year-on-year, primarily due to supply changes. Nonoperating CapEx was RMB 1 billion, down 59% year-on-year, reflecting a higher base last year related to construction. Free cash flow was RMB [ 38.5 ] billion, largely stable year-on-year, as operating cash flow growth was offset by higher CapEx payments. On a quarter-on-quarter basis, free cash flow was up 36%…
…[Question] This quarter, CapEx was around RMB 13 billion, but the cash payment for CapEx was RMB 20 billion. So how should we interpret the difference between these 2 figures?
[Answer] In terms of CapEx, the difference reflects a timing gap between the accrual of server-related expenditure and the cash payment, which can cause temporary mismatches between the two. In particular, the credit period for us to pay server suppliers is usually 60 days…
…In terms of the CapEx for 2025, to share with you: in 2024, our total CapEx grew by 221% year-on-year and was about 12% of revenue. Previously, for 2025, we guided total CapEx as a percentage of revenue to be at low teens. The 2025 CapEx will be lower than our previously guided range, but the amount will be higher than that of 2024…
…[Question] The capex for 2025 will be lower than the previous guidance, but higher than the ’24 actual capex spending. If I got that right, does it reflect a change in AI chip availability, a change in investment strategy, or a change in your expectation of future token consumption?
[Answer] It’s not a change in terms of expectation of future token consumption. It is indeed a change in terms of AI chip availability.
The Trade Desk (NASDAQ: TTD)
Trade Desk’s management sees AI improving the effectiveness of the open Internet by allowing vastly superior price discovery
AI is accelerating the improved effectiveness of the open Internet. Of course, every significant AI innovation and AI product needs quality data. The most valuable data to an advertiser is their own conversion and customer data. We will win long term because we built a business where buyers can own their future, which requires them to own, protect and use their own data. AI is fast-tracking progress for companies that are eager to put their data to work and can leverage automation intelligently…
…AI is accelerating the path to the open Internet having vastly superior price discovery and fungibility. A world with better price discovery and better open Internet supply chains for the quality content will decrease the value of user-generated content destinations and other similar apps and sites that are full of ads and unsafe content.
Nearly all of Trade Desk’s clients have already tried Kokai, and 85% are using Kokai as the default; management thinks Kokai is a significant upgrade over Trade Desk’s previous platform, Solimar, when Solimar was already the most performant DSP (demand side platform) in the world; compared to Solimar, Kokai has delivered 26% better cost per acquisition, 58% better cost per unique reach, and 94% better click-through rate; Kokai has a distributed AI architecture where every function has a separate AI model; management thinks the distributed AI architecture allows Kokai to parallelize all AI efforts and enables checks and balances between disparate functions; Bayer used Kokai to advertise on Spotify and saw 15% growth in incremental reach; Specsavers used Kokai in the UK and saw a 43% reduction in cost of securing customer appointments, and a 50% reduction in conversion time; Danone used Kokai and achieved a 33% increase in conversion rates
Today, nearly all of our clients have tried Kokai with nearly 85% using Kokai as their default experience…
…Kokai is the best upgrade we have ever made to our product relative to all previous versions and certainly relative to Solimar. Campaigns that have switched to Kokai are seeing impressive results. Since its launch, Kokai has delivered on average 26% better cost per acquisition, 58% better cost per unique reach and a 94% better click-through rate compared to Solimar. These are incredible performance improvements on top of what was already considered the most performant DSP in the world.
Kokai has a number of features in it that are game changers for our clients and for the open Internet. We’ve used the industry’s most advanced AI to enhance our system with an architecture we call distributed AI. We break down every function and create separate AI models for each of them from valuing impressions to managing identity to choosing supply paths to predicting a price required to clear and to forecast the performance and reach of a campaign before a single dollar is even spent. This effort to distribute allows us to parallelize all AI efforts and enables checks and balances between these disparate functions…
…Bayer recently added Spotify to their omnichannel campaigns on Kokai and saw a 15% growth in their incremental reach…
…Specsavers in the U.K. saw a 43% reduction in the cost of securing customer appointments using Kokai while also cutting the conversion time by almost 50%. Danone saw conversion rates go up by 1/3 for their Actimel yogurt product, leveraging the retail data marketplace and omnichannel strengths of Kokai.
Trade Desk’s management has launched and grown several AI-powered products in 2025 that upgrade the supply chain so that advertisers get more bang for buck; OpenPath has grown by hundreds of percent in 2025 9M; OpenPath gives advertisers a clearer picture of the inventory they are buying, and gives publishers better sense of what advertisers are willing to pay; management has just launched OpenAds, an auction that Trade Desk developed and sometimes hosts as an option for publishers; Trade Desk enables other buyers or DSPs to use OpenAds to bid into a fair auction; Trade Desk is working to integrate OpenAds with more than 20 of the web’s biggest publishers; management expects OpenAds to dramatically improve the supply chains of mobile in-app ads and browser-based ads; Trade Desk has launched Pubdesk, which is improving the supply chain by publishing data for the sell side; the data includes what advertisers paid the supply chain and what signals advertisers value; Pubdesk was largely fueled by Sincera, which Trade Desk acquired earlier in 2025; Deal Desk helps advertisers better manage one-to-one deals; Deal Desk is powered by AI and can predict how a deal will perform relative to the open market; management thinks Deal Desk can replace outdated upfronts; deals on Deal Desk are performing 35% better than those running on Solimar; OpenPath connects directly into premium publisher auctions; Disney is using OpenPath because it wants its premium inventory to be correctly assessed and valued; management will open source key elements of OpenAds; Hearst used OpenPath and achieved a 4x improvement in ad fill rates and 23% revenue increase; SSPs (supply side platforms) are integrating with Deal Desk; Trade Desk’s use of AI in supply chain optimization is finding better paths to publishers with double-digit percentages of efficiency
It cannot be overstated how much AI has changed and will change our business and the open Internet. This year, we’ve launched and grown several products that are solely focused on substantially upgrading supply chains so that buyers get more for their money.
We have grown OpenPath by many hundreds of percentage points this year, which means our clients are getting clear views of exactly what they’re buying and publishers have a clearer sense of what advertisers are willing to pay when they describe their inventory in a transparent and accurate way.
OpenAds is an auction that we develop and sometimes host as an option for publishers. We then bid into a fair auction and even enable other buyers or DSPs to do the same thing, too. The market needs a healthy auction and some sell-side players have continually weakened the integrity of the auction. So we’re developing an open source option that raises that bar. We just launched it, and we’re already working to integrate with more than 20 of the biggest publishers on the web. We expect this to dramatically improve the supply chains of mobile in-app ads and browser-based ads, which, of course, can use the help in an AI scraping world.
Pubdesk is improving the supply chain by publishing data for the sell side. Resellers, sellers and publishers can log into the platform and see what we paid the supply chain, what signals we value and adjust their sites and inventory to get more. This is largely fueled by the Sincera team and data that we acquired earlier in the year.
Deal Desk is a better way to manage one-to-one deals. Not only does it facilitate the buy, but using AI, it predicts how a deal will perform relative to the open market. This product enables advertisers to do deals, but also gives them unprecedented data and tools to avoid bad deals. It is important to note that this product will be foundational to a healthy forward market that can replace the outdated upfronts. So far, deals on Deal Desk are performing about 35% better than those running on Solimar, which is more similar to the way they run everywhere else in the programmatic ecosystem…
…OpenPath connects directly into many of these premium publisher auctions and companies like Disney do this because they want to ensure that their premium inventory can be correctly assessed and valued…
…We will open source key elements of OpenAds, and we will expose its mechanics for review. Just like recent innovations such as UID2 or OpenPath or Ventura, our intent here is to incentivize a more transparent, competitive marketplace for all…
…Publishers like Hearst are seeing a 4x improvement in ad fill rates and 23% revenue increase when integrating OpenPath…
…On the supply side, SSPs such as PubMatic are integrating with Deal Desk using the new price discovery provisioning API, which helps sellers better understand and identify how they can increase the quality of their inventory.
The injection of AI into our supply path optimization is finding better paths to publishers with double-digit percentages of efficiency.
Trade Desk’s management thinks that the emergence of agentic search will not change much of the premium open Internet, but it will result in more search-like inventory available, and this inventory will be new premium advertising opportunities that is up for grabs for Trade Desk
[Question] Are you seeing an impact from agentic search on available publisher inventory?
[Answer] We look at roughly 20 million ad impression opportunities every single second. That’s about 1.7 trillion every single day… When you look at that many impressions, and just to be open, we buy a low single-digit percentage of that total, you’d have to when it’s that big. So that means that if we take 20 million down to 15 million per second because of AI, there’s not really much different about our business model, nothing at all…
…As for whatever effect the AI search world is having on inventory supply, given that I’ve said over and over again in this call that there’s more supply than demand and it is more of a buyer’s market than ever: whatever effect it’s having, first of all, is fairly de minimis as it relates to the open Internet at large. We shouldn’t define the open Internet as just what happens in a browser. It is much bigger than that. It’s everything that happens in CTV, movies, sports, journalism, everything, both in an app and in a browser, in every form of media that touches the Internet. If you look at it from that perspective, I don’t think it’s had any meaningful effect. And because I don’t think CTV is going anywhere, I don’t think music is going anywhere, I don’t think sports is going anywhere, I think that the premium open Internet will continue to play the most significant role in building brands and doing actual advertising. And I don’t think that AI will change that.
I do actually think that there will be more search-like inventory available, which I think represents really premium advertising opportunities. In the past, companies like ours have not had access to companies like Google’s ad inventory as it relates to search. In a world where that’s much more competitive and there isn’t a winner-take-all outcome, which I don’t think there will be, I think there are going to be a bunch of opportunities for us to buy into their inventory. I think it will actually look a lot like CTV in the sense that fragmentation will be nearly perfect: there are enough players that there’s competition and no one is big enough to have a monopoly or be draconian, but it’s consolidated enough that everybody will be rational and highly competitive. And I anticipate that that will create new advertising opportunities that will have really amazing results and efficacy.
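The scale figures quoted in this answer are internally consistent, as a quick back-of-the-envelope check shows. (The 2% buy rate below is purely an illustrative assumption standing in for the quoted "low single-digit percentage".)

```python
# Sanity check on the quoted scale: ~20 million ad impression opportunities
# per second works out to roughly 1.7 trillion per day.
per_second = 20_000_000
seconds_per_day = 86_400
per_day = per_second * seconds_per_day
print(per_day)  # 1728000000000, i.e. ~1.7 trillion, matching the quote

# A "low single-digit percentage" bought of that total -- using 2% purely
# as an illustrative assumption -- is still tens of billions of impressions
# a day, which is why a modest shrinkage of total supply barely matters.
bought_per_day = int(per_day * 0.02)
print(bought_per_day)  # ~34.6 billion impressions per day
```

This is why the speaker argues the model is insensitive to supply shrinking from 20 million to 15 million opportunities per second: the buy rate is so small relative to total supply that the addressable pool remains vastly larger than demand.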
Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Coupang, Datadog, Meta Platforms, Nu Holdings, Paycom Software, Sea, Shopify, Tencent, and The Trade Desk. Holdings are subject to change at any time.