Earlier this month, I published The Latest Thoughts From American Technology Companies On AI (2025 Q4). In it, I shared commentary from the fourth-quarter 2025 earnings conference calls of US-listed technology companies that I follow or have a vested interest in, focusing on what their leaders said about AI and how the technology could impact their industry and the business world writ large.
A few more technology companies I’m watching hosted earnings conference calls for 2025’s fourth quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:
- 2023 Q1 – here and here
- 2023 Q2 – here and here
- 2023 Q3 – here and here
- 2023 Q4 – here and here
- 2024 Q1 – here and here
- 2024 Q2 – here and here
- 2024 Q3 – here and here
- 2024 Q4 – here, here, and here
- 2025 Q1 – here and here
- 2025 Q2 – here and here
- 2025 Q3 – here, here, and here
With that, here is the latest commentary, in no particular order:
Adyen (OTC: ADYYF)
Adyen’s platform now has Dynamic Identification, which enables real-time decisioning that improves conversion, reduces cost, and manages risk with greater precision; Dynamic Identification enables agentic commerce; 95% of Black Friday Cyber Monday shoppers were recognised by Dynamic Identification across online and in-store channels; Dynamic Identification was created to address the challenges AI was posing to document-based approaches to identity and risk; Dynamic Identification uses AI to draw insights from trillions of interactions across Adyen’s online and in-person flows, instead of performing static checks; Dynamic Identification powers Adyen Uplift, which makes payments decisions that balance conversion, cost, and risk; Dynamic Identification is the foundation for the Personalise module in Adyen Uplift that was developed in 2025 H2; Dynamic Identification helps merchants deal with policy abuse that includes exploitation of returns, promotions, and refunds; Dynamic Identification helped a global luxury group and a large sports and entertainment company identify highly problematic shoppers that were previously undetected; Dynamic Identification is not a product itself
Dynamic Identification adds an intelligence layer to our platform, enabling real-time decisioning that improves conversion, reduces cost, and manages risk with greater precision as our customers scale across channels…
…This new foundational layer also addresses policy abuse and enables emerging models such as agent-led commerce. Peak events validate the strength of this new layer, with ~95% of Black Friday Cyber Monday shoppers recognized across online and in-store channels…
…Advances in AI, increasingly sophisticated fraud, and the growing misuse of digital systems are exposing the limits of static, document-based approaches to identity and risk. Designed for a different era, traditional controls add friction for legitimate businesses and shoppers, while struggling to prevent abuse at scale. To address this, we have integrated a third foundational layer: Dynamic Identification. Moving beyond static checks, we designed this layer to draw on trillions of interactions across our online and in-person flows. By embedding this intelligence directly into our stack, we assess risk dynamically and adapt decisions in real time, enabling us to eliminate friction while tightening security with surgical precision…
…The most immediate impact of Dynamic Identification is visible across our optimization and risk products. It is the intelligence layer that powers Adyen Uplift, enabling decisions that balance conversion, cost, and risk across the full payment flow rather than in isolation.
Building on this foundation, we introduced the newest module within Adyen Uplift in 2025: Personalize. It was developed and validated through pilots with a select group of enterprise customers in the second half of the year, focusing on one of the most common trade-offs merchants face as they scale across channels: how to lower payment costs without negatively impacting conversion. Lower-cost payment methods are often available, but encouraging shoppers to choose them indiscriminately can increase checkout abandonment and degrade the customer experience. Dynamic Identification allows this trade-off to be managed intelligently. By understanding who the shopper is and how they behave across both online and in-person touchpoints, we can personalize the payment experience in real time, guiding shoppers toward preferred and lower-cost options only when the data indicates they are likely to complete the transaction…
…For our customers, an underestimated share of losses comes not only from traditional payment fraud, but also from policy abuse: repeated exploitation of returns, promotions, and refunds that often appear legitimate in isolation but compound into material cost over time. Without visibility into repeat behavior, merchants are left to rely on manual reviews or broad policy restrictions, increasing friction for legitimate customers while failing to address the underlying problem.
In the second half of 2025, we applied Dynamic Identification to this challenge through targeted pilots with enterprise customers. By linking refund activity at the identity level, rather than viewing transactions in isolation, we were able to surface patterns that had previously remained hidden.
The pilots showed strong engagement, with merchants using these insights on a daily basis, rather than only for ad hoc investigations. More importantly, they reported a step change in confidence: they were able to identify abuse clearly, measure its true scale, and pinpoint its sources. This replaced fragmented, manual processes with shared, data-driven visibility. Capabilities such as identifying top refund contributors at the shopper level were consistently cited as materially reducing investigation time and operational overhead…
…One global luxury group identified individual shoppers each receiving up to €5k in refunds, in some cases up to twenty times their average basket value, revealing potential material losses that had gone unnoticed. In another case, a large sports and entertainment customer identified a shopper with roughly 70% of transactions refunded over several years, exposing a long-standing abuse pattern that had not been visible through traditional transaction-level analysis…
…Dynamic Identification is our way of applying AI to the large data set we have…
…Dynamic Identification is in itself not a product. So one of the product suites that is built upon Dynamic Identification is Uplift.
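To make the identity-level refund analysis described above a little more concrete, here is a minimal sketch of the idea: group transactions by a resolved shopper identity, then surface the top refund contributors and shoppers whose refund patterns look abnormal. All the type names, fields, and thresholds below are my own illustrative assumptions, not Adyen’s Dynamic Identification.

```typescript
// Illustrative only: a toy version of identity-level refund analysis,
// not Adyen's Dynamic Identification.

interface Transaction {
  shopperId: string;   // identity resolved across online and in-store channels (assumed given)
  amount: number;      // transaction value
  refunded: number;    // amount refunded against this transaction
}

interface ShopperRefundProfile {
  shopperId: string;
  totalSpent: number;
  totalRefunded: number;
  refundRatio: number;        // share of spend that came back as refunds
  refundToAvgBasket: number;  // largest single refund vs. average basket size
}

function profileShoppers(txns: Transaction[]): ShopperRefundProfile[] {
  const byShopper = new Map<string, Transaction[]>();
  for (const t of txns) {
    const list = byShopper.get(t.shopperId) ?? [];
    list.push(t);
    byShopper.set(t.shopperId, list);
  }

  const profiles: ShopperRefundProfile[] = [];
  for (const [shopperId, list] of byShopper) {
    const totalSpent = list.reduce((s, t) => s + t.amount, 0);
    const totalRefunded = list.reduce((s, t) => s + t.refunded, 0);
    const avgBasket = totalSpent / list.length;
    const maxRefund = Math.max(...list.map((t) => t.refunded));
    profiles.push({
      shopperId,
      totalSpent,
      totalRefunded,
      refundRatio: totalSpent > 0 ? totalRefunded / totalSpent : 0,
      refundToAvgBasket: avgBasket > 0 ? maxRefund / avgBasket : 0,
    });
  }

  // Top refund contributors first -- the kind of view the pilots reportedly used daily.
  return profiles.sort((a, b) => b.totalRefunded - a.totalRefunded);
}

// Flag the patterns described on the call, e.g. a large share of spend refunded,
// or single refunds many times the average basket. Thresholds here are invented.
const flagged = (profiles: ShopperRefundProfile[]) =>
  profiles.filter((p) => p.refundRatio > 0.5 || p.refundToAvgBasket > 10);
```

Once transactions share an identity key across channels, the patterns cited on the call (roughly 70% of transactions refunded, or single refunds many times the average basket) become simple queries rather than manual investigations.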
Adyen’s management sees Dynamic Identification as an enabler of agentic commerce; management thinks merchants see clear potential in agentic commerce, but merchants also want to retain ownership of the customer relationship, control over payments and data, and confidence that adopting new channels will not introduce new risks; Dynamic Identification enables verification of shopper intent, adaptive authentication, and identity-informed risk decisions even without a human in the loop; Adyen is engaged with the broader ecosystem in enabling agentic commerce; agentic commerce currently has immaterial volume on Adyen; management is not including agentic commerce in Adyen’s 2026 guidance, but thinks it will be a growth driver in the long term; management sees trust as a really important component in agentic commerce, and that’s where Dynamic Identification helps; management thinks it’s really important for Adyen to work with key players in the agentic commerce ecosystem, such as OpenAI and Google, to develop protocols
Dynamic Identification is also a critical enabler of emerging models such as agentic commerce. As this evolution unfolds over time, traditional identity signals are likely to fall away. Transactions initiated by agents will require new trust frameworks, relying on infrastructure, behavioral context, and adaptive risk models rather than direct human interaction.
In H2, we focused on understanding our customers’ needs and how we can best build to meet them. We held extensive conversations with enterprise merchants across retail, luxury, travel, entertainment, and platforms to understand both their ambitions and their concerns. While merchants see clear potential in agent-led commerce, they are equally clear about what must not change: ownership of the customer relationship, control over payments and data, and confidence that new channels can be adopted without introducing new risk…
…Rather than building isolated agent experiences, we are extending our existing platform so that agent-initiated transactions become another channel within a merchant’s existing workflows, governed by the same principles of control, security, and interoperability. Dynamic Identification plays a central role here, enabling verifiable shopper intent, adaptive authentication, and identity-informed risk decisions even when a human is no longer directly in the loop…
…We deepened our engagement with the broader ecosystem by collaborating with partners including OpenAI, Google, Cloudflare, Visa, and Mastercard, and joining the Agentic AI Foundation. Together, we are contributing to the development of open standards that allow agent-led commerce to scale safely and interoperably, without locking merchants into closed systems or fragmenting the ecosystem…
…At the moment, the number of transactions is still immaterial on our platform. We started with it. I think that’s very important, so we started with Agentic Commerce. It’s an additional sales channel, and the beauty of having a single platform globally is that we basically have all the building blocks to cater it and to start growing this sales channel with our customers…
…Take agentic commerce as one example. It’s not gonna drive short-term revenues, right? So it’s not a big part of our 2026 revenue expectations, but if it’s a top priority for your customer, you want to be there, and you want to support them with it, and that’s where we’re well-positioned to do it, and it will help us drive growth over a longer period of time, right?…
…In this new world, we need to know who is the consumer behind the agent, and how do we know that we can trust the agent, that he’s indeed acting on behalf of the consumer? And that’s where Dynamic Identification really helps. So it helps to look at the signals that we get and compare that to the signals that we have in our system, and then come up with the right outcome or decision, whether this can be trusted or not…
…It’s also very important to shape the protocols with OpenAI, with Google, to make sure that that information does not get lost, and making sure that also our merchants do not lose the connection with the consumer behind the agent. Because that’s one of the key elements that our merchants find important, and we want to make sure that that connection is not lost.
In pilot tests, Personalise, which is powered by Dynamic Identification, helped merchants improve conversion by up to 6% while lowering transaction costs by up to 3%; mobility provider Hoppy used Personalise and achieved 2% payment cost savings while maintaining a locally relevant checkout experience as it expanded into new cities; Personalise was able to dynamically prioritise the payment methods riders were most likely to use for Hoppy
Insights from the H2 pilots demonstrate the value of this adaptive approach. Merchants observed conversion improvements of up to 6%, alongside transaction cost reductions of up to 3%, achieved through personalized optimization rather than static, rule-based, and generic logic…
…Mobility provider Hoppy realized 2% payment cost savings while maintaining a locally relevant checkout experience as it expanded into new cities. By dynamically prioritizing the payment methods riders were most likely to use, while favoring cost-efficient options where possible, Hoppy protected margins without compromising conversion. Together, these results show how moving beyond static checkout logic enables businesses to better align shopper preferences with cost-efficient payment methods, turning checkout into a scalable driver of growth and profitability. This is the power of Dynamic Identification: translating real-time intelligence into decisions that drive tangible results.
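For a rough sense of the trade-off Personalise is described as managing, steering shoppers toward cheaper payment methods only when they are likely to complete the purchase, here is a toy ordering function. The threshold, the completion-likelihood model, and the cost fields are all my own illustrative assumptions, not Adyen’s implementation.

```typescript
// Illustrative sketch of cost-aware payment-method ordering, not Adyen Personalise.

interface PaymentMethod {
  name: string;
  costBps: number; // merchant's processing cost for this method, in basis points
}

interface ShopperContext {
  preferredMethods: string[]; // learned from the shopper's past online and in-store behaviour
  completionLikelihood: (m: PaymentMethod) => number; // model output in [0, 1] (assumed given)
}

// Order the checkout so that cheaper methods are promoted only when the shopper
// is predicted likely to complete with them; otherwise fall back to preference order.
function orderCheckout(methods: PaymentMethod[], shopper: ShopperContext): PaymentMethod[] {
  const LIKELY = 0.8; // illustrative threshold

  const prefRank = (name: string) => {
    const i = shopper.preferredMethods.indexOf(name);
    return i === -1 ? Number.MAX_SAFE_INTEGER : i;
  };

  const likelyAndCheap = methods
    .filter((m) => shopper.completionLikelihood(m) >= LIKELY)
    .sort((a, b) => a.costBps - b.costBps);

  const rest = methods
    .filter((m) => shopper.completionLikelihood(m) < LIKELY)
    .sort((a, b) => prefRank(a.name) - prefRank(b.name));

  return [...likelyAndCheap, ...rest];
}
```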
Airbnb (NASDAQ: ABNB)
Airbnb’s management chose to deploy AI for customer support as the first use case within the company; Airbnb built an AI agent trained on millions of support interactions; Airbnb’s AI agent is now resolving 1/3 of support issues, and resolution times are now much faster; Airbnb’s AI agent is live in North America, and management plans to roll it out globally; management’s vision for the customer support AI agent is for guests to be able to call and talk to the agent; management thinks that an AI agent that can converse with guests via voice will (1) lower customer support costs for Airbnb, and (2) improve the quality of customer support
The final piece that accelerates everything we do is AI. Now we’ve taken a really intentional path here. While other companies rush to build chatbots into their existing apps, we started by solving the hardest problem, customer support. We built a custom AI agent trained on millions of our support interactions. It’s already resolving 1/3 of the support issues without needing a live specialist and resolution times are significantly faster. It’s live across North America, and we’re planning to roll it out globally…
…Right now, nearly 30% of tickets in North America that are English-based are handled by an AI agent. A year from now, if we’re successful, significantly more than 30% of tickets will be handled by a customer service agent in many more languages, in all the languages where we have live agents and AI customer service will not only be chat, it will be voice. You can actually call and talk to an AI agent. We think this is going to be massive because not only does this reduce the cost base of Airbnb customer service, but the kind of quality of service is going to be a huge step change. Not only can you get responses in seconds, but the agents using AI are going to be significantly more productive.
Airbnb’s management is building an AI-native experience within the app that knows guests and hosts and will help (1) guests plan their entire trip, and (2) hosts run their businesses better; management will build the AI-native experience without spending significant sums of money on data centers; management will build the AI-native experience without building AI models; management thinks Airbnb’s investments into AI will not affect the company’s profit; management thinks AI will help personalise the user-experience for guests on Airbnb
We’re building an AI-native experience where the app doesn’t just search for you. It knows you. It will help guests plan their entire trip, help hosts better run their businesses and help the company operate more efficiently at scale…
…We don’t operate experiences, and we’re not building data centers. What we’re doing is finding small wins and scaling them profitably…
…I think one of the great things about Airbnb is that we have a very, very cost-efficient innovation model. So unlike other companies, we’re not building models. We do not have a huge CapEx cost base. So our investment in AI will not affect the P&L. I don’t think you’ll see it in the P&L…
…AI allows us to personalize. Some people come to Airbnb and all they want to see are unique homes. And before AI, like, personalization was a little more primitive. So if they saw a hotel, it might be jarring. Now we can really personalize. So people who just want to see Airbnbs can see Airbnbs. People just want to see hotels, we can eventually personalize, they can just see hotels. If people want to see both, we can know if you’re booking last minute, 1 night, then we’re going to show you a hotel. If you’re booking a family of 5 in Italy, we’re going to show you a home. So it really goes back to personalization.
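The personalisation examples above (a last-minute one-night trip skews hotel; a family of five in Italy skews home) boil down to context-dependent ranking. A toy sketch of that idea follows; the signals and weights are invented for illustration and are not Airbnb’s actual system.

```typescript
// Toy illustration of context-aware listing-type ranking; signals and weights are invented.

interface TripContext {
  nights: number;
  guests: number;
  daysUntilCheckIn: number;
  pastStays: { homes: number; hotels: number }; // what this guest has booked before
}

type ListingType = "home" | "hotel";

// Score how strongly to favour homes vs. hotels for this trip.
// Positive favours homes, negative favours hotels.
function listingBias(trip: TripContext): number {
  let score = 0;
  if (trip.daysUntilCheckIn <= 2 && trip.nights === 1) score -= 2; // last-minute 1-night: hotel
  if (trip.guests >= 5) score += 2;                                 // large groups: homes
  if (trip.nights >= 5) score += 1;                                 // longer stays: homes
  score += Math.sign(trip.pastStays.homes - trip.pastStays.hotels); // personal history
  return score;
}

function preferredType(trip: TripContext): ListingType {
  return listingBias(trip) >= 0 ? "home" : "hotel";
}

// The two examples from the call:
preferredType({ nights: 1, guests: 1, daysUntilCheckIn: 1, pastStays: { homes: 0, hotels: 3 } }); // "hotel"
preferredType({ nights: 7, guests: 5, daysUntilCheckIn: 30, pastStays: { homes: 2, hotels: 0 } }); // "home"
```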
Airbnb’s management believes that LLM (large language model) chatbots cannot disintermediate Airbnb because they lack access to the unique data and functionality that Airbnb has; management believes that adding an AI layer onto the Airbnb app will create something that is impossible to replicate; management thinks LLM chatbots will be very similar to online search in being good top-of-funnel discovery for guests, and this will be positive for Airbnb; management has seen that traffic from LLM chatbots converts at a higher rate than Google traffic; management sees AI models as being available for use by anyone; management thinks specialisation will win in travel with AI because Airbnb can use any leading AI model, customise it based on Airbnb’s millions of interactions, and connect it to its customer support agents and hosts; management does not think that one model builder will end up owning everything
This approach is also our strongest defense against disintermediation. A chatbot can give you a list of homes, but it can’t give you the unique points you find in Airbnb. A chatbot doesn’t have our 200 million verified identities or our 500 million proprietary reviews, and it can’t message the host, which 90% of our guests do. It can’t provide global payment processing, customer support or insurance. By layering AI over the entire Airbnb experience, we believe we’re building something that’s impossible to replicate…
…I think these chatbot platforms are going to be very similar to search. They’re going to be really good top-of-funnel discoveries. And in fact, what we’ve seen is, I think, they’re going to be positive for Airbnb. And I’m very, very deep in this space. And what we see is that traffic that comes from chatbots converts at a higher rate than traffic that comes from Google. But the other thing to know, and this is the most important point, is that these models are not proprietary. The models in ChatGPT, the models in Gemini, the models in Claude and the models like Kiwi are available to every single company. And so pretty soon, every company becomes an AI platform if they make the shift. We will be able to build everything everyone else will have if we use their models. And we believe specialization will win in travel because if somebody wants to find an Airbnb or have a trip, we can take their model, the same model they use, we can post-train it and tune it based on our millions of interactions. We can connect it to our customer support agents. We can connect it to our hosts. And that’s fundamentally what we think…
…I don’t think that one company is going to own everything. I think we’re going to be able to work together. And these companies will be very helpful top-of-funnel traffic generators for Airbnb just like Google was.
Airbnb’s management wants to nail down AI search for Airbnb first and then apply the AI search form factor to sponsored listings; Airbnb is currently conducting small-scale tests on AI search; management can’t pin down a concrete timeline for building AI search; management thinks AI search is a difficult problem to solve for e-commerce because it is multi-modal; management thinks a chat interface for AI search for e-commerce (and travel) is not ideal, and Airbnb needs to innovate on the user interface
One of the things that’s been really clear with the — after the launch of ChatGPT was that traditional search was going to become essentially conversational AI search. And that what we wanted to do is really design AI search, really see how that works. And then if we are going to do sponsored listings, we design that ad unit in that form factor. So we’re focused, first and foremost, on the most perishable opportunity, which is AI search. Actually, funny enough, we are doing tests as we speak. So AI search is live to a very small percent of traffic right now. We’re doing a lot of experimentation. The way we do things with AI is much more rapid iteration, not big launches. And over time, we’re going to be experimenting with making AI search more conversational, integrating it into more of the trip. And eventually, we will be looking at sponsor listings as a result of that. But we want to first nail AI search…
…AI search will eventually — I can’t put a time line on it because AI is obviously highly unpredictable. But we want to be — we would love to be the first company in e-commerce that really nails AI search, conversational search. I think it’s really hard not just in travel, but all e-commerce. One of the reasons that chatbots are really hard for commerce is because they’re very visual. They’re photo forward. You need to be able to compare. You need to be able to open different tabs. So a text forward chatbot interface is not the ideal. So we have to actually innovate on the user interface.
Airbnb’s management thinks AI will significantly improve productivity for all Airbnb employees; more than 80% of Airbnb engineers are currently using AI tools
It’s going to make our engineers and everyone at Airbnb significantly more efficient. More than 80% of engineers are now using AI tools. That soon will be 100%.
Arista Networks (NYSE: ANET)
Arista Networks has exceeded its goal of earning $1.5 billion in AI center networking revenue in 2025; management has raised its AI center revenue goal for 2026 and now expects Arista Networks’ AI center revenue in 2026 to be double that of 2025’s; management’s target for AI center revenue in 2026 includes both front-end and back-end networking
As expected, we have exceeded our strategic goals of $800 million in campus and branch expansion as well as $1.5 billion in AI center networking…
…With our increased visibility, we are now doubling from 2025 to 2026 to $3.25 billion in AI networking revenue…
…We maintain our 2026 campus revenue goal of $1.25 billion and raise our AI centers goal from $2.75 billion to $3.25 billion…
…3 years ago, we had no AI. We were staring at InfiniBand being deployed everywhere in the back end. And we pretty much characterized our AI as only back end, just to be pure about it, right? 3 years later, I’m actually telling you we might do north of $3 billion this year and growing, right? That number definitely includes the front end as it’s tied to the back-end GPU clusters, and it’s an all Ethernet, all AI system for agentic AI applications.
Arista Networks’ products can interoperate with NVIDIA, but management sees Arista Networks emerging as the gold standard network for running training and inference models that process tokens at teraflops speed; Arista Networks is co-designing AI rack systems with 1.6T (1.6 terabits per second) switching coming in 2026
We interoperate with NVIDIA, the recognized worldwide market leader in GPUs, but also realize our responsibility to broaden the open AI ecosystem, including leading companies such as AMD, Anthropic, ARM, Broadcom, OpenAI, Pure Storage and VAST Data, to name a few, that create the modern AI stack of the 21st century. Arista is clearly emerging as the gold standard terabit network to run these intense training and inference models processing tokens at teraflops…
…We are codesigning several AI rack systems with 1.6T switching emerging this year.
Arista Networks’ management recently launched its flagship 7800 R4 spine product for routing use cases that include AI spines
In Q4 2025, Arista launched our flagship 7800 R4 spine for many routing use cases, including DCI, AI spines with that massive 460 terabits of capacity to meet the demanding needs of multiservice routing, AI workloads and switching use cases.
In 2025, Arista Networks participated in Ethernet-based industry standards for AI scale-up and scale-out networking; Arista Networks’ networking portfolio is successfully deployed in scale-up, scale-out, and scale-across AI networks; management thinks AI networking architectures need to handle both training and inference frontier models to ease congestion; the key metric when handling training is job completion time, while the key metric when handling inference is time taken to a first token; management sees Arista Networks’ portfolio as having the features to handle the fidelity of AI and cloud workloads; management’s AI-for-networking strategy is based on Autonomous Virtual Assist, which helps instrument customers’ networks for enhanced security, observability and agentic AI operations
In 2025, we are a founding member of the Ethernet-based standards for both scale-up with ESUN as well as completing the Ultra Ethernet Consortium 1.0 Specification for scale-out AI networking. These AI centers seamlessly connect the back-end AI accelerators to the front-end of compute storage, WAN and classic cloud networking. Our AI accelerated networking portfolio consisting of 3 families of EtherLink spine-leaf fabric are successfully deployed in scale-up, scale-out and scale-across networks.
Network architectures must handle both training and inference frontier models to mitigate congestion. For training, the key metric is obviously job completion time, the amount of time taken between admitting a job, training job to an AI accelerator cluster and the end of a training run. For inference, the key metric is slightly different. It’s the time taken to a first token, basically the amount of latency it takes for a user submitting a query to receive their first response. Arista has clearly developed a full AI suite of features to uniquely handle the fidelity of AI and cloud workloads in terms of diversity, duration, size of traffic flow and all the patterns associated with it.
Our AI for networking strategy based on AVA, Autonomous Virtual Assist, curates the data for higher-level functions. Together with our published subscribed state foundation in EOS, NetDL, or Network Data Lake, we instrument our customers’ networks to deliver proactive, predictive and prescriptive features for enhanced security, observability and agentic AI operations. Coupled with the Arista validated designs for network simulation, digital twin and validation functionality, Arista platforms are perfectly optimized and suited for Network as a Service.
Arista Networks’ purchase commitments at the end of 2025 Q4 were $6.8 billion, up 42% sequentially; the sequential increase in purchase commitments was for chips related to new products and AI deployments, and was affected by the supply constraint on DDR4 memory chips; pricing for memory chips has gone up significantly for Arista Networks; management sees memory chips as the new gold in the AI sector
Our purchase commitments at the end of the quarter were $6.8 billion, up from $4.8 billion at the end of Q3. As mentioned in prior quarters, this expected activity mostly represents purchases for chips related to new products and AI deployments. We will continue to have some variability in future quarters due to the combination of demand for our new products, component pricing such as the supply constraint on DDR4 memory and the lead times from our key suppliers…
…Our peers in the industry have been facing this probably longer than we have because I think the server industry probably saw it first because they’re more memory intensive. Add to that, that we’re expecting increases from the silicon fabrication that all the chips are made, as you know, essentially with one company, Taiwan Semiconductor. So Arista has taken a very thoughtful approach, being aware of this since 2025 and frankly absorbed a lot of the costs in 2025 that we were incurring. However, in 2026, the situation has worsened significantly. We’re having to smile and take it just about at any price we can get and the prices are horrendous. They’re an order of magnitude exponentially higher. So clearly, with the situation worsening and also expected to last multiple years, we are experiencing shortages in memory. Thankfully, as you can see reflected in our purchase commitments, we are planning for this. And I know that memory is now the new gold for the AI and automotive sector.
The demand for Arista Networks’ networking products in AI data centers comes only after the data centers are built and after the GPUs and other AI chips are purchased; management sees demand for Arista Networks’ products as being very good, but the exact timing for shipments is harder to pin down
That’s an important thing to understand, that we don’t track the CapEx. The first thing that happens in the CapEx is they got to build the data centers and get the power and get all of the GPUs and accelerators and the network comes — lags a little. So demand is going to be very good, but whether the shipments exactly fall into ’26 or ’27, Todd, you can clarify when they really fall in, but there’s a lot of variables there.
Arista Networks was initially working with only a small handful of model builders and AI chip designers, but the company is now working with many more such entities; NVIDIA had essentially 100% market share just a year ago, but Arista Networks’ management now sees AMD AI chips as having about 20%-25% market share; Arista Networks is the preferred provider for AI data centers that utilise AMD AI chips
If you look at us initially, we were largely working with 1 or 2 model builders and 1 or 2 accelerators, NVIDIA and AMD, and OpenAI was the primarily dominant one. But today, we see that there’s really multiple layers in a cake where you’ve got the GPU accelerators…
…Arista needs to deal with multiple domains and model builders and appropriately whether it is Gemini or xAI or Anthropic Claude or OpenAI and many more coming. These models and the multiprotocol algorithm or nature of these models is something we have to make sure we build a network correctly for. So that’s one…
…A year ago, it was pretty much 99% NVIDIA, right? Today, when we look at our deployments, we see about 20%, maybe a little more, 20% to 25% where AMD is becoming the preferred accelerator of choice. And in those scenarios, Arista is clearly preferred because they’re building best-of-breed building blocks for the NIC, for the network, for the I/O and they want open standards as opposed to full-on vertical stack from one vendor.
Arista Networks’ management thinks AI model builders will be working with multiple cloud providers, and Arista Networks will be working with all the cloud providers
I think the biggest issue is not only the model builders, but they’re no more in silos in one data center, and you’re going to see them across multiple colos and multiple locations and multiple partnerships with our cloud titan customers that we’ve historically not worked with this. So I think you’ll see more copilot versions of it, if you will, with a number of our cloud titans. So we expect to work with them as AI specialty providers, but we also expect to work with our cloud titans in bringing the cloud and AI together.
Arista Networks’ management is careful about going into business with some AI neoclouds (the ones that converted from oil money or crypto money into AI) because their businesses and financial health are questionable
There are a set of neoclouds that we watch more carefully because some of them are oil money converted into AI or crypto money converted into AI. And over there we are going to be much more careful because some of those neoclouds are looking at Arista as the preferred partner, but we would also be looking at the health of the customer or they may just be a onetime. We don’t know the exact nature of their business and those will be smaller.
Arista Networks’ management does not believe that AI is eating software; management believes that AI enables better software to be built
I don’t think, Ken, any of us believe that AI is eating software, but AI is definitely enabling better software.
Arista Networks’ management thinks that the rise of agentic AI will increase demand for all kinds of XPUs
The rise of agentic AI will only increase, not just the GPU, but all gradations of XPU that can be used in the back end and front end.
Arista Networks’ 4 major AI customers are all deploying AI with Ethernet; 3 of the 4 customers have deployed 100,000 GPUs each, and they are growing; the remaining customer is migrating from Infiniband and is still below 100,000 GPUs
We are in all 4 customers deploying AI with Ethernet. So that’s the good news. 3 of them have already deployed a cumulative of 100,000 GPUs and are now growing from there. And clearly migrating now into beyond pilots and production to other centers, power being the biggest constraint. Our fourth customer is migrating from InfiniBand, so it’s still below 100,000 GPUs at this time, but I fully expect them to get there this year, and then we shall see how they get beyond that.
Arista Networks has extended the ability to stream the state of a network into AI clusters
The EOS architecture is based on state orientation. This is the idea that we capture the state of the network and then stream that state out from the system database on the switches into whatever, the CloudVision or whatever system can then receive it. And we’re extending that capability for AI with a combination of in-network data sources related to flow control, RDMA counters, buffering and congestion counters, and also host-level information, including what’s going on in the RDMA stack on the host, what’s going on with collectives, latencies, any flow control problems or buffering problems in the host NIC. Then we pull those — that information all together in CloudVision and give the operator a unified view of what’s happening in the network and what’s happening in the host.
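Conceptually, what is being described is a join of two telemetry streams, per-port counters streamed off the switches and per-host RDMA/NIC statistics, into one view per link. A rough sketch of that join is below; the field names are invented for illustration and are not Arista’s EOS, NetDL, or CloudVision schemas.

```typescript
// Rough sketch of merging in-network and host-level AI telemetry into a unified view.
// Field names are invented; this is not Arista's EOS, NetDL, or CloudVision schema.

interface SwitchCounters {
  switchId: string;
  port: string;
  pauseFrames: number;          // flow-control events observed on the port
  ecnMarks: number;             // congestion marks
  bufferHighWatermark: number;  // deepest buffer occupancy seen
}

interface HostRdmaStats {
  hostId: string;
  attachedSwitch: string;       // which switch/port this host NIC is cabled to (assumed known)
  attachedPort: string;
  rdmaRetransmits: number;
  collectiveLatencyMs: number;  // latency seen by collectives on the host
  nicBufferDrops: number;
}

interface LinkView {
  switchId: string;
  port: string;
  network: SwitchCounters;
  host?: HostRdmaStats;         // may be absent if host telemetry isn't streaming
}

// Join network-side and host-side records on (switch, port) so an operator sees
// congestion symptoms and host-stack symptoms for the same link side by side.
function unifiedView(network: SwitchCounters[], hosts: HostRdmaStats[]): LinkView[] {
  const hostByLink = new Map<string, HostRdmaStats>();
  for (const h of hosts) hostByLink.set(`${h.attachedSwitch}/${h.attachedPort}`, h);

  return network.map((n) => ({
    switchId: n.switchId,
    port: n.port,
    network: n,
    host: hostByLink.get(`${n.switchId}/${n.port}`),
  }));
}
```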
Cloudflare (NYSE: NET)
A leading AI company expanded its relationship with Cloudflare, and Cloudflare is now the AI company’s only long-term infrastructure provider with 100% traffic allocation; Cloudflare’s management is seeing a trend of AI companies choosing Cloudflare as their infrastructure platform
A leading AI company expanded their relationship with Cloudflare, signing a 2-year $85 million pool of funds contract for our full platform, selecting Cloudflare as their single long-term infrastructure provider with 100% traffic allocation. Following a rigorous RFP, they selected Cloudflare over major hyperscalers not just for our unified stack and rapid innovation, but also for our strategic neutrality. This win underscores a growing trend, the most sophisticated AI companies are choosing Cloudflare as their mission-critical, independent platform to connect, protect and build the future of the AI-driven Internet.
A leading AI company expanded its relationship with Cloudflare; this AI company chose Cloudflare in a build versus buy scenario; Cloudflare enables the AI company to manage global traffic with 99.999% availability
Another leading AI company expanded their relationship with Cloudflare, signing a 1-year $5.4 million contract for our Workers developer platform and application services. What’s most compelling about this win is that it was a classic build versus buy scenario against the hyperscalers. In an industry where being first matters, our ready-to-deploy developer platform provided the agility and speed to market they couldn’t find elsewhere. With Cloudflare, this customer is now able to manage heavy global traffic with 99.999% availability. This deal is a testament to our shift from being just a vendor to instead being a strategic co-innovation partner for the world’s most sophisticated AI companies.
A Fortune 100 company that is also a leader in AI expanded its relationship with Cloudflare; the Fortune 100 company requires zero downtime and chose Cloudflare not because of price, but because of performance
A Fortune 100 technology company expanded their relationship with Cloudflare, signing a 3-year $5.8 million contract, representing a notable upsell from their initial engagement with us in mid-2025. As a leader in AI, this customer operates under a strict mandate for global resiliency requiring a multi-vendor architecture to ensure zero downtime for their application performance. We beat out the competition not on price but rather on performance and engineering innovation.
A European Global 2000 technology company expanded its relationship with Cloudflare, and is in discussions with Cloudflare about AI Crawl Control
A European Global 2000 technology company expanded their relationship with Cloudflare, signing a 3-year $5.8 million pool of funds contract to provide seamless access to our entire platform. We signed our first deal with this customer back in February. After quickly realizing the power of Cloudflare’s platform, they came back to us looking to move from a small variable commitment to a deep strategic partnership. Unlike their legacy incumbents, our combination of best-of-breed security and our Workers developer platform enables sophisticated automation to manage their global infrastructure and greater flexibility to innovate at scale. It’s early days with this customer, and we’re already in discussions regarding AI Crawl Control.
A US media company signed a contract with Cloudflare for AI Crawl Control; the media company was facing a massive increase in AI scraping and chose Cloudflare to gain visibility into which AI models are consuming its data; with that visibility into the AI models, the media company can better monetise its content
A U.S. media company signed a 3-year $3.1 million contract for AI Crawl Control, along with application services and Workers. This customer was facing a massive increase in AI scraping, which was crushing their network and driving up infrastructure costs. They chose Cloudflare to gain visibility into which AI models are consuming their data, allowing them to protect and eventually monetize their unique content. By leveraging Cloudflare Workers to replace years of complex technical debt from an incumbent, they were able to migrate massive Internet properties into production in just 2 weeks. This deal proves that as AI accelerates, Cloudflare is the partner of choice for companies looking to protect their IP while improving performance, reducing operational costs and enhancing their security postures.
Cloudflare’s management is seeing the shift to AI and agents driving more demand for the company’s services; management thinks AI agents (1) look at significantly more sites when making decisions, (2) allow for a much greater degree of software customisation, and (3) never need to rest, unlike humans; management thinks AI agents are changing the economics of software from a seat-based model to one where the importance lies with providing the compute, connectivity, and guardrails for agents; management thinks Cloudflare is able to capture value on both sides of agentic interactions; most vibe coding platforms are either built on Cloudflare Workers or have it as their preferred deployment target; human developers are using Cloudflare Workers and AI Gateway to manage inference with caching, rate limiting and observability; usage of AI is driving adoption of Cloudflare’s Zero Trust platform; management is seeing agentic workloads generate an order of magnitude more outbound requests to the web than traditional user-driven apps; management sees Cloudflare, which has more than 20% of the web sitting behind its network, as the global control plane for the agentic internet; management thinks the agentic internet is creating new growth opportunities for Cloudflare; a Fortune 500 pharmaceutical company is using Cloudflare to build AI tools; a technology company is using Cloudflare Containers to allow its customers to deploy AI tools in a secure isolated environment; a leading financial services company used Cloudflare to launch an MCP (Model Context Protocol) server for AI agents to interact directly with its payment services; management thinks companies choose Cloudflare for deploying AI because it offers (1) a complete toolkit, (2) a modern architecture that fits agentic work, and (3) cost-efficient scalability; management sees AI as a pure tailwind for Cloudflare’s business
Second, we are seeing the shift to AI and agents drive more demand for Cloudflare services. What we’re witnessing is a fundamental replatforming of the Internet. AI is driving a paradigm shift in how software is both created and consumed, and that is turning out to be the biggest tailwind for Cloudflare’s network and Workers developer platform. If you look at the last 30-plus years of the Internet and software ecosystem, they were built for human consumption, people in seats and clicks. Now the agentic Internet is emerging, and we can already see its trends. If humans looked at 5 sites when they were making a decision, agents might look at 5,000. If humans had to fall back on generalized software and interfaces, agents allow for infinite customizability of every software application for every need. If humans follow a common circadian rhythm to work, agents never need to sleep. Agents, in other words, are the ultimate infrastructure multiplier. In turn, they are reshaping the very economics of software. The industry is transitioning from a business model defined by seat licenses to one where the winners are those providing the compute, connectivity and rails and guardrails for these new digital workers at scale. Cloudflare was built for this moment. We are uniquely architected to capture value on both sides of the agentic interactions. That means we win when AI applications are built on Cloudflare Workers, but we also win just from the increased usage of all of our products that an agentic Internet drives…
…When the cost of generating code drops to near 0, the volume of new applications explode. It’s not a coincidence that most so-called vibe coding platforms are either built on Cloudflare Workers or have us as their preferred deployment target. We exited 2025 with more than 4.5 million human developers active on our platform. It’s a lot more if we count their agents. Developers are using Workers to run autonomous logic across our global network, containers for sandboxes and AI gateway to manage inference with caching, rate limiting and observability. AI usage is even driving adoption of our Zero Trust platform to ensure that data is compartmentalized and access granted in limited and controlled ways…
…We’re seeing agentic workloads generate an order of magnitude more outbound request to the web than traditional user-driven applications. Over the month of January alone, the number of weekly requests generated by AI agents more than doubled across the Cloudflare network. This is driving increased demand for our whole platform. This is where Cloudflare’s scale becomes our moat. With more than 20% of the web already sitting behind Cloudflare’s network, we are effectively the global control plane for the agentic Internet. That’s creating a number of new growth opportunities, both with our traditional business as well as what we’ve begun calling Act 4, helping invent the future business model of the Internet. If AI agents are the new users of the Internet, Cloudflare is the platform they run on and the network they pass through. This creates a virtuous flywheel, more agents drive more code execution on our Workers development platform, which in turn drives more demand for Cloudflare’s performance, security and networking services…
…There’s a Fortune 500 pharmaceutical company that literally built a vibe coding platform on Cloudflare where their internal developers are using Workers AI and Durable Objects to build AI-assisted tools…
…Another publicly traded technology company is migrating their plug-in sandbox infrastructure to Cloudflare Containers for secure isolated execution of code at scale, which let their customers then prompt deployments directly to their system, but do it in a way which is secure because one of the things that’s really scary sometimes about deploying AI tools, especially to customer-facing applications is there can be a lot of damage that they do if one of these agents goes rogue or something goes wrong, the way that we’ve architected sandboxes allows them to — and containers allows them to do this secure isolated code deployment. And again, it all comes as part of the toolkit of Cloudflare Workers, which is allowing them to go really quickly…
…A leading financial services company has partnered with us to launch an official MCP server designed to allow AI agents like Claude, Cursor or OpenAI to interact directly with the company’s payment services. The whole thing is built on Cloudflare Workers. And this allows merchants to manage commerce tasks, such as creating invoices, checking transactions, processing and payments using natural language command and using things that are running on Cloudflare…
…I think what they like about us is, first, you get a complete toolkit. Second, that toolkit has been architected in a modern way to build exactly what you need for agents and AI applications. And then third, you get it in a way that can scale up infinitely if it becomes wildly popular and can scale down instantly to zero. So you don’t blow the budget if somebody is not actually using the system. That’s very different than the hyperscalers, which in order to be able to get access to a GPU at a hyperscaler, anything close to a competitive price, you also have to commit leasing that server for an entire year, which, again, if the project that you’re leasing it for doesn’t go well, that’s out of your budget…
…I know that AI is putting pressure on some companies that are out there. It’s not putting pressure on Cloudflare. We are seeing it as nothing but a tailwind for us, both for our developer tools and kind of the Act 4 stuff that we’re working on, but actually for even our legacy products like application services and Zero Trust as well.
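For a sense of what "managing inference with caching, rate limiting and observability" on the Workers platform can look like in practice, here is a minimal sketch of a Worker that caches identical prompts at the edge before calling a model. It assumes a Workers AI binding configured as `AI`, uses an illustrative model name, and is my own simplification rather than Cloudflare’s AI Gateway.

```typescript
// Minimal sketch of cached inference on Cloudflare Workers.
// Assumes a Workers AI binding configured as `AI` in the Worker's configuration,
// and the Workers runtime types (e.g. @cloudflare/workers-types) for `caches.default`.

export interface Env {
  AI: { run(model: string, inputs: Record<string, unknown>): Promise<unknown> };
}

export default {
  async fetch(
    request: Request,
    env: Env,
    ctx: { waitUntil(promise: Promise<unknown>): void }
  ): Promise<Response> {
    const prompt = new URL(request.url).searchParams.get("q") ?? "";
    if (!prompt) return new Response("missing ?q=", { status: 400 });

    // Cache identical prompts at the edge so repeated (often agent-driven)
    // requests don't each trigger a fresh inference call.
    const cacheKey = new Request(
      new URL(`/inference-cache/${encodeURIComponent(prompt)}`, request.url).toString()
    );
    const cache = caches.default; // Workers cache API
    const cached = await cache.match(cacheKey);
    if (cached) return cached;

    // Illustrative model name; any Workers AI text model would do here.
    const result = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", { prompt });

    const response = new Response(JSON.stringify(result), {
      headers: { "content-type": "application/json", "cache-control": "max-age=300" },
    });
    ctx.waitUntil(cache.put(cacheKey, response.clone()));
    return response;
  },
};
```

Rate limiting and request-level logging would hang off the same fetch handler; Cloudflare’s hosted AI Gateway bundles those concerns as a managed service, which this sketch only gestures at.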
Cloudflare’s management thinks the hyperscalers have no incentive to figure out how to run AI workloads more efficiently, unlike Cloudflare; management thinks Cloudflare can get up to 10x the amount of work done off the same GPU compared to a hyperscaler; because of Cloudflare’s efficiency, its capex has not increased significantly to handle AI workloads; management thinks Cloudflare’s infrastructure offers much higher levels of flexibility to users when it comes to scaling AI compute consumption up or down when compared to the hyperscalers; management thinks Cloudflare is increasingly shifting AI compute spend away from the hyperscalers
Cloudflare is in the business of getting work done. And so what we are constantly doing is having research teams inside of Cloudflare figure out how you can run AI workloads significantly more efficiently. The hyperscalers actually have no incentive to do that. They don’t want AI workloads to be more efficient because that just means you have to lease fewer machines from them. Whereas we — because we only charge you for the actual work that’s getting done, that means that we’re just getting oftentimes as much as 10x the amount of work off of the same GPU that you might get with a hyperscaler. That advantage is part of how we’re able to just bring much more out of the CapEx that we spend than others are. Our CapEx has ticked up a little bit, and I think that that’s in response to the fact that we’ve seen an increase in terms of Workers, but it’s nowhere close to what we’re seeing from the hyperscalers…
…And then third, you get it in a way that can scale up infinitely if it becomes wildly popular and can scale down instantly to zero. So you don’t blow the budget if somebody is not actually using the system. That’s very different than the hyperscalers, which in order to be able to get access to a GPU at a hyperscaler, anything close to a competitive price, you also have to commit leasing that server for an entire year, which, again, if the project that you’re leasing it for doesn’t go well, that’s out of your budget…
… I think that the work that we’re doing to really embed with customers is driving success there. And again, we’re still not to a point where we’re going to be doing a $100 million deal a quarter, but we will get to that point. And I think we’ve seen an enormous total addressable market for the Cloudflare Workers platform. And I think that will shift more and more spend away from what people are using the hyperscalers for.
Cloudflare’s management thinks that the predominant business model of the internet in the AI era will shift away from advertising and subscriptions; Cloudflare’s recent acquisition, Human Native, will have an important role in helping the company come up with the next business model for the internet; Cloudflare is able to rewrite internet content that flows through its infrastructure, so it will be able to rewrite internet content in the best way for AI agents to consume; management thinks Cloudflare’s business is incredibly durable because it is able to automatically bring along the part of the internet that sits behind the company into whatever comes next in the AI era; management thinks 2026 will be the year where the future business model of the internet, based on Crawl Control, will emerge
In Human Native’s case, they’re really helping us think through what is the next business model of the Internet going to look like. It’s going to move, I think, away from advertisement. It’s going to move away from subscriptions. It’s going to move to something else. And Human Native, who came out of Google, are just extraordinary in thinking about what that future business model looks like. I think that you’re going to see extraordinary things from them and they fit right in at Cloudflare and we’re excited to have them…
…But then because our application services sit in front of people and one of the things that people don’t understand is, there’s a lot different than what people think of sort of just traditional CDNs or other things like that, is that we’re actually able to rewrite the content that flows through us as it flows through. So if it turns out that agents are better at speaking, I don’t know, Latin than they are speaking English, we can literally rewrite the content that’s behind Cloudflare in Latin rather than being in English. Now that’s not going to be what agents are good at, but they are going to be better probably at speaking code than they are going to be maybe speaking on other things that we might invent. So I think that what we’re able to do and part of the reason we think that our legacy business is going to be incredibly durable is that it’s going to be able to automatically bring along all of the rest of the Internet that already sits behind us into whatever comes next. And I think we’re going to figure that out…
…So I think 2026 will be the time that we start really talking about what this future business model looks like and how that is going to impact us financially.
Cloudflare’s management thinks that agentic commerce could put a lot of pressure on small businesses, and management is figuring out how to bring all these small businesses along in a way that is incredibly intuitive and easy for them to adopt; management does not have the solutions yet, but is confident it can figure them out
One of the things I’m thinking a lot about is what happens to small businesses in a agentic commerce world. There’s a lot of ways where agents could be very consolidating and actually put a lot of pressure on small businesses. And so I think us in combination with great companies that we’re working with, like a Shopify or a Visa or PayPal or Mastercard, we’ve got to figure out how do we make sure that we bring all of these small business along, give them the right tools. And that’s exactly the sort of thing that we’re thinking about as we think about Act 4 and it’s not going to require you to have to go in and rebuild things. We want to make it one click simple where as soon as we figure out this is what really works, you push a button in that just whatever you had as your old shopping marketplace, that just comes along with it and gets to support whatever agents are going to be providing in the future. I don’t know exactly what all those things are going to look like, but we’ve got an incredible team
AI companies are looking to Cloudflare’s traditional products to help them differentiate between human and non-human users of their services; non-AI companies are also looking to Cloudflare’s traditional products to help them differentiate between human and non-human users of their services because the non-human users were generating an order of magnitude more volume than the human users
The first place that we saw just demand was actually from a lot of the AI companies, where the AI companies would say to us, we can’t continue to operate our systems unless we can have the security and ability to deal with the load, which Cloudflare provides by default. Every time you run a query against an AI company, it’s pretty expensive to deal with those queries. And so being able to sort out who’s a human and who’s not a human, which is something we’re the best in the world at, is really important for the AI companies, and that’s driven actually just a lot of those initial relationships that are there.
What really took off in Q4, though, was where we saw other companies, media companies, e-commerce companies, companies that were just doing more traditional things online, seeing such an enormous uptick in how agents were interacting with their systems. I mean if any of you have used a tool like a ChatGPT or a Grok or a Claude, and you just watch how many different things it is looking at for every query that you send out, that’s just an order of magnitude increase in the volume of queries that are coming to the Internet. And so the people who are providing what is that Internet that they’re querying against, they need ways to do that in a way which is efficient and able to continue to scale. And Cloudflare is — and again, those application services functions that we have, the kind of Act 1 products that we have, are really critical of being able to deliver that.
Cloudflare’s Zero Trust products (newer than its original application services, but still part of what management calls its legacy products) are helping users secure AI agents
If you look at something like the new agents that people are running on their own machines often, the amazing thing is that people are waking up very quickly. We’re sort of speedrunning all of the security challenges that are out there, where all of a sudden you say, I’ve just given my agent access to everything in my life, what could go wrong? People are very quickly figuring out a lot could go wrong and so you got to put controls in place. And that’s exactly where our Act 2 or Zero Trust products come into play, where we’ve actually seen a real uptick even in a self-service business of the Zero Trust products.
Content publishers have been overwhelmingly positive towards Cloudflare’s Crawl Control product; Cloudflare’s management has been positively surprised by the interest in Crawl Control from research teams in the finance industry; AI companies may not necessarily like Crawl Control, but Cloudflare’s management thinks the AI companies understand why Crawl Control needs to exist; large technology companies have tried to establish content marketplaces, but Cloudflare’s management thinks that content publishers have higher trust in Cloudflare as a neutral 3rd party; management thinks 2026 will be the year where the future business model of the internet, based on Crawl Control, will emerge
[Question] Just double-clicking into Act 4, particularly in light of the wins, like the media company signing that $3.1 million contract for AI Crawl Control. So as you’re engaging with publishers, can you share early feedback around adoption towards this opt out controls to block scraping, but also the evolution of a structured marketplace model here.
[Answer] We’ve been sort of that neutral honest broker between the 2 sides that can come together and say, okay, like in order for this to all work, the Internet needs to have a business model, like people who create content deserve to get paid. And one of the things that actually surprised me to some extent, which might be relevant to a lot of you listening in, is we’ve actually been getting called not just from like the Associated Press and BBC and New York Times, but we’ve been getting calls increasingly from banks where their research teams are saying, we’re actually seeing fewer people subscribe to and read our research because people are just turning to the AI companies, they’re slurping all the data down and taking that intellectual property. Again, I think journalists deserve to get paid, but so do research analysts…
…The reaction from the content creator side has been just overwhelmingly positive. And we come back to something pretty simple, which is just if you create content, it should be up to you who gets access to it and who doesn’t, and we can provide the tools to do that. On the AI company side, they also — again, nobody wants to pay for something that they were getting for free. But I think that they understand that we’re a fair broker. And when we walk them through what happens if we don’t create some healthy ecosystem here, they say, we get it. We just want to make sure that everyone is treated fairly…
…Microsoft and Amazon have announced content marketplaces. And they may be successful, but what we’re hearing from both the AI companies and from the content creators is that because Cloudflare is that trusted neutral third-party that we can be that honest broker between them, that they would rather us be the one that figures out what that future business model looks like as opposed to one of the hyperscalers, which is out there creating their own foundational model themselves and might have very different incentives. So I think 2026 will be the time that we start really talking about what this future business model looks like and how that is going to impact us financially.
Datadog (NASDAQ: DDOG)
Datadog’s management sees a positive demand environment, driven by cloud migration; management is seeing strong growth from both non-AI native companies and AI-native companies; in particular, the AI-native companies have very high growth and are going into production
We continue to see broad-based positive trends in the demand environment. With the ongoing momentum of cloud migration, we experienced strength across our business, across our product lines and across our diverse customer base. We saw a continued acceleration of our revenue growth. This acceleration was driven in large part by the inflection of our broad-based business outside of the AI-native group of customers we discussed in the past. And we also continue to see very high growth within this AI-native customer group as they go into production and grow in users, tokens and new products.
Datadog’s management sees the company’s AI initiatives as being split into 2 buckets; one bucket is AI for Datadog, where management is building AI products to make Datadog better for customers; in AI for Datadog, management made Bits AI SRE (site reliability engineering) Agent, which does root cause analysis, generally available in December 2025 and it had 2,000 trial and paying customers in January 2026; Datadog has other AI products, such as Bits AI Dev agent, Bits AI Security Agent, and the Datadog MCP (Model Context Protocol) server; Datadog MCP server saw an 11-fold increase in tool calls in 2025 Q4 compared to 2025 Q3; the other bucket is Datadog for AI, where management is building capabilities for end-to-end observability across the entire AI stack; management is seeing an acceleration in growth for the LLM (large language models) Observability product; LLM Observability has 1,000 customers and the number of LLM spans customers are sending to Datadog is up 10x over 6 months; management will soon release AI Agent Console to monitor AI agents; management is working on GPU monitoring; management is seeing Datadog’s overall customer base increase its usage of GPUs; management is improving the ability of Datadog’s products to secure the AI stack against attacks; management continues to see customer interest grow for next-gen AI observability; 5,500 customers are sending AI data to one or more of Datadog’s AI integrations (was 5,000 in 2025 Q3); management recently launched Feature Flags, which could be the foundation for automatically validating applications written by AI agents; management thinks that observability products for LLMs are currently undifferentiated but will be differentiated in the future; management thinks observability tools for LLMs should be the same as for the rest of an organisation’s systems because LLMs do not work in isolation
We are executing relentlessly on our very ambitious AI road map, and I will split our AI efforts into 2 buckets: AI for Datadog and Datadog for AI.
So first, let’s look at AI for Datadog. These are AI products and capabilities that make the Datadog platform better and more useful for customers. We launched Bits AI SRE Agent for general availability in December to accelerate root cause analysis and incident response. Over 2,000 trial and paying customers have run investigations in the past month, which indicates significant interest and shows great outcomes with Bits AI SRE. And we’re well on our way with Bits AI Dev agent, which detects code level issues, generates fixes in production context and can even help release and monitor a fix. And Bits AI Security Agent, which autonomously triages SIEM signals, conducts investigations and delivers recommendations. The Datadog MCP server is being used by thousands of customers in preview. Our MCP server responds to AI agent and user prompts and uses real-time production data and rich Datadog context to drive troubleshooting, root cause analysis and automation. And we’re seeing explosive growth in MCP usage, with the number of tool calls growing 11-fold in Q4 compared to Q3.
Second, let’s talk about Datadog for AI. This includes capabilities that deliver end-to-end observability and security across the AI stack. We are seeing an acceleration in growth for LLM Observability. Over 1,000 customers are using the product and the number of spans sent has increased 10x over the last 6 months. In 2025, we broadened the product to better support application development and iteration, adding capabilities such as LLM Experiments, LLM Playground, LLM Prompt Analysis and custom LLM-as-a-judge. And we will soon release our AI Agent Console to monitor usage and adoption of AI agents and coding assistants. We are working with design partners on GPU monitoring, and we are seeing GPU usage increase in our customer base overall. And we are building into our products the ability to secure the AI stack against prompt injection attacks, model hijacking and data poisoning among many other risks…
…We continue to see increased interest among our customers in next-gen AI. Today, about 5,500 customers use one or more Datadog AI integrations to send us data about their machine learning, AI and LLM usage…
…In software delivery, in January, we launched Feature Flags. They combine with our real-time observability to enable canary rollouts, so teams can deploy new code with confidence. And we expect them to gain importance in the future as they serve as a foundation for automating the validation and release of applications in an AI agentic development world…
…We mentioned our LLM Observability product. There are a few other products in the market for that. I think it’s still very early for that part of the market, and that market is still relatively undifferentiated in terms of the kinds of products they are, but we expect that to shake out more into the future. We think, in the end, there’s no reason to have observability for your LLM that is different from the rest of your system, in great part because your LLMs don’t work in isolation. The way they implement their smarts is by using tools: the tools in your applications, your existing applications, or new applications you build for that purpose. And so you need everything to be integrated in production, and we think we stand on a very strong footing there.
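On the Feature Flags point a couple of excerpts up, here is a rough sketch of the general idea of a flag-gated canary that rolls itself back when a live error-rate signal regresses. The class, thresholds, and simulated traffic below are invented for illustration; this is not Datadog’s Feature Flags API:

```python
# Toy sketch: a feature flag routes a slice of traffic to new code and
# disables itself if the canary's observed error rate breaches a budget.
import random
from dataclasses import dataclass

@dataclass
class FeatureFlag:
    name: str
    canary_share: float = 0.2     # fraction of requests sent to the new path
    error_budget: float = 0.02    # max tolerated error rate on the new path
    enabled: bool = True
    _requests: int = 0
    _errors: int = 0

    def route_to_canary(self) -> bool:
        """Decide per request whether to serve the new code path."""
        return self.enabled and random.random() < self.canary_share

    def record(self, success: bool) -> None:
        """Feed back an observability signal (success/failure) for the canary."""
        self._requests += 1
        self._errors += 0 if success else 1
        # Automated validation: roll back once enough evidence has accumulated.
        if self._requests >= 100 and self._errors / self._requests > self.error_budget:
            self.enabled = False

flag = FeatureFlag("new-checkout-flow")
for i in range(1_000):
    if flag.route_to_canary():
        flag.record(success=(i % 10 != 0))   # pretend the new path fails ~10% of the time
print("canary still enabled?", flag.enabled)  # very likely False: the flag rolled itself back
```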
Example of an 8-figure land deal with a high-profile AI foundation model builder (most likely Anthropic); the model builder’s observability stack was fragmented; the model builder will consolidate more than 5 observability tools into Datadog; the model builder wants to focus on building its own products; this model builder is the 2nd high-profile model builder that Datadog has as a customer (with the other being OpenAI); every customer of Datadog is also using some in-house or open-source observability tools and the same goes for the AI companies; management is seeing AI model builders having the same reasons as non-AI companies for adopting Datadog, which is that Datadog is able to prove its value very quickly
We landed an 8-figure annualized deal and our biggest new logo deal to date with one of the largest AI foundational model companies. This customer has a fragmented observability stack and cumbersome monitoring workflows leading to poor productivity. This is a consolidation of more than 5 open source, commercial, hyperscaler and in-house observability tools into the unified Datadog platform that has returned meaningful time to developers and has enabled a more cohesive approach to observability. This customer is experiencing very rapid growth. Datadog allows them to focus on product development and supporting their users, which is critical to their business success…
…[Question] It’s now the second one after the other very big model provider. So clearly, that whole debate in the market of, oh, you can do that on the cheap somewhere, is not quite valid. Could you speak to that, please?
[Answer] Every customer we land has had some homegrown tooling. They have some open source. They might still run some open source; that’s typically what we see everywhere. The “it’s cheaper to do it yourself” argument is usually not the case. Your engineers typically are very well compensated and are a big part of the spend in these companies. Their velocity is what gates just about anything else in the business. And so usually, when we come in, when customers start engaging with us, we can very quickly show value that way. So it’s not any different from what we see with any other customer. And within the AI cohort, it’s not unusual at all — the AI cohort in general is a who’s who of the companies that are growing very fast and that are shaping the world in AI, and they’re all adopting our product for the same reasons, sometimes in different volumes because those companies have different scales, but the logic is the same.
Datadog’s management continues to believe that digital transformation, cloud migration, and AI adoption are long-term growth drivers of Datadog’s business; management thinks that agentic coding is beneficial for Datadog because it leads to more coding volume to observe, and the need for observability in areas where it was not necessary before; Datadog’s management thinks it’s very hard to tell what level of model inferencing will happen because of the gargantuan amount of capex from the hyperscalers, but they think it’s likely to lead to more complexity in the technology ecosystem, which will benefit Datadog’s business
There is no change to our overall view that digital transformation and cloud migration are long-term secular growth drivers for our business. So we continue to extend our platform to solve our customers’ problems from end to end across their software development, production, data stack, user experience and security needs. Meanwhile, we’re moving fast in AI, by integrating AI into the Datadog platform to improve customer value and outcome and by building products to observe, secure and act across our customers’ AI stack…
…[Question] In the context of a lot of advancements when it comes to agentic frameworks, agentic deployments, the stuff that we’ve seen from Anthropic and new frontier models from OpenAI, just in terms of like what this means for observability as a category, defensibility of it in terms of can customers use these tools to build homegrown solutions for observability?
[Answer] There’s a few different ways to look at it. One is there’s going to be many more applications than there were before. Like people are building much more and they are building much faster. We covered that in previous calls, but we think that this is nothing but an acceleration of the increase of productivity for developers in general, so you can build a lot faster. As a result, you create a lot more complexity because you build more than you can understand at any point in time. And you move a lot of the value from the act of writing the code, which now you actually don’t do yourself anymore, to validating, testing, making sure it works in production, making sure it’s safe, making sure it interacts well with the rest of the world, with end users, make sure it does what it’s supposed to do for the business, which is what we do with observability. So we see a lot more volume there, and we see that as what we do basically where observability can help. The other part that’s interesting is that a lot more happens within these agents and these applications. And a lot of what we do as humans now starts to look like observability. Basically, we’re trying to understand what the machine does. We’re trying to make sure it’s aligned with us. We’re trying to make sure the output is what we expected when we started, and that we didn’t break anything. And so we think it’s going to bring observability more widely in domains that it didn’t necessarily cover before…
…[Question] I’m wondering if you’ve collected enough signal from the last couple of years of CapEx, that trend to estimate how much of that is training related and when it might convert to inferencing where Datadog might be required? In other words, are you looking at this wave of CapEx and able to say it’s going to create a predictable ramp in your LLM observability revenue?
[Answer] I think it’s too reductive to peg that on LLM observability. I think it points to way more applications, way more intelligence, way more of everything into the future. Now it’s kind of hard to directly map the CapEx from those companies into what part of the infrastructure is actually going to be used to deliver value 2 or 3 or 4 years from now. So I think we’ll have to see what the conversion rate is on that. But look, it definitely points to very, very, very large increases in the complexity of the systems, the number of systems and the reach of the systems in the economy. And so we think it’s going to be — like it’s going to be of great help to our business, let’s put it this way.
Datadog experienced adoption growth in AI native customers in 2025 Q4 that significantly outpaced non-AI customers; Datadog now has more than 650 AI native companies (was 500 in 2025 Q3), of which 19 are spending more than $1 million (was 15 in 2025 Q3); 14 of the top 20 AI-native companies globally are Datadog customers; management chose not to share the percentage of revenue coming from AI native customers in 2025 Q4 (was 12% in 2025 Q3); the AI native companies are not dilutive for Datadog’s gross margin; the large AI native customers get the same kind of volume discount as the large non-AI customers
We are seeing continued strong adoption amongst AI-native customers with growth that significantly outpaces the rest of the business. We see more AI-native customers using Datadog with about 650 customers in this group. And we are seeing these customers grow with us, including 19 customers spending $1 million or more annually with Datadog. Among our AI customers are the largest companies in this space, as today 14 of the top 20 AI-native companies are Datadog customers…
…[Question] Can you give us the percent of revenue of the AI cohort this quarter?
[Answer] We didn’t — have not put it in there…
…[Question] On margin, are the large AI-native customers significantly dilutive to gross margin?
[Answer] On a weighted average, they’re not. As we’ve always said, for larger customers, it isn’t about the AI-natives or non-AI-natives, it has to do with the size of the customer. We have a highly differentiated — diversified customer base. So I would say we’re essentially expecting a similar type of discount structure in terms of size of customer as we have going forward. And there are consistent ongoing investments in our gross margin, including data centers and development of the platform. So I think it’s more or less what we’ve seen over the past couple of years, not really affected by AI or non-AI native.
Datadog’s management’s basis for guidance is to have conservative assumptions on usage growth trends observed in recent months; in setting guidance, management made the conservative assumption that Datadog’s core business is growing faster than the business from its large AI customer (OpenAI)
Our guidance philosophy overall remains unchanged. As a reminder, we based our guidance on trends observed in recent months and apply conservatism on these growth trends…
…We noted that with the guidance being 18% to 20% and the non-AI or heavily diversified business being 20% plus, that would imply that the growth rate of that core business assumed in the guidance is higher than the growth rate of the large customer. It doesn’t mean the large customer is growing any which way. It’s just that in our consumption model, we essentially don’t control that. And so we took a very conservative assumption there.
Datadog’s management thinks that as agentic developers proliferate, there will be a lot more automation in observability workflows, but there will still be a need for UIs (user interfaces) for human developers to interact; to prepare for the rise in automation in observability workflows, management is exposing a lot of Datadog’s functionality directly to agents; management thinks it’s likely that Datadog’s MCP (Model Context Protocol) server will be part of how agents interact with Datadog’s products
[Question] In a world where there’s a greater mix between human SREs and agentic SREs, is there any sort of evolution that we need to think about in terms of whether it’s UI or how workflows work in observability and how maybe Datadog sort of tries to align with that evolution that’s likely to come in the next couple of years?
[Answer] There’s going to be an evolution, that’s certain. There’s going to be a lot more automation. We see it today, like we see the — all the signs we see point to everything moving faster, more data and more interactions, more systems, more releases, more breakage, more resolutions of those breakages, more bugs, more vulnerabilities, everything. So we see an acceleration there. At the end of the day, the humans will still have some form of UI to interact with all that. And a lot of the interaction will be automated by agent. So we’re building the products to satisfy both conditions. So we have a lot of UIs, and we are able to present the humans with UIs that represent how the world works, what their options are, give them familiar ways to go through problems and to model the world. And we also are exposing a lot of our functionality to agents directly. We mentioned on the call, we have an MCP server that is currently in preview and that is really seeing explosive growth of usage from our customers. And so it’s a very likely future that part of our functionality is delivered to agents through MCP servers or the likes. Part of our functionality is directly implemented by our own agents, and part of our functionality is delivered to humans with UIs.
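For a sense of what “exposing functionality to agents directly” through an MCP server can look like in practice, here is a minimal sketch using the open-source MCP Python SDK (`pip install mcp`). The tool name and its canned data are made up; this is only the shape of the idea, not Datadog’s actual MCP server:

```python
# Minimal MCP server sketch: expose one observability-style tool that an
# agent can call over stdio. Tool and data are hypothetical.
import json
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("toy-observability")

FAKE_LOGS = [
    {"service": "checkout", "level": "error", "message": "payment timeout"},
    {"service": "search",   "level": "error", "message": "index shard unavailable"},
]

@mcp.tool()
def recent_error_logs(service: str) -> str:
    """Return recent error logs for a service so an agent can troubleshoot it."""
    matches = [log for log in FAKE_LOGS if log["service"] == service]
    return json.dumps(matches)

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an agent can call it
```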
Datadog’s management thinks that LLMs (large language models) are getting better all the time; management sees 2 parts to Datadog’s defensibility against LLMs; the 1st part is Datadog understands how all the data fits together; the 2nd part is Datadog has the foundation to provide proactive, real-time anomaly detection and solutions as Datadog is embedded in an organisation’s data plane; management thinks that the world of observability is shifting towards one where it’s important for observability providers to provide proactive, real-time anomaly detection and solutions; management is developing Datadog’s ability to provide proactive, real-time anomaly detection and solutions; the data planes in a typical organisation Datadog works with are real time and many orders of magnitude larger in volume than what an LLM typically sees; management is not seeing any change in the intensity of competition for Datadog’s business from LLMs; management thinks it’s only rational for all AI native customers to use Datadog’s products
We definitely see that LLMs are getting better and better, and we’ll bet on them getting significantly better every few months as we’ve seen over the past couple of years. And as a result, they are very, very good at looking at broad sets of data. So if you feed a lot of data to an LLM and ask for an analysis, you’re very likely to get something that is very good and that is going to get even better.
So when you think of what we have that is fundamentally our moat here, there’s 2 parts. One is how we are able to assemble that context, so we can feed it into those intelligence engines. And that’s how we aggregate all the data we get, we parse out the benefits. We understand how everything fits together and we can feed that into the LLM. That’s in part what we do; for example, today, we expose these kinds of functionality behind our MCP server. And so customers can recombine that in different ways using different intelligence tools.
But the other part that we think where the world is going for observability is that right now, we are — the SDLC [software development life cycle] is accelerating a lot, but it’s still somewhat slow. And so it’s okay to have incidents and run post-hoc analysis on those incidents and maybe use some outside tooling for them. Where the world is going is you’re going to have many more changes, many more things. You cannot actually afford to have incidents to look at for everything that’s happening in your system. So you need to be proactive. You’ll need to run analysis in stream as all the data flows through, you’ll need to run detection and resolution before you actually have outages materialize. And for that, you’ll need to be embedded into the data plane, which is what we run. And you also need to be able to run specialized models that can act on that data as opposed to just taking everything and summarizing everything after the [ fact ] 10, 15 minutes later. And that’s what we’re uniquely positioned to do.
We are building that. We’re not quite there yet, but we think that a few years from now, that’s what the world is going to run, and that’s what makes us significantly different in terms of how we can apply anomaly detection, intelligence and preemptive resolution into our systems…
…The data planes we’re talking about are very real time, and they are many orders of magnitude larger in terms of data flows, data volumes than what you typically feed into an LLM. So it’s a bit of a different problem to solve…
…[Question] I wanted to ask you about competition and how the LLM rise is impacting share shifts. Just talk about that and how Datadog will be impacted?
[Answer] There hasn’t been any particular change in competition in that we see the same kind of folks and the positions are relatively similar. And we are pulling away. We’re taking share from anybody who has scale. And I know there’s been noise. There were a couple of M&A deals that came up, and we got some questions about that. The companies in there were not particularly winning companies, nothing that we saw in deals, nothing that had a large market impact. And so we don’t see that as changing the competitive dynamics for us in the near future…
…At the end of the day, it should be irrational for customers — for all customers in the AI cohort not to use our product…
…I think as you look at being in-stream looking at 3, 4, 5 orders of magnitude, more data, looking at the data in real time, and passing judgment in real time on what’s normal, what’s anomalous and what might be going wrong doing that hundreds, thousands, millions of times per second. I think that’s what is going to be our advantage and where it’s going to be much harder for others to compete, especially general purpose AI platforms.
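To illustrate the in-stream idea from the excerpts above, namely passing judgment on data as it flows rather than analysing an incident after the fact, here is a generic rolling z-score detector that flags a point the moment it arrives. It is a textbook technique shown purely for illustration, not Datadog’s detection algorithm:

```python
# Toy in-stream anomaly detector: keeps running statistics per metric and
# flags each point as it arrives, instead of post-hoc incident analysis.
import math
from collections import deque

class StreamingDetector:
    def __init__(self, window: int = 300, threshold: float = 4.0):
        self.window = deque(maxlen=window)   # last N observations of the metric
        self.threshold = threshold           # how many std-devs counts as anomalous

    def observe(self, value: float) -> bool:
        """Return True if this point looks anomalous versus recent history."""
        anomalous = False
        if len(self.window) >= 30:           # need some history before judging
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var) or 1e-9
            anomalous = abs(value - mean) / std > self.threshold
        self.window.append(value)
        return anomalous

detector = StreamingDetector()
latencies = [100 + i % 5 for i in range(200)] + [900]   # a sudden latency spike
flags = [detector.observe(x) for x in latencies]
print("spike flagged in-stream:", flags[-1])             # True, caught on arrival
```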
Datadog’s management thinks the best way to justify the existence of Datadog in an environment where observability bills are going up because of AI usage is to prove the cost savings to customers
[Question] Tell us a little bit about how some of those conversations evolve when the customer sees that in order to do observability for more AI usage, perhaps that Datadog bill is going up.
[Answer] There are only 2 reasons people buy your product: to make more money or to save money. So whatever you do, when customers use a new product, they need to see a cost savings somewhere or they need to see that they’re going to get to customers they wouldn’t get to otherwise. So we have to prove that. We always prove that. Any time a customer buys a product, that’s what is happening behind the scenes. In general, when customers add to our platform as opposed to bringing another vendor in or another product in, they also spend less by doing it on our platform.
Datadog’s management is seeing great productivity gains when employing AI internally
In terms of AI, to date, we are using it in our internal operations. So far, the first signs of what we’re seeing are gains in productivity and adoption…
…We see great productivity gains with AI there; at this point, it helps us build more, faster, and solve more problems for our customers. And we’re very busy adopting AI across the organization.
Paycom Software (NYSE: PAYC)
IWant allows anyone to become an expert in the system without training; Forrester found that organizations with more than 500 employees that use IWant experienced an ROI of over 400%; with IWant, managers save up to 600 hours, executives up to 60 hours, HR teams up to 240 hours, and employees 3,600 hours, on an annual basis; the leaders of organisations using IWant get immediate value out of the product without any training; IWant usage is up 80% in January 2026 from 2025 Q4; IWant’s functionality is continuously being improved
Our most advanced AI solution, IWant, is designed to accelerate the speed to value by allowing anyone to become an expert in the system without any training. Forrester’s recent analysis of a composite organization with more than 500 employees found that organizations using IWant experienced an ROI of over 400%, driven by productivity gains at every level. Managers save as many as 600 hours per year, executives up to 60 hours, HR teams up to 240 hours and employees across the organization collectively reclaim 3,600 hours annually.
Leaders describe IWant as a catalyst for deeper insight and one CEO remarked, I get immediate value. Without any training or knowledge of Paycom, I can go in and immediately understand more about my business…
…IWant usage is up 80% in January alone, and that’s from the fourth quarter…
…We continue to build out the IWant system. We continue to add more and more functionality to it. It continues to get stronger and stronger.
Paycom’s management thinks that AI is not a threat to Paycom; management thinks AI will give Paycom the opportunity to enter adjacent industries that it was not able to in the past
I think there’s a little misjudgment about the AI thesis materializing as a threat weapon that will be used against us. I mean AI is our friend at Paycom. And I’ve worked very hard to ensure that the misunderstanding of AI’s impact on us isn’t on our end.
And I just believe as you look into the future, we have opportunities now that we didn’t have in the past, right? Like the speed of development has increased, the pace of the user buyer being able to digest it might lag a little bit, but we can develop a lot more today than what we’ve been able to in the past. We’re in this age of software development and in some instances, replacement of specific software. Paycom can get into every adjacent industry now within weeks or months. And I’ll remind everybody that I was the first Bob [ coater ] back in 1998. So there are several easy-to-displace industries that don’t just sit ancillary to our industry, but they’re dependent upon our industry of where the data starts. And so now that we can develop anything very quickly and use all these technologies to replace other industries in a matter of weeks or months, we’re excited about how that — what that looks like for our future as well.
Paycom’s management is currently not seeing any impact on overall employment from AI, but is not dismissing impacts in the future; management thinks that Paycom still has ample growth opportunities even if AI does lead to lower overall employment
[Question] The AI impact to overall employment. How do you see that impacting Paycom business?
[Answer] I’d say we’re not seeing it. I’m not going to dismiss potential impacts for us in the future. I would say that we are not overexposed to any one industry, any one client, client size. And again, we only have 5% of the market. And so you could do some calculations and we’re the most automated product in the industry and the best product for the best value that someone is going to achieve throughout the industry. And so when you look at that, I think that you could see some adjustments in employment, which again, we have not seen. But I mean, even if you did, I still think our opportunity is intact.
Shopify (NASDAQ: SHOP)
Shopify has been building for AI shopping for some time; orders coming to Shopify stores from AI search have increased 15x since January 2025, albeit from a small base; management thinks AI shopping helps surface smaller merchants to the right buyers who might otherwise never have discovered them; management thinks AI shopping benefits consumers because they gain access to a personal shopper; management thinks AI shopping will increase e-commerce penetration faster than it would have otherwise; management thinks it’s important that AI shopping is at least as good as shopping at a merchant’s digital storefront; Shopify has introduced Shopify Agentic Storefronts, which lets all major AI platforms access billions of products from Shopify merchants accurately and in an up-to-date way; AI platforms are plugged into the best commerce source of truth with Shopify, and this translates to better experiences for consumers; through the Agentic plan, brands not already using Shopify will soon be able to sell through the same AI platforms as Shopify merchants; Shopify built Universal Commerce Protocol (UCP) with Google as the common rails to support agentic commerce; UCP is payments agnostic and keeps merchants’ essential checkout logic intact; UCP is the only protocol that covers the full commerce journey end-to-end; leading retailers are already using UCP; agentic commerce does not bypass Shopify’s checkout; management has no opinion on which LLM platform will be the dominant one for agentic commerce and they just want to allow merchants to sell through agentic commerce; management sees merchants’ economics remaining the same between agentic commerce and selling directly from their stores
We’ve been building for this new era of AI shopping for a long time, and it’s now here. In fact, since January 2025, orders coming to Shopify stores from AI search are up 15x. Now that’s on a small base, but that’s still a really big jump in 12 months. For our merchants, it matters because it powers the long tail of commerce, servicing smaller merchants to the right buyers who might otherwise have never discovered them. This is merit-based discovery at scale. For buyers, it matters because it’s like having a personal shopper in your pocket, someone who really understands them, their taste, their preference, their size…
…For Shopify, it matters because we believe it can bend the curve of e-commerce penetration by stripping out friction, pulling late adopters in and moving more everyday purchases online…
…It is critical that shopping in an AI conversation is at least as good as shopping at the merchant’s online store…
…Shopify Agentic Storefronts syndicates billions of products through our catalog to all major AI platforms, Google AI Mode and Gemini, ChatGPT, Microsoft Copilot, one click and our merchants get instant access to millions of potential buyers who are actively looking for their products. We’ve already seen huge brands like Vuori, Glossier, Steve Madden and SPANX sign up and start selling. Plus through the catalog, our partners get the most accurate up-to-date data for billions of products for millions of the best brands on the planet. And this is really important because when they tap into our catalog, they’re not just ingesting another feed, they’re plugging into the best commerce source of truth. And that source of truth means cleaner matching and fresher data, which translates directly into faster and more trustworthy experiences.
The new Agentic plan means that any brand not already using Shopify will soon be able to sell through the same AI platforms as our merchants as well as on the Shop app. Why? Because frankly, when commerce flows freely across agents, everybody wins…
…We built the Universal Commerce Protocol or UCP. UCP is infrastructure. It’s not a product. It’s the common rails Agentic commerce runs on. Shopify co-developed this with Google because we know commerce better than anyone. It’s an open standard for any agent to connect with any brand on the Internet. UCP is built to flex to the many ways commerce happens. It’s payment agnostic by design. It keeps the merchants essential checkout logic intact without forcing them to rebuild their customizations over and over again to fit our system. UCP is the only protocol that covers the full commerce journey end-to-end from search to cart, then checkout to post order, and it’s already being used by the world’s leading retailers…
…LLMs do not bypass Shopify’s Checkout. Checkout is really 2 parts. Think of it this way. You have a front end, which is the user interface that buyers interact with, and the back end, which processes everything server to server. So if you think about a Shopify store today, Shopify runs both the front end and the back end. And under UCP, Shopify still powers the overall experience, but the merchant gets to keep their own checkout system on the back end. Now with something like ChatGPT, for example, OpenAI will run the front end, which is sort of the screens and the forms that the buyer uses. But Shopify still runs the back end. And so things like order processing and payments through Shopify Payments, that all runs through Shopify’s infrastructure…
…We want to make sure that whatever surface, whatever permutation is the one that actually becomes the mainstay in Agentic, that it reflects exactly the experience that the merchants want, similar to what they have in the online store as well. And so the economics for Shopify merchants are the same as if the transaction happened in the online store. There should be no difference there.
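To make the front end/back end split described above concrete, here is a toy sketch of an agent surface that only assembles the buyer’s intent, while the merchant-side back end keeps its own checkout logic and payment processing. The message shapes, names, and prices are invented; this is not the actual UCP schema, only a picture of where the responsibilities sit:

```python
# Toy illustration of the agentic checkout split: the AI surface gathers
# intent; the merchant-side back end runs checkout logic server-to-server.
from dataclasses import dataclass

@dataclass
class OrderRequest:            # what an agent surface (e.g. a chat UI) assembles
    sku: str
    quantity: int
    buyer_email: str

@dataclass
class OrderResult:             # what the merchant-side back end returns
    order_id: str
    total_cents: int
    status: str

class MerchantBackEnd:
    """Stands in for the merchant's own checkout logic (discounts, subscription
    and delivery rules) plus payment processing, which stay server-to-server."""
    PRICES = {"SKU-123": 4_500}

    def checkout(self, req: OrderRequest) -> OrderResult:
        subtotal = self.PRICES[req.sku] * req.quantity
        if req.quantity >= 3:                 # a merchant-specific bundle rule
            subtotal = int(subtotal * 0.9)
        # payment capture, tax, and fulfilment would happen here
        return OrderResult(order_id="ord_001", total_cents=subtotal, status="confirmed")

def agent_front_end(back_end: MerchantBackEnd) -> OrderResult:
    """The AI surface only gathers intent and renders the result to the buyer."""
    req = OrderRequest(sku="SKU-123", quantity=3, buyer_email="buyer@example.com")
    return back_end.checkout(req)

print(agent_front_end(MerchantBackEnd()))
```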
Shopify’s on-platform AI assistant, Sidekick, proactively helps merchants prioritise and execute tasks; Sidekick’s usefulness is enhanced because Shopify powers a merchant’s store, checkout data, and apps; in the 3 weeks since Sidekick’s latest edition was released, it has generated almost 4,000 custom apps, created over 29,000 automations, built almost 355,000 task lists, and edited over 1.2 million photos; Sidekick Pulse is a new feature in Sidekick that surfaces tailored advice for merchants; Sidekick Pulse recently recommended a Shopify jewelry merchant to bundle 4 products because the Sidekick Pulse knew the 4 products were best sellers and bundles tend to convert better
Our on-platform AI assistant, Sidekick, has come a long way in a year. Sidekick is effectively a co-founder for our merchants. It uses everything it knows about your business, and it proactively tells you which tasks to prioritize. And it will even help you execute those tasks. Because Shopify powers the store, checkout data and apps, Sidekick can see the entire picture and do the work in one place…
…In just 3 weeks after our latest edition drop, Sidekick generated almost 4,000 custom apps, created over 29,000 automations with Shopify Flow, built almost 355,000 task lists and edited over 1.2 million photos. So it’s clear that Sidekick is doing real heavy lifting for our merchants…
…Sidekick Pulse is our new feature that proactively helps merchants grow their business. It works in the background to surface tailored advice that’s grounded in each merchant’s business, powered by over 2 decades of data…
…Last week, Sidekick Pulse made a recommendation to one of our jewelry brands. It suggested bundling 4 separate products and selling them together as a stack. Why? Because it knew that those 4 products were already best sellers, and it also knew that bundles tend to convert better and drive up cart value. Personalized data analysis paired with intelligence gained from hundreds of millions of other transactions. This is where our AI assistant really becomes the AI co-founder. It’s bespoke, it’s intuitive.
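As a rough illustration of the kind of heuristic described above, and not Sidekick Pulse’s actual model, a bundle suggestion can be as simple as surfacing a merchant’s best sellers as one stack, on the premise that bundles of proven sellers tend to convert better:

```python
# Toy bundle-suggestion heuristic: pick the top sellers as a candidate stack.
# Product names and sales figures are invented for illustration.
def suggest_bundle(unit_sales: dict[str, int], bundle_size: int = 4) -> list[str]:
    """Return the top-selling products as a candidate bundle."""
    ranked = sorted(unit_sales, key=unit_sales.get, reverse=True)
    return ranked[:bundle_size]

sales = {"gold ring": 820, "silver chain": 640, "pendant": 590,
         "stud earrings": 555, "charm bracelet": 120}
print("suggested stack:", suggest_bundle(sales))   # the 4 best sellers, sold together
```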
Shopify’s new app SimGym simulates real buyer behavior to provide feedback on store changes before they are shipped
Our new app SimGym simulates real buyer behavior to give you feedback on changes to your store before you even ship them.
0.5 million merchants have used AI within Shopify’s online store editor to create 6.5 million custom elements; Shopify’s online store editor allows anyone to design without code
Within our online store editor, more than 0.5 million merchants have used AI to create 6.5 million custom elements. Now anyone can design without code. This is really Shopify at its best. Massive complexity transformed into a tool for anyone with imagination, no technical skills required.
Shopify’s management believes AI advances will make Shopify even more essential for merchants
As AI advances, Shopify becomes even more essential. AI transforms interfaces and accelerates the pace of change, but it doesn’t alter the underlying architecture of commerce. Commerce will always require speed, reliability and trust at a global scale. When I say scale, consider the billions of transactions that we facilitate. But it’s not just about the volume. It’s the comprehensive commerce experience we support. When an AI agent surfaces a product in any interface, merchants still need a reliable, secure and compliant path to purchase and post purchase. They still need our ecosystem of buyers, developers and partners. We help merchants be everything everywhere all at once, representing over 14% of U.S. e-commerce today and rapidly growing percentages in many geographies across the globe, we have an unparalleled view of commerce. Simply, we are the experts at commerce. AI will be a force multiplier. It will help us achieve our goals of democratizing entrepreneurship, inspiring more merchants, driving more transactions and creating more commerce channels.
Shopify was able to accelerate product development in 2025 without growing the size of the team because of the use of AI
Throughout 2025, we achieved operating leverage in each of R&D, sales and marketing and G&A, largely due to disciplined headcount management. By leveraging AI, automation and our proprietary project management and talent management systems, we’ve been able to accelerate our product development capabilities without growing the size of the team.
Shopify’s management sees Agentic Plan as an on-ramp for non-Shopify merchants to enter the Shopify ecosystem, similar to how Commerce Components works
The Agentic plan opens our infrastructure to all brands. And I think this idea that we’re bringing Agentic Commerce to every brand, whether or not they’re on Shopify, we think will be — I mean, it certainly has already been an incredible way for us to start conversations with brands who might not be ready to migrate or have not anticipated a full forklift migration just yet, but they don’t want to miss out on this incredible opportunity that might be this Agentic Commerce. And so in a similar vein to how we started — we created Commerce Components a couple of years ago where non-Shopify merchants can use things like Shop Pay or they can simply use Shopify Checkout as a component. That allowed us to start conversations with brands that we weren’t otherwise talking to. In some cases, some of those brands who came to us initially just for Shop Pay are now entirely on Shopify. So certainly, we think this could be an incredible on-ramp just like the Commerce Components play was.
The Catalog is important for Shopify’s agentic commerce ambitions because it is a source of truth for agents, and agents do not have to rely on scraping information from the internet
Tobi said something incredibly important recently about Catalog. He said that everyone else has to scrape the Internet; we actually have the source of it. We have structured billions of products so agents can surface the most relevant items in seconds, which means products are going to be surfaced based on relevance, and this sort of merit-based discovery is going to happen. I think that every retailer and every merchant on the planet is thinking about how they can get in front of as many buyers and consumers on Agentic. If they continue down that path and do the math, more and more, they realize that Shopify is the company that is front and center.
Shopify’s management appears to see UCP (Universal Commerce Protocol) as being the significantly more important rails for agentic commerce compared to OpenAI’s ACP (Agentic Commerce Protocol)
[Question] Can you help us understand the UCP versus ACP, the other standard that OpenAI and Stripe are putting forward. Are these overlapping standards? Do they compete? Are they complementary in any way?
[Answer] Yes. Look, the goal is simple with UCP. It’s one common language for agents and retailers. The idea is that merchants can keep the brand, the attributions; buyers get these incredibly trustworthy experiences; and Agentic Commerce can scale. UCP is specifically geared towards being a protocol that covers the full commerce journey end-to-end from search to cart, then checkout. It includes post order. It keeps the merchants’ essential checkout logic intact.
It doesn’t force them to rebuild customizations over and over again. It’s payment agnostic by design. It’s built to flex in many ways. I mentioned a couple of examples in my prepared remarks. I mean you think about ButcherBox or you think of AG1, for example, those — that subscription logic is really complex because sometimes you want to skip a month, sometimes you want to double up. If you’re on vacation, you want to do a hold or some of the larger furniture companies on Shopify that do this incredible white glove delivery where you can set the exact time and date for your couch being delivered.
These things need to be ported over into the Agentic world, and UCP does that. So in our view, UCP covers the full commerce journey end-to-end. And we think — we have 20 years of doing this. Commerce is very complex. It is easy to get it wrong. And I think that it’s more than just a transaction. It’s an entire experience and UCP covers all of that. And we’re really proud of what we did with our friends at Google. It was an incredible experience to work on it with them, but it works, and we think we’re already seeing incredible adoption from some of the largest retailers on the planet.
Shopify’s management is not seeing a competitive threat develop in terms of companies choosing to replace or bypass Shopify’s solutions with vibe-coded tools
[Question] About the feedback from merchants having discussions at the Board level about moving to Shop. Specifically, AI, the feedback that you’re getting from companies in terms of the AI road map, is that — I imagine it’s influencing decisions. Are you also seeing merchants evaluate custom solutions in light of what they can do with AI tools?
[Answer] I think a lot of the largest retailers, certainly the ones I’m meeting with, I mentioned brands like General Motors or L’Oreal or SuitSupply or Amer Sports, who runs Wilson and Salomon. What we hear from them is they’re looking — if they’re not on Shopify already, usually, they come to us with a particular problem. In some cases, it’s — we want to make sure we don’t miss out on Agentic. In other cases, they’re coming to us because they want to replace their homegrown system that they built many years ago for e-commerce. They don’t want to have 400 engineers anymore. They want to effectively come to Shopify because they want to go back to what they do best, which is they want to build furniture. They want to be a cosmetics company. They don’t necessarily want to have this massive engineering team… I think the days of let’s just build everything ourselves in-house are long gone. And I think that gives Shopify an incredible opportunity.
Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adyen, Datadog, Mastercard, Paycom Software, Shopify, and Visa. Holdings are subject to change at any time.