Last week, I published The Latest Thoughts From American Technology Companies On AI (2026 Q1). In it, I shared commentary from the 2026 first-quarter earnings conference calls of US-listed technology companies that I follow or have a vested interest in, where their leaders discussed AI and how the technology could impact their industries and the business world writ large.
A few more technology companies I’m watching hosted earnings conference calls for 2026’s first quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:
- 2023 Q1 – here and here
- 2023 Q2 – here and here
- 2023 Q3 – here and here
- 2023 Q4 – here and here
- 2024 Q1 – here and here
- 2024 Q2 – here and here
- 2024 Q3 – here and here
- 2024 Q4 – here, here, and here
- 2025 Q1 – here and here
- 2025 Q2 – here and here
- 2025 Q3 – here, here, and here
- 2025 Q4 – here, here, here, and here
With that, here is the latest commentary, in no particular order:
Airbnb (NASDAQ: ABNB)
AI is now writing nearly 60% of the code Airbnb’s engineers produce, about twice the industry average; AI code-writing is helping Airbnb ship more features faster and deliver better experiences for guests and hosts; management thinks AI makes companies move faster; management thinks AI requires a company’s employees to be hands-on; management is seeing many of the company’s design managers and engineering managers return to coding with the help of Claude Code
Nearly 60% of the code our engineers produce is now written by AI, which we estimate is about twice the industry average. That means our teams are shipping more features and iterating more quickly. But it’s not just about speed, it’s about delivering a better experience for our guests and hosts…
…AI, I think we should think of as an accelerant to everything. And we can think of it as a disruptive technology. I actually think of it more as an accelerating technology. I think the #1 characteristic of AI is speed. It just speeds every single thing up.
I also think it makes — it requires everyone to be more hands-on and requires everyone to be more nimble and more adaptive to change. I think one of the benefits of the way Airbnb is run is that — and I think there was a term that was coined by Paul Graham, Founder Mode, based on a talk I gave, but it’s really this notion that leaders should be hands-on. I do not think there’s going to be as much of a role for pure people managers. Said differently, 30,000-foot, hands-off managers. I think everyone is going to have to be much more hands-on, much more in the details of the company and all the data. I think now data inside a company is completely democratized. You don’t need to inquire with the data scientists to get data; we all have self-serve dashboards.
I’m seeing like many of our design managers and engineering managers going back to coding or using Claude Code.
Airbnb’s AI assistant now solves more than 40% of issues that guests face, up from 33% in 2025 Q4, with significantly faster resolution times; Airbnb’s AI assistant helped reduce cost per booking by 10% year-on-year in 2026 Q1; management believes customer service is one of the hardest problems to solve with AI; management believes that Airbnb’s 40% rate of using AI to solve customer issues is industry-leading
When guests contact us through our AI assistant, over 40% of issues are now resolved without a human agent. And this is up from about 1/3 in Q4 with significantly faster resolution time. We’ve seen the cost per booking decrease about 10% year-over-year in Q1, and we expect to see more of this as we improve AI customer support this year…
…We want to focus on the hardest problem in AI, which we thought was customer service. The reason why is the stakes are high, you have — you cannot hallucinate, you have to answer things very, very quickly because they are calling and they have problems. You have to be multilingual, often in the same conversation because sometimes guests and hosts don’t speak the same language. You have to adjudicate very difficult things. You have to escalate to human accurately, especially if it’s timely or there’s a trust and safety incident. And you have to deal with personally identifiable information that means that you have to be able to protect people’s data, you have to be able to read and train based on nearly 100 policies, tens of thousands of evolving conversations and look at like millions of data points of how a prior case was adjudicated to be able to answer correctly…
…Over 40% of people who connect with our AI assistant self-solve. And I believe it’s, by far, the best AI self-solve in all of travel. I’m pretty confident of that.
Airbnb’s management thinks the ultimate search experience in Airbnb in the AI paradigm will be deep personalisation; Airbnb knows details about its users, which makes deep personalisation possible; management thinks this is similar to what all e-commerce sites will look like eventually; management’s AI strategy for search starts at the bottom of the funnel, unlike competitors; Airbnb now has AI summaries in its listings page; Airbnb is using AI for matching; management is currently testing AI search in Airbnb
I think the ultimate, like, paradigm is not this tab versus co-mingle inventory. I believe that’s a pre-AI paradigm. I think post an AI paradigm that we’re moving towards and this relates in a second to AI search is deep personalization, understanding every user, every member. And I just want to remind everyone listening that 100% of people who booked have an account, and they have to have a verified ID. You cannot book as a guest. You have to have account, you have to be a member of the community. Therefore, we know something about you. We can infer a lot, not only about what you’re clicking on the site, but all of your past booking activities…
…We have hundreds of millions of reviews on Airbnb. And one of the things our guests told us is when they get to an Airbnb, it’s great when they see like 100 reviews, it’s awesome, but they don’t have time to read all 100 reviews. So we now have AI summaries. And AI summaries are really great. We have filters, we have AI summaries. We’re now using AI for matching. AI is really helping our search ranking and our relevance…
…Finally, it’s top of funnel, which you would call AI search. This is top of funnel. And this is what we’re currently testing.
Airbnb’s management thinks that a company needs to be really good at technology, data, and infrastructure in order to be good at AI; management has been cleaning up Airbnb’s data for the last few years to prepare for AI
When you break AI under the hood, you realize that you need — in order to be good at AI, you need to be really good at technology, foundational. You need to be really good at data and infrastructure. So what we have been doing over the last few years is really getting our data warehouse really, really clean because your AI is only as good as your data.
Airbnb has an AI-native executive running its technology stack, which management believes is the only example of its kind within the travel industry
I mentioned in our last earnings call, we hired Ahmad, our CTO, who was the leader of the Meta Llama model. So we are probably one of the only technology companies in the world, certainly the only one in travel, that has an AI-native person running the entire technology stack.
Airbnb’s management is currently experimenting with the best ways to implement AI in the business; management thinks nobody has figured out AI for travel e-commerce yet, even though ChatGPT traffic converts on Airbnb at a higher rate than Google traffic; management sees 5 reasons why chatbots, as currently designed, do not work for travel e-commerce, namely (1) travel e-commerce is photo-forward, whereas AI chatbots are text-based, (2) chatbots do not allow users to directly manipulate search results, (3) chatbots do not allow easy comparison between a wide variety of options, (4) chatbots are not multiplayer, and (5) chatbots are not map-native; management thinks AI is a risk to Airbnb, but it’s also an opportunity; management foresees a lot of AI-focused innovation from Airbnb in 2027
We are essentially piloting a variety of different ways to use AI, whether it’s in the search box, whether it’s once you search, interrupting on the search, it’s the filter panel, once you book a trip. So we’re trying a lot of different things. We’re really in the exploration, research development mode…
…I don’t think anyone figured out AI for travel or e-commerce yet. Let me use an example, ChatGPT. Last year, ChatGPT announced the creation of third-party apps. And then this past March, they shut that project down. And one of the things we noticed is that while ChatGPT traffic converts higher than Google traffic when it’s sent to Airbnb, we think the design of a chatbot fundamentally, as it’s currently constructed today, does not work for travel e-commerce. There’s essentially four problems.
The first problem with the chatbot is there’s too much text. Chatbots are LLMs, large language models. They’re language. And most of e-commerce is not language forward, it’s photo forward. That’s the first problem. The second is there’s no direct manipulation. You can’t touch anything. You have to type everything. And that’s great for a conversation. But if you want to, like, move the price slider, that’s much easier than typing, “well, show me X, Y and Z.” The third problem is comparison. You go to Airbnb in Paris, there’s tens of thousands of homes, I think over 100,000 homes. Imagine trying to compare 100,000 homes in a chatbot, you get lost. And so it wants to show you just three options. You want to see more than three and pretty soon you get confused in a thread. And the fourth problem is that almost all bookings of Airbnb have multiple guests, what we call multiplayer. Chatbots are primarily single player. This doesn’t account for the fact that 85% of people booking Airbnb send a message, 100% have an account. And also chatbots are not map-native…
…AI is a risk to us and everyone. If it’s a risk to us, it’s a risk to everyone. So risk to everyone is an opportunity for us…
…I believe that over the next year, you can see a lot of innovation around AI search, AI-native interfaces.
Airbnb’s management groups its alternative accommodations supply into 2 buckets, namely, the API (application programming interface) bucket, and the primary homes and vacation homes bucket; for the API bucket, management thinks AI enables Airbnb to build more tools to serve hosts; management thinks Airbnb has been lagging behind 3rd parties in building great tools for the API bucket; hosts within the API bucket have told Airbnb that they need better tools to manage their businesses, and Airbnb has struggled in the past to find resources to build these tools, but the company now has a productivity boost from AI in software development and so is able to start building the tools; for the primary homes and vacation homes bucket, management thinks AI can make it much easier for primary homes hosts to list their properties
You can think about our core accommodations business of homes as a few different categories. So you have essentially hosts that connect via an API. You might call that host API partners. These are primarily property managers. That’s one category. Then we have primary homes, homes that people live in primarily, so typically more than 180 days a year. Then you have vacation homes, then you have things like private rooms. So you have to think about each. And I would break them into two, the API and the primary homes or vacation homes. These are two buckets.
I think with the host API partners, I think it’s more about AI enabling us to build more tools. I think we’ve been a little bit lagging behind third parties in building great tools for host API partners. And as a segment, the host API hosts are growing really, really fast, and we see a really big opportunity to better serve them. One of the things we found is that the more properties you manage at Airbnb, the lower your rating is. And so said differently, our customers have higher satisfaction with individual hosts over property managers. Now on the one hand, that’s encouraging because that inventory is more unique and exclusive to Airbnb. On the other hand, we see that as an opportunity. And one of the things those API partners say is, well, we want to be better hosts, but we need better tools. So AI is like — maybe here’s an analogy. In the old world, you might need a team of 20 engineers. In a new world, an engineer can spin up 10 agents. And those agents can work 24/7. I mean I’m kind of exaggerating a little bit. You have to be there to prompt them and the amount of work they can do without supervision isn’t overnight, typically for most tasks, but you can see a huge amount of leverage. So the fact that we’re adopting AI tools is a way for us to get a lot more leverage around the software for most API partners…
…Originally, we didn’t have the resources to do all of the host API work we want to do. And now with AI, we’re reevaluating how much productivity we have, and we’re able to accelerate the development of this work…
…AI, especially though, can help the sourcing discovery in the listing of primary homes. So without, again, giving away some of the things we’ll show in 20 — May 20, we do find that AI can make it much easier to list your property. So right now, you have to type everything in, you type in your address, you type in your title, you have to type in your listed description. Eventually, I imagine a world where you can just say like, list my place, you put in your address, it can scrape information on the Internet. You can take photos. It can even write your description based on computer visioning of the photo. So it’s very, very difficult for a regular person to list a property.
Airbnb’s management thinks that AI agents still cannot work for long hours in an unsupervised manner
So AI is like — maybe here’s an analogy. In the old world, you might need a team of 20 engineers. In a new world, an engineer can spin up 10 agents. And those agents can work 24/7. I mean I’m kind of exaggerating a little bit. You have to be there to prompt them and the amount of work they can do without supervision isn’t overnight, typically for most tasks.
Arista Networks (NYSE: ANET)
Arista Networks’ management sees AI workflow patterns as being different from typical cloud computing workflows; AI workflows fall into 2 main categories, namely, long-lived massive flows, and short-lived, unpredictable flows; the difference between AI workflows and typical cloud workflows means that the performance of each individual flow is important
Unlike typical workloads, AI workflow patterns can be long-lived elephant flows or short-lived and simply not predictable. This implies careful attention to performance where a flow can cause burstiness for a long duration of milliseconds. The intensity of a flow can determine the line rate throughput; the shifting traffic patterns to massive flows synchronized to all-to-all or all-reduce, or bursts with collective communication, are all important for AI training and inference applications.
In the scale-up AI networking use case, Arista Networks’ management sees ESUN (Ethernet for Scale-Up Networking) paving the way for Ethernet technologies to increase and decrease computing power flexibly to match workload demands; Arista Networks will be entering the scale-up networking business in 2027; Arista Networks will be working with its customers to build AI racks with rapid interconnects for CPC (co-packaged copper) and CPO (co-packaged optics); management has no doubt that Arista Networks will have a number of scale-up use cases in 2027 and most of them will start with 1.6 terabit switches; the scale-up use cases in 2027 include 5-7 rack opportunities that Arista Networks is actively designing with customers; today’s scale-up AI networking products are mostly from NVIDIA’s NVLink and PCIe; CPOs are very much still science experiments in the eyes of Arista Networks’ management; management thinks scale-up racks would not be possible without XPO
In scale-up mode, we have familiar technologies such as NVLink and PCIe that have enabled vertical scaling of single compute nodes or racks. The advent of ESUN, Ethernet for Scale-Up Networking, specifications allows for increasing or decreasing computing power in a flexible manner with Ethernet to automatically adapt to workload demands. Scale-Up will be a new entry for Arista in 2027 and beyond, where we will be working closely with our customers to build AI racks with very fast interconnects for co-packaged copper, CPC, or open co-packaged optics, CPO, as well as supporting collectives and memory acceleration…
…there is no doubt in our minds that we will have a number of racks and number of scale-up use cases in 2027. Maybe some of them will be in early trials, but majority of them are looking at really starting with 1.6T, and 1.6T chips will really happen in 2027. There may be a few, a handful of them that tried some experimental stuff at 800 gig. But we continue to see at least 5 to 7 rack opportunities. Some of them are multiple racks with the same customer. We’re actively designing with them. There’s a huge amount of liquid cooling designs with very dense cabling options, acceleration of collectives and memory, features we have to work on for low latency. So I definitely feel we’re in active engineering phase with Ken and Hugh’s teams this year. But unlike the ODMs, I think we’re held to a higher bar, and we have to just make sure that this thing is production worthy and specification adhering to ESUN. So I would say today’s scale-up is mostly limited to NVLink from NVIDIA and maybe some PCIe switching. But majority of the Ethernet scale-up will only really happen in ’27 and ’28…
…While the industry has been talking a lot about co-packaged optics, these are still science experiments, and they’re very proprietary with individual vendors doing their own thing…
…We embrace open CPO a few years from now, but we think XPO has a 10-year run, especially at 1.6T and 3.2T where you need liquid cooling and you need that kind of capacity. So all the scale-up racks we’re talking about wouldn’t be possible without XPO or CPC or any one of those technologies.
In the scale-out AI networking use case, Arista Networks already has more than 100 cumulative customers to-date in 800 gigabit Ethernet deployments; management expects to see 1.6 terabit Ethernet solutions in 2027 at production scale
Scale-out or horizontal scaling involves adding more machines to a leaf-spine fabric, moving workloads across multiple servers or nodes or even connecting other elements like storage or CPUs. As you scale up or out with massive data sets, bottlenecks can be resolved with collective and protocol acceleration at L2, L3, cluster load balancing, all at wire rate. The system must deliver consistent performance without degradation as more nodes participate. Arista is a shining example here with greater than 100 cumulative customers to date in 800 gigabit Ethernet deployments, and we expect the addition of 1.6 terabit in 2027 at production scale.
In the scale-across AI networking use case, Arista Networks’ management thinks the company’s 7800 R3 and R4 series of products, which provides sophisticated traffic engineering, deep routing, encryption properties, and integrated optics atop its EOS (Extensible Operating System) stack, are a great solution; management sees the 7800 series as the premier scale-across product; scale-across AI networking was only a small part of Arista Networks’ business in 2025, but will contribute at least 1/3 of the company’s $3.5 billion in AI networking revenue in 2026; the presence of Alphabet’s TPUs and AMD’s GPUs has created a huge opportunity for Arista Networks in scale-across AI networking; management thinks scale-across is the most significant and differentiated opportunity in AI networking for Arista Networks
Scale-across drives across the cloud and AI, as the AI accelerators in a location may need to be distributed to achieve the appropriate bandwidth capacity with the optimal power. As workloads become more complex and more distributed, the bi-sectional bandwidth must scale smoothly to avoid bottlenecks and preserve performance. This demands sophisticated traffic engineering, deep routing, encryption properties, and integrated optics based on Arista EOS stack, and using Arista’s flagship 7800R3 or R4 series. The 7800 has established itself in this category as the premier scale-across choice…
…I think last year, on scale-across, we were just beginning. So I think they were small numbers. And majority of the numbers were really scale-out. That’s sort of our heritage and that’s where we excel. If I were to anticipate how it would be this year, again, scale-up is virtually 0 and nonexistent because it really only comes to play after the ESUN spec. So consider that more of a ’27, ’28 kind of number. So I think the number will be really shared between scale-across and scale-out. I don’t know if I can say it’s 50-50 or 70-30 or 60-40, but scale-across will definitely contribute at least 1/3 of our AI number…
…In general, we are seeing diverse accelerators. Last time I spoke about the AMD accelerators. This time, I will definitely give a nod to the TPUs because in particularly scale across use cases, we’re seeing multitenants connecting to different AI accelerators, including TPUs as well. So I think the diversity of accelerators is creating tremendous multiaccelerator opportunity and multiprotocol features that we can provide for them in our network…
…Scale-across is by far the most significant and differentiated opportunity that really highlights Arista’s prowess in both platforms and software.
Arista Networks’ management thinks the company’s Etherlink portfolio handles both massive synchronous flows for AI training, and low latency flows for real-time inference
Arista’s Etherlink portfolio addresses both the synchronous flows for massive training and the low latency for concurrent swarms of real-time inference in this era of trillions of tokens, terabits of performance, and terawatts of power.
Of Arista Networks’ 4 major AI customers that are deploying AI with Ethernet, 3 had deployed 100,000 GPUs each with Ethernet as of 2025 Q4; the last remaining customer has migrated from InfiniBand to Ethernet at production scale; since 2024, Arista Networks has expanded to many more customers beyond the 4 major ones
In 2024, you may recall, we discussed 4 Ethernet-based AI training deployments. And of course, since then, we’ve expanded and exploded to countless others. This fourth customer from the group has officially moved from InfiniBand to Ethernet at production scale over the last 2 years.
Arista Networks’ management thinks the high-speed Ethernet AI leaf-spine architecture, with flexible air or liquid cooling, can overcome the constraints of power and space for AI workloads; management thinks the architecture can help build a low latency distributed AI supercomputer fabric globally
The high-speed Ethernet AI leaf-spine with flexible air or liquid cooled infrastructure overcomes the physical constraints of power and space for AI workloads. It results in a low latency distributed AI supercomputer fabric across global regions.
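The leaf-spine fabric referenced throughout Arista’s remarks has a simple scaling property worth making concrete: every leaf switch connects to every spine switch, so adding spines grows the fabric’s cross-sectional bandwidth for all leaves at once. A minimal sketch, with hypothetical switch counts and link speeds chosen purely for illustration (these are not Arista’s figures):

```python
# Illustrative leaf-spine arithmetic (hypothetical figures, not Arista specs).
# In a two-tier leaf-spine fabric, every leaf connects to every spine,
# so adding spines adds uplink bandwidth for every leaf at once.

def bisection_bandwidth_tbps(num_spines: int, num_leaves: int,
                             link_tbps: float) -> float:
    """Total leaf-to-spine bandwidth across the fabric's bisection."""
    return num_spines * num_leaves * link_tbps

# A small fabric: 4 spines, 16 leaves, 0.8 Tb/s (800G) links per leaf-spine pair.
small = bisection_bandwidth_tbps(4, 16, 0.8)    # 51.2 Tb/s

# Doubling the spines doubles the bisection bandwidth without touching the leaves.
doubled = bisection_bandwidth_tbps(8, 16, 0.8)  # 102.4 Tb/s

print(small, doubled)
```

This scaling property is what lets operators grow an AI cluster’s network capacity by adding spine switches rather than re-architecting the fabric.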
Arista Networks’ management recently introduced its extended pluggable optics, the XPO form factor; management thinks the company’s networking progress has been important for high-speed optics transmission; the XPO form factor is now endorsed by more than 100 vendors and delivers a record-breaking 12.8 terabits of throughput per pluggable module, and unprecedented rack density, among other traits; management thinks XPO will have a 10-year run; management thinks scale-up racks would not be possible without XPO; management thinks XPO is a very important innovation for the industry; management sees XPO unlocking a standard multivendor way to obtain 4x the network density in liquid cooling, which is critical for AI use cases; management thinks XPO and OSFP (Octal Small Form-factor Pluggable) are partnering technologies, where XPO is more suitable for higher data speeds; management thinks XPO will be more suitable for scale-out and scale-across workloads compared to scale-up
What is clear to me and us is our networking progress with data, control and management, and multiplanar orchestration is not only central to our AI switching performance, but also important for high-speed optics transmission. At the recent Optical Fiber Conference, Arista unveiled its extended pluggable optics, XPO form factor, designed specifically for optics innovations at high speed. Now endorsed by greater than 100 vendors, salient features include record-breaking throughput, delivering 12.8 terabits per pluggable module, unprecedented rack density achieving 204.8 terabits per OCP rack unit, integrated cold plate capable of cooling up to 400 watts power per module, and the universality and flexibility across a range of pluggable optics, copper, as well as linear, half-retimed, or retimed interfaces…
…We embrace open CPO a few years from now, but we think XPO has a 10-year run, especially at 1.6T and 3.2T where you need liquid cooling and you need that kind of capacity. So all the scale-up racks we’re talking about wouldn’t be possible without XPO or CPC or any one of those technologies…
…99% of the optical market today that we connect to is all pluggable optics. So this is a very crucial invention and innovation, not just for Arista, but the industry at large…
…What XPO unlocks is a standard, interoperable multivendor way to get to 4x the network density in liquid cooling, which is absolutely critical for these AI use cases. Without that, you’ve got this huge bottleneck at the front panel, the amount of extra rack space is required to get through OSFPs. It’s — so we’re really enabling the future growth of our industry this way, which we benefit and others benefit as well…
…You should look at XPO as a partner to OSFP. So at 400 gig and 800 gig you’ll be fine with OSFP. And as we go to higher speeds in ’27, ’28 or even beyond, OSFP will run out of steam, and this will be the new connector of choice. So the migration to higher speeds equals the migration to XPO, particularly for scale-out and scale-across. Within a rack and scale-up, there’s still a number of choices. I think within short distances of 2 to 3 meters, you’re still going to see a lot of co-packaged copper and I think XPO in terms of density will be another alternative. But I don’t rule out open CPO as well over there. They’re really looking to maximize the density in a minimum amount of space. So I think XPO will be particularly prevalent in scale-out and scale-across and will be one of the choices in scale-up.
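As a quick arithmetic check on the density figures quoted above, 204.8 terabits per OCP rack unit at 12.8 terabits per module works out to 16 XPO modules per rack unit; a small sketch (the OSFP comparison line simply illustrates the “4x the network density” claim and is not a published spec):

```python
# Sanity-checking the XPO density figures quoted in the earnings call.
module_tbps = 12.8       # throughput of one XPO pluggable module (Tb/s)
rack_unit_tbps = 204.8   # throughput per OCP rack unit of XPO modules (Tb/s)

# Modules that fit in one OCP rack unit, implied by the two figures above.
modules_per_rack_unit = rack_unit_tbps / module_tbps
print(modules_per_rack_unit)  # 16.0

# Illustration only: at 1/4 the density (the claimed OSFP comparison),
# the same rack unit would carry a quarter of the throughput.
osfp_equivalent_tbps = rack_unit_tbps / 4
print(osfp_equivalent_tbps)  # 51.2
```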
Arista Networks recently won a neocloud as a customer for AI networking; the neocloud’s initial white box architecture could not handle massive scale-out requirements; Arista Networks was selected by the neocloud for its scale-out architecture, which could connect with AMD XPUs; the neocloud is also using AVD (Arista’s Validated Design framework) to automate networking provisioning and thus lower the total cost of ownership; Arista Networks’ management is seeing tremendous opportunity with neocloud and sovereign cloud customers; management thinks the neoclouds are a very important sector for AI networking because they do not have the resources to tackle networking, and so will rely on vendors such as Arista Networks
Our first highlighted win is a neocloud AI network. The customer was constrained by an incumbent white box architecture that simply could not keep pace with the massive scale-out requirements of AI. Arista was selected as a commercially proven and reliable scale-out architecture with unmatched stability of EOS and the ability to connect AMD MI Series XPUs. Arista’s AI leaf and spine Etherlink products were deployed at 800 gigabits to provide the incredible performance modern AI networks require. The AI fabric was tuned using Arista’s cluster load balancing to scale out to thousands of XPUs minimizing hotspots and congestion. On the software side, the customer leveraged AVD, Arista’s Validated Design framework, to automate network provisioning, which both reduces the total cost of ownership, but also provides an easy path to reliable network deployment at scale, where without AVD automation, a small mistake can cause precious days of debugging time. This was a strategic neocloud win with large potential for upside growth in an area where we are seeing enormous opportunity and velocity in both neocloud and sovereign cloud customers…
…It’s easy to talk about the titans because the numbers are so ginormous, right? But the neoclouds are a very important sector because they don’t always have the staff to do everything they want to do, and they really lean on Arista’s design expertise, EOS expertise, network design configurations we can provide them, a family of 22 products we have in AI.
Arista Networks’ management is seeing industry-wide supply shortages across the board (wafers, silicon chips, CPUs, optics, and memory), which has led to higher supply costs and thus gross margin pressure; demand for Arista Networks’ networking products is outstripping supply; management hopes the supply shortages will ease in 1-2 years; despite the supply chain challenges, management has raised guidance for Arista Networks’ AI networking revenue for 2026 to $3.5 billion (previous guidance was for $3.26 billion); Arista Networks’ purchase commitments at the end of 2026 Q1 were $8.9 billion, up 31% sequentially; the sequential increase in purchase commitments was for chips related to new products and AI deployments; management is willing to hurt Arista Networks’ gross margin in order to meet demand for AI networking; management is seeing a shortage of power in data center sites; management has chosen not to raise prices significantly, which explains the gross margin pressure; Arista Networks’ purchase commitments extend to multiple years because the lead times for chips are that long
Our demand is actually the best I’ve ever seen in my Arista tenure. The supply, however, is a slightly different and opposite tale. We are experiencing industry-wide shortages across the board, be it wafers, silicon chips, CPUs, optics, and of course, memory that I referred to last quarter, coupled with elevated costs to procure these. Clearly, our demand is outstripping our supply this year. While we hope the supply chain will ease in the next year or 2, the Arista operations team has been diligently engaging with our vendors in strengthening supply agreements and engaging in multiyear purchase commitments. We anticipate gross margin pressure due to mix and trade-offs we are making to pay more to assure supply continuity to our customers. Nevertheless, it gives us confidence to increase our forecasted growth slightly to 27.7%, aiming now for $11.5 billion for 2026. We also increased our AI target now to $3.5 billion this year, thereby more than doubling our AI sales annually…
…Our purchase commitments at the end of the quarter were $8.9 billion, up from $6.8 billion at the end of Q4. As mentioned in prior quarters, this expected activity mostly represents purchases for chips related to new products and AI deployments…
…We see multiyear demand, and we are going to do everything, including hurt our gross margins to supply to that demand this year and next year because we believe that we certainly don’t want to keep GPUs idle and AI infrastructures underutilized because Arista didn’t supply the network…
…The other thing we’re seeing with a lot of these use cases is the lack of power in sites, and the ability and demand to distribute and get a more multitenant scale-across is very high in these 2 use cases…
….One thing to clarify also on gross margins. So we view this as a partnership with our customers. So while we would consider and have raised prices a little bit, unlike our competitors, we haven’t done 2 price increases. We haven’t done major price increases. And the price increases really come into play once our backlog starts to reduce, right? So you won’t see the impact of that. So our gross margins are a strong factor of cost going up and are still eating a lot of the costs and giving our customers the benefit and promise of the pricing we said we would give to them…
…I would just say our purchase commitments are multiyears because we’re having to deal with forecasts that are out multiple years so that we get them in time because the lead time of these chips is so long. So I think that’s the biggest hole, lead times.
Arista Networks continues to have a great relationship with its 2 largest customers, Microsoft and Meta Platforms, in both cloud and AI; management sees the potential for 1-2 new large customers for Arista Networks that use all 3 AI networking use cases – scale-up, scale-out, and scale-across
Microsoft and Meta, they’re our all-time favorites. They’ve been our 10% and greater customers for over a decade. And the partnership could never be stronger, and it continues to get better both in cloud and in AI. In terms of the new entrants, we still expect at least 1, maybe 2 — and maybe I should caveat this by saying, certainly, in demand, we see 1 or 2. We shall see, Todd, how we do on shipments to see if we can achieve the greater than 10%. The 2 of them have very interesting characteristics. They exhibit what I would call the 3 use cases I just alluded to, scale-up, scale-out and scale-across where we really have a fabric notion of creating — so far, we’ve been working with them a lot on the front end, and now we get to complement that on the back end, definitely for scale-out and scale-across and maybe even a little bit of scale-up in some of these use cases.
The biggest use case that Arista Networks’ management sees right now in agentic AI is training, but it will move to distributed inference; management thinks agentic AI will be moving into plenty of enterprise use cases; agentic AI has caused Arista Networks to see a lot more back-end activity now because the hyperscalers have to deal with billions of parameters and tokens, to the extent that the hyperscalers are ignoring the front-end refresh; the rise of agentic AI has changed management’s view on the ratio of front-end deployments to back-end deployments from 2:1 to 1:1 or even less; Arista Networks has the same set of products in the same common operating system across the front end and back end, which management sees as lowering costs for customers; management believes Arista Networks is the only vendor that can offer this
The biggest killer application we see in agentic AI right now is still training. And indeed, it’s going to move to more distributed inference. And we’d also like to see agentic AI move into a lot of enterprise use cases, all of which we’re seeing, by the way, but I would say large, medium, small. The largest killer agentic AI application is training, the medium is enterprise and the smallest — medium is inference, and the small is obviously enterprise. The — in terms of back end versus front end, we are now seeing way more back-end activity, particularly with our large AI titans and cloud titans because there is just so much scale they need to prepare for the billions of parameters and tokens, and this is where a lot of — so much so that I think the front end, they might come back and refresh, but they’re almost ignoring right now in favor of the back end…
…By virtue of the back-end deployments, I don’t know if we any more see a 2:1 to the front end, but we at least see a 1:1. And the 1:1 can be wide area, CPU, and storage. Those are probably the 3 common use cases. Not all the customers are up and lifting everything and doing all 3, although we’ve had cases where some of them did an upgrade at the front end before they went into the back end. But usually, they will have to come back to that because the minute you put that kind of performance pressure and scale on the back end, you almost have to do something in the front end. But at the moment, I would say it’s more one-to-one…
…The other thing I have to mention here is just how good it feels to be — have the same set of products in the same common operating system management suite and operating model across the front end and back end. This lowers cost for the customer, simplifies their design process to get that leverage, and we’re one of the few vendors who can do that…
…I think only.
When it comes to greenfield deployments of AI data centers, Arista Networks’ management has observed that customers think of both scale-out and scale-across solutions concurrently; Arista Networks has strong market share in both scale-out and scale-across in greenfield deployments; when it comes to brownfield deployments of AI data centers, Arista Networks now has the opportunity to offer scale-across solutions; the lack of power supply has resulted in data center operators having to distribute their data centers, which gives Arista Networks the opportunity to participate in the build-out
[Question] You said most of the cloud revenue near-term is going to be scale-out and scale-across as we wait for scale-up to ramp. How are you thinking about your market share when it comes to scale-out versus scale-across in the early days of scale-across? What are you seeing in terms of market share? And are you seeing customer decisions being led in scale-across by sort of the incumbent in scale-out? Or is it a different decision altogether in terms of how they’re designing vendors for scale-across?
[Answer] If it’s greenfield deployment, then they tend to think of it together because they’re not only building the sites, but they’re thinking of the interconnect across them. And therefore, market share is generally strong in both. In some cases, where Arista has not been a historical participant within the data center, we now have an opportunity to offer the scale-across multitenant even in a nongreenfield situation and let’s say, in a brownfield, where now they’ve got disparate data centers or AI clusters that we now have to bring in. And so once again, I think Arista is really fitting example to be in scale-across for both those use cases, but has the additional opportunity in a brand-new data center to be in all use cases, if that makes sense. So it’s giving us a chance to participate with different types of accelerators and different types of models because people aren’t getting the power and they’re having to distribute the data centers. And as a result of distribution, you need more traffic engineering, routing, multitenancy. So I would say scale-across is the common denominator in all our use cases and scale-up and scale-out maybe nice options in brand-new greenfields.
Arista Networks’ management currently sees AI training workloads dominating, but they also see a distributed inference paradigm coming, where high-end CPUs will become more important alongside GPUs; management is seeing customers wanting to deploy small-ish clusters, in the thousands of GPUs, for inference
While today we are in a training fever, that a more distributed AI — generative AI paradigm with inferences, which means you don’t always need the GPU. You’re going to have high-end CPUs and you’re going to have a smaller set of parameters and tokens to manage, and you’re going to have specific agentic AI use cases and applications. We’re seeing very, very early trials and stages. Nothing super big yet. But we are seeing — I mean, they’re not in the hundreds of thousands of GPUs like you see on the AI titans. But we are frequently seeing our customers in certain high-tech sectors want to deploy clusters that are 1,000 — few thousand, definitely not 10,000, and not in the hundreds of thousands. And they tend to be exactly, as you said, not training, but more inference based — more agentic AI edge inference based as well. So I think we’ll see more of that. This is the calm before the storm, if you will. And as we — as the AI gets more distributed, I think it doesn’t need GPUs alone, it’s going to need more high-performance compute.
Cloudflare (NYSE: NET)
A rapidly growing technology company in the Asia Pacific region is experiencing explosive growth, driven by AI coding, and expanded its relationship with Cloudflare; the technology company chose Cloudflare over a hyperscaler
A rapidly growing technology company in APAC expanded their relationship with Cloudflare, signing a two-year $8.7 million contract for application services and our Workers developer platform. Driven by the boom in AI-powered live coding, this company has seen explosive growth, and Cloudflare has become core to their infrastructure, intelligently routing billions of daily requests across the globe. This customer chose Cloudflare over a competitive bid from a hyperscaler due to the strength of our unified platform and our seamless low-latency security.
A Fortune 100 technology company expanded its relationship with Cloudflare after facing an urgent need to handle massive user-initiated agentic traffic; the technology company was up and running with Cloudflare within a week
A Fortune 100 technology company expanded their relationship with Cloudflare, signing a two-year $8 million contract for our privacy proxy solution, the fifth privacy engagement with this customer, solidifying Cloudflare as their go-to privacy partner. They approached us with an urgent need to handle massive scale with precise geolocation accuracy for user-initiated agentic traffic. We delivered a fully operational solution within one week, demonstrating the speed, trust and engineering depth that continues to set us apart.
A leading AI company expanded its relationship with Cloudflare despite having a strong build-over-buy mentality; the AI company is a massive target for cyberattacks and needed a strong security layer to protect its infrastructure; the AI company is already testing Cloudflare’s AI Gateway for AI workloads
A leading AI company expanded their relationship with Cloudflare, signing a one-year $4.1 million contract for application services. As one of the most visible targets for cyberattacks globally, this customer needed a security layer to protect their massive infrastructure build-out. Despite a strong build-over-buy mentality, they chose Cloudflare, trusting a battle-tested network that has proven its resilience against the largest attacks. This is a customer that moves fast and pushes boundaries, and they’re already testing our AI gateway for their AI workloads.
A leading AI company expanded its relationship with Cloudflare with a contract for Argo Smart Routing just one quarter after inking a Workers Developer Platform deal; the AI company used Cloudflare to lower its average global latency by 30%; the hyperscalers could not match Cloudflare’s speed
Another leading AI company expanded their relationship with Cloudflare, signing a 10-month $2 million contract for Argo Smart Routing coming just one quarter after signing a Workers developer platform deal. This customer wants to be the fastest and most reliable AI provider in the market, and Cloudflare is delivering. After deploying Argo, they immediately reduced their average global latency by 30%. In the AI space, that kind of speed is a real advantage that our hyperscaler competitors simply can’t match.
Cloudflare’s management is seeing agentic AI reshape how companies are structured, operate, and create value; Cloudflare itself is the first and most demanding customer of its own AI tools; prior to November 2025, management was cautious about deploying AI internally because management was unclear about the ROI from AI investments; from November 2025 onwards, Cloudflare started experiencing massive gains in productivity from the use of AI; in 2026 Q1, Cloudflare’s usage of AI increased by 600%; nearly all of Cloudflare’s R&D team are using AI coding tools powered by the company’s Workers Developer Platform; 100% of the code submitted by Cloudflare’s R&D team for production is now reviewed by autonomous AI agents; management thinks there will soon be a huge uptick in reliability in software development across the technology industry because AI can now be used to check code; in 2026 Q1, Cloudflare experienced an unprecedented increase in new code generated, solved bugs, and burn-down of technical backlog; employees across Cloudflare are running thousands of AI sessions daily, and these workflows rely on dozens of MCP (Model Context Protocol) servers; Cloudflare has built an agentic harness called Cloudflare OS for teams to get started quickly with agentic AI; Cloudflare’s newfound productivity from AI use has led management to reduce headcount by 20%, but growth is expected in the company’s sales team; Cloudflare has been able to keep the costs of internal AI deployment manageable by running the models on its own infrastructure when appropriate, instead of the model providers’ infrastructure; Cloudflare has been able to achieve significantly higher utilisation of GPU resources than hyperscalers and AI labs; Cloudflare’s AI Gateway enables it to route workloads to the right models, thereby achieving cost-efficiency
In nearly every customer conversation, it’s clear. The emergence of generative and agentic AI is not just redefining the economics of the Internet and software companies, they’re redefining the business models of all companies, fundamentally reshaping how organizations are structured, operate and create value.
At Cloudflare, we don’t just build and sell AI tools and platforms. We are our own most demanding customer. AI and agents are no longer pilot projects at Cloudflare. They are now core parts of our workforce. It’s been an interesting journey. We’ve been selling picks and shovels in the AI gold rush for the last four years, but we ourselves were cautious users wanting to ensure there was real ROI before making significant investment. We avoided a lot of the performative AI some companies engaged in. Internally, the tipping point was last November. At that point, across our teams, we began to see massive productivity gains, team members that were 2x, 10x, even 100x more productive than they had been before. It was like going from a manual to an electric screwdriver. Cloudflare’s usage of AI has increased by more than 600% in the last three months alone. For team members in R&D, 97% use AI coding tools powered by the same Workers Developer Platform we ship to our customers and 100% of their contributions to our production code bases are now reviewed by autonomous AI agents.
I think across the industry, you’re about to see a massive uptick in reliability as every code or configuration change can now have a tireless and uncorrelated set of eyes trained on every incident from the last 10 years, checking to avoid problems. At the same time, the impact on developer velocity is clear. We’ve never seen a quarter-to-quarter increase in new code generated, bugs squashed and technical backlog burn down like we did last quarter…
…Employees across Cloudflare from HR to marketing run thousands of AI sessions each day to get their work done. Those agentic workflows rely on dozens of MCP servers to reach data in systems of record and use hundreds of centrally managed skill files as well as many more that have been created and shared within individual teams. The harness that we’ve built, which we call Cloudflare OS allows teams across the company to quickly get up and running…
…By fully embracing an agentic AI-first organizational structure and operating model as Cloudflare’s revenue scales, our efficiency and productivity will scale even faster. Unfortunately, this decision means parting ways with colleagues who have helped build the strong foundation Cloudflare stands on today, resulting in a reduction of the size of our team by approximately 20%. These reductions are across all functions and geographies and reflect how broadly AI is accelerating our operational velocity. Importantly, however, we continue to expect growth in the net capacity of our quota-carrying sales force to accelerate in 2026 with today’s actions compounding productivity to fuel our growth…
…[Question] How do you think about balancing R&D agentic coding adoption with the cost?
[Answer] We have seen as usage has gone up 600% in the last quarter, we have seen costs go up. But I don’t think it’s gone up nearly as much as some others. And that’s driven by a number of things… more importantly, though, is a lot of times, we’re able to run those models instead of on their infrastructure, on our own infrastructure. And so we have a fleet of GPUs, and we have all of the tools with Cloudflare Workers and Workers AI to be able to build and use those tools themselves. And so most of the use of various AI coding tools isn’t even leaving our network. It’s running on our infrastructure because we’re very good at routing to wherever there’s capacity, we’re able to get a lot out of that. And so I think that’s one of the reasons why we see significantly higher utilization across our GPU resources than some of — than any of the hyperscalers and then — than any even of the AI labs are able to drive.
And then when we’ve built what we call Cloudflare OS, we’ve paired that with our AI Gateway product. And that AI Gateway product allows you to route different requests based on what’s the right model for the right task. And so that means that if we have a task which we can evaluate as being relatively simple, then we can route that to a model that might be running on our own infrastructure and be able to be delivered at essentially no marginal cost to us. Whereas if we have something that is more important, we might send that off to one of the frontier models and pay more for that…
…Across most of the hyperscalers, you’re seeing utilization rates of their GPUs that are in the single digits, whereas we’re slowly getting our GPU utilization to approach what our CPU utilization is, which is up in the 70% to 80% range.
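The routing logic management describes, sending simple tasks to cheap self-hosted models and important ones to frontier models, can be sketched in a few lines. The following is a hypothetical illustration of the general pattern only, not Cloudflare’s AI Gateway API; the model names, prices, and the complexity heuristic are all my own assumptions.

```python
# Hypothetical sketch of cost-aware model routing, in the spirit of the
# AI-gateway pattern described above. Model names, prices, and the
# complexity heuristic are illustrative assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # USD; illustrative numbers only
    quality: int               # higher = more capable

# A self-hosted small model is near-free at the margin; a frontier model is not.
MODELS = [
    Model("self-hosted-small", 0.0001, quality=1),
    Model("frontier-large", 0.03, quality=3),
]

def estimate_complexity(prompt: str) -> int:
    """Toy heuristic: long prompts or 'hard-task' keywords need a stronger model."""
    hard_markers = ("prove", "refactor", "security", "architecture")
    if len(prompt) > 500 or any(m in prompt.lower() for m in hard_markers):
        return 3
    return 1

def route(prompt: str) -> Model:
    """Pick the cheapest model whose quality meets the task's estimated needs."""
    needed = estimate_complexity(prompt)
    candidates = [m for m in MODELS if m.quality >= needed]
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)

print(route("summarize this changelog").name)               # simple task, cheap model
print(route("refactor the auth module for security").name)  # hard task, frontier model
```

In a production gateway the heuristic would be far richer (a classifier, caching, fallbacks), but the economics are the same: every request that a small self-hosted model can serve is one that never incurs frontier-model pricing.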
Cloudflare’s management thinks AI is the biggest tailwind for the company’s network and Workers Developer Platform in its history; management thinks Cloudflare got lucky by already having the right set of tools for agentic AI
AI is driving a fundamental replatforming of the Internet as well as a paradigm shift in how software is created and consumed, and it’s shaping up to be the biggest tailwind for both our network and our Workers Developer Platform that we’ve ever seen in Cloudflare’s history…
…In our workers platform, we have built a platform that allows you to build agents that are just significantly more efficient than anyone has before. And so across all of the parts of our business because even in the Zero Trust and SASE space, it turns out that having more fine-grained controls about data is exactly what you need if you have kind of these somewhat new agents running around doing things, you want to make sure that they only have access to the things they should. It’s — I wish I could say that we saw all this years ago and built Cloudflare for it. But I think that the reality is that we happen to have built exactly the right set of tools for this moment.
Cloudflare’s management is seeing hundreds of billions of agentic requests monthly, and the requests are growing
So today, literally, we’re seeing hundreds of billions of agentic requests per month, and that number is growing exponentially.
Cloudflare’s management thinks the predominant business model of the internet will be changing dramatically over the next 5 years because of AI, but the end-state is still an open question; management thinks Cloudflare could help define the new business model(s) for the internet; management thinks micro transactions for agentic traffic to websites will be one of the new business models, because agentic internet traffic could surpass human internet traffic in 2027; management thinks that nobody currently has the appropriate infrastructure to handle the potential volume of agentic micro transactions; because of unwanted agentic traffic on advertising-supported media websites, Cloudflare has gone from low penetration in the space to dominating it; media companies have been able to sign better deals with AI companies because of the tools Cloudflare has built; management is focused on making substantial progress with the internet’s new business models, but they are unsure when these will become meaningful
The business model of the Internet, which has historically been advertising and subscriptions, is about to change dramatically over the next five years. And exactly what it changes to I think it’s still an open question. And I think it might not be one thing. I think it might be several things. Because of how much of the Internet sits behind Cloudflare, we have a seat at the table of defining that…
…Some part of this is going to be some kind of micro transactions for any request that agents are making to website. It might be fractions of fractions of pennies. But if you think about the — I don’t know, about 500 billion requests that pass through Cloudflare in any given second, that some percentage of those we think that there’s going to be some ability to have some micro payment that is made for that because somebody has to pay for the infrastructure. And if you look at the growth in agentic traffic, if you look at the growth in sort of non-human traffic on the Internet, somewhere in 2027, we think it’s going to surpass human traffic, and it’s not going to slow down. And so we’ve got to figure out something else to build it…
…The challenge is like nobody can handle the volumes right now. And so we’re looking around to partner with people. We’re looking around for everything. But right now, the sort of transaction volumes that people are excited about like one million transactions per second, we need something that’s significantly larger than that…
…If you’re an ad-supported business, then your content being crawled is actually a threat. So I think we’re trying to provide tools on both sides of that. The side that you focused on is the folks that want to block it, the ad-supported folks that are out there. And I would say that the first milestone that we’ve seen is that we went from being relatively low in terms of our penetration in the media space to today dominating that space. And so I think that’s the first sign. And what I hear from media company execs is they are signing better deals with AI companies because we’ve given them the tools to be able to control who has their content…
…I don’t know exactly when that will come. But I do — I will say that when we listed what our top six priorities were for 2026, one of the six was making sure that we make real progress and see the first revenue that we can then pass back to that long tail of the Internet in order to help make sure that we continue to create a healthy ecosystem for content creators. And I’m pretty confident we’ll make that goal.
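To make the transaction-volume challenge concrete, here is a back-of-the-envelope calculation of my own (not from the call); the 500-billion-per-month figure is an assumption in the range of the “hundreds of billions of agentic requests per month” management cited earlier.

```python
# Back-of-the-envelope: average request rate implied by a monthly volume.
# The 500 billion/month figure is an illustrative assumption based on the
# "hundreds of billions of agentic requests per month" mentioned above.
SECONDS_PER_MONTH = 30 * 24 * 60 * 60   # 2,592,000 seconds
monthly_requests = 500e9                # assumed: 500 billion per month

avg_rps = monthly_requests / SECONDS_PER_MONTH
print(f"average: {avg_rps:,.0f} requests/sec")  # roughly 193,000 requests/sec

# Internet traffic is bursty; a common rough sizing rule is to provision for
# several times the average. Even a modest 10x peak factor exceeds 1M tps.
peak_rps = avg_rps * 10
print(f"assumed 10x peak: {peak_rps:,.0f} requests/sec")
```

Even averaged perfectly evenly, today’s volumes already run near 200,000 requests per second, so a payment system sized for one million transactions per second leaves little headroom once traffic peaks and the forecast exponential growth are taken into account, which is consistent with management’s point that nobody can handle these volumes yet.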
Cloudflare’s management thinks the company’s business is very different from that of the hyperscalers when it comes to providing AI compute infrastructure
The hyperscalers business is to buy a server and then to lease that server back ideally for 5x or more of what they paid for it. And so if they don’t have servers to lease, then they can’t grow their revenue. And so their CapEx has to invest ahead of whatever that demand is that’s out there. We focus on very different things. So the thing to watch for us is when you see us publish a blog post about how we figured out how to get more utilization across our fleet of GPUs or how to get more models loaded quickly across GPUs. That’s real IP that we’re inventing internally and the metaphor to think about is once upon a time when I was in college, I remember a new thing called the web was starting, and so we needed to have a web server. And so we literally — from Gateway, I remember ordering a box that came with cow prints on the outside of it. We bought a gateway server and we plugged it in because there was no idea of virtualization. And then VMware came along and then after that, you had Docker and containers and that was sort of the journey that everyone went on. We’re still at the stage with GPUs of buying the physical server and needing to use that for most of the industry.
Cloudflare has a recent product called Dynamic Workers which allows companies to stand up AI workflows rapidly; a large AI studio went from zero Dynamic Workers to 1 million in 15 days
We launched something called Dynamic Workers, which allows you to very, very quickly stand up something which is significantly more efficient than a container. Containers are too slow and too heavy to actually be able to respond to these incredibly fast agentic workloads. And so what AI studios are doing is they’re looking at this and they’re seeing the opportunity. And so to give you a sense of — with the — I’m naming them, one of the large AI studios in just the last 15 days went from essentially zero Dynamic Workers to over one million Dynamic Workers running across the platform.
Cloudflare’s management thinks agentic AI will provide tailwinds for its established businesses, both its Act 1 services and its Act 2 (Zero Trust/SASE) products
Every time an agent does something, like if you think about it, you just — if you type something into ChatGPT or any of the things, like to search — the number of sites that get searched, the amount of traffic that gets generated, if I’m looking for a digital camera as a human, I might visit five websites if I really care about it. My agent is going to visit 5,000. And so that’s going to just drive significantly more usage, which is the biggest driver of kind of our Act 1 revenue…
…For Act 2, again, as we talked about already, I think being able to very narrowly define what data an agent has access to and what data they don’t. We’re just seeing more and more of that usage, especially in the self-service category, which there really isn’t another sort of SASE, Zero Trust, self-serve competitor out there with any sort of scale. And so that’s with things like OpenClaw driving a lot of usage there. And what we found time and time again is as hobbyists or individuals adopt technology, they inevitably start to bring that technology more and more to work. And that’s what we’re seeing as we win more of the enterprise accounts across Act 2.
Coupang (NYSE: CPNG)
Automation and AI are improving Coupang’s service levels and lowering its cost to serve; management expects automation and AI to help Coupang improve its customer experience and margins in the years ahead
Automation and AI across our services, including our Fulfillment and Logistics network, continue to improve service levels and lower cost to serve in parallel, and we expect them to be meaningful contributors to both the customer experience and margin expansion in the years ahead.
Datadog (NASDAQ: DDOG)
Datadog engineers are equipped with the latest AI coding tools and they are building rapidly; management sees the company’s AI initiatives as being split into 2 buckets, namely (1) AI for Datadog, and (2) Datadog for AI; AI for Datadog is about making Datadog’s platform better with AI products and capabilities while Datadog for AI is about Datadog’s end-to-end observability and security capabilities across the AI stack; in AI for Datadog, the company launched MCP (Model Context Protocol) Server for general availability recently and it allows developers to debug applications directly in their AI coding agents; in AI for Datadog, the company launched Bits AI Security Agent recently and it reduces investigations from hours to as little as 30 seconds; in AI for Datadog, the company launched Bits Assistant in preview recently and it allows users to search and act across Datadog with natural language; in Datadog for AI, the company recently launched GPU Monitoring for users to understand their GPU fleets’ performance and drive higher GPU ROI (return on investment)
Our engineers enabled with the latest AI coding tools are building rapidly to help our customers confidently and securely deploy their applications…
…As a reminder, we’re talking about our AI efforts in 2 buckets: AI for Datadog and Datadog for AI.
So first, AI for Datadog. These are AI products and capabilities that make the Datadog platform better and more useful for our customers. In March, we launched our MCP Server for general availability. With MCP Server, developers access live production data to debug their applications directly in their AI coding agent or IDE. We delivered Bits AI Security Agent, which autonomously triages Datadog Cloud SIEM signals, conduct in-depth investigations of potential threats and delivers actionable recommendations. We’ve seen Bits AI Security Agent reduce investigations that could take hours to as little as 30 seconds. We also shipped Bits Assistant now in preview, which helps customers search and act across Datadog using natural language prompts.
Moving on to Datadog for AI. This includes Datadog capabilities that deliver end-to-end observability and security across the AI stack. We launched GPU Monitoring, enabling teams to understand GPU fleet utilization, workload efficiency, thermal and power behavior and interconnect performance. This drives higher GPU ROI and operational reliability.
Datadog now has 6,500 customers sending data for their AI integrations (was 5,500 in 2025 Q4); these 6,500 customers are only 20% of Datadog’s total customer count, but represent 80% of the company’s ARR; customers’ usage of AI within Datadog is growing rapidly; Bits AI SRE agent investigations have increased by more than 100% from December 2025 to March 2026; the number of LLM spans customers are sending to Datadog is up nearly 3x sequentially in 2026 Q1; the number of Datadog MCP Server tool calls is up 4x sequentially in 2026 Q1; the number of Bits Assistant messages is up 12x sequentially in 2026 Q1; some of the growing AI-related volume that Datadog is processing is because of enterprises’ adoption of AI coding tools; management is seeing an inflection point in AI consumption from customers, driven by a real move towards production-level AI workloads from both AI-native and non-AI companies; management is seeing a massive increase in agent usage
We now have over 6,500 customers sending data for one or more of our AI integrations. Though this is only 20% of total customers, they represent about 80% of our ARR. And our customers’ usage of AI within Datadog platform continues to grow rapidly. Bits AI SRE agent investigations have more than doubled from December to March. The number of spans sent to our LLM observability product nearly tripled quarter-over-quarter. The number of Datadog MCP server tool calls quadrupled quarter-over-quarter and the number of Bits Assistant messages increased by a factor of 12 in that period…
…[Question] Is there any way to conceptualize the growth in the sheer raw volume of code that’s being produced in the world today due to adoption of code generators such as Claude Code and Codex and Cursor because they seem to be developing the capability to take on full projects?
[Answer] We definitely think and see that there’s many more applications being created. There’s going to be way more complexity in production. We see some of that happening already today. Some of those new applications are getting into production. They’re finding users. We see some signs of that at every layer of our platform. We quoted a few stats on the increasing data volumes we see in our AI products. That’s definitely a reflection of that. So we see an inflection point there in consumption from customers. We see a move to production that is very real, and we see that across both AI native and non-AI companies…
…We see both a stratospheric increase of agent usage. So we have a ton of usage on our MCP Server. We see customers trying to automate a lot with their own agents, using our agents, using a combination of those.
Example of a 7-figure and 8-figure land deals with the AI research divisions of 2 of the world’s largest technology companies (likely to be 2 of Meta Platforms, Microsoft, and Alphabet, with a likelier pairing of Meta and Microsoft because the deals included GPU monitoring for training workloads, and Alphabet trains on TPUs); the 2 technology companies are training advanced AI models and are relying on Datadog to reduce engineering friction and increase training velocity; the 2 technology companies will be using GPU Monitoring on large parallel GPU grids; the hyperscalers are the companies that make the most sense to pursue observability tools themselves, but they still choose Datadog to be efficient with their own resources; the hyperscalers are using Datadog for both traditional observability and GPU monitoring; it’s still early days for the hyperscalers in terms of their usage of Datadog, but Datadog’s management is optimistic that the 2 hyperscalers can be an example for other AI model builders in the future
We landed 2 large deals, a 7-figure and an 8-figure annualized deal, with the AI research divisions at 2 of the world’s largest technology companies. These organizations are building and training the most advanced AI models in the world. It is critical for them to reduce engineering friction and increase training velocity, but fragmented internal and open source tooling made it harder to identify and solve issues and reduced engineering and research productivity. By using Datadog, both companies are accelerating their pace of innovation on their hyperscale AI training workloads. And this includes optimizing their workflows using GPU Monitoring on large parallel GPU grids…
…The thing that’s also interesting, in particular this quarter, is that we also landed some large parts of hyperscalers. And hyperscalers typically have a culture of building everything themselves, and they certainly have the balance sheet and the human capital to support some of that build-out. Like if there was ever a set of companies for whom it makes sense to do it themselves, that would be those companies. And yet, we see that they have the same issues. When it comes to going as fast as they can and being as efficient as they can with their resources, they come to us to replace some of the things that they were using before…
…[Question] About the hyperscalers because I thought that was particularly interesting. And the reason why is I don’t think you called them out previously before, and they are so prevalent in the modern tech stack. To your point, they could do this themselves. So I guess how are they using Datadog? Is it for more kind of traditional observability? Or is it for these newer areas like GPU monitoring that Datadog has performed so well of late?
[Answer] It’s both actually. When you look in general at the large AI customers, they use Datadog the way other companies are largely with a fairly broad set of our products to cover the full surface of observability. What’s new is we now have a product for GPU monitoring. It’s a very new product. And we see the hyperscalers that are coming to us for training workloads in particular, being very interested in that. So again, it’s too early in the product life cycle and the customer life cycle for these specific customers to call definitive victory there, but we see that as a very encouraging sign of where the market might go in the future because we think this might be a bellwether of what the next 10, 100, 500 companies that are going to start training workloads are going to want to do. We have some signs that go beyond the customers we signed this quarter that point that way too.
Datadog’s management continues to believe that digital transformation, cloud migration, and AI adoption are long-term growth drivers of Datadog’s business; management is seeing democratisation of AI training and a growing variety of AI accelerators being used (in management’s words, “the heterogeneity of silicon”), and management thinks both trends are positive for Datadog; the heterogeneity of silicon currently applies to only a very small handful of companies, but management sees a growing opportunity; management was historically more optimistic for AI inference as a growth market for Datadog, but they are increasingly seeing AI training as also a growth market for the company too, driven by growing adoption by the hyperscalers; management is agnostic about the source of usage on Datadog, whether it’s humans or agents; AI training is becoming a growth market for Datadog because it has changed from something artisanal to something in production-mode that has scaled by orders of magnitude and that needs to be incredibly reliable; management is investing heavily into security for AI agents; management thinks there’s a chance a good portion of the market leans towards on-premise observability products
There is no change to our overall view that digital transformation and cloud migration are long-term secular growth drivers for our business. But we now have an additional secular growth driver with AI as we help our customers deliver more value with this transformative new technology. Now more than ever, we feel ideally positioned to help customers of every size and every industry as well as all types of users, whether humans or AI agents, so they can transform, innovate and drive value through AI and cloud adoption…
…The broader market that’s interesting here is training. Training used to be something only 2 or 3 companies were doing, or maybe 4 or 5 at a large scale. And it looks like training actually might democratize quite a bit more, and many companies will train models on a regular basis. So it becomes more of a viable category for service providers like us, basically. I think the heterogeneity of the silicon is definitely a trend that plays in our favor there. The more heterogeneous, the more you need someone else to make sense of everything for you and tie it all together, and also tie it all with the non-GPU aspects and the rest of the infrastructure and the applications and the users and the developers, which is basically everything we do for a living…
…When you think of who actually has heterogeneous environments today, that is still a very small number of companies; Google barely just started selling their TPUs to the outside. So I think it’s still a small number of companies that are there, but we see a growing opportunity there.
Interestingly, last year, when we reported earnings, we said we’re mostly interested in inference workloads and training is not really a market for us yet. Now we actually see training becoming a market. We started landing customers that are actually hyperscalers that have a whole host of homegrown technologies and that are using us specifically in their super intelligence labs to help monitor their workloads, accelerate the training runs, monitor the GPUs also. So we see that as a point of validation that there’s going to be a great market for us…
…We don’t care whether most of the usage is humans, most of the usage is agents. Our business model lends itself to it pretty well, like we’re usage-based, and it doesn’t really matter where the usage is coming from, from that perspective…
…Training was very new a couple of years ago. It was something that was only done by very few companies, and it was, in a way, very artisanal. Like, it was not a production workload. It was something that researchers were building and that was very one-off and homegrown in ways. And now it’s turning into production. It’s turning into something that many more companies are doing. It’s scaling by orders of magnitude. And it’s becoming something that has to be on all the time, reliable and every minute you lose is — or rather every failure you have in your training runs is a week you give away to the competition. And so as a result, it becomes way more interesting as a market for us. And we see some signs of that. Again, we didn’t have a lot of it. We didn’t see a lot of it last year. Now all of a sudden, we’re starting to see quite a bit of activity there and demand…
…On the security of agents, we interface with that in 2 ways. So first, there’s the agents we build ourselves because we are building a lot of automation inside of our product for our customers and agents that automatically identify but also resolve issues without you having to do anything. And there, a lot of it has to do with understanding what permissions to apply, what kind of guardrails to apply, what kind of — how to interface with the humans and how to make that trustworthy and visible in the right way. And so that’s pretty much the whole product surface is to [indiscernible] data. The automation itself actually kind of works already. So you should expect to hear more about that at our conference. This is definitely one big area of investment for us…
…There was a question earlier on data residency and living in customers’ environments. We definitely see a great opportunity there. There is a chance that a good portion of the market leans this way in the future. Today, it’s not the largest part of the market, but we definitely see potential for that. So we’re investing heavily in that part of our product.
Datadog experienced adoption growth in AI native customers in 2026 Q1 that significantly outpaced non-AI customers; the AI native cohort continues to diversify and grow; 22 customers in the AI native cohort now spend more than $1 million annually, with 5 spending more than $10 million annually
Our AI native customer growth continues to significantly outpace the rest of the business. This group continues to diversify and grow, including 22 customers spending more than $1 million annually and 5 spending more than $10 million annually. This group includes the leading companies in foundational models, code-gen tools and vertical-specific AI solutions.
MercadoLibre (NASDAQ: MELI)
MercadoLibre’s management rolled out the company’s 1st AI-powered search experience in the marketplace business in 2026 Q1; the new search experience, which involved LLMs (large language models), has led to uplifts in conversion and click-through rates for sponsored listings in Brazil and Mexico; daily active users of MercadoLibre’s Seller Assistant grew 40% month-on-month in March 2026; an AI assistant has increased the productivity of MercadoLibre’s fulfillment network; the new search experience is able to better understand users’ intent
We rolled out our first AI-powered search experience in our marketplace in Q1’26, shifting the architecture away from keywords and rebuilding it around LLMs. In Brazil and Mexico, the improvement in product relevance led to uplifts in conversion and click-through-rate for sponsored listings, both of which represent incremental revenue. These are early results, which we believe have the potential to transform how our customers search and discover products on our platform. Engagement with our Seller Assistant is strengthening, with daily active users growing more than 40% MoM in March. In shipping, an AI-powered assistant that provides reps with real-time process information and performance challenges has increased productivity across our fulfillment network…
…I think it’s worth highlighting the fact that we deployed LLMs in search in commerce for the first time this quarter. And basically, that is live in Brazil, Mexico and Argentina. So now we are using this technology to better understand users’ intent, combining both knowledge on the user behind the query and better interpretation of the query itself.
MercadoLibre’s AI Assistant in MercadoPago is now automatically alerting users about negative balances and also identifying opportunities for users to earn higher yields on their savings; AI tools are helping MercadoLibre’s sales force for the Acquiring business to be more productive
In Fintech, our AI Assistant is becoming more proactive. In Brazil, it now alerts users to negative balances in accounts connected via Open Finance and identifies funds held elsewhere that could be earning a higher yield with Mercado Pago — and crucially, it can act on these opportunities instantly, moving balances between accounts within seconds. This is a meaningful step beyond a traditional assistant: it is not just surfacing information, it is helping users take action. In Acquiring, AI tools continue to drive significant improvements in sales force productivity, contributing to the strong market share gains we are seeing across the region.
Through AI, MercadoLibre’s productivity KPIs were up 56%-80% year-on-year in 2026 Q1 even though headcount was up by just 8%; senior engineers now spend time building code instead of just reviewing code; MercadoLibre is rolling out Claude Cowork to its 31,000 employees
Headcount grew 8% YoY in Q1’26 – a carryover effect of 2025 hiring – but productivity KPIs are growing 7-10x faster. Many of our most senior engineers that were previously spending most of their time reviewing code are now also building code because of the productivity gains enabled by AI tools. Rollbacks – code that is returned to its developer due to errors – are materially lower YoY. More broadly, we have rolled out Claude Cowork to 31,000 employees, making Mercado Libre one of the earliest, large-scale enterprise adopters globally.
Shopify (NASDAQ: SHOP)
Shopify’s management had bet early on AI and now AI is embedded in everything the company does; Shopify shipped 300 new products and features in 2025 while keeping headcount flat; Shopify has an AI coding partner built right into Slack
In 2026, AI is now Shopify’s native language. We bet early on AI and forced its adoption. It’s embedded in everything we do, the products we build, the channels we power, the way every single person on the team operates. AI has become an exoskeleton for everyone at Shopify, giving them a virtual team of agents and that makes room for rapid experimentation. It allows them to pursue multiple ideas at the same time and then double down on the winners…
…We shipped over 300 new products and features last year alone. We kept our head count flat, which we’re very proud of. And that’s only possible because something has changed fundamentally. And I know Tobi has been talking a bit about river, which is a perfect example of it. It’s this AI coding partner built right into Slack for the entire team, where they can pull it into any thread, any conversation, and it does, frankly, a remarkable amount of the engineering work. And we built it because we needed it, and now it’s deeply embedded in how we operate.
Shopify’s management believes that entrepreneurs will benefit deeply from AI because AI-powered shopping democratises discovery, and this in turn benefits Shopify; each time the world gets more complex, Shopify becomes more valuable for merchants because the company absorbs the complexity into its systems; management sees 3 reasons why Shopify is in a very strong position in the AI age, namely, the company’s (1) data on millions of merchants, hundreds of millions of buyers and billions of products, that enables it to build products informed by the insights developed from the data, such as Sidekick mentioned, (2) demand conversion flywheel, and (3) ability to absorb complexity for merchants; Shopify’s structural advantage is that it gives merchants everything they need, and the company is shipping products even faster now through AI
No group benefits more from AI than entrepreneurs. The logic is simple. AI is making entrepreneurship dramatically more accessible and in fact accelerated. That means we’re going to see more entrepreneurs, and they’re going to scale more easily. AI-powered shopping democratizes discovery. Reach is not just influenced by budget anymore, it is influenced by relevance, which benefits both merchant and buyer. And the right products find the right shopper at the right moment. And this is enormous potential for new and scaling merchants. And because we win when they win, it also has enormous potential for Shopify…
…Every single time the world gets more complex, Shopify gets more valuable. We absorb more of that complexity into our systems and become more valuable to merchants. So when we look at this new era of commerce that we’re in, there are really 3 core principles that explain why Shopify is in such a strong position…
…The first principle, Shopify has a huge advantage that is about to compound. We have 20 years of commerce data. We have data on purchase intent across millions of merchants, hundreds of millions of buyers and billions of products. And in a world where real-time information is now table stakes, the edge is the insight beneath it. And that requires depth, not just access, but experience. We’ve seen merchants start, stall, pivot and scale millions of times across every category and geography. It allows us to build on the real behavior of commerce and to keep shipping products grounded in insights only we have, deep experience applied at speed. That is very hard to replicate and it compounds…
…The second principle, which is the demand conversion flywheel. It should be getting more obvious that every quarter that Shopify is no longer just the platform to convert demand, we are becoming the platform to create it too. And that end-to-end position is a major advantage for merchants…
…The third principle I’ll leave you with is what I call invisible complexity. Here’s the thing. The hardest parts of commerce are the parts that nobody sees, and this is where Shopify thrives…
…That’s the structural advantage of Shopify. We give you everything you need by operating across the entire commerce stack. It’s not the power of any one element of the platform. It’s how they all work together to help merchants accelerate their success. It’s the knowledge and expertise readily available through Sidekick. It’s the speed, context and simplified complexity behind checkout. It’s the ability to sell across every channel, every surface and every geography from day 1. Internally, we are making every function faster, sharper and more productive, and output per employee is improving through deliberate AI usage. The result is that we are building more, shipping more and serving more merchants.
Sidekick is Shopify’s intelligent assistant for merchants that is trained on the company’s knowledge base; the number of weekly active shops using Sidekick grew 385% year-on-year in 2026 Q1; 12,000 custom apps were created with Sidekick in 2026 Q1, up 200% sequentially; half of all Shopify Flows (Shopify’s workflow builder) generated in 2026 Q1 were built with Sidekick; theme edits with Sidekick were in the multimillions in 2026 Q1, up 1,000%; Sidekick has a smart suggestions feature called Pulse; Pulse recently suggested to an accessory brand to create a social proof page and when the accessory brand agreed, the page was created in minutes at no incremental cost to the accessory brand; in the past, the accessory brand would have required a team and several weeks to build the page; merchants that use Sidekick become power users very quickly; Sidekick is used internally at Shopify; management sees Sidekick as a complement to Shopify’s App Store, not a replacement; Sidekick is enabling merchants to build individualised apps rapidly, and thus, move much faster
Sidekick is the perfect example of this. As a reminder, this is our intelligent assistant, which is trained on our knowledge base, paired with completely personalized intel it has about each merchant’s particular business…
…The number of weekly active shops using Sidekick in Q1 was up 4x year-over-year. We saw over 12,000 custom apps created in Q1 alone using Sidekick. And nearly half of all Shopify flows generated in Q1 were built with Sidekick. And theme edits just from last quarter are in the multimillions, growing over 1,000% in a single quarter…
…And then there’s Pulse, Sidekick’s smart suggestions feature, which proactively delivers personalized recommendations for merchants using market trends and data from their store, which Sidekick then executes on the merchant’s behalf. And I’ll give you a great example that I just saw the other day. It was an accessory brand, and Pulse noticed that this brand was getting attention in the right places. Its products were being endorsed by fashion publications and showing up on celebrities’ Instagram profiles. So it proactively suggested that the merchant create a social proof page on their website to build trust and validation. And once the merchant agreed, Sidekick created that page on the merchant’s behalf, all within minutes. Now just a few months ago, that process would have required multiple specialists in marketing, UX design and copywriting, often at an incremental cost to the merchant, and likely several weeks from start to finish. And now it is happening autonomously in minutes at 0 incremental cost to the merchant. And that is just one of the smart recommendations being served up to that merchant as part of their daily operations…
…Weekly active shops are up 385% using Sidekick. We saw 12,000 custom apps built in Q1, which is up like over 200% quarter-over-quarter…
…Merchants that are just starting to play with it really become power users very, very quickly…
…The impact that we’re seeing not only in terms of how our merchants are using Sidekick, but how we’re using it internally has been super impactful…
…Some of them have actually discovered this incredible tooling, they’re building for their own business and then put in the App Store as well. But in terms of what Sidekick is doing, like Sidekick actually, we see as a real supplement to the App Store, not a replacement…
…The applications that are being built by Sidekick are really very specific, nuanced feature sets for particular merchant businesses. And so for most of them, it really is just for the individual merchant. We see the opportunity for the app developers just to continue. That being said, though, what is happening that is super interesting is that now merchants who may have had to spend weeks or even months building a feature, either internally or by hiring an agency to do so, are able to do so much more of that work themselves using Sidekick, and that means they’re able to go much faster.
Shopify’s management thinks that emerging AI channels for shopping, such as ChatGPT, Microsoft Copilot, and Google and Meta’s AI services, will be a tailwind for e-commerce; Shopify is the only platform enabling discovery and selling inside ChatGPT, Copilot, and Google from a single system of record; AI-driven traffic to Shopify stores is up 8x year-on-year in 2026 Q1; orders from AI-powered searches are up 13x year-on-year in 2026 Q1; new buyer orders from AI-channels are happening at 2x the rate of other channels; Shopify’s Catalog feature provides the necessary information on 1 billion products for AI agents to surface the most relevant products in seconds; traffic from Catalog-powered AI searches converts 2x more traffic than general AI searches; usage of Shopify’s Sign In With Shop user verification tool is up 3x year-on-year in 2026 Q1; Sign In With Shop is important for agentic commerce because it enables agents to know who they are buying for; agents are not bypassing Shopify; Shopify is the storefront within ChatGPT’s recent move to having in-app browsers for checkouts; Shopify recently introduced an agentic plan that allowed brands to sell in AI channels through Shopify Catalog with no Shopify stores required; non-Shopify merchants are realising that Shopify Catalog is enabling their products to surface on agentic surfaces much better than web-scraping, and it is leading these merchants to join the Shopify ecosystem; OpenAI and Microsoft are already using Catalog
We believe that new and emerging AI channels, places like ChatGPT, Microsoft Copilot, Google AI Services and Meta will be a tailwind to driving e-commerce growth and penetration over time…
…We are the only platform that enables discovery and selling inside ChatGPT, Copilot and Google, all from one single system of record. And the early signals on AI channels are really compelling. And in the first quarter, AI-driven traffic to Shopify stores has grown 8x year-over-year, while orders from AI-powered searches have increased nearly 13x. And within this, new buyer orders are occurring at nearly twice the rate of other channels…
…Let’s talk about Shopify’s catalog because this really, really matters. To date, we’ve structured more than 1 billion products with clean attributes, real-time pricing and accurate inventory so AI agents can surface the most relevant products in seconds, and the results speak for themselves. Traffic from catalog-powered AI searches converts 2x more than traffic from general AI searches where the agent is working from scraped or often outdated information from across the web…
…Sign in with Shop is our user verification tool, which recognizes buyers across devices, stores and surfaces with no sign-in friction. And usage is growing steadily. We are up 3x year-over-year, and it is now enabled across nearly our entire merchant storefront base. In an agentic world, this really matters. Agents need to know who they are buying for and we are ready…
…Agents do not bypass Shopify, just the opposite. In fact, they write right into Shopify. I mean, I think you saw in sort of recent headlines that merchant storefronts really matter. You saw ChatGPT move to in-app browsers for their checkouts. So it’s literally the Shopify storefront within the chat. And again, when a buyer is shopping in ChatGPT, they’re browsing Shopify’s incredible catalog. So the momentum on agentic has been amazing…
…In terms of some of the stuff we’re doing with the agentic plan, for example, again, that rolled out early March. That means that any brand on any platform can now sell across AI channels via Shopify Catalog and no Shopify stores required…
…The big thing, though, with catalog is that I think a lot of non-Shopify merchants are seeing that catalog is actually doing a much better job of organizing and syndicating their products across every agentic surface versus sort of the old scraping thing that was happening prior to catalog. So it’s doing 2 things. One, it is unequivocally getting Shopify connected with a lot more non-Shopify merchants per se and beginning those conversations, which, again, may lead to them joining the agentic plan or ultimately may lead them to come into Shopify for their entire migration, which obviously is our plan and our hope. But even if they just want to be part of catalog and just be part of the agentic plan on its own, that already is a massive lift to them relative to everything else…
…OpenAI and Microsoft are already using Catalog to power discovery.
Shopify co-developed the open Universal Commerce Protocol (UCP) with Google; UCP enables the full commerce journey from product discovery to post-purchase support; management built UCP because they believe that agentic commerce should be based on open standards; management has created the UCP Tech Council, which recently saw Amazon, Meta, Microsoft, Salesforce, and Stripe become members
You might have seen the latest news on the Universal Commerce Protocol, or UCP, which we co-developed with Google. UCP is an open protocol that makes agentic commerce work at scale. It enables the full commerce journey, product discovery, checkout, payment and post-purchase, across any platform with any payment processor.
We co-developed UCP because we believe the future of commerce runs on open standards, not closed systems. And then we created the UCP Tech Council, the technical body that steers the protocol’s direction to ensure it evolves to meet the needs of businesses, platforms, developers and consumers. We are now seeing the biggest and most innovative companies across essentially the entire industry coming together around UCP to help push agentic commerce forward. And last month, Amazon, Meta, Microsoft, Salesforce and Stripe all joined the council, committing their expertise in Internet-scale transaction processing to build one universal protocol for commerce.
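Purely as an illustration of the journey described above, and not the real UCP specification (every name below is a hypothetical stand-in), the four stages can be pictured as an ordered pipeline that any agent walks through in sequence, regardless of platform or payment processor:

```typescript
// Hypothetical stand-ins; not types from the actual UCP specification.
type Stage = "discovery" | "checkout" | "payment" | "post_purchase";

// The full commerce journey, in order.
const JOURNEY: Stage[] = ["discovery", "checkout", "payment", "post_purchase"];

// An agent advances one stage at a time; after post-purchase there is
// nothing left, so the function returns null.
function nextStage(current: Stage): Stage | null {
  const i = JOURNEY.indexOf(current);
  return i >= 0 && i < JOURNEY.length - 1 ? JOURNEY[i + 1] : null;
}

console.log(nextStage("checkout")); // "payment"
```

The point of an open standard here is that every participating platform and processor agrees on the same stages and handoffs, so an agent built against the protocol works everywhere.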
Gross margin for Subscription Solutions was similar to a year ago, as economies of scale and efficiencies in support were partially offset by increased LLM costs from growing usage of Shopify’s AI products; management expects pressure on the gross margin from usage of Shopify’s AI products to continue
Gross profit for Subscription Solutions grew 21%, with gross margin coming in at 80%, in line with Q1 2025. Economies of scale and efficiencies in support were partially offset by increased LLM costs, driven by growing merchant usage of our AI products, most notably Sidekick. We expect this dynamic to continue.
AI is writing well over 50% of Shopify’s code today and the share is rising; there are more app developers building for Shopify’s ecosystem than ever before, and Shopify is using AI to speed up the app approval process
AI right now writes well over 50% of our code today, and that number is going up significantly, not down…
…You’re seeing more app developers build for Shopify’s ecosystem than ever before. In fact, we’ve now put the app approval process on rails using incredible AI testing so that we can get more apps into the app store faster.
Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, Coupang, Datadog, MercadoLibre, Meta Platforms, Microsoft, Salesforce, and Shopify. Holdings are subject to change at any time.