
What We’re Reading (Week Ending 14 December 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We regularly share a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the readership of The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 14 December 2025:

1. When Mountains Become Cages: Lessons from the Sichuan Basin – Eugene Ng

The Sichuan Basin (四川盆地) is surrounded by mountains on all sides and is drained by the upper Yangtze River and its tributaries. The basin is anchored by Chengdu, the capital of Sichuan province, in the west, with the Chengdu Plain and Chongqing in the east…

…The Tibetan Plateau contains the headwaters of most of the streams and rivers in its surrounding regions. This includes the three longest rivers in Asia (the Yellow River, the Yangtze River, and the Mekong River).

The upper tributaries of the Yangtze River (长江 or 扬子江) flow through the Sichuan Basin, providing water for irrigation to grow crops, and for civilisation…

…Because of its relative flatness and fertile soils, the Sichuan Basin can support a high population density, providing staples such as rice, wheat, and barley…

…The Sichuan Basin was the strategic fortress that shaped the Three Kingdoms era (220-280 AD), following the collapse of the Han Dynasty. Wei (in the north) was led by Cao Cao, his son Cao Pi, and strategist Sima Yi. Shu Han (in the southwest) was led by Liu Bei, with strategist Zhuge Liang, and warriors Guan Yu, Zhang Fei, and Zhao Yun. Wu (in the southeast) was led by Sun Quan, with strategist Zhou Yu, and Sun Ce.

Surrounded by mountains and accessed through treacherous gorges, Sichuan was nature’s citadel. Easy to defend, nearly impossible to invade. Emperor Liu Bei built his entire kingdom in Sichuan. When he lost the battle for central China, Sichuan became his refuge and his power base.

However, the Sichuan Basin was both a blessing and a curse. It kept Shu Han alive for decades against stronger rivals, but the same isolation made it nearly impossible to project power outward after decades of failed northern campaigns.

The same mountains that kept enemies out also kept Shu Han’s armies in. Zhuge Liang launched five major northern expeditions against Wei, and all sputtered out for the same core reasons:

  1. Geography was brutal. To attack Wei, Shu had to march through mountain passes and supply armies across hostile terrain. Wei just had to defend chokepoints. Offense is always harder; offense uphill through mountains is nearly impossible.
  2. Economics didn’t add up. Shu was the smallest, poorest kingdom—one province against Wei’s nine. Every campaign drained resources Shu couldn’t replenish. Wei could lose battles and recover; Shu couldn’t afford to lose anything.
  3. Talent ran thin. Zhuge Liang was brilliant, but he couldn’t be everywhere. When he died in 234 AD, Shu’s brain died with him. Wei had depth; Shu had dependence.
  4. Strategic logic was flawed. The campaigns weren’t really about conquering Wei—they were about survival through offense, keeping Wei preoccupied so they wouldn’t invade Shu. Defense disguised as attack. It bought time but burned treasure…

…That is why Shu Han, despite having brilliant strategists like Zhuge Liang, could never quite break through to challenge Wei’s dominance in the heartland of the North China plains (华北平原). They were trying to play offense from the strongest defensive position in China…

…Shu Han’s mountains kept enemies out but armies in. Companies build defensive moats: loyal customers, proprietary technology, high switching costs, and then discover that those same moats prevent them from expanding into new markets. The thing that protects them eventually confines them. Ask BlackBerry how their keyboard moat worked out. Ask Intel if their x86 architecture saved them from irrelevance. Defense becomes offense becomes history…

…The North China Plain birthed Chinese civilization because the flat land, water, and soil aligned. In investing, today’s geography is market size, secular tailwinds, and competitive position. Invest in businesses riding massive currents, the Yangtze Rivers of commerce, not isolated mountain kingdoms. Find the disruptors and top dogs commanding vast plains of opportunity (i.e., large total addressable markets), where continued expansion is possible, and resources flow abundantly. The best investments are not defensive fortresses. They are empires with still abundant room to build and grow…

…The Yangtze River still flows through Sichuan. The mountains still stand. But Shu Han is gone. Geography endures. Dynasties do not. Companies do not last forever. Similarly, management does not, as leaders eventually have to pass the torch.

Niche businesses prosper, then calcify, then fade. Without access to vast markets, even genius becomes a footnote. The question is not whether you are smart. It is whether your terrain allows for growth or just survival.

2. Horses – Andy Jones

Engines, steam engines, were invented in 1700.

And what followed was 200 years of steady improvement, with engines getting 20% better a decade.

For the first 120 years of that steady improvement, horses didn’t notice at all.

Then, between 1930 and 1950, 90% of the horses in the US disappeared…
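
A quick compounding check of the 20%-per-decade figure (our arithmetic, not the author's) shows why 120 years of quiet improvement can end in a cliff: the improvement factor after n decades is 1.2^n, which stays unremarkable for a long time and then suddenly is not.

```python
# Compounding check (our arithmetic, not the author's): engines improving
# 20% per decade are 1.2**n times better after n decades.
for years in (120, 200, 250):
    factor = 1.2 ** (years / 10)
    print(f"after {years} years: engines ~{factor:.0f}x better")
# -> ~9x after 120 years, ~38x after 200, ~95x after 250
```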

…I was one of the first researchers hired at Anthropic.

This pink line, back in 2024, was a large part of my job. Answer technical questions for new hires.

Back then, me and other old-timers were answering about 4,000 new-hire questions a month.

Then in December, Claude finally got good enough to answer some of those questions for us.

In December, it was some of those questions. Six months later, 80% of the questions I’d been asked had disappeared.

Claude, meanwhile, was now answering 30,000 questions a month; eight times as many questions as me & mine ever did…

…But while it took horses decades to be overcome, and chess masters years, it took me all of six months to be surpassed.

Surpassed by a system that costs one thousand times less than I do.

A system that costs less, per word thought or written, than it’d cost to hire the cheapest human labor on the face of the planet.

And so I find myself thinking a lot about horses, nowadays.

3. Energy Predictions 2025 – Casey Handmer

In 2025, headlines scream that datacenters are pushing prices up and consuming all the power. I think datacenters are exposing the rot in a moribund power generation and delivery industry which has proven unable to meet demand in recent years. But it is a moot point.

Datacenters are already building their own captive power plants. As AI demand outstrips production of gas turbines, hyperscalers will turn to offgrid solar+battery power systems, which are already competitive with pure gas or gas+solar in the sunnier parts of Earth.

Depending on location, a 10x overbuild of solar and batteries is sufficient to hit >99.5% uptime for the GPUs…

…On the flip side, these captive solar power plants will be curtailing approximately 75% of their generated power and will be able to provide net power on all but a few days per year. That is, 99% of the time, which is substantially higher utilization than any conventional thermal power plant.
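
As a back-of-envelope (ours, with assumed numbers, not Handmer's model), the curtailment fraction falls out of a simple annual energy balance once you assume a solar capacity factor; the exact figure depends heavily on how "overbuild" is defined and on battery sizing.

```python
# Toy annual energy balance (our assumptions: constant datacenter load,
# sunny-site capacity factor of 0.25, lossless batteries). "Overbuild" here
# means solar nameplate as a multiple of the constant load; other baselines
# push the curtailment figure toward the article's ~75%.
HOURS = 8760          # hours per year
load_mw = 100         # assumed constant GPU load
cf = 0.25             # assumed solar capacity factor

for overbuild in (6, 10):
    generated = overbuild * load_mw * cf * HOURS   # MWh/year produced
    consumed = load_mw * HOURS                     # MWh/year used (batteries shift, not add)
    print(f"{overbuild}x overbuild -> ~{1 - consumed / generated:.0%} curtailed")
# 6x -> ~33% curtailed; 10x -> ~60% curtailed under these assumptions
```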

Within the next five years, market power between utilities and datacenters will flip, with DCs becoming the preferred partners for load-growth power generation.

To spell out the implications, this means that consumers will get access to extremely competitive (cheap) power most of the time, and some combination of utility-owned and privately owned batteries will be needed to smooth out the gaps, as they would be anyway…

…If SpaceX or a competitor can ship inference compute to a 560 km unshaded sun-synchronous orbit which is 80% 1 kg/m^2 solar arrays by mass and 80% compute by cost, then it should be possible to make money. Otherwise, we can expect to see compute being developed on the ground…

…At Terraform Industries, we’re pioneering the technology to convert cheap solar power, air, and water into synthetic natural gas and other hydrocarbons. Within the next five years, solar cost reductions will drive our process to be cost-preferred in all hydrocarbon import markets, and geological sources of oil and gas will never again be able to compete. Our grandchildren will be swimming in copious cheap energy and wondering what all that drilling was for.

We believe that the path forward is lime-calcite captured CO2 + electrolyzed H2 to make CH4 and CH3OH (methanol). Methanol can be upgraded via a wide variety of existing petrochemical processes to make DME, ethylene, propane, gasoline, kerosene, and almost anything else you can imagine…
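
For reference, the chemistry being described is standard; the reactions below are textbook stoichiometry (our addition), not Terraform's disclosed process parameters.

```python
# Stoichiometry sketch (standard reactions, our addition):
#   Sabatier methanation:  CO2 + 4 H2 -> CH4 + 2 H2O
#   Methanol synthesis:    CO2 + 3 H2 -> CH3OH + H2O
M = {"H2": 2.016, "CH4": 16.04, "CH3OH": 32.04}   # molar masses, g/mol

h2_per_kg_ch4 = 4 * M["H2"] / M["CH4"]      # ~0.50 kg H2 per kg CH4
h2_per_kg_meoh = 3 * M["H2"] / M["CH3OH"]   # ~0.19 kg H2 per kg CH3OH
print(f"{h2_per_kg_ch4:.2f} kg H2 per kg CH4")
print(f"{h2_per_kg_meoh:.2f} kg H2 per kg CH3OH")
```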

…In 2025, most gas is used for electricity generation, while most oil is used for cars, trucks, ships, and aircraft.

Solar is going to continue to displace all other primary electricity generators. And electric cars and trucks will continue to dominate growth in ground transportation.

By 2045, natural gas will be used as LNG primarily for high performance supersonic aviation, shipping, and industrial heat.

Methanol will be used as the universal industrial chemical precursor for plastics, paints, fertilizers, adhesives, as well as specialty fuels. Kerosene will service the legacy aviation fleet. Internal combustion piston engines will ultimately go the way of the piston steam engine…

…They don’t want you to know this, but rocks are made of metal oxides, and infinitely abundant commonly occurring rocks such as basalt contain basically every metal you could ever want.

With sufficiently cheap power, we no longer need to travel to the ends of the Earth to build mines. Instead, build a solar powered rock refinery at your local gravel pit…

…But much of the coast of Australia, Chile, Peru, Namibia, South Africa, Mexico, Saudi Arabia and other Gulf states have essentially infinite quantities of cheap land, free solar power, and sea water. Democratized solar desalination technology can turn any and all of these areas into arbitrarily lush paradises with <1% of the available land under solar arrays.

4. Why AGI Will Not Happen – Tim Dettmers

One of the most common misconceptions I see is that people assume hardware keeps improving and improving. This is an important misconception that explains a lot of the poor thinking around AI progress. The efficiency of GPUs has driven almost all innovation in AI. AlexNet was only possible by developing one of the first CUDA implementations that could compute convolutions over networked GPUs. Further innovation was mostly possible through improved GPUs and using more GPUs. Almost everybody sees this pattern — GPUs improve, AI performance improves — and it is easy to think that GPUs will improve further and will continue to improve AI outcomes. Every generation of GPUs has been better, and it would seem foolish to think that it will stop. But actually, it is foolish to think that GPUs will continue to improve. In fact, GPUs will no longer improve meaningfully. We have essentially seen the last generation of significant GPU improvements. GPUs maxed out in performance per cost around 2018 — after that, we added one-off features that exhaust quickly.

The first of these one-off features was 16-bit precision, then Tensor Cores, or the equivalent, then high-bandwidth memory (HBM), then the TMA or equivalent, then 8-bit precision, then 4-bit precision. And now we are at the end, both in the physical and the idea space. I have shown in my paper about k-bit inference scaling laws what data types with particular block sizes and computational arrangements are optimal. This has already been adopted by hardware manufacturers. Any further improvement will lead not to straightforward improvements but to trade-offs: either better memory footprint at lower computational efficiency or higher computational throughput at higher memory footprint. Even if you can innovate – linear improvements need exponential resources – further improvements will be trivial and will not add any meaningful advancement.

While GPUs can no longer improve meaningfully, rack-level optimizations are still critically important. Efficient shuttling of key-value caches is one of the most important problems in AI infrastructure. The current solution to this problem, however, is also relatively straightforward. Companies like OpenAI boast about their AI infrastructure, but it is relatively simple to design because there is essentially only one optimal way to design it. And while it is complex to implement, it just needs clear thinking and mostly hard, time-intensive engineering. But the overall system design is not particularly novel. OpenAI – or other frontier labs – have no fundamental advantage in their inference and infrastructure stacks. The only way to gain an advantage is by having slightly better rack-level hardware optimizations or data-center-level hardware optimizations. But these will also run out quickly – maybe 2026, maybe 2027…
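
To see why shuttling key-value caches is such a heavy problem, a rough sizing sketch helps. The model dimensions below are our assumptions in the spirit of a large dense transformer, not any particular product.

```python
# Rough KV-cache sizing (our illustration, not from the essay). Per token,
# each layer stores keys and values: 2 * kv_heads * head_dim elements.
layers = 80        # transformer layers (assumed)
kv_heads = 8       # grouped-query KV heads (assumed)
head_dim = 128     # dimension per head (assumed)
bytes_per = 2      # fp16/bf16 elements

def kv_cache_bytes(seq_len: int) -> int:
    return 2 * layers * kv_heads * head_dim * bytes_per * seq_len

for ctx in (8_192, 128_000):
    print(f"{ctx:>7} tokens -> ~{kv_cache_bytes(ctx) / 1e9:.1f} GB per sequence")
# ~2.7 GB at 8k context, ~42 GB at 128k: this is what gets shuttled around
```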

…I believe in scaling laws and I believe scaling will improve performance, and models like Gemini are clearly good models. The problem with scaling is this: for linear improvements, we previously had exponential growth in GPU performance, which canceled out the exponential resource requirements of scaling. This is no longer true. In other words, previously we invested roughly linear costs to get linear payoff, but now it has turned to exponential costs. That would not be a problem on its own, but it sets a clear physical limit on scaling that is rapidly approaching. We have maybe one, maybe two more years of scaling left because further improvements become physically infeasible. The scaling improvements in 2025 were not impressive. Scaling in 2026 and 2027 had better work out.
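
Dettmers' "linear improvements need exponential resources" can be made concrete with the usual power-law form of neural scaling laws, L(C) = a · C^(−b); the constants below are illustrative, not fitted to any real model.

```python
# Toy scaling-law illustration (our sketch, not Dettmers' numbers):
# with loss L(C) = a * C**(-b), equal absolute loss improvements require
# ever-larger multiples of compute.
a, b = 10.0, 0.05            # assumed constants; b is small, as in practice

def compute_for_loss(L: float) -> float:
    # invert L = a * C**(-b)  ->  C = (a / L)**(1 / b)
    return (a / L) ** (1 / b)

prev = None
for L in (3.0, 2.9, 2.8, 2.7):   # equal, "linear" loss improvements
    C = compute_for_loss(L)
    if prev is not None:
        print(f"loss {L}: ~{C / prev:.1f}x the compute of the previous step")
    prev = C
# each 0.1 loss step costs roughly 2x more compute, compounding every step
```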

Despite these exponential costs, the current infrastructure build-out is reasonable, particularly with the growth of inference use, but it still creates a very precarious balance. The biggest problem is this: if scaling does not provide much larger improvements than research/software innovations, then hardware becomes a liability and not an asset…

…The key value of AI is that it is useful and increases productivity. That makes it beneficial. It is clear that, similarly to computers or the internet, AI will be used everywhere. The problem is that if AI were just used for coding and engineering, it would have a very limited impact. While a lot of economic activity is supported by digital programs, these also have diminishing returns, and producing more software will not improve outcomes significantly if existing software is already good enough (just look at the SaaS failure in China). This makes widespread economic integration absolutely vital for AI effectiveness.

So in order to provide real value, AI needs to be used in ways that provide new benefits, not just improvements to what already exists. This is a difficult problem, but the right answer is to integrate AI into everything to squeeze out non-linear improvements, see what works and what does not, then keep what is working. China is taking this approach by subsidizing applications that use AI to encourage adoption. The Chinese population is very receptive to innovation, which facilitates this process. It is nothing unusual in China to see an 80-year-old grandma use AI to help her with her daily life. The US, on the other hand, bets on ideas like AGI and superintelligence, which I believe are fundamentally flawed concepts that have little relevance to future AI progress. This becomes clear when you think carefully about what these terms actually mean in physical reality…

…The concept of superintelligence is built on a flawed premise. The idea is that once you have an intelligence that is as good or better than humans — in other words, AGI — then that intelligence can improve itself, leading to a runaway effect. This idea comes from Oxford-based philosophers who brought these concepts to the Bay Area. It is a deeply flawed idea that is harmful for the field. The main flaw is that this idea treats intelligence as purely abstract and not grounded in physical reality. To improve any system, you need resources. And even if a superintelligence uses these resources more effectively than humans to improve itself, it is still bound by the scaling of improvements I mentioned before — linear improvements need exponential resources. Diminishing returns can be avoided by switching to more independent problems – like adding one-off features to GPUs – but these quickly hit their own diminishing returns. So, superintelligence can be thought of as filling gaps in capability, not extending the frontier. Filling gaps can be useful, but it does not lead to runaway effects — it leads to incremental improvements.

5. The cure for FOMO is…time – Josh Brown

Strategy, formerly known as MicroStrategy. This is a publicly traded company that once sold software but now serves as the largest publicly traded “digital asset treasury” or DAT. It created and defines the category. For those who haven’t been paying close attention, the idea behind these stocks is that the company sets out to accumulate as much of a crypto asset as it can (in the case of Strategy they’re buying Bitcoin) and the shareholders benefit as the underlying asset (BTC) appreciates. Why not just buy the asset itself or a spot price ETF? Because the digital asset treasury is accumulating the asset at a faster pace using the money it raises via taking on debt or secondary stock sales or preferred stock sales or all three at once.

MicroStrategy currently holds roughly 649,870 bitcoin, acquired at a total purchase cost of about $48.37 billion, which works out to an average price of approximately $74,433 per BTC. Based on the fixed 21 million-coin bitcoin supply, the company controls about 3.0%–3.1% of all bitcoin that will ever exist. Saylor’s going to continue to dilute his shareholders in his quest to accumulate even more of it so, the thinking goes, if you are bullish on the Bitcoin asset itself, you buy his stock and take the ride to even faster gains than you would otherwise get with the ETFs. In this way, he has convinced the faithful that dilution is actually good, not bad. It’s helping the cause.
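
The quoted figures check out to rounding; since the cost figure is itself rounded, the computed average price lands a few dollars off the article's $74,433.

```python
# Quick arithmetic check of the figures quoted above.
btc_held = 649_870
cost_usd = 48.37e9

print(f"average price: ${cost_usd / btc_held:,.0f} per BTC")   # ~$74,430
print(f"share of 21M supply: {btc_held / 21_000_000:.2%}")     # ~3.09%
```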

I never could wrap my head around it. I get the theory, I think, but it hasn’t clicked in terms of why it would work. Maybe this is because I don’t have a mental price target of $1 million per Bitcoin or something like that. I don’t know. I sold all my Bitcoin and bought the BlackRock ETF IBIT a while back to replace it and that’s pretty much the extent of my involvement in the asset class. The appeal of Microstrategy as an investment is mystifying to me still.

But, I must confess, for a long while I was wondering what was wrong with me. Was I missing something? Was there some aspect to this I wasn’t getting? My uncertainty stemmed from the performance of the stock, which was stratospheric…

…Between August 10th, 2020 and last Thanksgiving, MSTR returned 3,050%. An investment of $10,000 would have become worth over $300,000. No other publicly traded company I can find did anything even close to that in the same timeframe. Nvidia, for example, merely 10x’d in the period.

On Wall Street, price is validation, even if price is only temporary. Saylor was validated for the time being. He knew what he was talking about. After all, millions of investors had agreed with him and those who did not had been rendered wrong by what Jeffrey Gundlach often refers to as “the bloodless verdict of the market.” I was dumbfounded…

…And then a funny thing happened. Time went by. Things changed. We got a dozen ETFs listed that could serve the same purpose MSTR had served for the stock market investor – a way to own Bitcoin exposure in a traditional brokerage account. Additionally, Fidelity and Schwab, Robinhood and Public, all became legitimate venues in which to buy, sell and hold the underlying asset. This was a tremendous unlock. Where once MSTR was the only game in town, now there were many options, none of which required people to pay a premium or remember a seed phrase or transact with Coinbase or get involved with cold storage wallets and the like. Bitcoin became as accessible as running water, everywhere and to everyone. Even in an IRA. That was the beginning of the reckoning for investors in MSTR. One year later and we see the result…

…Warren Buffett once famously said the stock market is not a game where the guy with the 160 IQ beats the guy with the 130 IQ every time. He says temperament is much more important than intelligence. Temperament keeps you from acting on impulse. It’s an innate sense that things might look different in the future than they do today. The cure for FOMO doesn’t come in a can or a bottle or a box. Sometimes it pays to just stick around awhile and watch.

The cure is time.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

Even More Of The Latest Thoughts From American Technology Companies On AI (2025 Q3)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q3 earnings season.

Last month, I published More Of The Latest Thoughts From American Technology Companies On AI (2025 Q3). In it, I shared commentary in earnings conference calls for the third quarter of 2025, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2025’s third quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series; for the older commentary, see the earlier articles in the series.

With that, here is the latest commentary, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management has continued to develop the 1st-party Firefly models, while also expanding partnerships with other GenAI models; the new Firefly Image 5 model is performing really well; Firefly is the only app with Adobe’s own commercially safe models and over 25 leading partner models, including those from Google and OpenAI; monetisation of the usage of Adobe’s and 3rd-party models is through Generative Credits; different models and different media types consume different quantities of Generative Credits; consumption of Generative Credits increased 3x sequentially in 2025 Q3 (FY2025 Q4); subscribers who consume more Generative Credits can move to higher-value offerings or add credits; Adobe is attracting new creators through the Firefly application; management sees Firefly as a one-stop shop for accessing industry-leading models integrated into rich creative workflows at an affordable price; Creative Cloud customers are adopting Firefly, with 2x sequential growth in first-time subscriptions of Firefly in 2025 Q3 (FY2025 Q4); management has announced the general availability of Firefly Boards, a new ideation surface that is integrated with industry-leading models from both Adobe and 3rd parties; Firefly Services can now perform automated content production including video resizing, video reframing, image composition, image harmonization, digital-twin generation and more; more than 100 new deals for Firefly Services were signed in 2025 Q3 (FY2025 Q4) by enterprises

We have continued to develop our own commercially safe Firefly Models, while dramatically expanding our ecosystem of GenAI model partnerships. The new Firefly Image 5 model is performing incredibly well with generation quality, native 4-megapixel resolution and industry-leading prompt-based editing capabilities. At Adobe MAX in October, we significantly expanded Firefly to become the only app with our own commercially safe models and over 25 leading partner models including Google, OpenAI, Black Forest Labs, Luma, Runway, Topaz Labs and ElevenLabs. These models are now integrated into our Firefly, Express and Creative Cloud applications. We also announced advanced model capabilities including custom model support for Firefly and Creative Cloud customers.

Usage and monetization of new Adobe and third-party models is measured and charged through Generative Credits. Different models (Firefly, Gemini or Flux, for example) and different media types (video and high-resolution images, for example), consume different quantities of Generative Credits. Generative Credits are a great indicator of high-value usage and credit consumption increased 3x quarter over quarter. As subscribers consume more generative credits, they have the choice of moving to higher value Creative Cloud offerings or acquiring Firefly Credit Add-ons…

…We are attracting new creators to Adobe through the Firefly application, which can be purchased through our Firefly Standard, Pro and Premium subscription plans. Firefly has a rich set of generative AI capabilities that allow users to generate with Adobe and partner models, ideate with Firefly Boards and create and edit videos and images. Simply put, Firefly is a one-stop shop for accessing industry-leading models integrated into rich creative workflows, at an affordable price.  In addition, we’re seeing strong adoption of Firefly from Creative Cloud customers, as they embrace the growing breadth of AI models and tools, seamlessly integrated into creative workflows. We drove 2x quarter-over-quarter growth in first-time subscriptions of Firefly…

…We also announced the general availability of Firefly Boards, a new ideation surface that brings together everything creative professionals need to explore visual and design concepts with stakeholders using industry-leading models, from Adobe and our partners…

…As part of the overall content supply chain solution for marketing use cases, we continue to advance automated content production with Firefly Services that include video resizing, video reframing, image composition, image harmonization, digital-twin generation and more…

…Accelerating adoption of Firefly Services within enterprises with over 100 new deals signed in Q4.

Adobe’s management has atomised Photoshop, Express and Acrobat capabilities as Model Context Protocol (MCP) endpoints; management sees LLMs (large language models) as a great top of funnel for customer acquisition for Adobe; the MCPs are important for Adobe because they allow LLMs to work with Adobe’s models and APIs, which helps Adobe reach more customers 

We also took a huge step forward in Q4 as we showcased the work we’ve been doing to atomize Photoshop, Express and Acrobat capabilities as Model Context Protocol (MCP) endpoints at Adobe MAX…

…Our focus has always been around sort of meeting customers where they are. And that used to predominantly be focused on search and the web, and now we’re seeing this incredible growth with LLMs. And so we are taking all of our technology and making sure that it can run in these LLMs. They represent, in our mind, a great top of funnel. They let us reach new users that we typically wouldn’t have reached with some of the traditional markets that we go through, and we can engage them in new ways…

…Maybe the more important elements and moments of why this is such a critical moment for us is that as LLMs start embracing these model context protocols, these MCP endpoints, it’s no longer that these LLMs are about a prompt to a model and a response. It now gives us the opportunity to have the LLMs actually work with models and APIs, and that plays to a really strong strength that we have and durable differentiator given the incredible APIs we have across creativity and productivity. So it lets us reach a lot more customers, it lets us atomize the capabilities, double down on the freemium experience that we’ve been putting in place.
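
For readers unfamiliar with MCP: it is an open protocol for exposing tools to LLMs, with an open-source Python SDK. Below is a minimal, hypothetical sketch of exposing a capability as an MCP tool; the resize_image tool and its parameters are invented for illustration, since Adobe's actual endpoints are not described in the call.

```python
# Hypothetical MCP server sketch using the open-source MCP Python SDK
# (pip install mcp). The tool below is invented for illustration and is
# not one of Adobe's actual MCP endpoints.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("toy-imaging-server")

@mcp.tool()
def resize_image(url: str, width: int, height: int) -> str:
    """Resize the image at `url` and return a URL for the result."""
    # Real logic would call an imaging API; stubbed here for illustration.
    return f"{url}?w={width}&h={height}"

if __name__ == "__main__":
    mcp.run()   # serves the tool so an LLM client can discover and call it
```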

Adobe’s management is providing imaging, video, and productivity functionality in AI conversational platforms to monetize Adobe’s GenAI capabilities

In addition to delivering applications, we are providing imaging, video, and productivity functionality in ChatGPT, Copilot and other conversational platforms in order to deliver and monetize creative and PDF functionality in new surfaces. Users will be able to work conversationally while still benefiting from the power and precision of Adobe’s industry-leading features and direct-manipulation tools, making it easier than ever to go from intent to outcome, whether editing a PDF, refining an image, or generating a design.

Usage of AI features inside Acrobat and Reader is up 4x year-on-year in 2025 Q3 (FY2025 Q4); management introduced an AI Assistant into Adobe Express in 2025 Q3 (FY2025 Q4) that can generate content and perform complex editing; the AI Assistant has led to significant MAU (monthly active users) growth in Adobe Express; Adobe Acrobat Studio combines conversational, comprehension, and generative capabilities and the customer reception to Adobe Acrobat Studio has been strong

We revolutionized how users consume and comprehend documents by introducing Acrobat AI Assistant in FY24 and recently added PDF Spaces, allowing individuals and teams to create knowledge hubs to collaborate across multiple documents. Users package multiple documents – not just PDFs, but other file types and web links – into a single workspace they can share with others, enabling a collaborative conversational experience. Usage of these AI features inside Acrobat and Reader has grown more than 4x year over year, as users increasingly turn to Acrobat to help them discover insights, synthesize new ideas and share knowledge. 

Adobe Express made significant advances in Q4 with the introduction of an AI Assistant capable of generative content creation and complex editing. Express now supports generative presentations and designs, moving the industry into a post-template world. Express AI Assistant is capable of conversationally editing images, flyers, presentations, infographics and more. Innovations like these have contributed to significant Express MAU growth. 

Adobe Acrobat Studio brings together the conversational consumption and comprehension capabilities of AI Assistant and PDF Spaces with the generative creation power of Express, alongside the PDF tools people know and rely on into a unified offering. Customer reception of Acrobat Studio has been strong, with nearly 50% of Acrobat commercial ETLA’s renewed in Q4 already upgrading to this offering, reflecting user enthusiasm for unified document comprehension and content generation. 

Adobe’s management recently released Premiere Mobile, a next-generation AI video editing tool; Adobe is partnering with Google and YouTube to introduce AI-driven audio and video tools to help creators remix YouTube Shorts

The release of Premiere Mobile in Q4 marks an important milestone in next-generation AI video editing. In partnership with Google and YouTube, we are introducing AI-driven audio and video tools to streamline how creators remix YouTube Shorts, which receive 200 billion daily views.

Creative Cloud recently released a number of new AI capabilities including new models for Generative Fill, and upscaling and prompt editing in Photoshop; management has announced the general availability of Firefly Boards, a new ideation surface that is integrated with industry-leading models from both Adobe and 3rd parties; usage of AI in Creative Cloud applications continues to accelerate; Generative Credit consumption in Creative Cloud, Firefly, and Express in the Creator & Creative Professional category is up 3x sequentially in 2025 Q3 (FY2025 Q4)

Creative Cloud delivered massive new value at Adobe MAX including the release of new models for Generative Fill, upscaling and prompt editing in Photoshop, reflection removal in Lightroom, Turntable in Illustrator and smart masking in Premiere. We also announced the general availability of Firefly Boards, a new ideation surface that brings together everything creative professionals need to explore visual and design concepts with stakeholders using industry-leading models, from Adobe and our partners. Use of AI in these applications continues to accelerate, underscoring the impact AI is having on what creative professionals can produce…

…[Creator & Creative Professional] Accelerating Generative Credit consumption in Creative Cloud, Firefly and Express by individuals and enterprises, which grew approximately 3x quarter over quarter

The MAU (monthly active users) of creative users across Firefly, Express, Premiere Mobile and other freemium offerings was up 35% year-on-year in 2025 Q3 (FY2025 Q4) to over 70 million

Growing our base of creative users across Firefly, Express, Premiere Mobile and other freemium offerings. MAU for these offerings surpassed 70 million in Q4, growing over 35% year over year.

Adobe’s management sees the Adobe Experience Platform (AEP) as a customer data platform that brings together new AI-powered apps and agents to drive customer engagement and loyalty, as well as reduce costs; AEP evaluates 35 trillion segments and activates 70 billion profiles daily; management has released 6 new AI agents powered by AEP Agent Orchestrator

Adobe Experience Platform (AEP) is a leading customer data platform that serves as the foundation in enterprises for digital customer engagement and brings together new AI-powered apps and agents to drive engagement and loyalty, as well as to reduce costs. Our platform operates at scale with over 35 trillion segment evaluations and more than 70 billion profile activations per day. We released six new AI agents powered by AEP Agent Orchestrator to transform how businesses build, deliver and optimize marketing campaigns and customer experiences.

Generative AI traffic to retail sites is up 760% in the 2025 holiday season; management is seeing AI-powered traffic to retail sites from LLMs and agentic browsers rising and this requires different approaches for conversion; the Adobe Experience Manager helps solve retailers’ needs in the agentic web; management thinks Semrush, which Adobe recently announced the acquisition of, has important assets that address marketers’ growing need for sustained brand relevance in AI search

Our most recent Adobe Digital Index data, which is based on online transactions across more than 1 trillion visits to U.S. retail sites, shows that generative AI traffic is up 760% thus far in the 2025 holiday season. Our data shows that AI-powered traffic from LLMs and agentic browsers is rising and requires different approaches to conversion, underscoring the growing importance of the agentic web and our opportunity to provide insights and automation to marketers.

Brand visibility is critical to success in this new agentic web, and Adobe solves customer needs through solutions like Adobe Experience Manager, Adobe Analytics and the newly available Adobe LLM Optimizer. The pending acquisition of Semrush, which we announced a few weeks ago, brings complementary assets to help us address marketers’ growing need for sustained brand relevance in AI search. Over the past decade, Semrush’s data-driven search engine optimization and generative engine optimization solutions have earned the trust of industry leaders like Amazon, JPMorganChase and TikTok. Together, Adobe and Semrush will deliver a comprehensive solution to enable marketers to shape how their brands appear across owned channels, LLMs, traditional search and the wider web.

Adobe’s management launched Adobe Brand Concierge in 2025 Q3 (FY2025 Q4), an AI-first application for businesses to manage AI agents for agentic commerce; management is seeing significant customer interest in Adobe Brand Concierge

Adobe Brand Concierge, which was launched in Q4, is an AI-first application enabling businesses to configure and manage AI agents that guide consumers from exploration to purchase decisions, using immersive and conversational experiences. By uniting data, content and agentic AI in a single experience, Brand Concierge gives businesses ownership of the critical discovery and consideration phase. We’re pleased with the significant customer interest and the wins we had for Brand Concierge in Q4.

Adobe GenStudio’s ending ARR grew 25% year-on-year in 2025 Q3 (FY2025 Q4); management sees GenStudio as the product that takes care of every aspect of content production for enterprises

GenStudio is our comprehensive offering spanning content ideation, creation, production, and activation. At MAX, we introduced new scaled content production capabilities through Firefly Services, enhanced model customization with Adobe Firefly Foundry, and integration with a growing ecosystem of ad networks. Ending ARR for the Adobe GenStudio solution grew over 25% year over year as the world’s leading brands increasingly turn to Adobe to power their content supply chain…

…GenStudio is really the offering that we want to provide that takes care of every aspect of their content production, whether it’s the creation part of the campaign, whether it’s then creating custom models, whether it’s training it at the back end, whether it’s automating it and then certainly delivery

Adobe’s new agentic web offerings, Adobe LLM Optimizer, Sites Optimizer, and Brand Concierge, had over 50 customers in 2025 Q3 (FY2025 Q4)

Strong customer demand for our newly introduced agentic web offerings with over 50 customers in Q4 for Adobe LLM Optimizer, Sites Optimizer, and Brand Concierge

Adobe’s new AI-influenced ARR is now more than one-third of overall business, or more than $2 billion

Total new AI-influenced ARR now exceeds one-third of our overall business as we integrate AI deeply into our solutions and continue to launch new AI-first offerings which are now included as part of the AI-influenced metric.

Adobe’s management has announced Adobe Firefly Foundry, which provides enterprises with proprietary foundation models trained on their own intellectual property; interest in Firefly Foundry is strong; Firefly Foundry is operated as a managed service; a media and entertainment company that’s an existing customer of Adobe with $10 million of ARR signed for Firefly Services and Firefly Foundry for an additional $7 million; the media and entertainment company was able to train its proprietary model in 2-3 months, and is already seeing increased efficiency in content production; management’s vision for Firefly Foundry is to have a Foundry be created specifically for every single franchise

We also announced Adobe Firefly Foundry at MAX, which provides enterprises with proprietary foundation models trained on their own content, data and brand catalogs. Interest in Firefly Foundry has been strong from enterprise marketing teams and media and entertainment companies, where there is increasing desire to produce content faster and more cost-effectively…

…We introduced Foundry, like you mentioned, at MAX. And the core value is that we train on their content, their data and their brand guidelines. We’re able to generate images, videos, audio and 3D models and we operate it as a managed service. So marketing teams can then train on product shots, environment styles, brand guidelines and media companies actually train on their individual franchises, whether it’s a movie or whether it’s a series, they’ll train their characters, their sets, their props, their locations so they can generate the whole thing…

…Let’s take a media and entertainment company we’re working on, and I’m rounding the numbers here, but to give you a little bit of context, let’s say, that organization was spending $10 million with us ARR on our core creative products that we’ve been selling with them. We ran a sales process with them, engagement with them for about 6 months. We were able to sell them Firefly Services and Firefly Foundry for about $7 million, so a pretty significant step up in terms of the engagement that we have with the customer. We were able to train models within 2 or 3 months, and now we’re running some of those models specifically as managed services for them for ideation and production processes. They’re already seeing increased efficiency in content production. They’re able to generate more production content, and they’re now getting into opportunities that are revenue-bearing opportunities like increasing the types of content they produce for social shorts and personalizing more of it for fan engagement with integration with our real-time CDP…

…The vision clearly is that for every single brand, if you’re a consumer company or for every single TV show or a movie, we can create a Foundry specifically for that particular franchise, as David said, because the ability to help with the automation of that content and production is massive.

Adobe’s management sees Adobe as the only company that can close the loop from the creation of an AI-powered advertising campaign, the execution of that campaign, to the commerce impact of the campaign

There are trillions of dollars that are spent in marketing and our opportunity is to really say we can help you make sure that, that content is more personalized. I’ll have Anil also add after this and deliver it. And the fact is that since we can deliver that content through an ad network and then we understand through our analytics where that is resulting in traffic, where that is resulting in conversion, where that’s not, we’re the only company that can close the loop from the creation of a campaign, the execution of that campaign as well as then actually looking at what that causes in terms of commerce. And so I think our real value proposition in all of this is that as increasingly people are saying, “Hey, I want to use AI to create more.” We can not only optimize and accelerate the amount of content that they’re producing, but we’re the only company that can then help them say, “Hey, this caused so much traffic.”

MongoDB (NASDAQ: MDB)

MongoDB’s management thinks that the AI wave has yet to meaningfully impact MongoDB’s results; it’s still early days, but management is already seeing AI startups building applications on MongoDB; management is seeing large enterprises develop AI agents on MongoDB, but these agents are pilot projects and there are currently no AI agents running in production that can fundamentally transform businesses or serve customers better; management thinks there’s a lot of work needed to change an AI application prototype into one that is enterprise-ready; management is seeing that regulated industries have very different requirements for an AI agent to be in production compared to being in prototype; management is seeing enterprises try out and churn through many different AI agent-building tools

All of this momentum in the core business is happening before the AI wave has meaningfully impacted our results. We are still early, but the signs are encouraging from AI-native start-ups building intelligent applications on MongoDB to large enterprises developing AI agents that will reshape how they operate…

…There are various co-pilots when it comes to productivity types of applications that are happening inside of an organization, whether it’s a bank or a health care organization or a manufacturing organization. But what I have not seen is truly AI agents running in production that fundamentally transform the business or serve customers better. There are many, many pilots still going on…

…We’re clearly seeing a lot of, I would say, prototyping and iteration. I would say the enterprises still have pretty strong and stringent requirements around security and durability and performance. So there’s a big difference between coming out with a prototype and having a production-grade system that an enterprise can truly rely on and trust. And so there is still a lot of work required to make those applications enterprise class…

…When I speak to customers who I’ve been speaking for a long time, in regulated industries, which is financial services, which is health care, which is public sector, the requirement for an AI agent to be in production versus prototype are vastly different, and they are looking for governance, auditability, this and that, while the innovation and the need for the speed is very high. So I have not seen — like customers will tell me, CJ I have 10 agents in production, 15 agents in production. And when I really asked them, I say, are they really customer-facing? Can they be audited on the probabilistic outcome they derive? The answer is, oh, we are still working through that…

…Even the environment on which they are building agents, they are telling me they try one, it doesn’t work, they move on to the next one. So the churn for some of these AI companies that deliver these tools is also very real.

MongoDB’s management thinks that AI applications must connect LLMs (large language models) with companies’ proprietary data, and this connection is an information retrieval problem that requires a very different architecture from the rigid tabular stores that traditional software depended on; management thinks that MongoDB’s document database model has a structural advantage with the architecture that AI applications require; the Voyage MongoDB models are #1 on the Hugging Face retrieval embedding benchmark; MongoDB’s database is the #1 vector database on DB-Engines; MongoDB’s improvement of its embedding and reranking models has driven meaningful accuracy gains and lowered LLM hallucinations; management is hearing from AI-native companies that relational alternatives to MongoDB do not scale for AI workloads

AI applications must connect what LLMs know with what companies know, which is their proprietary data, systems and real-time context. This is fundamentally an information retrieval problem, and it requires a very different architecture than the last generation of software. Rapidly evolving AI models uncover new complex properties about entities and rigid tabular stores cannot deliver the real-time high accuracy performance that AI systems require. At the same time, AI is dramatically increasing the speed at which applications are built and iterated and fixed database schemas simply cannot keep pace.

This is where MongoDB has a structural advantage. Our document model, natively JSON, is built for diverse, fast-changing and interdependent data. Our integrated search, vector search and Voyage embeddings remove the need for brittle bolt-ons, and we are seeing industry-leading results. Number one on the Hugging Face retrieval embedding benchmark with Voyage MongoDB models and the #1 vector database on DB-Engines. Advances in our embedding and reranking models drive meaningful accuracy gains, enabling AI applications to deliver more grounded responses with fewer LLM hallucinations, while lowering storage cost and query cost through smaller, more efficient embeddings…

…Speaking to my network in Silicon Valley with AI-native companies or digital-native companies, what I hear from them is that certain alternatives on relational database just do not scale because AI workloads are fundamentally around unstructured and semi-structured data.
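
To make the "integrated search, vector search" point concrete, here is a minimal sketch of an Atlas Vector Search query via pymongo. The cluster URI, collection, index name, and field names are illustrative assumptions, and in practice the query vector would come from an embedding model such as Voyage's.

```python
# Minimal Atlas Vector Search sketch (my illustration; names are assumed).
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<cluster-uri>")      # placeholder URI
coll = client["app"]["documents"]

query_vector = [0.0] * 1024   # stand-in; use a real query embedding here

pipeline = [
    {"$vectorSearch": {
        "index": "vector_index",    # assumed Atlas Vector Search index name
        "path": "embedding",        # assumed field holding document vectors
        "queryVector": query_vector,
        "numCandidates": 200,       # ANN candidates considered
        "limit": 5,                 # top-k documents returned
    }},
    {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
]

for doc in coll.aggregate(pipeline):
    print(doc["score"], doc.get("text"))
```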

The AI-powered hiring startup Mercor is using MongoDB Atlas to store AI data behind its platform that directly connects professionals to AI model training and evaluation roles; Mercor is also using Voyage embeddings and Atlas Vector Search; MongoDB Atlas is able to support Mercor’s 50% monthly growth, whereas Mercor’s previous solution, Postgres, was not able to

Mercor, which is redefining hiring with its fully automated platform that uses AI to assess and match talent with the opportunities they are best suited for. Mercor uses MongoDB Atlas to store the AI data behind its platform that directly connects professionals to AI model training and evaluation roles. Originally, a self-serve customer, the company is also utilizing Voyage embeddings and Atlas Vector Search. Atlas has scale to support Mercor’s 50% month-over-month growth, allowing the company to keep its software engineering team lean and agile as it expands to over $10 billion in value…

In my remarks, I shared that there is a super high growth AI company that is doing very, very well and will become a very large company. I have absolutely no doubts about that. They were not able to scale with Postgres and a few other technologies, Redis and so on, that they were using, and they moved completely to MongoDB, and seeing that week-over-week and month-over-month growth is super inspiring. And I spoke to the hyperscaler where this workload is running and they are seeing the same: that, wow, this company is doing really well. So that’s built on MongoDB because Postgres had scaling issues.

A global media company running multi-modal content recommendation workloads switched from Elasticsearch to MongoDB Atlas and MongoDB Atlas Vector Search after hitting a performance wall with Elasticsearch; the media company was able to integrate Voyage AI models in just weeks; MongoDB has helped the media company to cut latency by 90%, reduce operational spend by 65%, and increase click-through rates by 35%

A highly influential global media company aimed to increase engagement via enhanced content recommendations for its vast repository of multimodal assets across its 70-plus websites. Their existing stack, powered by Elasticsearch, hit a performance wall, struggling with the complexity of new embedding models. Recognizing that [ rigid ] systems stifle innovation, the engineering team re-architected on MongoDB Atlas and MongoDB Atlas Vector Search. Working with MongoDB experts to deliver a proof of concept in just weeks, they integrated Voyage AI models directly alongside their data. The solution scaled effortlessly, cutting latency by 90%, reducing operational spend by 65% and driving a 35% increase in click-through rates, ultimately providing millions of global readers with a seamless, deeply personalized discovery journey.

MongoDB’s management is seeing the multi-cloud or public cloud transformation trend continue to happen, and will do so for the next 5-7 years; management thinks it’s possible that the emergence of AI is driving higher demand for application modernisation

The modernization effort, whether it’s a workload that may be just running on-prem, in a large enterprise or a workload that is moving to cloud or sometimes to multiple clouds for resiliency that transformation in speaking to a large telecommunications company, a large health care company, a large tech company, and I can cite you many other examples. I was pretty overwhelmed to understand that those transformations are still going on. There is just a recent conversation I had with CTO of a large telecommunications company who said that they are moving 1,300-plus applications to another hyperscaler and trying to determine which workloads are best suited for MongoDB. So the whole multi-cloud or a public cloud transformation is still going on. And just my intuitive sense in speaking to these customers will be going on for at least next 5 to 7 years…

…This is my personal experience from building AI technologies in the past. The AI team is typically a separate team from the core data team. And the AI team relies on the core data team. And if the core data team moves slowly, then AI teams get really frustrated, because innovation velocity is how they measure themselves. So my personal experience was, hey, when the core team is not agile and their schemas are not flexible, it actually slows AI down. So there are definitely some facts behind your theory that it is potentially the AI revolution, which we are still in the early stages of, that is driving modernization in the other part of the enterprise.

A fast-growing AI startup that built its own vector database decided to give Voyage’s AI embedding models a try; if the startup sees good results with Voyage, it will switch from its in-house vector database to MongoDB; a very large customer of MongoDB has deep appreciation for Voyage AI embedding models and are already running 2 big workloads on it

I spoke to a fairly successful AI-native company that is doing decent ARR, growing very fast. And when I said, hey, have you considered MongoDB, to the founder-CEO, who is very technical, he said, CJ, we didn’t, we built our own vector database and so on. And while I was speaking to him, Alex, about 10 days ago, he basically said, once he looked at the portfolio, let me start with embeddings first. So we are going to try. Of course, we have to prove it to him why our embeddings improve his accuracy on search and so on and improve the performance. So he said, let’s start with embedding models first from Voyage AI; once that works, CJ, I’m willing to replace the homegrown vector DB we created with MongoDB and, oh, by the way, if that works well, eventually, I’m willing to swap out my operational database as well and use MongoDB…

…I’m also seeing in a very large customer of MongoDB, I spoke to somebody who is running the AI initiatives, and they love the Voyage AI embeddings and reranking model, and they’ve already approved it for 2 big workloads.

MongoDB’s management sees AI coding tools as a tailwind for MongoDB because it increases the pace of software creation, and hence, drives demand for databases

Clearly, with the advent of codegen tools, the rate and pace of software development is only going to increase. And as I think we said in the past, that’s one of the big reasons why we think AI is a tailwind. It’s just that the ability to produce more software means more databases, as more and more strategy is encapsulated in software. So from that point of view, we think that’s all good news for us.

NVIDIA (NASDAQ: NVDA)

NVIDIA’s management is seeing AI going everywhere, doing everything, all at once

AI is going everywhere, doing everything, all at once.

NVIDIA’s management is seeing off-the-charts demand for Blackwell; management has visibility to $0.5 trillion of revenue for its Blackwell and Rubin platforms from the start of 2025 through to 2026, with about $0.35 trillion of that still to come over the next 14 months; management sees opportunities for Blackwell and Rubin to have more than $0.5 trillion of revenue from the start of 2025 through to 2026

Blackwell sales are off the charts, and cloud GPUs are sold out…

…We currently have visibility to $0.5 trillion in Blackwell and Rubin revenue from the start of this year through the end of calendar year 2026…

…[Question] You talked about the $500 billion of revenue for Blackwell plus Rubin in ’25 and ’26 at GTC. At that time, you talked about $150 billion of that already having been shipped. So as the quarter is wrapped up, are those still kind of the general parameters that there’s $350 billion in the next kind of 14 months or so?

[Answer] Yes, that’s correct. We are working into our $500 billion forecast. And we are on track for that as we have finished some of the quarters, and now we have several quarters now in front of us to take us through the end of calendar year ’26. The number will grow. And we will achieve, I’m sure, additional needs for compute that will be shippable by fiscal year ’26. So we shipped $50 billion this quarter, but we would be not finished if we didn’t say that we’ll probably be taking more orders… There’s definitely an opportunity for us to have more on top of the $500 billion that we announced.

NVIDIA’s management sees $3 trillion to $4 trillion of annual AI infrastructure build by 2030, with NVIDIA’s platforms being the superior choice; demand for AI infrastructure continues to exceed management’s expectations; management thinks the hyperscalers’ workload transitions would be half of the company’s long-term opportunity; management thinks the other half of NVIDIA’s long-term opportunity would come from higher compute spend by foundation model builders; the dollar-content of NVIDIA chips in AI data centers has been increasing with each successive generation

By executing our annual product cadence and extending our performance leadership through full stack design, we believe NVIDIA will be the superior choice for the $3 trillion to $4 trillion in annual AI infrastructure build we estimate by the end of the decade. Demand for AI infrastructure continues to exceed our expectations…

…We see the transition to accelerated computing and generative AI across current hyperscaler workloads contributing toward roughly half of our long-term opportunity. Another growth pillar is the ongoing increase in compute spend driven by foundation model builders such as Anthropic, Mistral, OpenAI, Reflection, Safe Superintelligence, Thinking Machines Lab and xAI, all scaling compute aggressively to scale intelligence…

…[Question] What assumptions are you making on NVIDIA content per gigawatt in that $500 billion number? Because we have heard numbers as low as $25 billion per gigawatt of content to as high as $30 billion or $40 billion per gigawatt. So I’m curious what power and what dollar per gig assumptions you are making as part of that $500 billion number.

[Answer] In each generation, from Ampere to Hopper, from Hopper to Blackwell, Blackwell to Rubin, our part of the data center increases. The Hopper generation was probably something along the lines of $20 billion to $25 billion per gigawatt. The Blackwell generation, Grace Blackwell particularly, is probably $30 billion per gigawatt, plus or minus, and then Rubin is probably higher than that.
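
As a rough cross-check of those figures (our arithmetic, not management’s), the dollar-content-per-gigawatt levels quoted above imply how many gigawatts of AI factories the $500 billion of Blackwell and Rubin visibility represents:

```python
# Back-of-envelope only: implied AI-factory capacity behind the $500 billion of
# visibility, at the roughly $25B-$35B per gigawatt content levels quoted above.
visibility_bn = 500
for content_bn_per_gw in (25, 30, 35):
    gw = visibility_bn / content_bn_per_gw
    print(f"${content_bn_per_gw}B of NVIDIA content per GW -> ~{gw:.0f} GW implied")
# At ~$30B/GW, $500B corresponds to roughly 17 GW of deployments over 2025-2026.
```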

The installed base of NVIDIA GPUs, including the older generation Hopper and Ampere families, is fully utilised; NVIDIA’s GPUs have long useful lives, which gives them a significant TCO (total cost of ownership) advantage over competing chips; the long useful lives of NVIDIA’s GPUs are the result of the company’s CUDA software stack; NVIDIA’s 6-year-old A100 GPUs are still fully utilised today

Our GPU installed base, both new and previous generations, including Blackwell, Hopper and Ampere, is fully utilized…

…The long useful life of NVIDIA’s CUDA GPUs is a significant TCO advantage over accelerators. CUDA’s compatibility across our massive installed base extends the life of NVIDIA systems well beyond their original estimated useful life…

…Most accelerators without CUDA and NVIDIA’s time-tested and versatile architecture became obsolete within a few years as model technologies evolved. Thanks to CUDA, the A100 GPUs we shipped 6 years ago are still running at full utilization today, powered by a vastly improved software stack.

NVIDIA’s Data Center revenue again had very strong growth in 2025 Q3 (FY2026 Q3), driven partly by the GB300 chip from the Blackwell family; GB300 was 2/3 of total Blackwell revenue in 2025 Q3 (FY2026 Q3); the Blackwell Ultra chip delivers 5x faster time to train than Hopper; Blackwell had the highest performance and lowest total cost of ownership across every model and use case under the InferenceMAX benchmark; Blackwell delivers 10x higher performance per watt and 10x lower cost per token compared to H200 on the DeepSeek-R1 model; NVIDIA and TSMC celebrated the first Blackwell wafer produced on US soil in October 2025

Record Q3 data center revenue of $51 billion (sic) [ $51.2 billion ] increased 66% year-over-year, a significant feat at our scale. Compute grew 56% year-over-year, driven primarily by the GB300 ramp, while networking more than doubled, given the onset of NVLink scale up and robust double-digit growth across Spectrum-X Ethernet and Quantum-X InfiniBand…

…GB300 crossed over GB200 and contributed roughly 2/3 of the total Blackwell revenue. The transition to GB300 has been seamless, with production shipments to the major cloud service providers, hyperscalers and [ GPU clouds ], and is already driving their growth…

…In the latest MLPerf training results, Blackwell Ultra delivered 5x faster time to train than Hopper. NVIDIA swept every benchmark. Notably, NVIDIA is the only training platform to leverage FP4 while meeting MLPerf’s strict accuracy standards. In SemiAnalysis’s InferenceMAX benchmark, Blackwell achieved the highest performance and lowest total cost of ownership across every model and use case. Particularly important is Blackwell’s NVLink performance on mixture of experts, the architecture for the world’s most popular reasoning models. On DeepSeek-R1, Blackwell delivered 10x higher performance per watt and 10x lower cost per token versus H200, a huge generational leap fueled by our extreme co-design approach…

…Last month, in partnership with TSMC, we celebrated the first Blackwell wafer produced on U.S. soil.

NVIDIA’s management is seeing the hyperscalers transitioning their workloads from classical machine learning to generative AI; management thinks NVIDIA’s CUDA software stack excels at both classical machine learning and generative AI; management is seeing the hyperscalers’ capex expectations for 2026 increase by $200 billion since the start of the year to $600 billion; management thinks the hyperscalers’ workload transitions would be half of the company’s long-term opportunity

The world’s hyperscalers, a trillion-dollar industry, are transforming search, recommendations and content understanding from classical machine learning to generative AI. NVIDIA CUDA excels at both and is the ideal platform for this transition, driving infrastructure investment measured in hundreds of billions of dollars…

…Aggregate 2026 capex expectations for the top CSPs and hyperscalers have continued to increase and now sit at roughly $600 billion, more than $200 billion higher relative to the start of the year…

…We see the transition to accelerated computing and generative AI across current hyperscaler workloads contributing toward roughly half of our long-term opportunity.

NVIDIA’s management is seeing Meta Platforms increase users’ time spent on Facebook and Threads because its AI recommendation systems are surfacing better content; Meta Platforms’ generative AI foundation model for ads, GEM, drove a 5% increase in ad conversions on Instagram and a 3% gain on Facebook feed in 2025 Q2

At Meta, AI recommendation systems are delivering higher quality and more relevant content, leading to more time spent on apps such as Facebook and Threads…

…Meta’s GEM, a foundation model for ad recommendations trained on large-scale GPU clusters, exemplifies this shift. In Q2, Meta reported over a 5% increase in ad conversions on Instagram and a 3% gain on Facebook feed driven by generative AI-based GEM. Transitioning to generative AI represents substantial revenue gains for hyperscalers.

NVIDIA’s management is seeing that the 3 scaling laws of pre-training, post-training, and inference remain intact

The 3 scaling laws (pre-training, post-training and inference) remain intact. In fact, we see a positive virtuous cycle emerging whereby the 3 scaling laws and access to compute are generating better intelligence and, in turn, increasing adoption and profits…

…Just today, I was reading a text from Demis. And he was saying that pre-training and post-training are fully intact. And Gemini 3 takes advantage of the scaling laws and received a huge jump in model performance.

NVIDIA’s management is observing a proliferation of AI agents; RBC is using agentic AI to reduce report generation time from hours to minutes

We are also witnessing a proliferation of agentic AI across various industries and tasks. Companies such as Cursor, Anthropic, OpenEvidence, Epic and Abridge are experiencing a surge in user growth as they supercharge the existing workforce, delivering unquestionable ROI for coders and health care professionals…

…RBC is leveraging agentic AI to drive significant analyst productivity, slashing report generation time from hours to minutes.

NVIDIA’s management continues to engage the US and China governments on the sale of American chips into China

While we were disappointed in the current state that prevents us from shipping more competitive data center compute products to China, we are committed to continued engagement with the U.S. and China governments and will continue to advocate for America’s ability to compete around the world. To establish a sustainable leadership position in AI computing, America must win the support of every developer and be the platform of choice for every commercial business, including those in China.

NVIDIA’s next generation of chips, the Rubin family, is on track for volume production in 2026 H2; 7 different chips go into the Vera Rubin platform; management sees Rubin delivering much better performance than Blackwell; Rubin’s manufacturing is compatible with Blackwell, and the manufacturing ecosystem is ready to ramp Rubin

The Rubin platform is on track to ramp in the second half of 2026. Powered by 7 chips, the Vera Rubin platform will once again deliver an X-factor improvement in performance relative to Blackwell…

…Rubin is our third-generation rack-scale system. It substantially redefines manufacturability while remaining compatible with Grace Blackwell. Our supply chain, data center ecosystem and cloud partners have now mastered the build-to-installation process of NVIDIA’s rack architecture. Our ecosystem will be ready for a fast Rubin ramp.

NVIDIA’s networking revenue had very strong sequential as well as year-on-year growth in 2025 Q3 (FY2026 Q3), driven by strong demand across Spectrum-X Ethernet, InfiniBand and NVLink (networking revenue was $7.3 billion in 2025 Q2); the majority of AI deployments now include NVIDIA networking switches; NVIDIA Ethernet attach rates are now roughly on par with InfiniBand; major AI players are building gigawatt AI data centers with Spectrum-X Ethernet; management recently introduced Spectrum-XGS, a scale-across technology; NVIDIA is the only company with AI networking solutions for scale up, scale out, and scale across; NVIDIA recently announced a collaboration to link Fujitsu’s CPUs and NVIDIA GPUs via NVLink Fusion; NVIDIA has a partnership with Intel to connect Intel’s CPUs and NVIDIA GPUs with NVLink; Arm recently announced that it will be using NVLink IP for customers to connect Arm CPU designs with NVIDIA’s platforms; management sees NVLink as the only proven scale-up networking solution in the market today

Our networking business, purpose-built for AI and now the largest in the world, generated revenue of $8.2 billion, up 162% year-over-year, with NVLink, InfiniBand and Spectrum-X Ethernet all contributing to growth. We are winning in data center networking, as the majority of AI deployments now include our switches, with Ethernet GPU attach rates roughly on par with InfiniBand. Meta, Microsoft, Oracle and xAI are building gigawatt AI factories with Spectrum-X Ethernet switches, and each will run its operating system of choice, highlighting the flexibility and openness of our platform.

We recently introduced Spectrum-XGS, a scale across technology that enables gigascale AI factories. NVIDIA is the only company with AI scale up, scale out and scale across platforms, reinforcing our unique position in the market as the AI infrastructure provider.

Customer interest in NVLink Fusion continues to grow. We announced a strategic collaboration with Fujitsu in October, where we will integrate Fujitsu’s CPUs and NVIDIA GPUs via NVLink Fusion, connecting our large ecosystems. We also announced a collaboration with Intel to develop multiple generations of custom data center and PC products, connecting NVIDIA and Intel’s ecosystems using NVLink. This week at Supercomputing ’25, Arm announced that it will be integrating NVLink IP for customers to build CPU SoCs that connect with NVIDIA. Currently on its fifth generation, NVLink is the only proven scale-up technology available on the market today.

NVIDIA’s open source inference framework, NVIDIA Dynamo, has now been adopted by every major cloud service provider

NVIDIA Dynamo, an open source, low-latency, modular inference framework, has now been adopted by every major cloud service provider. Leveraging Dynamo’s enablement of disaggregated inference, and the resulting increase in performance on complex AI models such as MoE models, AWS, Google Cloud, Microsoft Azure and OCI have boosted AI inference performance for enterprise cloud customers.
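
Disaggregated inference, which the excerpt credits for the performance gains, splits serving into a compute-bound prefill phase (processing the prompt and building the KV cache) and a memory-bound decode phase (emitting tokens one at a time), so each phase can run on a GPU pool batched and sized for its own bottleneck. The toy sketch below only illustrates that hand-off; it is not Dynamo’s API, and every name in it is hypothetical.

```python
# Toy illustration of disaggregated inference: prefill and decode run as
# separate worker pools joined by a hand-off queue. Real frameworks move the
# KV cache between GPUs over fast interconnects; strings stand in for it here.
from queue import Queue

prefill_jobs: Queue = Queue()  # requests waiting for prompt processing
decode_jobs: Queue = Queue()   # (prompt, kv_cache) pairs waiting for generation

def submit(prompt: str) -> None:
    prefill_jobs.put(prompt)

def prefill_worker() -> None:
    # Phase 1 (compute-bound): process the whole prompt, build the KV cache.
    prompt = prefill_jobs.get()
    kv_cache = f"kv({prompt})"           # stand-in for the real KV cache
    decode_jobs.put((prompt, kv_cache))  # hand off to the decode pool

def decode_worker() -> str:
    # Phase 2 (memory-bound): generate tokens one at a time from the cache.
    prompt, kv_cache = decode_jobs.get()
    return f"tokens for {prompt!r} using {kv_cache}"

submit("Explain mixture-of-experts models")
prefill_worker()
print(decode_worker())
```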

NVIDIA’s management is working on a strategic partnership with OpenAI to deploy AI data centers and for NVIDIA to invest in OpenAI; NVIDIA is serving OpenAI through Microsoft Azure, Oracle Cloud Infrastructure (OCI), and CoreWeave and will continue to do so in the future; management is happy to support OpenAI’s self-build AI infrastructure; management recently inked a partnership with Anthropic that will see Anthropic use NVIDIA for the first time; management will optimise Anthropic’s models for CUDA, and optimise future NVIDIA chips for Anthropic workloads; Anthropic’s initial commitment to NVIDIA is for up to 1 gigawatt of compute capacity; the investments NVIDIA has been making in the AI ecosystem are to expand the reach of CUDA; management expects NVIDIA’s investment in OpenAI to generate extraordinary returns

We are working on a strategic partnership with OpenAI, focused on helping them build and deploy at least 10 gigawatts of AI data centers. In addition, we have the opportunity to invest in the company. We serve OpenAI through their cloud partners, Microsoft Azure, OCI and CoreWeave, and we will continue to do so for the foreseeable future. As they continue to scale, we are delighted to support the company as it adds self-build infrastructure. We are working towards a definitive agreement and are excited to support OpenAI’s growth.

Yesterday, we celebrated an announcement with Anthropic. For the first time, Anthropic is adopting NVIDIA, and we are establishing a deep technology partnership to support Anthropic’s fast growth. We will collaborate to optimize Anthropic models for CUDA and deliver the best possible performance, efficiency and TCO. We will also optimize future NVIDIA architectures for Anthropic workloads. Anthropic’s compute commitment initially includes up to 1 gigawatt of compute capacity with Grace Blackwell and Vera Rubin systems…

…All of the investments that we’ve done so far, over this whole period, are associated with expanding the reach of CUDA and expanding the ecosystem…

…That relationship we’ve had since 2016; I delivered the first AI supercomputer ever made to OpenAI. And so we’ve had a close and wonderful relationship with OpenAI since then. And everything that OpenAI does runs on NVIDIA today. So all the clouds that they deploy in, whether it’s training or inference, run NVIDIA, and we love working with them. The partnership that we have with them is one where we could work even deeper from a technical perspective so that we could support their accelerated growth. This is a company that’s growing incredibly fast. And don’t just look at what is said in the press; look at all the ecosystem partners and all the developers that are connected to OpenAI, and they’re all driving consumption of it. And the quality of the AI that’s being produced has taken a huge step up since a year ago. And so the quality of response is extraordinary. So we invest in OpenAI for a deep partnership in co-development to expand our ecosystem and support their growth. And of course, rather than giving up a share of our company, we get a share of their company. We invested in one of the most consequential, once-in-a-generation companies, and we have a share of it. And so I fully expect that investment to translate to extraordinary returns.

NVIDIA’s management sees physical AI as a multi-trillion dollar opportunity; physical AI is already a multi-billion business for NVIDIA; leading US robotics companies are using NVIDIA’s products, including Omniverse; many enterprises, including TSMC, are building Omniverse digital twin factories; robotics companies, including Amazon Robotics, are using NVIDIA Cosmos World Foundation Models, Omniverse, and Jetson, to develop their robots

Physical AI is already a multibillion-dollar business addressing a multitrillion-dollar opportunity, and the next leg of growth for NVIDIA. Leading U.S. manufacturers and robotics innovators are leveraging NVIDIA’s 3-computer architecture to train on NVIDIA, test in Omniverse and deploy real-world AI on robotic computers. PTC and Siemens introduced new services that bring Omniverse-powered digital twin workflows to their extensive installed base of customers. Companies including Belden, Caterpillar, Foxconn, Lucid Motors, Toyota, TSMC and Wistron are building Omniverse digital twin factories to accelerate AI-driven manufacturing and automation. Agility Robotics, Amazon Robotics, Figure and Skild AI are building on our platform, tapping offerings such as NVIDIA Cosmos World Foundation Models for development, Omniverse for simulation and validation, and Jetson to power next-generation intelligent robots.

NVIDIA is partnering with Uber for the world’s largest Level 4 ready autonomous fleet

We are partnering with Uber to scale the world’s largest Level 4 ready autonomous fleet built on the new NVIDIA Hyperion L4 robotaxi reference architecture.

NVIDIA’s management is not seeing an AI bubble; management sees 3 computing transformations happening in the world simultaneously and NVIDIA is addressing all of them; the 3 transformations are (1) the transition from CPUs to GPUs, (2) transformation of existing applications by AI, and (3) AI agents

There’s been a lot of talk about an AI bubble. From our vantage point, we see something very different…

…The world is undergoing 3 massive platform shifts at once for the first time since the dawn of Moore’s Law, and NVIDIA is uniquely addressing each of the 3 transformations.

The first transition is from CPU general-purpose computing to GPU accelerated computing as Moore’s Law slows. The world has a massive investment in non-AI software, from data processing to science and engineering simulations, representing hundreds of billions of dollars in cloud computing spend each year. Many of these applications, which once ran exclusively on CPUs, are now rapidly shifting to CUDA GPUs. Accelerated computing has reached a tipping point.

Secondly, AI has also reached a tipping point and is transforming existing applications while enabling entirely new ones. For existing applications, generative AI is replacing classical machine learning in everything from search ranking, recommender systems, ad targeting and click-through prediction to content moderation, the very foundations of hyperscale infrastructure…

…Now a new wave is rising: agentic AI systems capable of reasoning, planning and using tools, from coding assistants like Cursor and Claude Code to radiology tools like Aidoc, legal assistants like Harvey and AI chauffeurs like Tesla FSD and Waymo.

NVIDIA’s management thinks the company excels at each phase of AI, from pre-training to inference

NVIDIA is unlike any other accelerator. We excel at every phase of AI, from pre-training and post-training to inference.

The pioneers of agentic AI that management is seeing are all startups

These systems mark the next frontier of computing, the fastest-growing companies in the world today, OpenAI, Anthropic, xAI, Google, Cursor, Lovable, Replit, Cognition AI, OpenEvidence, Abridge, Tesla are pioneering agentic AI.

The fastest-growing applications in history are AI-powered coding applications

The fastest-growing applications in history are a combination of Cursor, Claude Code, OpenAI’s Codex and GitHub Copilot. These applications are the fastest-growing in history. And it’s not just software engineers using them; because of vibe coding, they’re used by engineers, marketers and supply chain planners all over companies.

NVIDIA’s platform is the only one in the world that runs every AI model

NVIDIA’s architecture, NVIDIA’s platform, is the singular platform in the world that runs every AI model. We run OpenAI, we run Anthropic, we run xAI. Because of our deep partnership with Elon and xAI, we were able to bring that opportunity to Saudi Arabia, to the KSA, so that HUMAIN could also host xAI. We run xAI, we run Gemini, we run Thinking Machines, let’s see, what else do we run? We run them all. Not to mention, we run the science models, the biology models, DNA models, gene models, chemical models and all the different fields around the world. It’s not just cognitive AI that the world uses; AI is impacting every single industry.

NVIDIA’s management hopes inference will become a large portion of the use case for NVIDIA GPUs because that will suggest that people are using AI in more applications

[Question] In the past, you’ve talked about roughly 40% of your shipments tied to AI inference. I’m wondering, as you look forward into next year, where do you expect that percentage could go in, say, a year’s time?

[Answer] Inference, because of chain of thought, because of reasoning capabilities: AIs are essentially reading and thinking before they answer. And the amount of computation necessary as a result of those things has gone completely exponential. I think that it’s hard to know exactly what the percentage of it will be at any given point in time. But of course, our hope is that inference is a very large part of the market, because if inference is large, then what it suggests is that people are using it in more applications and they’re using it more frequently. We should all hope for inference to be very large.

NVIDIA’s management sees a number of important constraints on the growth of the AI ecosystem, namely, power and financing, but they are all solvable problems 

[Question] Many of your customers are pursuing behind-the-meter power, but like what’s the single biggest bottleneck that worries you that could constrain your growth? Is it power? Or maybe it’s financing or maybe it’s something else like memory or even foundry?

[Answer] These are all issues and they’re all constraints. And the reason for that: when you’re growing at the rate that we are and the scale that we are, how could anything be easy?… Now on the one hand, we are transitioning computing from general-purpose and classical or traditional computing to accelerated computing and AI. On the other hand, we created a whole new industry called AI factories, the idea that in order for software to run, you need these factories to generate every single token instead of retrieving information that was pre-created. And so I think this whole transition requires extraordinary scale, all the way from the supply chain. Of course, the supply chain we have much better visibility and control over, because obviously, we’re incredibly good at managing our supply chain. We have great partners that we’ve worked with for 33 years. And so the supply chain part of it, we’re quite confident in. Now looking down our supply chain, we’ve now established partnerships with so many players in land and power and shells. And of course, financing. None of these things are easy, but they’re all tractable and they’re all solvable things.

NVIDIA’s management thinks it’s incredibly hard for ASICs (application specific integrated circuits) for AI workloads to compete against NVIDIA GPUs because NVIDIA’s GPU systems (1) are now incredibly complex and (2) can run every AI model

[Question] I’m curious if your thoughts around the role that AI ASICs or dedicated XPUs play in these architecture build-outs have changed at all. I think you’ve been fairly adamant in the past that some of these programs never really see deployments. But I’m curious if we’re at a point where maybe that’s changed even more in favor of just GPU architectures.

[Answer] Back in the Hopper and Ampere days, we would build one GPU. That was the definition of an accelerated AI system. But today, we’ve got to build entire racks and 3 different types of switches: scale up, scale out and scale across. It takes a lot more than 1 chip to build a compute node anymore. Everything about that computing system has changed, because AI needs to have memory; AI didn’t use to have memory at all. Now it has to remember things, and the amount of memory and context it has is gigantic. The memory architecture implication is incredible. The diversity of models, from mixture of experts to dense models, to diffusion models, to autoregressive models, not to mention biological models that are based on the laws of physics: the list of different types of models has exploded in the last several years. And so the complexity of the problem is much higher…

…We’re now the only architecture in the world that runs every AI model, every frontier AI model. We run open source AI models incredibly well. We run science models, biology models, robotics models. We run every single model. We’re the only architecture in the world that can claim that. It doesn’t matter whether you’re autoregressive or diffusion-based. We run everything, and we run it for every major platform, as I just mentioned. So we run every model.

Okta (NASDAQ: OKTA)

Okta’s products help customers build more secure AI agents and manage their AI agents in a secure and scalable way; management thinks AI agents will redefine the identity security landscape; AI agents are also vulnerable without proper security governance, so it’s also essential for enterprises to secure AI agents; Okta has been focusing on securing AI agents (it is the company’s #1 priority now) and management thinks the space will be the next growth leg in identity security; management thinks Okta is the best-positioned to be the identity layer for AI agents; management recently launched Auth0 for AI agents, which allows customers to build secure agents; management has seen a recent surge in inbound interest for Okta’s solutions to manage the security of AI agents; it’s still early days for Okta in securing AI agents, but the company is already working with 100 current customers; management thinks Okta is the only company that is able to secure AI with a modern and neutral platform; the amount of interest in Okta’s solutions for securing AI agents is unlike anything management has seen; management is seeing a large number of enterprises getting stuck with AI projects because they’re unable to give the right level of access to AI agents; management thinks the market opportunity for the identity layer for AI agents is even bigger than Okta’s current opportunity set; only 10% of companies with AI agents in production think their agents are secured

The simple way to think about it is that Okta is helping customers both build more secure AI agents and manage their AI agents in a secure and scalable way. The emergence of agentic technology is redefining the identity security landscape. AI security is identity security. AI agents represent a new powerful identity type. However, without proper security governance, they are also highly vulnerable. Securing AI agents and nonhuman identities is not a feature. It’s essential for any businesses looking to safely scale their adoption and deployment of AI. If an organization does not secure its agents today, they risk undoing years of security improvements and leaving themselves vulnerable to new identity-based attacks.

Okta has prioritized our efforts to focus on helping customers solve this business imperative and capture what we believe will be the next catalyst for growth and a meaningful market within the identity security space. Okta’s neutral and unified platform, coupled with our installed base of over 20,000 customers, positions us best to become the identity layer for AI agents. That’s why we’re so excited about the recent launch of Auth0 for AI agents. Auth0 for AI agents allows customers to build secure agents that connect to APIs and users more effortlessly across their B2B, B2C and internal app ecosystem…

…Over just the past few months, we have experienced a surge in inbound interest for our Agentic Security solutions to manage agents, Okta for AI agents. These organizations are looking for a single control plane to observe and manage agents of all types in a way that offers flexibility as the technology continues to evolve. They also want a solution that gives them control like the ability to embed fine-grain access into every agent. Okta is here to deliver…

…It’s very early days on this front, but we have already been engaged with over 100 of our current customers, which combined represent over $200 million in existing ARR…

…Okta is the essential identity layer to help customers build, observe and manage AI agents. We’re the only company that is able to secure AI with a modern and neutral platform, allowing us to deliver even greater value to our customers…

…[Question] When you think about the full deployment of this, how do I think about the dollar potential here? When you have customers that are spending $100,000 with you, by how much can AI truly elevate that total bill for them?

[Answer] I’ve been personally and the entire company is blown away by how interested customers and prospects are in this capability. I haven’t seen anything like this in my experience at Okta with a new capability or a new product set. So it’s very, very exciting…

…You take all the company’s data and you show it in a big data warehouse like Snowflake or Databricks or Palantir, and then the agents have way too much access. They can just see everything and they do unintended things. And so people are stuck and they pause, and they’re saying, wait a minute, we’re not going to roll these things out. And there’s a huge, huge cohort of companies that are trying to do something with AI and they’re stuck…

…Longer term, if you look at our market, we have a $50 billion TAM for workforce identity, a $30 billion TAM for customer identity. Owning and governing the agentic identity layer and securing AI can be a bigger TAM than both of those…

…The company’s #1 priority now is to take advantage of this opportunity. So we’re very clear in our R&D and our go-to-market, we’re going to focus on this opportunity…

…We shared a survey that we had run of a few hundred enterprise customers reporting that 91% of them had agents in production and only 10% of them were confident they had them secured.

A financial services company that is an existing Okta customer selected Okta for AI agents when it was deploying AI agents across its operations; the financial services company deals with sensitive data, so the security of its AI agents is critical; the addition of Okta for AI agents represented a significant ACV (annual contract value) uplift for Okta compared to the prior contract

A great early win with Okta for AI agents: it’s with a financial services customer that is in the midst of deploying AI agents across their operations. Given the sensitive nature of their data and the need to remain compliant with the regulatory environment, securing these agents was not optional; it was critical. They selected Okta for AI agents to secure their AI footprint, provide them with enhanced visibility and remediation capabilities for the agent identities, and enforce access control, identity governance and threat detection. It was a great win-win. Okta is helping the customer to safely deploy AI across their business, and the addition of Okta for AI agents represented a significant ACV uplift compared to their prior contract.

Okta’s management recently introduced a new open standard, Cross App Access, that helps with securing AI; Cross App Access enables AI agents to safely connect with other technologies; Cross App Access is now an extension of the model context protocol (MCP); customers using Auth0 for AI agents to build agents get Cross App Access out of the box

Last quarter, you heard me talk about Okta’s role in the development of Cross App Access, which brings visibility and control to both agent-driven and app-to-app interactions. This allows IT teams to decide what apps are connecting and what information AI agents can access. I’m excited to share that as of last week, Cross App Access is now an extension of the model context protocol, known as MCP, which helps validate that identity providers like Okta will act as the indispensable control plane for the AI enterprise…

Customers that are using Auth0 for AI agents to build agents will get support for Cross App Access out of the box, meaning any agents that they build with Auth0 for AI agents will be discoverable by an IDP that also supports the model context protocol. And Okta’s IDP also supports Cross App Access and the model context protocol. So customers developing agents with our technology will be producing agents that any company can secure more precisely. And the Okta platform will help customers discover agents that have been deployed and then manage those agents as well.

Okta’s management thinks that a key driver for customers to consolidate onto Okta is technological change; past technological changes have been cloud and mobile, but the recent change driving consolidation towards Okta has been AI; management has been working with a Fortune 50 customer on replacing a multitude of competing products with Okta, because (1) the consolidation will save costs for the Fortune 50 company, and (2) the Fortune 50 company is using 5,500 applications but only 1,500 are able to be hooked up to its central identity system with its existing solutions and this is not feasible for the Fortune 50 company’s agentic projects

[Question] From your perspective, what gets customers over the hump and convinces them to consolidate IAM, governance, PAM, customer identity and any other components to Okta?

[Answer] It’s always wrapped up in some other technological change. If you’re not changing your data center, if you’re not changing your apps, if you’re not investing in AI, you’re not going to change identity. So with all the customers I work with, it’s about some other catalyzing technological change. For many years, it was cloud and building mobile apps, and it’s still cloud transformation. But what we’re seeing more and more is companies trying to move technology so they can take advantage of AI. They’re modernizing apps. They’re modernizing their security stack so they can give AI agents access to all of their data resources, and that’s been a catalyzer…

…We’re working with one of our largest Fortune 50 customers on a wholesale replacement of Ping Identity, SailPoint, CyberArk, and several other identity vendors across their whole stack to standardize on Okta products. And the driver there is 2 things. Part of it is cost: they wanted less cost in their environment, and they want better-functioning, greater products. But the bigger driver was actually something very simple, which is that this company has 5,500 applications, and after all these years with these legacy vendors, they only had 1,500 of them hooked up to their central identity system. And so they’re thinking about an agentic future where they want to give their agents and their agent infrastructure access to every application that they have, and they only had a paved path for 1,500 of them, because that’s all they were able to get onto their identity platform with the old technology. So when they think about standardizing, they think about moving all 5,500 applications to Okta.

Okta’s management thinks agentic commerce will be a very big deal; management thinks Okta’s Auth0 for AI agent product is the right solution to secure agentic commerce

I think it’s a big deal. With Agentic Commerce, if you have a website that’s doing customer support or e-commerce, you’re going to have some version of agents on there very quickly if you don’t already. And if you’re building those agents, Auth0 for AI agents is the right solution. It shortcuts the ability to have those agents connect to multiple systems on the back end. It helps you put Fine Grained Authorization inside of your agentic flow. So it’s purpose-built, and I think it’s a big trend we’re talking about here.

Companies have a few identity and security challenges when deploying AI agents, namely, (1) ensuring their agents can be discovered, (2) ensuring agents are only authorised to do very specific things, and (3) knowing what agents have been deployed in their environments; Okta helps companies solve all the identity and security challenges that come with deploying AI agents

Builders of agents need to solve for at least 2 distinct challenges.

One is ensuring their agents can be discovered. And the second is ensuring that agents are only authorized to do specific things, that they have access to specific corporate assets and not others. Auth0 provides the capabilities to solve both of these: with support for Cross App Access and the model context protocol, agents built through Auth0 can be discovered and managed properly. And Auth0’s Fine Grained Authorization allows agents to be built in a way that their privileges can be very finely tuned, which is hugely important to our customers in that space.

But the second part of that challenge that our customers have is that they don’t know. They tell us they don’t know what agents are deployed in their environment. They don’t know what their users have turned on and what their users’ agents do and don’t have access to. And this is the challenge of discoverability, being able to discover agents. On the Okta platform side, our Identity Security Posture Management product scans corporate networks to find service accounts and the privileges of those service accounts, but it will also now help discover agents that are implemented and deployed, as long as they support the Cross App Access protocol, the extension to MCP.

So the problem of discoverability is something they need help with, and we’re well positioned to help them with that. And the other related challenge is not only knowing that the agents exist, but then protecting the identity of those agents to ensure the agents can’t themselves be impersonated by a threat actor, and to ensure that those agents are properly authorized to take the actions that they’re attempting to take.

So the Auth0 platform on the build side is hugely important for our customers, and the Okta platform on the discover-and-manage side is important for them as well. That also includes things like privileged access, allowing the agents to have tokens that are appropriately vaulted, and governance, having them provisioned based on just-in-time requirements.
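
The pattern described across these answers (every agent gets its own identity, and every action is checked against explicit, narrowly scoped grants before it runs) can be illustrated in a few lines. The sketch below is a deliberately simplified stand-in, not Okta’s or Auth0’s actual API; the agent IDs, resources and grant table are all hypothetical.

```python
# A minimal sketch of fine-grained, deny-by-default authorization for agents.
# Each grant is an explicit (agent, action, resource) triple, the opposite of
# the "agents can see everything in the warehouse" failure mode quoted earlier.
GRANTS = {
    ("agent:expense-bot", "read", "doc:travel-policy"),
    ("agent:expense-bot", "create", "case:reimbursement"),
}

def is_authorized(agent_id: str, action: str, resource: str) -> bool:
    return (agent_id, action, resource) in GRANTS

def run_agent_action(agent_id: str, action: str, resource: str) -> str:
    # Deny by default: an agent with no explicit grant can do nothing.
    if not is_authorized(agent_id, action, resource):
        raise PermissionError(f"{agent_id} may not {action} {resource}")
    return f"{agent_id} performed {action} on {resource}"

print(run_agent_action("agent:expense-bot", "read", "doc:travel-policy"))
try:
    run_agent_action("agent:expense-bot", "read", "doc:payroll")
except PermissionError as err:
    print("blocked:", err)  # scope creep or impersonated access stops here
```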

Okta’s management is seeing that most of the agentic projects companies are taking on involve agents that are built in-house; the deployment of agents by software vendors is a little slower

I would say that the most concrete implementations are agents they built themselves. I think that deployment from some of the packaged application vendors you talked about is maybe a little bit more behind.

Okta’s management is currently pricing Okta’s agentic products similarly to the company’s other products; the agentic products are priced on a per-agent basis; management is open to changing the pricing model based on what they learn, as the agentic model is still something new

The agentic products are priced similarly to our current products. Our current products are priced per user; the agentic products are priced per agent. So sometimes that can be a one-to-many relationship. You might have a few agents for a person. Sometimes they might be agents on their own. So I think we’re set up in a way that gives us flexibility as these things evolve, in terms of how companies want to deploy agents to augment headcount, or how they want to deploy agents at the front end of processes before anything ever gets to a person. And this is one of the advantages we have with all these customers and all this interest: we can figure this out quickly. And we can iterate on this quickly, and that’s how we’ve gotten to this pricing model, because this is a new thing.

Okta’s management is currently not seeing major seat reductions at companies from AI-related reductions in workforce; management is confident that Okta’s customer identity and agentic identity businesses would more than offset any reductions in its workforce identity business if AI-related workforce reductions were to happen; management thinks a human employee will typically be bound to 5-10 AI agents

Like everyone, we’re looking at what changes will happen in the global workforce at companies as they lean more on AI and technology to run their businesses. We’re not yet feeling a material headwind from the seat reductions you mentioned. But were we to see that, we’re confident in our customer identity business offsetting it. We’re confident in our agentic identity business offsetting it. So in the aggregate, we view this shift in the industry as net upside for Okta…

…I think a lot of companies think about agents like this; software engineering is a great example. As a software engineer, you’re going to have 10 of these agents working for you all the time. They’re going to be reviewing code. They’re going to be doing security reviews. They’re going to be checking code in. They’re going to be running tests. All those agents are going to be working on your behalf in some cases, and have their own identity in others, and it’s about having the flexibility to support all those different use cases, in addition to agents that just run on their own. Your customer support agents or your agents sitting on your website accepting commerce are going to be on their own. They’re going to need access control, but they’re not bound to a user until maybe further down in the workflow…

…[Question] What’s that relationship being like in the example that we’ve seen so far, what is it like 1 to 10, 1 to 20?

[Answer] I think it’s like 5 to 10 per person.

Okta’s Auth0 and Workforce agentic products are both experiencing similar traction from customers; the customer profiles of the Auth0 and Workforce agentic products are different

[Question] What’s getting more traction? Is it the Auth0 solution or the workforce side? And then what do you think represents the larger opportunity and why?

[Answer] They’re both getting about the same amount of traction. I think it’s a little bit different for each. A lot of the interest in Auth0 for AI agents comes online from AI developers, right? They find out about it on the website. They do self-service and upgrade to enterprise. It’s a little bit of a different motion. Okta for AI agents, which is for IT and security, is very much an enterprise sale with a CISO or security-influenced buyer or an IT-influenced buyer.

Salesforce (NYSE: CRM)

Agentforce has delivered 3.2 trillion tokens to customers so far, exceeding management’s expectations; Agentforce and Data reached nearly $1.4 billion in ARR (annual recurring revenue) in 2025 Q3 (FY2026 Q3), up 114% year-on-year; Agentforce ARR reached $540 million in 2025 Q3 (FY2026 Q3), up 330% year-on-year; Agentforce is Salesforce’s fastest-growing product ever; management has integrated Agentforce into every Salesforce product; all of Salesforce’s data is unified for use in Agentforce; when an LLM is interacting with Agentforce, it’s getting strategic context from Salesforce’s data on customers, service, sales, marketing etc, and this data is unique because it makes businesses more valuable; 6 of Salesforce’s top 10 deals in 2025 Q3 (FY2026 Q3) were driven by companies who want to use Agentforce; Agentforce is only a year old, but Salesforce has already closed 18,500 Agentforce deals, 9,500 of which are paid; paid Agentforce deals were up 50% sequentially in 2025 Q3 (FY2026 Q3); Agentforce can rope in humans when necessary; Agentforce is using many different LLMs, including those from OpenAI, and will go with the lowest cost option; Agentforce can control AI costs by knowing when to invoke LLMs for tasks (see the sketch after the excerpts below); Salesforce is itself an Agentforce customer; customers in production with Agentforce were up 70% sequentially in 2025 Q3 (FY2026 Q3); more than 50% of new Agentforce bookings in 2025 Q3 (FY2026 Q3) came from existing Agentforce customers; management launched Agentforce IT Service in November 2025; management thinks Agentforce is uniquely positioned for the agentic era partly because of its scale; new bookings for the most premium SKU management has within Agentforce doubled sequentially in 2025 Q3 (FY2026 Q3); customers leveraging Salesforce’s forward-deployed engineers for Agentforce in 2025 Q3 (FY2026 Q3) saw 33% faster deployment times; 3 customers refilled their Agentforce tank in 2025 Q1, but 362 did so in 2025 Q3 (FY2026 Q3)

We have delivered incredible results with Agentforce. It’s really exceeding our expectations. You’re going to hear all the details, but I think that you could see 3.2 trillion tokens delivered for our customers…

…Agentforce and Data reached nearly $1.4 billion in ARR in the quarter, up 114% year-over-year, including Agentforce ARR of about $540 million, up 330% year-over-year…

…This is our fastest-growing product ever…

…Every Salesforce app now, not just sales, service, marketing and commerce, but all of them, Tableau, Slack, our new ITSM, supply chain products, they’ve all been rebuilt. And Sreeni’s here; he’s going to talk about what we’ve done to bring Agentforce into every product we have, and how we’ve transformed Agentforce from being a product to a platform so that all of our apps can reason, learn, take action and collaborate with users. But it’s really about humans and apps and the AI and the data all working together. And that is what’s so exciting: every part of our platform is now so deeply integrated because all of the data is unified, and every app shares the same metadata…

…When an LLM is interacting with Agentforce, it’s getting that strategic context from our data, from the data on the Internet as well, and from the data that it’s been trained on. And then it knows how your business operates, so it’s really able to give you that. And that’s because Salesforce is unique in that we have data that makes businesses more valuable. It’s that customer data, the service data, the sales data, the marketing data, and then we’re able to deliver it in a tremendously friendly way…

…6 of our top 10 deals in the quarter are now driven by companies that just want to transform with Agentforce…

…A year since we introduced Agentforce, we’ve closed over 18,500 Agentforce deals. 9,500 of them are paid transactions, up 50% quarter-over-quarter…

…Across the apps, you’ve seen the omnichannel supervisor built into the Service Cloud, where all of a sudden, I’m a customer. I’m coming into the website, even like Salesforce’s help.salesforce.com or any of our customers’ websites. And I’m in there, and I’m working, and then all of a sudden, I’ve hit the limit of what the LLM can do, and I can escalate immediately, right to a human. And that’s where the humans and the agents and the AI and the data all have to work together…

…We use all of the large language models. They’re all great. We love all of them. We love all of our children, but they’re also all just commodities, and we have the choice of choosing whichever one we want, whether it’s OpenAI or Gemini or Anthropic, or the other open source ones; they’re all very good at this point. So we can swap them in and out, the lowest-cost, best one for us, making us basically the top user of these foundation models…

…As customers put Agentforce to work across their business, not every task or step in the workflow needs to call the LLM; we call that determinism. And determinism is really important, because for those of us who grew up in software, we used to call it if-then statements, but now we call it determinism. But determinism is that, hey, if I need to do this, go to the LLM, but I probably don’t need to go to the LLM, just do that. So that is going to reduce our costs even further and not hit the LLM as much as we do. And that’s why we built hybrid reasoning and agent script, and our AI teams are just crushing it on that. And we’re getting customers the best of both worlds, combining LLM-driven reasoning and deterministic precision…

…We had strong performance across Agentforce Service, Agentforce Sales and Slack. And those 3 apps are just a powerful combination for [indiscernible] Salesforce. We use those every single day, we live on them. It is really the hat trick for Salesforce with large customers to say, “Let us show you what we’re doing in service. Let us show you what we’re doing in sales. Let us show what we’re doing in Slack. And it’s a Wow experience right now. It’s only going to get better…

…Customers in production with Agentforce have jumped 70% quarter-over-quarter…

…In the quarter, more than 50% of new Agentforce bookings, as well as 50% of Data 360 bookings, came from existing customers expanding their investment, which was awesome and really showed adoption…

…Last month, we launched Agentforce IT Service, or Agentforce ITSM; you know what company we’re targeting…

…We’re delivering this capability to a global customer base. More than 150,000 Salesforce customers and 1 million companies are now on Slack, and they now have the immediate opportunity to work side-by-side with agents and Agentforce in the apps they are already using every day. And that’s why we’re uniquely positioned for this new era. We have the strategy, the platform, the global scale…

…New bookings for Agentforce One Edition and A for X, or as we call it, Agentforce for Apps, our most premium SKU, doubled quarter-over-quarter…

…Our top priority is accelerating Agentforce and Data 360 adoption. We are relentlessly reallocating our resources to high-growth areas and it’s paying off. Q3 was one of our biggest pipeline generation quarters ever and customers leveraging our forward-deployed engineers are seeing 33% faster deployment times…

…I don’t know if you remember, 2 quarters ago, I was super excited. I had to dig very deep to find that 3 customers came and refilled the tank in Q1. In Q3, 362 customers refilled the tank. That’s an incredible testament to the success that Agentforce is having in a very short time frame.
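
The “determinism” Benioff describes above is, at bottom, conditional routing: handle rule-based steps in ordinary code and spend an LLM call only where open-ended reasoning is needed. Here is a minimal sketch of that pattern; it is not Salesforce’s hybrid reasoning or agent script implementation, and call_llm and the order table are hypothetical stand-ins.

```python
# A minimal sketch of hybrid (deterministic + LLM) routing in an agent loop.
def call_llm(prompt: str) -> str:
    return f"<LLM answer to: {prompt}>"  # placeholder; a real call costs tokens

ORDER_STATUS = {"1042": "shipped", "1043": "processing"}  # toy lookup table

def handle(request: str) -> str:
    # Deterministic branch (the "if-then statements"): a pure lookup
    # needs no LLM call, which is where the cost savings come from.
    if request.startswith("status of order "):
        order_id = request.removeprefix("status of order ")
        return ORDER_STATUS.get(order_id, "unknown order")
    # Everything else falls through to LLM-driven reasoning.
    return call_llm(request)

print(handle("status of order 1042"))             # answered deterministically
print(handle("why was my refund only partial?"))  # worth an LLM call
```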

Salesforce’s management has delivered employee agents through Slack via a feature called Slackbot; Slackbot is able to go through all of a customer’s data, and does so in a secure way; Slackbot is able to deliver analysis of a customer and recommendations for interactions with the customer; management sees Slack as a conversational interface for every app, agent, and workflow; Slackbot is currently only available to a small number of customers; Slackbot is built on Agentforce

Some of you have seen it, but probably a lot of you haven’t: employee agents. We’ve delivered an incredible new framework deeply integrated into our Slack product. Every Salesforce employee already uses it every day, I do, and it’s the core of every demonstration we give to our customers to show what we have unleashed with Slack: something new called Slackbot, which is really the heart of our employee agent strategy, and you’re going to see that. It’s incredible. It is able to go not only through Slack and the whole Internet, but also through all of our customers’ data that they have provisioned in a secure way through Salesforce, and deliver context…

…I was with a really good friend of mine just this weekend; I had lunch with him, and he’s a top venture capitalist who had been a huge investor in Coinbase. And I’ll tell you, we’re just sitting there, and I’m asking [ Slackbot ], hey, tell me everything about his venture capital company, tell me everything about this venture capitalist, and then also tell me everything about Coinbase and the company and our relationship. And it’s able to deliver to me not only a complete analysis, not only a summarization, not only all of the detail, but next steps, how to sell, what I should do exactly for the customer. And I love demoing this to customers because they don’t think it’s possible. And then when they see it, they say, “Wow, this is what AI was meant to be.”…

…And Slack is now where it all comes together, and that is this incredible conversational interface for every app, every agent, every workflow…

…They may not have Slackbot yet, because we’ve only turned it on for a small number of customers, but we’re about to hit the switch, and everybody is going to see this employee agent power. Most people have seen the customer agent power; now they’re going to see the employee agent power. And they’re going to see how it’s built on Agentforce, how it’s built on the apps and how it’s built on the data.

Williams-Sonoma used Agentforce to build a digital sous chef on its website; there are no hallucinations with Williams-Sonoma’s agent; Williams-Sonoma will be building voice agents soon; ride-hailing company Uber and consumer packaged food products company Conagra are customers of Agentforce; CVS Health, Telecom Argentina, TD Bank, the US IRS (Internal Revenue Service), and Costco have become Agentforce customers; General Motors is now an Agentforce customer and is using Agentforce to speed up case resolution for its call centers; PenFed has become a customer of Agentforce ITSM; Agentforce will help PenFed reduce operational expenses by 30% and produce $2 million in savings; UK police forces recently launched Bobby, an Agentforce Service agent; Bobby is the UK public’s first point of contact for nonemergency calls and can provide instant responses; Bobby has already reduced nonemergency demand by 20%; Salesforce used Agentforce for STR Agent, which has generated tens of millions in incremental pipeline; Agentforce passed 2 million conversations in 2025 Q3 (FY2026 Q3) on Salesforce’s customer-help website; Agentforce took 9 months to reach the first million conversations on Salesforce’s customer-help website and 4.5 months for the next million

Williams-Sonoma’s version of Agentforce, which they call [ Olive ]. And if you haven’t been on Williams-Sonoma’s website and seen the sous chef that they call Olive and used it, I think the quality is what I’m most impressed with; it’s really very, very good. You don’t see hallucinations. You see the customer personality, the quality, the ability to deliver value, and they are saying that’s about 60% of their chats. We’ve got a whole other level to go with them with voice, which is coming, which is very exciting…

…Great companies like Uber, like Conagra, like LY, like Williams-Sonoma, like all these great companies that we’ve been talking about, and the consumption flywheel is gaining traction…

…We had incredible wins this quarter. Miguel is going to talk about CVS Health and Telecom Argentina and TD Bank and the IRS, somebody who’s going to be getting a big check from all of us. They are all now on Agentforce. So your IRS agents are Agentforce agents. And [ NG ] and so many more are becoming agentic enterprises. And Costco, we love Costco…

…We know General Motors, we love Mary. It’s amazing; I have one of her new Escalade IQs, and she’s tired of me telling her how much I love it. They’re expanding Salesforce across Automotive Cloud, Data 360, MuleSoft, Agentforce Sales and Agentforce Service. But really cool: with Agentforce, they tossed their other collaboration product. We won’t tell you what it is; you probably know the name. And they’re now using Slack…

…With Agentforce, Mary’s speeding up case resolution for her call centers. Slack is now the company’s primary communications hub, scaling to 96,000 employees in just 9 months…

…PenFed went live with ITSM with agents for IT service…

…You look at PenFed: I think they went live with agents for IT service, as well as member service and collections. They’re projecting a 30% reduction in operational expenses and $2 million in savings. This product is killer…

…This week, we launched the U.K.’s first AI police officer. We work with multiple police departments to roll out Bobby. Everybody loves Bobby: it’s the Agentforce Service agent that is the public’s first point of contact for nonemergency calls. Bobby autonomously provides instant responses on more than 90 topics, and police departments have already seen a 20% reduction in nonemergency demand, and they are just getting started. This is what real enterprise adoption looks like…

…As Customer Zero, our STR agent has worked hundreds of thousands of leads, generating tens of millions in incremental pipeline. We see that same velocity with Agentforce on help.salesforce.com, which passed 2 million conversations this quarter. It took 9 months to reach the first million and just half that time to double it, another clear example of our internal consumption flywheel taking off.

Salesforce’s management thinks that LLMs (large language models) are basically commodities

We use all of the large language models. They’re all great. We love all of them; we love all of our children. But they’re also all just commodities, and we have the choice of choosing whichever one we want, whether it’s OpenAI or Gemini or Anthropic or other open-source ones; they’re all very good at this point. So we can swap them in and out, choosing the lowest-cost or best one for us, which makes us basically the top user of these foundation models.

90% of Forbes’ top 50 AI companies are using Salesforce, including high-profile AI companies such as Anthropic and OpenAI; the Forbes top 50 AI companies that use Salesforce average 4 clouds each; 80% of the Forbes top 50 AI companies that use Salesforce are using Slack

But nearly 90% now of all of the Forbes top 50 AI companies are using Salesforce. Let’s just think about that for a second. 90% of all the Forbes top 50 AI companies, those are the Anthropics and OpenAIs and the [ blah, blah, blah ] companies, okay, that is our Cognition, Cursor, Figure AI, okay. They all average about 4 clouds each already. And 80% of them are using Slack to run their business.

Agentforce has powered 1.2 billion LLM (large language model) calls to-date, with 200 million calls in 2025 Q3 (FY2026 Q3); Agentforce is on track to power 2 billion LLM calls in 2026 (FY2027); Agentforce’s weekly actions have risen 140% quarter-on-quarter; Agentforce token usage in October 2025 was 540 billion, up 25% month-on-month

Agentforce has powered 1.2 billion large language model calls; that’s interactions when agents invoke a model to understand context and decide the next best action…

…More than 200 million Agentforce LLM calls in Q3 alone, on track to power another 2 billion over the next year. And those LLMs are now calling Agentforce actions, such as updating opportunities, creating a case, or handling a service inquiry, and the number of average weekly actions has now risen about 140% quarter-over-quarter…

…In October alone, token usage was nearly 540 billion, up 25% month-over-month.
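For a sense of scale, here is a quick back-of-envelope sketch of what these growth rates imply. This is our arithmetic, not Salesforce’s; only the 540 billion October tokens and the 25% month-on-month rate come from the call, and extrapolating them forward is purely illustrative:

```python
import math

# Figures from the call: ~540 billion Agentforce tokens in October, up 25% month-over-month.
monthly_growth = 0.25
october_tokens = 540e9

# Doubling time at a constant 25% monthly growth rate: log(2) / log(1.25) ~ 3.1 months.
doubling_months = math.log(2) / math.log(1 + monthly_growth)

# If (a big if) that rate held for a full year, usage would compound ~14.6x.
annual_multiple = (1 + monthly_growth) ** 12

print(f"Doubling time: {doubling_months:.1f} months")
print(f"Implied 12-month multiple: {annual_multiple:.1f}x")
print(f"Implied tokens a year out: {october_tokens * annual_multiple / 1e12:.1f} trillion")
```

At a steady 25% a month, token usage doubles roughly every 3 months; whether that rate is sustainable is, of course, the open question.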

Data 360 was formerly known as Data Cloud; Data 360 is the foundation for Agentforce; Data 360 ingested 32 trillion records in 2025 Q3 (FY2026 Q3), up 119% year-on-year; the 32 trillion records included 15 trillion zero-copy data integrations, up 341% year-on-year; traditional enterprises and technology companies alike are using Data 360; Data 360’s ingestion of records in 2025 Q3 (FY2026 Q3) was up 38% sequentially; Data 360’s zero-copy data integrations in 2025 Q3 (FY2026 Q3) were up 52% sequentially

Data 360 is the foundation for every Agentforce deployment, and it accelerated in Q3. Data 360 is the product formerly known as Data Cloud. In Q3, Data 360 ingested 32 trillion records. 32 trillion records, up 119% year-over-year, and that includes 15 trillion through zero-copy data integration, up 341% year-over-year. Dentsu, Moody’s, KPMG, Ferguson, Zoom and dozens more invested in Data 360 in the quarter…

…Quarter-over-quarter on Data 360, where people have built their lake just in Data Cloud, our ingest has increased by 38%, and zero-copy has increased by 52% in terms of records.

Salesforce’s management sees the agentic enterprise as a new, very large, secular trend, after meeting many customers; management thinks that companies are finding it hard to build their own agentic solutions, so they need to turn to vendors such as Salesforce; management thinks the agentic trend will lead to customers using Salesforce in a different way; management thinks the monetisation opportunity for Salesforce in the agentic enterprise trend is 3x-4x higher than before; Salesforce has already seen AOV (annual order value) with some customers increase by 2x-5x because of the agentic opportunity; the companies that are really turning to Salesforce’s agentic solutions are the visionary ones who started building their own agents 2 years ago, and they turn to Salesforce because they realised how bad the pain points were; the visionaries 2 years ago were concerned with what LLMs Salesforce was using, but now they no longer care

This past quarter, I was on 3 continents, in 12 countries; I talked to 400 customers, in many one-on-ones and several dinners. And the reality is very different. There is something very large, very important happening, and I want to emphasize this; I don’t think we’ve done enough justice, Marc and Robin, to what is happening right now in front of us. There is a new, very large secular demand trend, which is the agentic enterprise. Every single company in the world, small, medium, large, wants to become an agentic enterprise…

…The problem is they’ve been experimenting. They’ve been experimenting for 2 years. They’ve gone from experimentation now to a little bit of frustration. And now they are all saying, you know what, this is hard. This is much harder than we thought. They all want to go to scale because the opportunity, which is a multitrillion market cap opportunity, is in front of us. The TAM is multitrillion for us, and they want to go all in. They know it’s hard because LLMs cannot do this alone. And now to answer your question, the last mile is hard. And the last mile is hard because companies need the context. For AI to be successful and accurate in the enterprise, you need the context, you need the data, you need the metadata, you need deterministic workflows. You don’t want the agents to be executing based on what they found in an LLM; you want the agents to execute, in a deterministic way, the same workflows in the apps that the company has already qualified over the years and that humans are already using. And they need AI that is embedded where the humans are. That’s why it’s so important to have the data with the context, to have the apps with the deterministic workflows, and to have the AI where the humans are; and only Salesforce can do that…

…The agentic enterprise is a new paradigm. Customers will use Salesforce in a totally different way. They will use Salesforce to be the platform for digital labor for sales, for service, for marketing, and the impact on the way we can monetize those relationships is exponential. It’s not linear growth. It’s exponential. Robin alluded to that at Investor Day, [ we were ] talking about 3x, 4x the ability to multiply the monetization on customers because, by the way, they’re getting 3x or 4x or 10x more value from our products…

…The bookings that we do with them, the AOV has doubled, tripled, in some cases multiplied by 4 or 5, and we are just getting started…

…When I talk to CIOs, I see 2 types. The first are people who are really advanced, visionaries who started 2 years back doing it themselves; they really understand the pain points. They are the ones who are moving fast to the platform…

…One more thing: 2 years back, our customers would ask me, what model are you supporting, where does it run, which hyperscaler do you run on. They don’t ask me any of those things now because we abstract all that complexity for them. That’s the original promise of Salesforce when we said no software.

Salesforce is not building data centers for AI, so its gross margin and cash flow are preserved

I just want to make sure everybody realizes we’re not building data centers at Salesforce. We’re preserving our gross margins and our cash flow.

Salesforce’s management thinks they have nailed down Agentforce’s pricing model by having a range of per-seat and per-consumption models; the per-seat Agentforce SKU doubled year-on-year in 2025 Q3 (FY2026 Q3)

The other thing that we’ve learned is pricing matters. It’s very complex. We’ve come a long way. We’ve had different ways of pricing the product. And now I think we have a whole portfolio of different commercial frameworks to meet customers where they are and where they want to be…

…You and me came up with the [ Agentic Enterprise License Agreement ] concept when we visited a few customers in Europe, from Unilever to P&I. We had great conversations. And we realized that they wanted to move. They wanted to transform, but they were afraid about all these metrics, consumption, et cetera. So what we’re doing now is very simple: we are putting the whole menu of options in front of them. We also have very successful SKUs that we launched, Agentforce for Sales and Agentforce for Service, which are seat-based SKUs. People talk about seat-based versus consumption-based pricing. The reality is there are a lot of customers that want seat-based because seat-based gives you predictability. So we sold a lot of seat-based licenses for Agentforce and Data Cloud in Q3. In fact, that SKU has doubled year-on-year. It’s a massive success. But we also have customers that, from the beginning, wanted to just pay per conversation or per agentic action. So we have the whole portfolio.

Salesforce is seeing both the number of seats and pricing increase

I think you guys always ask the same thing: whether the number of seats is increasing or the price is increasing. Well, for our clouds, we are seeing both increasing, which is exciting.

Veeva Systems (NYSE: VEEV)

The first AI agents under Veeva AI, an initiative launched in April 2025 that will see the company build industry-specific AI agents within its applications, are on track for a December 2025 launch; the first AI agents are for Vault CRM and commercial content; Veeva is on track for more agents in 2026, and these agents will be in all of Veeva’s software applications; early results of Veeva’s agents with early adopters have been very promising; management sees a lot of interest in Veeva AI from customers because they find value in specialised AI agents that fit seamlessly into their workflow; management thinks Veeva AI can be transformative for safety-related applications; management thinks AI agents will be transformative in clinical operations

The first Veeva AI agents will be available as planned in early December for CRM and commercial content. And we are on track for R&D, quality, and additional commercial agents in 2026. We started working with our first early adopters over the past few months, and early results are very promising…

…There’s a lot of interest in Veeva AI because of the clear business value in specialized AI agents working seamlessly in the user’s workflow. Customers are looking for practical solutions that address the specific needs of their functional areas and we are very excited about Veeva AI and what it can do for the industry…

…We are very pleased with our momentum in safety and the transformative potential of Veeva AI as applied to the safety area…

…We’re going to have agents in literally all of our software applications as we get through 2026. We started this year: we’ll have them in commercial, with CRM and Commercial Content. Next year, in roughly the first quarter, April, it will be Safety and Quality. And then through the year we’ll have agents in Clinical Operations, and by the end of the year, Clinical Data Management. We think it’s one of those potentially transformative areas in clinical. It’s our largest single opportunity, the clinical business. There’s a lot of potential to just streamline a lot of core processes. Take eTMF as an example: you intake a document, and an agent scans through it and makes sense of it, just replacing core human labor with agents. So a lot of potential for productivity. That’s just one example, but I think we see that pretty consistently across the broader clinical area.

Veeva’s management thinks AI can change Vault CRM dramatically over the next few years, and customers are excited about it

Now we are entering the age of AI, of probabilistic computing, to really drive and change what a CRM system can do. So that’s giving people a lot of excitement. The Vault CRM of ’26 and ’27 and ’28 is not going to be like the Veeva CRM of 2022 and 2023. So that’s where the real excitement is.

Veeva’s management is seeing customers choosing AI partners based on where they think a particular partner can help them; management thinks Veeva can help customers to automate industry-specific applications with AI; customers want Veeva to go faster in AI, but the direction is very aligned; management thinks Veeva’s customers will require change management work to implement AI and this is where Veeva’s business consulting team can help; management thinks customers want an AI partner that can provide a one-stop-shop service for consulting, software, and AI

They want to use partners where partners can help them. So they want to use Microsoft where Microsoft can help them. They want to use Anthropic where Anthropic can help them. And they know where Veeva can help them is helping to automate industry-specific applications with AI, that deep domain knowledge and the business process consulting around it. So how do you enable insight generation in CRM through your field team by the use of compliant free text, okay? That’s a very specific thing. How do you dramatically increase the efficiency of Safety case processing for adverse events, okay? That’s very specific. So that’s what they’re looking to us for, and that’s what we deliver…

…They just want us to go faster, but there’s really strong alignment on direction…

…Customers also have to be able to adopt and do that change management work, which is that’s not easy either. That’s not going to happen overnight. That’s one of our advantages is we have a great business consulting team…

…The customers are not going to want to knit together consulting over here and software over there and AI over here. They’re not going to want to do that over the long term.

AI’s impact on headcount reductions among sales reps in the pharma industry has been smaller than management predicted; management thinks sales headcount in the pharma industry is going to be stable for a few years

[Question] I think there’s been some debate broadly on AI and how that may impact sales reps or like how efficient sales reps could be. Like as you talk to some of your customers, like how are they thinking about the size of their sales force with the implementation of AI?

[Answer] We have seen some of the reductions that have played out over the past couple of years that we have talked about. We kind of predicted roughly about 10%. It ended up being a little bit less than that. The way to think about it is the customers that they’re calling on the HCPs, number of doctors hasn’t fundamentally changed. You still need people. You need a base level of sales reps to build those relationships, cover those doctors, deliver the information, the service that they need. So I think the industry is cautious and thoughtful about making significant changes or adjustments. So I think there is a lot of potential for productivity gains and effectiveness gains. But I think it will likely be stable, at least for the next couple of years. We’re not hearing of any AI-related reductions.

Wix (NASDAQ: WIX)

Wix’s management thinks of vibe coding as having 2 spheres, one where developers live in, and the other where non-developers live in; vibe coding allows non-developers to create software; management sees parallels between Wix’s important role in website creation in the past, and nascent role in vibe coding in the present; management sees the vibe coding market as being much bigger than the website creation market; management has seen the vibe coding market grow exponentially over the past year, with Wix taking a bigger piece of the pie

When I think about vibe coding, I try to simplify things by breaking the world apart into 2 categories. One is the developer sphere. This is Claude Code, Cursor, Windsurf, and all these tools, which are great for engineers. These tools integrate directly with the source code of a project, enabling complex technical programming, which requires significant user expertise. The second sphere is where everyone else lives, the majority of humanity who don’t code or even think they can code. Suddenly, with vibe coding, they can create pieces of software that improve their personal lives or help to build their businesses, all by simply using natural language. For example, a school teacher can create a custom app to track attendance and post grades. A neighborhood restaurant can build an application to handle their staff schedule, another to manage vendors, another to sort inventory and so on and so forth…

…This story sounds exactly like Wix’s story back in 2006. We didn’t invent websites back then. They were already widely available, but only to big companies with engineering budgets. There was an absolute barrier for the average person. We knew there was a way to enable an online presence for everyone. This was and still is the mission of Wix. We intend to do for software what we did for websites, enabling everybody to build applications without any need for a developer…

…The software application market is many, many times bigger than the website creation market. Think about it. That same neighborhood restaurant needs only one website, which they likely built on Wix, but they may need many applications to successfully run their business…

…The AI-powered app building space has grown exponentially over the past year, and we are taking a bigger and bigger piece of this pie.

Wix acquired Base44 in June 2025 (Base44 is an AI-powered platform that allows users to build web applications using natural language prompts); Base44’s share of audience traffic has increased from almost nothing in June to more than 10% in October; Base44’s capabilities are getting better fast, driven by a fundamental architectural advancement towards an agentic coding environment; Base44’s business has done better than expected since being acquired; the growth in Base44’s share of audience traffic was partly the result of the application of Wix’s proven strategic playbook; the returns on management’s initial marketing investments for Base44 meaningfully exceeded expectations; Base44’s userbase has increased 7x from June 2025 to 2 million today; Base44 has 1,000 new paying subscribers joining daily; management now expects Base44’s ARR (annual recurring revenue) to be at least $50 million by end-2025, higher than before; management expects Base44 to have similar operating and free cash flow margins as Wix in the long term; management thinks many vibe-coded projects are currently only prototypes, but they are already seeing some users build production-grade software with Base44 today; management thinks there’s still some way to go before vibe coding can be used to build production-grade websites

BASE44’s share of audience traffic increased from almost nothing to more than 10% in October. Among local tools, BASE44 is quickly proving to be a leader and the best solution on the market today, with enormous white space still ahead. BASE44 is also getting better, fast. We recently launched our new builder, transitioning BASE44 from a predominantly user-reliant tool to an expert developer partner for everyone. The new builder represents a fundamental architectural advancement, moving to an agentic coding environment. With multi-agent layers, BASE44 can now validate, debug, refactor for performance and fix its own work, making app creation faster, smarter and more powerful than before…

…We also welcomed our first full quarter of new BASE44 cohorts under the Wix banner in Q3, which performed better than anticipated. As the vibe coding market has exploded this year, BASE44 has meaningfully outgrown most peers. We now estimate our share of audience traffic to AI-powered application builders to be more than 10%, up from low single digits in June. This growth in a matter of just months is a result of a fantastic product with organic reach, supercharged by our expertise and investments as well as the application of Wix’s proven strategic playbook to BASE44.

In addition to establishing a dedicated customer care team and expanding BASE44’s R&D capabilities, we focused on building up a comprehensive full-scale brand and marketing function. Remember, BASE44 did not have any marketing motion when we acquired it in June. On day 1 after the deal closed, we started to apply a marketing plan that has been fine-tuned and tested over the past 2 decades, a key competitive differentiator that Wix brings to BASE44. This included refining the company identity, messaging and visual system to better reflect our market ambition. We also launched campaigns in key channels and core geographies. Compelling branding and effective marketing are crucial to growing BASE44’s reach beyond just early adopters and capturing the huge white space Avishai spoke about. Returns on our initial marketing investments meaningfully exceeded expectations as demand ramped through the quarter. As a result, we were able to confidently scale marketing efforts above our initial August plan.

Today, BASE44 serves over 2 million users around the world. This is more than 7x more users than we had at the end of June. Impressively, this translates into more than 1,000 new paying subscribers joining daily. We now anticipate BASE44 to achieve at least $50 million of ARR by year-end, an increase from our previous expectations…

…In the long term, I expect BASE44 to have similar operating and free cash flow margins to Wix…

…You’re right when you say a lot of it is just used for prototyping. And that’s great: people can actually build an application that is just a demo for a few people, and then the prototype is the application. It doesn’t need scale, and it’s okay if it has a few tiny bugs. But we are getting to a place where, today, with BASE44, you can really build fuller applications. There’s still quite a way to go on what we can do there and how to make it even better, but we are getting there. And of course, we have some users that have already built really large applications that have been deployed, and we can see that. So if a year ago you couldn’t use vibe coding for anything real, and a few months ago you could use vibe coding mostly for prototypes of applications, I think today we are starting to see more applications that are real and are being used at a commercial level.

For websites, it’s still different. I think for websites, there’s still a gap that needs to be closed for vibe coding to build real websites that are Google-friendly, that are LLM-friendly, that follow the privacy rules required by law, and a bunch of other things. There’s still quite a distance to go, but we hope to close that early next year.

Wix’s management is already seeing AI costs decrease, and expects the trend to continue or even accelerate, as LLMs improve and competition ramps; management thinks there’s a lot Wix can do to lower AI costs, but it’s not a priority at the moment; the AI costs of new Base44 users are much higher than those of older users

Today, we’re already beginning to see AI cost decrease as LLMs improve and competition continues to ramp. I expect this to continue, if not accelerate…

…[Question] On the gross margins and the AI compute, is there anything that you can do within your control outside of LLM costs coming down to keep costs down, for example, using your own internal data to help build versus relying on third-party LLMs as much?

[Answer] I’m not going to go into all the details here, but yes, there’s a lot we can do on cost. It’s not a priority at this stage; it’s something that we’re also investigating. I think the priority now is to build a better product and capture more market share. But I think that long term, and long term is not multiple years, we can dramatically improve the cost of AI for BASE44. There’s so much we can do, from training our own models to do part of it, to partnerships with the different vendors, to the simple reality that cost is always declining. So I think there are going to be a tremendous amount of opportunities for us to reduce the cost of AI for BASE44…

…New users coming to BASE44 are obviously consuming more AI tokens, more bandwidth, as they build their apps. But what we see is a big difference between the cost of newcomers and the cost of the users that continue; the latter might modify things and make some changes, but it’s really not the same.

Wix’s management thinks Base44 subscriptions will trend towards annual as users gain more trust; Base44’s churn rate is higher than core Wix’s at the moment, but management is optimistic this will improve with time; management sees Base44 monthly subscriptions performing similarly to core Wix monthly subscriptions; Base44 monthly subscribers are currently performing better than core Wix monthly subscribers did in Wix’s early days

[Question] On BASE44. Can we just dive into the dynamics of monthly subs versus the sort of more traditional annual subs that you get for core Wix? What are you seeing there in terms of churn and those subscription dynamics? And as people sign up monthly, can you get them to sign up annually more often over time?

[Answer] At this stage, users lean a lot more towards monthly subscriptions than annual subscriptions. We saw this at Wix in the beginning too: it takes time for people to trust the platform, and then they will actually feel more comfortable paying for an annual subscription. Vibe coding is still so new, but I think we are heading in that direction… When it comes to churn, it’s very early to say, and it’s changing very quickly, so it’s very hard to say. Obviously, churn is higher than standard Wix, where churn almost doesn’t exist on a cohort basis. But BASE44 is better than we expected, and we know there’s so much more we can do. So we are very optimistic…

…[Question] Can you talk about the cohort retention trends of BASE44 and how it compares versus Wix on monthly customer plans?

[Answer] We’re seeing behavior similar to what we know from the monthly plans on Wix. And I would actually dare to say that it’s better than what you used to see at Wix in the early days.

To prepare for an agentic future, management has made every Wix website indexable by LLMs, and has enabled agentic commerce functionalities; management thinks the user interface of websites will change in an agentic future

[Question] Wix is pretty well positioned to kind of reengineer the web for the AI era by making a lot of small business websites kind of agent ready, right? Like so they can be discovered by Gemini, ChatGPT and others more effectively versus the current web architecture, which includes a lot of total consumption for them. Can you talk about the vision you have for Wix for this era?

[Answer] The first thing that we’re doing in Wix in order to support and enable all our customers to enjoy that new mode is that every Wix website is now indexable by LLMs. So we make the data available to any LLM, and there are a few formats for that. We ensure that ChatGPT can actually read your content and discover your website. That’s the first part. The second part is that we continuously add new standards for how to do e-commerce, like the one that OpenAI released a few months ago, MCP, and a bunch of others, in order to enable all the functionality to be available within LLMs or be discovered by LLMs and then run on your website. In addition to that, there are a few more things around how we think the user interface will change in the next couple of years. I’m not going to go into details, but I think that’s another super interesting opportunity for our customers.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet (parent of Google), Amazon, Meta Platforms, MongoDB, Okta, Salesforce, Veeva Systems, and Wix. Holdings are subject to change at any time.

What We’re Reading (Week Ending 07 December 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 07 December 2025:

1. Understanding ROIC on Low Growth Businesses – John Huber

A 20% FCF yield that is durable is just as good as a reinvestment moat that grows at 20% (in fact, I’d take the former over the latter in many cases because growth rates of 20% tend not to last past a few years). Of course, many 20% FCF yields are also fleeting, but there are enough examples of durable companies (some examples below)…

…People are placing too much emphasis on the stated ROIC of low growth mature companies that earn high FCF and don’t need to retain much of their earnings. It’s important to remember that the capital on a business’s balance sheet is the money that someone else invested (i.e. shareholders in past years).

If there is no place to reinvest capital going forward, then what matters going forward isn’t the ROIC (which is based on a historical balance sheet figure that is no longer relevant). What matters in this case is the FCF that we can collect going forward and the price we have to pay to acquire that FCF (i.e. the FCF yield)…
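Huber’s opening claim, that a durable 20% FCF yield is as good as a business compounding at 20%, can be made concrete with a stylized sketch. This is our illustration, not Huber’s; all numbers are made up, and the key assumption is that the payout case lets you redeploy the cash at the same 20% yield, for example by buying more shares at the same 5x-FCF multiple:

```python
def terminal_wealth(price_multiple: float, payout: float, roiic: float, years: int = 5) -> float:
    """Wealth after `years`, starting from 1 share producing $1 of annual FCF.

    price_multiple: price paid (and exit price) as a multiple of FCF
    payout: fraction of FCF paid out, reinvested in more shares at the same multiple
    roiic: return the business earns on any FCF it retains
    """
    shares, fcf_per_share = 1.0, 1.0
    for _ in range(years):
        dividends = shares * fcf_per_share * payout
        shares += dividends / (fcf_per_share * price_multiple)  # redeploy payout at 5x FCF
        fcf_per_share *= 1 + roiic * (1 - payout)               # retained FCF compounds
    return shares * fcf_per_share * price_multiple

# A: durable 20% FCF yield (a 5x-FCF price), zero growth, everything paid out and redeployed.
print(terminal_wealth(price_multiple=5, payout=1.0, roiic=0.20))  # ~12.44
# B: same price, nothing paid out, all FCF reinvested internally at a 20% return.
print(terminal_wealth(price_multiple=5, payout=0.0, roiic=0.20))  # ~12.44
```

Both paths turn the $5 purchase price into roughly $12.44 over five years, about 20% a year, which is the sense in which the durable yield and the reinvestment moat are interchangeable.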

…Imagine a real estate developer invests $5 million to build a new apartment building that produces $200,000 of annual cash flow. This is a 4% FCF yield, or in the parlance of real estate, a 4% cap rate (technically the cap rate uses a pretax number based on what RE investors call net operating income, but we’ll ignore taxes for simplicity)…

…Viewing this building as a “business” suggests this is a mediocre one at best: a 4% return on capital is not creating value because the investor could have likely earned better returns investing in some other real estate investment, other stocks, or some other asset class altogether…

…So we have a 4% ROIC business that isn’t creating value. Let’s assume the market goes south, the developer’s business is overleveraged and on the rocks, and he decides to bring on a partner to help inject much needed cash. He offers you a 50% share of this building at a valuation of just $1.5 million…

…Let’s look at your result: you invest $750k and now have a $100k of cash flow (50% share of the building’s overall annual cash flow).

This means that your return on the capital you invest is not 4% but rather 13.3% ($100k / $750k).

The same building had an original cost basis of $5 million. That was the initial capital that went into funding its development. This same asset that traded at a 4% yield now trades at a 13.3% yield. However, if you viewed the financials and crunched the ROIC for this building using GAAP financials, it would still show an ROIC of 4% because that is the capital that the original developer invested into the building.

Would this stop you from investing at a 13.3% yield (assuming you like the long-term prospects for the building)? Of course not. You would view this as a great deal.
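To make Huber’s distinction explicit, here is the building example reduced to arithmetic, using the figures from the passage above:

```python
# The developer's building, in numbers (from the article).
original_cost = 5_000_000  # capital originally invested; what a GAAP-based ROIC is measured against
annual_fcf = 200_000       # the building's annual cash flow

stated_roic = annual_fcf / original_cost   # 4.0%: the "business" looks mediocre

your_price = 750_000                       # your cost for a 50% share (a $1.5m valuation)
your_fcf = annual_fcf * 0.50               # $100k of cash flow attributable to you
your_yield = your_fcf / your_price         # 13.3%: the return on the capital YOU invested

print(f"Stated ROIC: {stated_roic:.1%}")     # 4.0%
print(f"Your FCF yield: {your_yield:.1%}")   # 13.3%
```

The GAAP figure never changes because it is anchored to the original $5 million of cost; your return is anchored to the $750k you actually paid.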

2. Blue Owl’s teachable moment for investors and asset managers chasing yield and ‘hot money’ – Isla Binnie

Blue Owl’s (OBDC.N) turnabout decisions in the last two weeks – to merge, and then not merge, and then maybe merge two of its private credit funds at a later time – offer a cautionary lesson for retail investors in search of higher yields and the asset managers chasing the billions in “hot money” wealthy individuals bring.

The New York-based asset manager withdrew a proposal last month to merge a $1.7 billion non-public fund for retail investors with a $17 billion publicly traded fund for institutional and retail clients after news of the deal helped send Blue Owl’s shares down more than 10% in less than two weeks. The retail investors, who had to vote on the plan, were spooked by two things: it could have forced them to take a 20% loss at current prices and Blue Owl paused redemptions until early next year…

…”The reason that private credit can advertise more yield is because they’re providing you more credit risk … it’s more concentrated investments in riskier companies. Now that doesn’t sound like a trade that should be in a liquid fund,” said Robert Cohen, director of global developed credit at DoubleLine, a bond-focused investment firm managing $90 billion in assets, referring to private credit in general.

Blue Owl’s proposal touched a nerve in credit markets already rattled in recent months due to a few high-profile bankruptcies that have undermined confidence in private credit. Also, some in the market fret that expectations of interest rate cuts by the Federal Reserve could reduce the appeal of private credit investments, one of whose main selling points is their juicy yields.

3. Want This Hearing Aid? Well, Who Do You Know? – Steven Levy

Fortell is a hearing aid, one that claims to use AI to provide a dramatically superior aural experience. The chosen few included in its beta test claim that it seems to top the performance of high-end devices they’d been unhappily using.

These testers have made pilgrimages to Fortell’s headquarters on the fifth floor of a WeWork facility in New York City’s trendy SoHo neighborhood, where they were fitted for the hearing aids—which from the outside look pretty much like standard, over-the-ear, teardrop-shaped devices. But the big moment comes when a Fortell staffer takes them down to street level. There, among street clatter, honking cabs, and delivery trucks backing up to luxury stores, they are asked to conduct a conversation with a Fortell worker. Two other employees stand behind them, adding their own loud discourse to the urban cacophony.

Despite the din, the testers clearly make out what the person in front of them is saying…

… “A lot of people regard AI as something you’ll use to make businesses more efficient,” he says. “But people haven’t really internalized that you could use AI to make products exponentially better.”

De Jonge and Morris eventually dubbed the new company Chromatic, a name they later ditched, settling instead on Fortell. They realized that there would be two critical components in an improved approach to a hearing aid. The first would exploit the recent advances in AI for a better algorithm to selectively augment conversation. And the second would be a custom chip to process that algorithm in real time.

The first requirement became the province of Igor Lovchinsky, who had been Butterfly’s AI wizard. He’d come to the field late in life; up until his mid-twenties he’d been a Juilliard-trained concert pianist but left the field when he became enamored with science. Lovchinsky felt that the AI claims made by some other hearing aid companies were overblown; they were simply tweaking the amplification, he says, or aiming the microphones in a different direction.

“What became clear is that what was needed is source separation,” he says. “Take an audio wave that contains both things you want to hear and things you don’t want to hear, and separate them into just speech and just noise.” Even in 2021, it wasn’t clear that this was possible. “We all have this incredible neural network in our heads honed by billions of years of evolution to recognize speech,” he says. “If you do the source separation with the slightest deviation from full naturalness, your brain will immediately hear it.”…

…Having the right algorithms wouldn’t be worth much if you didn’t have a properly engineered chip to run them. To lead its silicon team, Fortell tapped as CTO Andrew Casper, another Butterfly alum who was a lead engineer on a Google team making AI chips. Casper also wasn’t sure that his task could be accomplished. “Your ear is very sensitive to latency,” he says, noting that if the altered sounds weren’t processed in 10 milliseconds—a hundredth of a second—it would throw users into a hellish uncanny valley. “We didn’t know if it could be done in that amount of time with a high enough fidelity so you aren’t going to notice distortions.” Only then, he says, could the company move to the final challenge: “Can we even put this thing into your ear?”

It was going to take years before the startup got those things right and could even begin to test on humans. Fortunately, the $9 million initial stake, the majority of which came from Kushner, provided a long runway. “For the first few years of the company there was no hearing aid in sight,” says de Jonge. “We needed to build for ourselves to see if the science problems could be solved.”

By 2023, Lovchinsky and Casper had made significant progress on their respective missions. Lovchinsky’s team realized that separating out the voices required creating a proprietary version of what is known in the industry as Spatial AI, involving a 3D understanding of the real world. (Confusingly, they also use the nonproprietary technology, spatial AI, in their product.) “It gleans perspectives from multiple microphones and can infer the same way that healthy people can, from both ears,” he says. His team also found a way to train their AI models with huge amounts of synthetic data that emulated all sorts of conditions. “It’s specifically useful in the most challenging environments,” he says…

…Now that the product is launched, Fortell will sell hearing aids in a single clinic on Manhattan’s Park Avenue. It’s decked out like a posh lounge, with the devices on display in a tasteful presentation that’s straight out of the Apple retail playbook. Hanging on the wall is a silicon wafer with the circuitry of the custom chips. In the early stages, his staff of four audiologists will serve only a couple of dozen customers a week, to make sure everything goes smoothly. In any case, while ramping up production, the supply will be limited.

This is great for Fortell, but it seems de Jonge’s initial impulse to usher everyone’s grandparents into the land of the hearing is in danger of being limited to the one percent, which doesn’t exactly qualify him for a Salk medal. When I ask de Jonge how his invention can scale to change life for the masses, his replies, whether due to secrecy on future plans or just not having a good answer, seem hand-wavy. In his defense, Fortell has resisted the temptation to jack up the traditional price of premium hearing aids—the $6,800 is actually a bit less than some other medically prescribed hearing aids. (As with other high-end hearing aids, the price is part of a package that includes fitting and support from professional audiologists.)…

…It’s hard to measure hearing quality, but Fortell has set out to prove scientifically that it has a better solution to hearing loss. It contracted researchers in NYU Langone’s audiology and neuroscience departments to consult on a blind experiment comparing Fortell with the leading AI-powered hearing aid competitor, a Swiss company called Phonak, whose devices retail for $4,000 and which is considered the gold standard in AI hearing products. (In the study, Phonak isn’t mentioned by name and is identified only as the control hearing aid group.)

The test matched performance in environments where noise was coming at random intervals from three directions—kind of an emulation of the Cocktail Party Problem. “This is a configuration that’s particularly good to show the advantages of this aid, because what it does is actually extracting the various signals and getting rid of some of them,” says Mario Svirsky, the Noel L. Cohen Professor of Hearing Science at NYU School of Medicine, who consulted in the study (and was paid for his time).

Svirsky says the test and its goals were set out in advance. If it showed that Fortell notched a 4-decibel increase over its rival in boosting the desired signal, it would be a home run. But when they ran the study, the difference reported between the two devices was 9.2 dB in Fortell’s favor. “The results were overwhelming,” he says. “I’ve never seen such a categorical result in my career.” In one chart, the line representing the hearing improvement from Fortell virtually towered over the Phonak line. The study concluded, “In the most challenging multi-talker environment participants had 18.9X higher odds of understanding speech versus the top AI hearing aids on the market today.”
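Decibels are logarithmic, so the gap between the pre-registered 4 dB target and the reported 9.2 dB is larger than it looks at first glance. A quick conversion sketch (ours, not from the article), assuming the reported figures are signal-to-noise differences expressed in power terms:

```python
def db_to_power_ratio(db: float) -> float:
    # Decibels are logarithmic: every 10 dB is a 10x ratio in signal power.
    return 10 ** (db / 10)

print(f"{db_to_power_ratio(4.0):.1f}x")   # ~2.5x: the pre-agreed "home run" threshold
print(f"{db_to_power_ratio(9.2):.1f}x")   # ~8.3x: the difference the study reported
```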

Naturally, I sought comment from Phonak about those results. Michael Preuss, the lead audiologist for Phonak’s AI platform, has been wearing hearing aids since he was 3 years old. Phonak, he says, has been in the business for 75 years and has been working with AI in its products for the last quarter century, and for the last seven years has pursued the idea of producing an AI chip—just like Fortell. Phonak, too, has spent years developing and testing its AI system, which rolled out last year to what the company describes as acclaim and adoption. When I tell Preuss about how some startup he never heard of trounced his product in a head-to-head test, he seems unruffled. “We have seen in the past that there is no industry standard in how you set up these studies and how you do these kinds of measurements,” he says. “You can design studies to enhance your own performance.” To be sure, Fortell did set up conditions that played to its strengths. But Svirsky says that those conditions were the ones that matter to hearing aid wearers. Also, unlike almost all studies performed by hearing aid companies, Fortell has submitted its work for publication in a peer-reviewed journal.

4. “Suspicion of Gross Fraud”: some notes from passing on Intellego Technologies – Andrew Walker

The company at the center of this story is a tiny little Swedish company named Intellego Technologies; when I was researching them over the summer, they had a ~$200m market cap (note: I used USD there, but Intellego reports in SEK; for ease going forward, I will use SEK through the rest of this article. 10 SEK roughly equals $1, so just divide by ten to get to a rough USD number)…

…Bears claimed the company was…. let’s say incredibly sketchy. The financials didn’t really make sense. Despite seemingly massive profits, operating cash flow was basically non-existent. Bulls said the bears were missing the forest for the trees and misconstruing normal small company growth pains with something more nefarious…

…Obviously, that bull / bear debate seems to have been settled now; the stock getting halted because the company’s cash was frozen / the CEO getting arrested for “suspicion of gross fraud” has a way of settling debates…

…As I’ll detail, to say Intellego had a ton of red flags around it is an understatement.

But, even if you put those red flags to the side, there was a pretty easy reason not to invest: it was literally too good to be true…

…Here’s where the too-good-to-be-true part comes in: UVC dosimeters aren’t exactly an unknown technology; a quick amazon search reveals a heck of a lot of options for dosimeters. Sure, maybe a hospital grade disinfecting system needs something better than a color changing chameleon sticker, but this technology isn’t some wild revolutionary breakthrough. Intellego was guiding to more than 700m SEK in revenue and 400m SEK in EBIT for 2025. In USD, that’s ~$70m in revenue and $40m in profits, making Intellego a very large and profitable business…. and an extraordinarily fast growing one; revenue was ~260m SEK in 2024, and the company was suggesting >10B SEK (~$1B in USD) in sales in five years.

I could never find a single person who could explain to me why Intellego had a right to make such enormous margins and insane growth on a technology that seemed so simple / commoditized. I’d hear bulls wave their hands and say “probably some type of patent?”, but I’d never really hear a good answer why this was a defensible market that should yield such high profits / growth…

…As mentioned, in August Intellego guided to over 700m in revenue and 400m in EBIT for all of 2025. Intellego’s initial full-year guidance, which came in February 2025, had been for over 500m in revenue and 160m in EBIT for all of 2025 (which in itself represented insane growth from 2024’s ~260m in revenue). If you believed those numbers, the business was going parabolic. But look at those numbers: from February to August, Intellego increased their sales guidance by ~200m and their EBIT guidance by over 240m, which implies that the business was experiencing negative incremental costs. How?…
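Walker’s “negative incremental costs” point is worth spelling out. Using the guidance figures he cites (in millions of SEK), the implied margin on the incremental revenue exceeds 100%, something no real business can do:

```python
# Intellego's 2025 guidance, in millions of SEK (figures from the article).
feb_guidance = {"revenue": 500, "ebit": 160}
aug_guidance = {"revenue": 700, "ebit": 400}

delta_revenue = aug_guidance["revenue"] - feb_guidance["revenue"]  # +200
delta_ebit = aug_guidance["ebit"] - feb_guidance["ebit"]           # +240

incremental_margin = delta_ebit / delta_revenue   # 1.2, i.e. a 120% incremental margin
implied_cost_change = delta_revenue - delta_ebit  # -40: total costs would have to FALL

print(f"Incremental EBIT margin: {incremental_margin:.0%}")
print(f"Implied change in costs: {implied_cost_change}m SEK")
```

A 120% incremental margin means every extra krona of guided revenue supposedly arrived while total costs fell, which is exactly the too-good-to-be-true signature Walker describes.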

… I did want to share one last tidbit from my Intellego research: my call with their (again, I assume soon to be former) CEO. I had a call with him in early August to talk about the company. It was a really weird call (my first notes from the call were “weird call”) for a bunch of reasons, including that he showed up ~ten minutes late. I won’t get into all of the details of the call, but there is one specific thing that I’ve been thinking about a lot with the benefit of hindsight that might be interesting.

I spent most of the call pressing on my key question: how could a product that seemed so simple / commoditized generate such high margins / insane growth? The CEO was pretty dismissive of those concerns (at least in my opinion), and on the heels of the call I would have a lot of mental debate with myself: was he dismissive because he was crazy, or was he dismissive because there was something so good about the product that he knew he had the right to be dismissive (was Steve Jobs crazy to be dismissive of the Zune?). The interesting thing is that he was quite cavalier on all of my questions about competition…. but he was completely honed in when I asked him questions about the company’s accounts receivable. Multiple times he told me “our one weakness is accounts receivable” or “we know that the receivables are our big weakness.”

I’ve heard CEOs mention receivables as an opportunity to improve (i.e. bring receivables from 60 days to 50 days and ROIC improves markedly!), but I’ve never heard a CEO say they were a weakness, let alone the company’s sole weakness! It just seemed like a really weird focus / Achilles heel for a company whose products were so in demand that revenue was set to ~triple, and it seemed like a strange thing for a CEO to be so singularly focused on.

5. The Untold Story of Charlie Munger’s Final Years – Gregory Zuckerman

In the year before his death, Munger made over $50 million from a bet on an out-of-favor industry he had shunned for 60 years. He revved up his real-estate activities, working with a young neighbor to place big, long-term wagers, unusual for a nonagenarian. He faced down health challenges and wrestled with the future.

“Even a week or two before passing away, he was asking questions such as, ‘Does Moore’s Law apply in the age of AI?’” recalls his friend Jamie Montgomery, referring to whether artificial intelligence would see exponential gains like those experienced in computational power…

…Munger made his own investments, too. Sitting in a recliner in his library, he’d grab green Value Line binders from a nearby desk and pore through data on publicly traded companies.

For decades, he barely looked at coal stocks, friends say, but in 2023, these companies grabbed his attention. Coal usage was in a long-term decline, and investors saw a bleak future for the industry. Yet many producers remained profitable, trading at inexpensive levels. Coal will remain necessary as global energy demand grows, Munger argued to friends and others.

“He read an article that said coal was down the chute,” Borthwick recalls. “He said, ‘Horse feathers.’ ”

In May 2023, Munger purchased shares of coal miner Consol Energy. Later in the year, he bought shares of Alpha Metallurgical Resources, which produces coal for steel production. By the time of Munger’s death, Consol had doubled in value. Alpha had also surged. Together he scored paper gains of more than $50 million, friends say…

…Back in 1978, a surgeon had bungled cataract surgery, leaving him blind in his left eye. He learned to compensate, installing bright lights around the house. Around 2014, though, Munger experienced a problem in the optic nerve of his right eye. He faced the possibility of going blind—yet he took the setback in stride, says Li Lu, a regular visitor. Munger decided to adjust his life, asking others to read to him and contemplating other steps.

“I’ll have to learn Braille,” he told one friend. He had studied it after his botched cataract surgery but never mastered it. He was ready to try again.

That turned out not to be necessary. His right eye slowly improved, but Munger’s movement became constricted…

…Munger was counting down to a 100th birthday party on Jan. 1, 2024. Friends and longtime business associates including Jim Sinegal, Costco’s co-founder, planned to fly to Los Angeles for the festivities.

Munger’s health was faltering, though. He sensed the end was near. When a friend asked how he was feeling, he replied: “There’s a lot wrong with me.”

When he discussed his legacy, he said he was comfortable with his accomplishments and optimistic about Berkshire’s future. 

“Once it’s built, you don’t need to be Warren and Charlie,” he told a friend. “What we have is a framework for looking at investments.”

Near the end of life, Munger leaned on humor for strength. He told family members that Diet Coke was responsible for his longevity, lightening the mood.

And he shared a wish with a visitor.

“Oh, to be 86 again,” he said.

Late on Thanksgiving evening two years ago, days before his death, Munger was admitted to a hospital near Montecito. He asked family members to leave the room so he could call Buffett one last time.

They shared a last farewell.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple. Holdings are subject to change at any time.

An Easy Rule-of-Thumb to Avoid Frauds

Check a company’s net income and cash flow.

I recently wrote briefly about Intellego Technologies (SSE: INT) and how the company had possibly committed fraud:

“Earlier this week, the company’s CEO was arrested on “suspicion of gross fraud”, and SEK 100 million of its cash reserves were seized by Swedish authorities. The trading of Intellego Technologies’ shares on the Swedish stock market was also suspended.

Although not much is known yet of the apparent misdeeds conducted by the CEO, the “gross fraud” is “related to [Intellego Technologies’] press releases and quarterly reports in 2025.””

There is one tell-tale sign that Intellego Technologies’ management was highly likely to have engaged in chicanery: a huge discrepancy between the company’s net income and operating cash flow, as shown in Table 1. This is a well-known flag for possible financial wrongdoing, described in forensic accountant Howard Schilit’s book, Financial Shenanigans.

Table 1; Source: TIKR

Not every company whose net income and operating cash flow diverge for some time is fraudulent. Netflix* is a great counterexample. Table 2 shows the company’s net income and operating cash flow over the past decade. The two financial numbers took very different paths from 2015 to 2019 before eventually converging.

Table 2; Source: TIKR

Nonetheless, a good rule of thumb to avoid a company in the stock market that is conducting fraud is to watch its net income and operating cash flow. If a company’s net income looks much better than its operating cash flow for some time, it pays to look beneath the hood.

*There are no certainties in the world of finance, so there is still a very remote possibility that Netflix is a fraud, although the probability decreases with each passing year. In any case, Netflix’s net income and operating cash flow took different paths from 2015 to 2019 because the company was investing heavily in developing its in-house content library, which required cash upfront.
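For readers who want to turn the rule of thumb into a screen, here is a minimal sketch. This is our illustration, not Schilit’s exact method, and the 70% cutoff is an arbitrary illustrative threshold, not a published standard:

```python
def flag_ni_ocf_divergence(net_income: list[float],
                           op_cash_flow: list[float],
                           threshold: float = 0.7) -> bool:
    """Flag a company whose cumulative operating cash flow falls well short of
    cumulative net income over several years. A flag is a prompt to look beneath
    the hood, not proof of fraud (see the Netflix counterexample above)."""
    cum_ni = sum(net_income)
    cum_ocf = sum(op_cash_flow)
    return cum_ni > 0 and cum_ocf < threshold * cum_ni

# Stylized, made-up numbers: large reported profits, but almost no cash ever arrives.
print(flag_ni_ocf_divergence([50, 120, 260], [5, -10, 20]))  # True: dig deeper
```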


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have a vested interest in Netflix. Holdings are subject to change at any time.

What We’re Reading (Week Ending 23 November 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 23 November 2025:

1. Blue Owl private credit fund merger leaves some investors facing 20% hit – Antoine Gara

Earlier this month, Blue Owl told its shareholders that it planned to merge its Blue Owl Capital Corporation II fund, which has $1bn in assets and was one of the first private debt funds targeting wealthy individual investors, with its OBDC fund, which has $17bn in assets.

Blue Owl Capital Corporation II investors are being asked to exchange their shares in the private fund for shares in OBDC at the stated net asset value of both funds. However, OBDC trades on public markets at a discount of about 20 per cent to the stated value of its assets. Blue Owl Capital Corporation II, meanwhile, is not publicly traded and instead offers investors the ability to redeem cash every quarter at the fund’s stated value.

If the mooted deal were to be approved by shareholders and completed at current prices, Blue Owl Capital Corporation II shareholders would see the value of their investments fall by about 20 per cent.

Blue Owl Capital Corporation II investors will be restricted from pulling money from the fund until the merger with OBDC closes in early 2026, at which time they will permanently lose the ability to redeem cash at the fund’s NAV…

…Jonathan Lamm, chief financial officer of OBDC, conceded in an interview with the Financial Times that at current prices, the investors in Blue Owl Capital Corporation II could take a potential haircut on their investments. But he said the merger came with significant benefits, such as the ability to own more liquid shares in OBDC, which trade on the New York Stock Exchange.

2. Blue Owl’s clever private-to-public deal makes investors see red – Sujeet Indap

Blue Owl, a US-based private capital firm, just took a bruising in such a skirmish. On Wednesday it cancelled a planned merger between two affiliates that lend to middle-market companies. One of these “business development companies” is publicly traded; the other is private, so its investors have more limited opportunities to sell their holdings.

While now dead, the merger deserves study. Here is how it worked: investors in the unlisted company would have received shares in the listed one. Measured in terms of fund assets, the swap was a wash: an owner of $1 of what sits in the unlisted bucket would still hold a claim on $1 of stuff in the enlarged, listed counterpart.

The catch was that the listed company’s shares were trading in the market at a 20 per cent discount to their net asset value. So in return for getting access to an investment they could sell whenever they liked, Blue Owl’s clients were taking a pretty sharp haircut if they wanted to sell immediately. Predictably, they cried foul.

While that’s the simplified version of events, the deal actually came with some pretty complex engineering. Had the acquiring publicly traded BDC been trading at a premium to its net assets, the exchange would be calibrated based on its share price, not the — lower — net asset value. In return for their $1 of assets they would get paper they could sell into the market also for $1, but representing a claim on stuff worth less than that.

Confusing? Welcome to private markets.
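If the mechanics are hard to hold in your head, here's a minimal sketch of the arithmetic in Python. The 20 per cent discount is from the articles above; the 10 per cent premium case is hypothetical, purely to illustrate the reverse calibration described.

```python
# A minimal sketch (hypothetical round numbers) of the proposed NAV-for-NAV swap.
nav_held = 1.00   # $1 of net asset value in the unlisted fund
discount = 0.20   # listed fund trades ~20% below its stated NAV

# Shares are exchanged at stated NAV, so the investor still holds a claim on
# $1 of assets, but the market will only pay NAV * (1 - discount) for it.
market_value_if_sold = nav_held * (1 - discount)   # $0.80 -> ~20% haircut

# The reverse case described above: had the listed fund traded at a premium,
# the exchange would be set off the (higher) share price, so $1 of unlisted
# assets buys paper saleable for $1 but backed by less than $1 of assets.
premium = 0.10
assets_backing_paper = 1.00 / (1 + premium)   # ~$0.91 of assets per $1 of market value

print(market_value_if_sold, round(assets_backing_paper, 2))
```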

3. Going All-In on MSTR – Ben Carlson

A reader asks:

Let’s say I have a brother. Let’s say he was on a lucky hot streak this year YOLO’ing into the most speculative plays in the market (quantum, crypto, meme stocks, etc) and was up 100% YTD. Pressing his luck, he thought it was a good idea to put nearly all of his portfolio into MSTR (using margin for more leverage) when it was trading in the 300’s and he is now down 50%. I told him to never touch MSTR with a 10-foot pole and if he was bullish Bitcoin, just buy Bitcoin. I also told him many times to never use margin, especially on high risk stocks. He is at risk of a significant % of his net worth (>50%) going away forever with a home purchase on the horizon as well that’s in jeopardy. Now he suddenly wants my advice on how to get out of this mess. I told him I don’t know and I honestly don’t. It’s a darned if you do, darned if you don’t lesser of two evils situation. How do you deal with clients that consistently ignore your advice and now want your help getting out of a mess?…

…This is the problem with the bull market brain you get from making big gains in the markets. It’s difficult to know if you’ve morphed into a degenerate gambler when you’re making money. Investors who have taken on excessive levels of risk the past few years have been compensated for it.

Once you get a couple of big wins under your belt it’s easy to let things get out of control.

Strategy (formerly MicroStrategy) was in the $300s when the brother got into the stock. Now it’s well below $200 and falling fast…

…Here’s the thing — you could try to offer sensible advice. Sell now before it gets worse and you get a huge margin call. Invest in something far more reasonable and diversified.

I’m not sure it will matter.

When I first started my blog I had this dream that I could somehow save people from making illogical financial decisions. After creating financial content for more than a decade now, I’ve come to realize that some people cannot be saved.

They are doomed to make money mistake after money mistake and there’s nothing you can do about it.

Then there are others who need to make a huge mistake before having an ah-ha moment of realization that they need to change their behavior. Some people do change their stripes but it’s not easy.

4. A Century-Old Classic Buffett Would Love – John Garrett

Every so often you stumble across a book so old, so unassuming, that it shouldn’t have any relevance to modern investing… and yet it reads as if it were written yesterday.

That was my experience with R.W. McNeel’s 1927 gem, Beating the Market. Nearly a century old, it feels startlingly contemporary…

…Although it was published three years before Warren Buffett was born, the lessons in this little volume closely mirror his own philosophy: buy below intrinsic value, bet on America, stay unemotional, seek value, avoid new issues, ignore brokers, be patient, resist the crowd, and focus on businesses with quality management — to name just a few.

You’ll find the similarities striking…

…“Before one starts in to speculate, therefore, he should paste this old creed in his hat: ‘I believe in my country – The United States of America. I believe in the American people, their genius, their brains, and their brawn. I believe in their honesty, and their integrity and dependability. I believe that nothing can stand in the way of their commercial advancement and prosperity.’” R.W. McNeel…

…“Charlie and I have always considered a ‘bet’ on ever-rising U.S. prosperity to be very close to a sure thing. Indeed, who has ever benefitted during the past 237 years by betting against America? If you compare our country’s present condition to that existing in 1776, you have to rub your eyes in wonder. And the dynamism embedded in our market economy will continue to work its magic. America’s best days lie ahead.” Warren Buffett…

…“Hold firm the principles underlying all successful speculation, that earning power makes values, and values make prices in the long run, and, having in mind the value based on earning power of any particular stock.” R.W. McNeel

“Put together a portfolio of companies whose aggregate earnings march upward over the years, and so also will the portfolio’s market value.” Warren Buffett…

…“One chief reason many fail to buy stocks when they are low is because of fear. Periodically prices of stocks representing ownership in the great productive industries of the United States and her great railroad systems fall so far that ownership in them is selling for 25 to 50 cents on the dollar of the value of the bricks and mortar and working capital which the stocks represent. But the majority of people will not buy them then because they are afraid. If they would analyze the cause of their fear they would discover it to be due to doubt as to the very stability of American institutions, for nothing less fearsome would justify certificates of ownership in the great industries of the nation selling at such ridiculous prices.” R.W. McNeel…

…While Buffett ultimately built a far broader and more sophisticated investing framework than McNeel could ever have imagined, the foundations McNeel laid in 1927 remain remarkably solid. Strip away the technology, the speed, the data, and the noise, and you find the same timeless principles: discipline, patience, rationality, independent thought, and a focus on value anchored in real businesses run by real people.

That is why this nearly century-old book still feels so alive. Markets evolve, but human nature does not. The behaviours that drove booms and busts in McNeel’s era are the same forces we wrestle with today — fear, greed, impatience, imitation, overconfidence, and the lure of the crowd.

Or, as Buffett put it most succinctly:

“Humans behave the way humans behave, and they’re going to continue to behave that way in the next 50 years.”

McNeel understood that in 1927.

5. Robotaxis and Suburbia – Ben Thompson

Another classic of the Uber bear genre was this 2014 post by NYU finance professor Aswath Damodaran attempting to determine Uber’s true value; the startup had just raised $1.2 billion at a $17 billion valuation, and according to Damodaran’s calculations, “it is difficult to justify a price greater than $10 billion” (his actual valuation was $5.9 billion). Investor Bill Gurley — before his dramatic powerplay that led to the ouster of founder Travis Kalanick — explained what Damodaran got wrong in How to Miss By a Mile: An Alternative Look at Uber’s Potential Market Size:

The funny thing about “hard numbers” is that they can give a false sense of security. Young math students are warned about the critical difference between precision and accuracy. Financial models, especially valuation models, are interesting in that they can be particularly precise. A discounted cash flow model can lead to a result with two numbers right of the decimal for price-per-share. But what is the true accuracy of most of these financial models? While it may seem like a tough question to answer, I would argue that most practitioners of valuation analysis would state “not very high.” It is simply not an accurate science (the way physics is), and seemingly innocuous assumptions can have a major impact on the output. As a result, most models are used as a rough guide to see if you are “in the ball park,” or to see if a particular stock is either wildly under-valued or over-valued…

Damodaran uses two primary assumptions that drive the core of his analysis. The first is TAM, and the second is Uber’s market share within that market. For the market size, he states, “For my base case valuation, I’m going to assume that the primary market Uber is targeting is the global taxi and car-service market.” He then goes on to calculate a global estimate for the historical taxi and limousine market. The number he uses for this TAM estimate is $100 billion. He then guesses at a market share limit for Uber – basically a maximum in terms of market share the company could potentially achieve. For this he settles on 10%. The rest of his model is rather straightforward and typical. In my view, there is a critical error in both of these two core assumptions.

Gurley argued — correctly in retrospect, given that Uber’s gross bookings over the last 12 months were $93 billion in rides and $86 billion in deliveries — that Damodaran failed to consider how a radically better experience could dramatically expand the addressable market, and completely missed the potential for network effects leading to an outsized share of that expanded market…
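Damodaran's numbers make for a quick back-of-envelope check. Here's a minimal sketch in Python of the "TAM × share" arithmetic; the $100 billion market and 10% share cap are from the excerpt above, while the expanded inputs are hypothetical, purely to illustrate Gurley's critique that the output is only as good as the two assumptions.

```python
# A minimal sketch of the "TAM x market share" arithmetic in Damodaran's
# 2014 base case, and of how sensitive the output is to the two inputs.

def implied_bookings(tam_bn: float, share: float) -> float:
    """Peak gross bookings implied by a market size and a market-share cap."""
    return tam_bn * share

print(implied_bookings(100, 0.10))   # Damodaran's base case: $10bn
print(implied_bookings(500, 0.30))   # hypothetical expanded market + network effects

# Uber's actual trailing-12-month gross bookings cited above: ~$179bn,
# far outside anything the original inputs could have produced.
```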

…That last sentence was about Uber’s diminished bargaining power vis-à-vis a centralized robotaxi operator versus individual drivers, and it’s an important one in terms of Uber’s long-term valuation. However, as robotaxis continue to expand — Waymo is now in five cities (three via their own service, two via Uber), Tesla (with human supervisors in the car) in two, and Amazon’s Zoox in one — I do wonder if I am making a similar mistake to Horan and Damodaran.

First, like Horan, am I too caught up in the current economics of robotaxis? As an apostle of zero marginal costs I am intrinsically allergic to the depreciation inherent in the cars themselves, along with the significant marginal costs in terms of energy and insurance; Uber side-stepped this by offloading those costs to the drivers. Can scale solve this? At some point — Cybercab already points to this future — vehicles will be purpose-built at scale to be robotaxis, and my experience with Full Self-Driving (Supervised) has me convinced that insurance costs will be manageable, not just because of scale, but because there will be fewer accidents.

Second, like Damodaran, am I limiting my thinking by focusing on the current market — even if that market is already massively larger than the taxi & limo market ever was? The experience of a Waymo is certainly magical; it’s also peaceful, and by removing the human from the equation, provides a sense of safety and security that Uber has always struggled with. This last point could address a major suburban pain point, which is kids: the lockdown in kids’ freedom corresponded with a dramatic rise in organized activities, the sheer volume of which leaves lots of parents feeling like unpaid Uber drivers themselves. Some may rely on Uber to solve this problem; it seems likely to me far more would be willing to entrust their children to a Waymo.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Waymo), Amazon, and Tesla. Holdings are subject to change at any time.

Shorting Stocks Is Incredibly Tough To Do

It’s not just the fundamentals of a business that you have to get right.

Occasionally, the investing lesson that shorting stocks is an immensely tough way to invest in the stock market is reinforced for me.

I published Why It’s So Difficult To Short Stocks and Shorting Stocks Is Hard, Really Hard in April 2020 and February 2024, respectively. In these articles, I shared how Luckin Coffee (OTC: LKNCY) and Herbalife (NYSE: HLF) made life treacherous for their short-sellers because their stock prices rose strongly in the interim before sinking. In particular, Luckin Coffee’s stock price rose even while management was committing fraud. I wrote in Why It’s So Difficult To Short Stocks:

“It turns out that fraudulent transactions at Luckin could have happened as early as April 2019. From 1 April 2019 to 31 January 2020, Luckin’s share price actually increased by 59%. At one point, it was even up by nearly 150%.

If you had shorted Luckin’s shares back in April 2019, you would have faced a massive loss – more than what you had put in – even if you had been right on Luckin committing fraud. This shows how tough it is to short stocks. Not only must your analysis on the fundamentals of the business be right, but your timing must also be right because you could easily lose more than you have if you’re shorting.”

I was reminded of Luckin Coffee by Intellego Technologies (SSE: INT). Based in Sweden, Intellego Technologies presumably offers dosimeters. When a surface requires disinfection with UV radiation, it is important to know if the surface has received the adequate dosage. This is where Intellego Technologies’ dosimeters are claimed to be able to help; they are devices that change colour after exposure to a certain amount of UV radiation.

Earlier this week, the company’s CEO was arrested on “suspicion of gross fraud”, and SEK 100 million of its cash reserves were seized by Swedish authorities. The trading of Intellego Technologies’ shares on the Swedish stock market was also suspended.

Although not much is known yet of the apparent misdeeds conducted by the CEO, the “gross fraud” is “related to [Intellego Technologies’] press releases and quarterly reports in 2025.”

Intellego Technologies held its IPO in June 2021. Its stock price closed at less than SEK 5 on the day of its listing. At the start of 2025, Intellego Technologies’ stock price was SEK 41. It climbed to SEK 78 at the end of June, before closing at a peak of SEK 213 in early September. Just prior to the trading suspension, Intellego Technologies’ stock price had fallen to SEK 47.

An investor who thought back in June 2025 that the company had committed fraud, and thus shorted its shares at the end of that month, had to endure a gain of roughly 170% in the stock price, from SEK 78 to SEK 213, before the collapse to SEK 47 occurred. And this happened even though the investor was most likely right about Intellego Technologies being a fraud. This echoes what happened with Luckin Coffee from April 2019 to April 2020, and just reinforces the lesson for me on how incredibly tough shorting stocks is. Both your analysis of the fundamentals of a business and your timing must be right, because you could easily lose more than you have when you’re short selling.
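The arithmetic of that squeeze is worth spelling out. Here's a minimal sketch in Python using the share prices cited above; it shows how a short that is ultimately "right" can first lose far more than 100% of the position.

```python
# Short-seller mark-to-market maths with the Intellego prices cited above (SEK).
# A short loses more than 100% of the proceeds when the stock more than doubles.

entry = 78     # short sold at the end of June 2025
peak = 213     # early-September peak
collapse = 47  # price just before the trading suspension

loss_at_peak = (peak - entry) / entry      # ~173% loss on the position at the peak
gain_if_held = (entry - collapse) / entry  # ~40% profit, but only if the short
                                           # survived the squeeze without a margin call

print(f"{loss_at_peak:.0%} loss at the peak, {gain_if_held:.0%} gain if held through")
```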


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 16 November 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 16 November 2025:

1. Berkshire Hathaway Inc. News Release – Warren Buffett

One perhaps self-serving observation. I’m happy to say I feel better about the second half of my life than the first. My advice: Don’t beat yourself up over past mistakes – learn at least a little from them and move on. It is never too late to improve. Get the right heroes and copy them. You can start with Tom Murphy; he was the best.

Remember Alfred Nobel, later of Nobel Prize fame, who – reportedly – read his own obituary that was mistakenly printed when his brother died and a newspaper got mixed up. He was horrified at what he read and realized he should change his behavior.

Don’t count on a newsroom mix-up: Decide what you would like your obituary to say and live the life to deserve it.

Greatness does not come about through accumulating great amounts of money, great amounts of publicity or great power in government. When you help someone in any of thousands of ways, you help the world. Kindness is costless but also priceless. Whether you are religious or not, it’s hard to beat The Golden Rule as a guide to behavior.

I write this as one who has been thoughtless countless times and made many mistakes but also became very lucky in learning from some wonderful friends how to behave better (still a long way from perfect, however). Keep in mind that the cleaning lady is as much a human being as the Chairman.

2. BlackRock Faces 100% Loss on Private Loan, Adding to Credit Market Pain – Davide Scigliuzzo and Silla Brush

About a month ago, BlackRock Inc. deemed the private debt it had extended to Renovo Home Partners, a struggling home improvement company, to be worth 100 cents on the dollar. As of last week, the firm had a new assessment: zero.

The drastic revision comes as Dallas-based Renovo — a roll-up of regional kitchen and bathroom remodeling businesses created by private equity firm Audax Group in 2022 — abruptly filed for bankruptcy last week, indicating it plans to shut down…

…It was no mystery Renovo was in a tough spot. In April, lenders had agreed to take losses and convert some of their loans into equity as part of a recapitalization that was supposed to give the company a chance to turn its business around, the people said. In the third quarter, they also allowed for deferred cash interest payments on its restructured debt, an arrangement known as payment-in-kind, regulatory filings show.

Yet at the end of September, funds managed by BlackRock and MidCap Financial were still marking the new Renovo debt at par, which typically indicates investors expect to be paid back in full.

3. Not Joined at the Hip: The Relationship between the Fed Funds Rate and Mortgage Rates – David Pendered

A time-honored, but flawed, assumption about the relationship between mortgage rates and interest rates has been turned on its head as the two have moved in opposite directions following the Federal Reserve’s interest rate cuts over the past year…

…But the Federal Reserve doesn’t set mortgage rates. Instead, the Fed sets short-term interest rates—often called the fed funds rate—in an effort to fulfill its dual mandate from Congress: promoting maximum employment and stable prices. The Fed’s short-term rates factor into how banks and financial institutions set many other rates, such as those for business loans, credit cards, and auto loans. And, of course, mortgages…

…Kris Gerardi and Domonic Purviance, both of the Atlanta Fed, explained that the presumed connection between mortgage rates and the fed funds rate is a misconception. For the past 20 years, mortgage rates have been more closely associated with the interest paid on 10-year Treasury notes than with the fed funds rate set by the FOMC, according to Gerardi, a financial economist who studies real estate finance and housing economics, and Purviance, a subject matter expert who analyzes risk in the housing market and threats it could pose to the financial system.

“While mortgage rates do, typically, move fairly closely with short-term interest rates like the fed funds rate, they are more strongly linked to longer-term rates such as the 10- or 20-year Treasury yield,” Gerardi said. “This is because the average life of a mortgage is around seven to 10 years.”

Gerardi observed that many factors determine longer-term yields on Treasuries and that the Fed’s short-term interest rates are just one factor. Others include the market’s expectation for economic growth, the federal government’s fiscal policies on spending and taxation, inflation expectations, lender capacity as homeowners refinance their mortgages, borrowers’ credit risk, and so forth. Gerardi said, “This means that, at times, mortgage rates and short-term rates can move in opposite directions.”
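This claim is straightforward to check yourself. The sketch below is a minimal example assuming the pandas-datareader package is installed; MORTGAGE30US, DGS10 and FEDFUNDS are the standard FRED series codes for the 30-year mortgage rate, the 10-year Treasury yield, and the effective fed funds rate.

```python
# Pull the three rates from FRED and compare correlations. If the Atlanta Fed
# economists are right, the mortgage rate should track DGS10 more tightly
# than FEDFUNDS. Requires: pip install pandas-datareader
import pandas_datareader.data as web

start, end = "2005-01-01", "2025-11-01"
series = ["MORTGAGE30US", "DGS10", "FEDFUNDS"]
df = web.DataReader(series, "fred", start, end)

monthly = df.resample("M").mean()       # align weekly/daily/monthly frequencies
print(monthly.corr()["MORTGAGE30US"])   # expect a tighter link to DGS10
```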

4. The Benefits of Bubbles – Ben Thompson

Late last year Byrne Hobart and Tobias Huber made a new contribution to our understanding of bubbles with their book Boom: Bubbles and the End of Stagnation. While Perez focused on the benefits that came from financial speculation leading to long-term infrastructure, Hobart and Huber identified another important feature of what they called “Inflection Bubbles” — the good kind of bubbles, as opposed to the much more damaging “Mean-reversion Bubbles” like the 2000s subprime mortgage bubble. First, here is Hobart and Huber’s definition of an inflection bubble:

Inflection-driven bubbles have fewer harmful side effects and more beneficial long-term effects. In an inflection-driven bubble, investors decide that the future will be meaningfully different from the past and trade accordingly. Amazon was not a better Barnes & Noble; it was a store with unlimited shelf space and the data necessary to make personalized recommendations to every reader. Yahoo wasn’t a bigger library; it was a directory and search engine that made online information accessible to anyone. Priceline didn’t want to be a travel agent; it aspired to change the way people bought everything, starting with plane tickets.

If a mean-reversion bubble is about the numbers after the decimal point, an inflection bubble is about orders of magnitude. A website, a PC, a car, a smartphone — these aren’t five percent better than the nearest alternative. On some dimensions, they’re incomparably better. A smartphone is a slightly more convenient tool than a PC for taking a photo and quickly uploading it to the internet, but it’s infinitely better at navigation. A car is not just slightly faster and more reliable than a horse (although in the early days of the automobile industry, it was apparently common for pedestrians to yell “Get a horse!” at passing motorists); cars transformed American cities. Modern-day Los Angeles is inconceivable on horseback. The manure problem alone beggars the imagination.

This is what makes inflection bubbles valuable:

The fundamental utility of inflection bubbles comes from their role as coordinating mechanisms. When one group makes investments predicated on a particular vision of the future, it reduces the risk for others seeking to build parts of that vision. For instance, the existence of internet service providers and search engines made e-commerce sites a better idea; e-commerce sites then encouraged more ad-dependent business models that could profit from directing consumers. Ad-dependent businesses then created more free content, which gave the ISPs a better product to sell. Each sector grew as part of a virtuous circle…

… In this case, the optimistic take would be that AI is already delivering tangible benefits, that those benefits are leading to real demand from companies and consumers, and that all of the money being spent on AI will not be wasted but put to productive use. That may still be the case today — all of the hyperscalers claim that demand for their offerings exceeds supply — but if history is any indication we will eventually overshoot.

There is, however, a pessimistic way to ask that question: will the AI bubble be beneficial like the positive bubbles chronicled by Perez and Hobart and Huber, or is it different? There have been reasons to be worried about both the physical buildout and the cognitive one.

Start with the physical: a huge amount of the money being spent on AI has gone to GPUs, particularly Nvidia, rocketing the fabless design company to a nearly $5 trillion valuation and the title of most valuable company in the world. The problem from a Perez perspective is that all of this spending on chips is, relative to the sort of infrastructure she wrote about — railroads, factories, fiber, etc. — short-lived. Chips break down and get superseded by better ones; most hyperscalers depreciate them over five years, and that may be generous. Whatever the correct number is, chips don’t live on as fully-depreciated assets that can be used cheaply for years, which means that to the extent speculative spending goes towards GPUs is the extent to which this bubble might turn out to be a disappointing one.
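To put numbers on that contrast, here's a minimal sketch with illustrative costs: a GPU written off on the 5-year schedule mentioned above versus a hypothetical 20-year life for Perez-style infrastructure such as power plants or fabs.

```python
# Straight-line book value of two assets over time (illustrative figures only).
# The point: the GPU leaves nothing behind for the post-bubble economy to use
# cheaply, while long-lived infrastructure retains value for decades.

def book_value(cost: float, life_years: int, age_years: int) -> float:
    """Remaining straight-line book value of an asset, floored at zero."""
    return max(cost * (1 - age_years / life_years), 0)

for age in (2, 5, 10):
    gpu = book_value(100, 5, age)     # hyperscaler-style 5-year GPU schedule
    infra = book_value(100, 20, age)  # hypothetical 20-year power/fab asset
    print(f"year {age}: GPU {gpu:.0f}, infrastructure {infra:.0f}")
```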

Fortunately, however, there are two big areas of investment that promise to have much more long-term utility, even if the bubble pops.

The first is fabs — the places where the chips are made. I’ve been fretting about declining U.S. capacity in this area, and the attendant dependence on Taiwan, the most fraught geopolitical location in the world, for years, and for much of that time it wasn’t clear that anything would be done about it. Fast forward to today, and not only are foundries like TSMC and Samsung building fabs in the U.S., but the U.S. government is now a shareholder in Intel. There is still a long path to foundry independence for the U.S., particularly once you consider the trailing edge as well, but there is no question that the rise of AI has had a tremendous effect in focusing minds and directing investment towards solving a problem that might never have been solved otherwise.

The second is power. Microsoft CFO Amy Hood said on the company’s earnings call:

As you know, we’ve spent the past few years not actually being short GPUs and CPUs per se, we were short the space or the power, is the language we use, to put them in. We spent a lot of time building out that infrastructure. Now, we’re continuing to do that, also using leases. Those are very long-lived assets, as we’ve talked about, 15 to 20 years. And over that period of time, do I have confidence that we’ll need to use all of that? It is very high…

…It’s hard to think of a more useful and productive example of a Perez-style infrastructure buildout than power. It’s sobering to think about how many things have never been invented because power has never been considered a negligible input from a cost perspective; if AI does nothing more than spur the creation of massive amounts of new power generation it will have done tremendous good for humanity. Indeed, if you really want to push on the bubble benefit point, wiping away the cost of building new power via bankruptcy of speculative investors — particularly if a lot of that power has low marginal fuel costs, like solar or nuclear — could be transformative in terms of what might be invented in the future…

…I’ve been less worried about the cognitive capacity payoff of the AI bubble for a while: while there might have been concern about OpenAI having an insurmountable lead, or before that Google being impregnable, nearly everyone in Silicon Valley is now working on AI, and so is China. Innovations don’t stay secret for long, and the time leading edge models stay in the lead is often measured in weeks, not years. Meanwhile, consumer uptake of AI is faster than any other tech product by far.

What is exciting about the last few weeks, however, is that there is attention being paid to other parts of the stack, beyond LLMs. For example, last week I interviewed Substrate founder James Proud about his attempt to build a new kind of lithography machine as the center of a new American foundry. I don’t know if Proud will succeed, but the likelihood of anyone even trying — and of getting funding — is dramatically higher in the middle of this bubble than it would have been a decade ago.

It was also last week that Extropic announced a completely new kind of chip, one based not on binary 1s and 0s, but on probabilistic entropy measurements, that could completely transform diffusion models. Again, I don’t know if it will succeed, but I love that the effort exists, and is getting funding. And meanwhile, there are massive investments by every hyperscaler and a host of startups to make new chips for AI that promise to be cheaper, faster, more efficient, etc. All of these efforts are getting funding in a way they wouldn’t if we weren’t in a bubble.

5. An Interview with Michael Morton About AI E-Commerce – Ben Thompson and Michael Morton

What we started to do is we took a couple different products and we ran them through the traditional funnel and we’ll go back to the first example I used, shoes for flat-footed runners. What I did to start the exercise was I did hours and hours of research reading literally podiatry magazine posts, and every single post about the best running shoes for flat feet, I organized them, I ranked them, so what shoes got first and second, and we came out with some clear winners. “Here are the one, two, and three best running shoes for people with flat feet”, so we know what the best answer is.

Now let’s put it in Google search, and what you found was the PLAs at the top, the carousel where you’ll see a set of icons, which are horrible for getting the right answer.

So are those pure payment to get there, or is Google actually making a determination of what’s the best answer?

MM: Yeah, for the work we did, one of the six was one of the top-ranked running shoes, and when you looked at the models, their slugging percentage was, I would say, 60% to 80% of the time what they showed you out of the five icons were the best running shoes. So if they had five, they’d get one bad one.

Now, that’s a good question; people have pushed back, “Well, how can these people be at the top of the feed if they’re paying for it?”, and this inevitably boils down to a conversion game. Shouldn’t it really only be the best products? And in an ideal state, yes, but this is also an output of which websites have better conversion rates. Who has bigger marketing budgets? Who’s looking to build a brand at this specific time? No one knows a perfect answer for the weightings and outputs of Google Search. Well, there are people, but their emails have @google.com, not our email addresses.

So why did Google’s results get like this, to the extent that you feel one out of six was a good answer? And you contrast that with ChatGPT, where four out of six are good answers. Is this a matter of, to your point, they’re measuring things like conversion factors, what actually goes through? Is it that some people just paid more? Was this something that they can fix, or is it that the money flowing in is so much that they can’t actually recommend four out of six because two, three and four might not pay them very much? What happened?

MM: This is probably an hour podcast in itself, but to try to simplify it as best as possible, I think there’s a lot of influencing factors. We are all very familiar with the gamification that has occurred with search, the entire giant industry of SEO, an army of marketing consultants to tell you how to win the keyword bidding game…

…MM: Yes. And look, before I came on here today, I re-ran the exercise, and search was again one for six for the shoe. But then I did AI mode in Google for the flat-footed running shoes — basically batted perfect, just incredible.

So that’s the question. Can Google fix this?

MM: Yeah. Michael Nathanson and I, I was like the devil on his shoulder while Google was going down every day, ChatGPT is just adding users and the bear case is just building and building, building and I’m over there, I’m like, “Oh, they got a problem, Michael, they got a problem”, and Michael’s been doing this for long enough where it’s really hard in these moments to see through this overwhelming wave of negative sentiment. And the day after Google I/O, I go into Michael’s office, I’m like, “Okay, I think they’re going to run towards this problem”, and now you’re sitting on the biggest distribution network in the world, the best AI infrastructure stack, and you’ve increased the friction from moving from being a Google user to a ChatGPT user. So people like you and I were ChatGPT probably day one, my mom and wife are now just going to end up being AI overviews and AI mode and maybe never ChatGPT people. So I think Google has the tool set to win this…

So, who is the number one winner? Let’s grant this is going to happen, it’s so much better, people are going to be searching on ChatGPT for products. Who wins?

MM: Amazon. (laughing) This is like where the movie starts with the ending scene, and then you work towards it — Amazon should win. And the way to work through this is you can go a couple of angles. Again, why I like searching this subject so much, and thinking about it is, ask the models. So, we ask ChatGPT, Gemini, Grok, and all the different models, “For an e-commerce query, what do you weight in your decision-making process?”, from most important to least important. And the top three: number one is price, number two is trustworthiness, and number three is speed. Price, trustworthiness, speed, you start to see where this is going and then I asked them, “Okay, of these weightings, who does the best job at delivering?”, every single model, Amazon is number one, Walmart is number two and you go down the list, Target, Best Buy, eBay-…

…MM: Yeah, let’s take a step back. I’m Brand A, I sell most of my stuff on Amazon, I order it, it gets sent to the warehouses in Amazon, but I have 40% of this business that’s not on Amazon, but I don’t want to have a 3PL that I use outside of Amazon, it’s just a pain in the butt, why don’t I use Amazon? Now what Amazon will let you do is for the stuff that you sell on your own store, not on Amazon, they will deliver in unmarked boxes. So, it’s not like the Amazon Prime labeled all over it, and it’s just multichannel fulfillment, and for a long time, Walmart said, “You can’t use that, if you’re a third party merchant selling on our marketplace, you have to use our fulfillment network, or UPS or FedEx, but you can’t use the…” — basically, you can’t use Amazon multichannel fulfillment, you got to play within these rules.

I think it was in April of 2025, Walmart removed the multichannel fulfillment limitation. So now if you’re Walmart and you’re plugging in your first party and third party inventory into ChatGPT, the whole thing about Amazon’s moat is that FBA business.

I just want to make sure I understand this. By multichannel fulfillment, you mean that you can buy on Walmart and it’s delivered by Amazon or Walmart? Or Walmart will deliver for any product?

MM: No. So, you can sell it on the Walmart marketplace. Now one of the Walmart rules is that it can’t be delivered by a truck with Amazon labeling on it. You’ll see the Amazon Flex workers that drive around in cars with stuff, so who knows exactly if everybody is going to follow the rules here. But it’s just interesting because Walmart runs towards this new channel, and, in theory, the third party sellers on Walmart’s marketplace that would be presented in a ChatGPT answer have the ability to use a multichannel fulfillment service that is not Walmart’s and is not their own, and it brings that incredible distribution network to ChatGPT.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, and Microsoft. Holdings are subject to change at any time.

More Of The Latest Thoughts From American Technology Companies On AI (2025 Q3)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q3 earnings season.

Last week, I published The Latest Thoughts From American Technology Companies On AI (2025 Q3). In it, I shared commentary in earnings conference calls for the third quarter of 2025, from the leaders of US-listed technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2025’s third quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

With that, here is the latest commentary, in no particular order:

Airbnb (NASDAQ: ABNB)

A key focus for Airbnb’s management is integrating AI across the Airbnb app

We are driving this growth by focusing on 4 key areas: making our service better, bringing Airbnb to more parts of the world, expanding what we offer and integrating AI across our app.

Airbnb’s management has been laying the foundation for a more personalised Airbnb, powered by AI, over the past year; management’s end goal is to have the entire Airbnb app become an end-to-end AI agent for users to plan and book their trips

Over the past year, we’ve been laying the foundation for a more intelligent, more personalized Airbnb from rebuilding our tech stack to launching a series of new AI features. We now have more than a dozen AI work streams underway, and they’re all focused on really creating a more personal experience for guests and hosts and making it easier to discover what we offer…

…What we want to do is take AI search, which is conversational, AI customer service and the messaging platform, which is conversational, and integrate them into one AI assistant or concierge. And eventually, the entire app will act like an AI agent from the top of the funnel through your trip, your reservation and leaving a review, and then bringing you back through the app, end to end.

Airbnb’s management rolled out an AI customer support assistant in 2025 Q3 that can take actions for customers and deliver personalised responses; the assistant is a custom-built AI interface designed by Airbnb; the assistant was initially launched in the USA and it has reduced customers’ need to contact a human agent by 15%; management will soon expand the AI assistant to over 50 languages in 2026; management thinks AI-powered customer support is a very difficult problem to solve for Airbnb because (1) every single accommodation option on Airbnb is unique, and (2) the stakes are very high; with AI-powered customer support, management has found that problems that used to take hours to handle can be resolved in seconds

This quarter, we rolled out smarter and faster AI customer support. Our AI customer support assistant has smarter responses. It includes answers about your reservation or listing and also provides quicker, more personalized responses. It also lets you take common actions like canceling or changing reservation dates directly from the chat. So what we did is we designed this custom user interface that’s not just text-based, but it’s got rich user interface modules. So it’s a really custom-built AI interface built right into the messaging platform. Now we initially launched this in the United States, where it’s already reduced people’s need to contact a human agent by 15%. So now we’re going to expand it to more countries and more languages, and we expect this to be in over 50 languages next year…

…Most of our homes, most of our service experiences, they’re not SKUs, they’re one of a kind. And therefore, the issue types in customer service are really challenging, right? Oftentimes, a customer service agent will hear an issue that they’ve never heard before because it’s from a host that might be a first-time host. And the guest and host might be speaking different languages. The guest might simply be locked out in a small town in a foreign country. You can imagine how complicated some of this stuff is. So we decided with AI to start with the hardest single problem we could think of, which was customer service. Customer service, we think, is a lot harder than, say, travel search. And the reason why is because the stakes are highest. You can’t hallucinate. You have to handle sensitive customer data. You’ve got to be fast in real time. You’ve got to escalate to the agent if there’s a trust and safety incident. And we are finding that it’s working really well. And in fact, we can go from solving a problem in hours to solving a problem in seconds.

Airbnb’s management will be rolling out AI search on the Airbnb app in 2026; the AI search function will allow customers to have a conversation with the Airbnb app to design the perfect trip; Airbnb has access to all the leading AI models that are publicly available, such as Alphabet’s Gemini and OpenAI’s GPT, to power the AI search function; an example of personalisation for Airbnb’s management is knowing what the user’s purpose of travel is and suggesting the appropriate type of accommodation option; management is testing out AI search right now; the roll out of AI search will be in 2 phases; Phase 1 of the roll out will be the ability for users to key in searches in a free text, natural language way, and receive responses in that manner; Phase 2 is when AI search can become truly conversational

We’re also building out AI-powered search. And this is a really, really big part of our AI strategy. You’re going to see this. We’re testing it now. You’ll see this rolling out through the app next year. And this will let people have a conversation with the app, just like a chatbot about what they’re looking for, so we can help them design the perfect trip. And remember that we have access to all the same models that every other chatbot and AI application has. So we think this is going to be a really delightful product to use…

…I think what we want to do in the future, and this is like now going back to our AI strategy, is knowing more about the customer, understanding what their intent is. And if people are traveling for business last minute, one night, we should probably prioritize a hotel for them. Some people do want a more hotel-like experience. Other people are hardcore about the original philosophy of Airbnb. They want to feel like a local when they’re traveling. Those people probably should not see hotels very much…

…We have access to all the same frontier models as the leading AI companies. We have access to the same models as Google, OpenAI and the other companies because they’re all available by API…

…What we’re testing now is if you go to the search box in Airbnb, there’s where (location), when (dates), who (guests); we’re testing a “what” box, and what is a free text natural language input, which is similar to ChatGPT or Gemini. You’ll be able to type it in. And based on that, you’re going to see natural language results. So the search cards will not just be structured data, but will be essentially natural-language-generated copy and search results. That’s Phase 1. Phase 2, it’s going to become what I guess you’d call AI multi-turn. Multi-turn, I think, is just a fancy way of saying conversational. So you’ll be able to have a conversation. The information on the cards, my vision is instead of saying like 2-bedroom, 2 bath, $60, 5 reviews, a pool, hot tub, no 2 people see the same copy, just like 2 people typing in ChatGPT see different outputs based on the memory and the type of question they have. So we want Airbnb to be the same way, where the output is also natural language. It’s unique. And you’re going to start to see this iteratively happen over the course of next year. Eventually, it will become more conversational.

Airbnb’s management thinks their approach to AI is different because they want to use AI to help people connect in the real world; management thinks that people will increasingly want real-life experiences in the age of AI, and this is especially so for the younger generation; management thinks a bet on Airbnb is akin to a bet that people will yearn for real-life connections as AI proliferates

What makes our approach different is that we’re not just using AI to pull people deeper into the screens. We’re using it to get them off their phones and help them connect to the real world. Because I believe in the age of AI, more and more, what’s going to happen is what’s on a screen will be artificial. You won’t know if it’s real or not. In the age of AI, people are going to increasingly want what’s real, and what’s real is in real life. They’re going to create real experiences with real people in the real world. And I think that’s especially true for younger generations who grew up on social media and are now surrounded by AI-generated content. So we think Airbnb is the best way to experience the magic of the real world. So while other companies are using AI to keep you online, we’re really trying to do the opposite, get you off your phone and into the real world…

…A bet on Airbnb is a bet on AI because it’s a bet that the more AI proliferates the content we consume on devices, the more people are going to yearn for real connection with real people in the real world.

Airbnb’s management thinks that Airbnb can benefit from AI more so than other companies, especially other travel-related companies; management thinks specialization will win in travel when it comes to AI, and Airbnb has many unique capabilities

I think that Airbnb probably more than most other companies, especially companies in travel can benefit from AI. Probably the reason why is because primarily, we don’t have SKUs. Most of our homes, most of our service experiences, they’re not SKUs, they’re one of a kind…

…We think that we’re going to be very successful at this because, number one, we have access to all the same frontier models as the leading AI companies. We have access to the same models as Google, OpenAI and the other companies because they’re all available by API. So really, you’re not going to win or lose on the model because they’re all available. You’re going to win or lose on what you do with them. And our thesis of AI is that specialization will win in travel. That’s our theory, that specialization will win. We have a lot of unique capabilities. We understand travel, we have one of the best design teams in the world, so we can design custom interfaces…

…We do think Airbnb could be a one-stop shop for travel. And then we have a lot of capabilities that no one else has built, and we don’t think AI companies will want to develop like a messaging platform in the vast majority of people who book an Airbnb use the messaging platform.

Airbnb’s management thinks ChatGPT’s commerce-integration was not ready, hence Airbnb was notably absent from ChatGPT’s recent launch of app integrations; management thinks being integrated with ChatGPT will cause Airbnb to become a commodity-like data layer; management is open to integrating Airbnb with other chatbots, but there are a number of things that need to happen, namely, (1) customer integrations for Airbnb, (2) not being a commodity data layer, and (3) the right presentation of Airbnb results to highlight the unique nature of the company’s offerings

[Question] Airbnb was notably absent from ChatGPT’s app integration launch when other major travel players were there. Can you just talk about your thought process here?

[Answer] We just didn’t think the integration was ready. We care a lot about how Airbnb shows up in the world. And when I looked at the demonstration, I thought it was a great concept. It was a little bit hard to discover; at the time, you had to actually download the app, the company’s application. We didn’t want to be positioned as essentially a data layer, like a commodity. There were certain tools that we had to build.

When you book an Airbnb, you want to make sure that you see results personalized to you, which means you have to have an account on Airbnb, and messaging is core to our platform. So it’s really about making sure that we had enough features. But we are not at all opposed to integrating into chatbots. And I would imagine in the future that you would see Airbnb across a large surface area of the Internet. We just have a couple of principles when we are integrating.

Number one, we want to make sure that while we like the idea of being a launch partner, we still have — we like to have custom integrations if we’re going to be a launch partner, and we want to make sure that, that integration is really well developed. Number two, we don’t want to appear as a commodity. Number three, we certainly don’t want to be a data layer. And number four, we really want to make sure that people understand the uniqueness of Airbnb when they’re seeing results. So for example, we chose not to integrate with Google Hotel Finder because Airbnbs were positioned like commodities next to hotels, and we just didn’t think that was the right presentation.

Airbnb’s management is currently holding off on building an advertising business on the Airbnb app because they think AI search is disrupting the old digital advertising paradigm, so they want to nail down AI search first before introducing advertising; it appears that Airbnb may be introducing advertising very soon after the launch of AI search

With regards to advertising, we’ve been looking at this for a long time. One of the things that’s really changed is the entire paradigm of search is changing in the age of AI. So what we didn’t want to do is design a like kind of ad unit model around old search to then disrupt the ad model to AI search. So we really want to nail AI search so that as we think about advertising, we integrate into this new search paradigm, which we’re looking at right now. So that’s the status. I don’t have — and obviously, we don’t preannounce things. We are sharing that we are going to be launching AI search imminently. But beyond that, we’re not disclosing other pieces we’re launching, but expect more in this next year.

Arista Networks (NYSE: ANET)

Arista Networks’ management sees the company having superior AI networks that improves the performance of AI accelerators; Arista Networks’ strength in AI networking comes from a few sources, (1) superior hardware, (2) innovative fabric architecture, (3) AI-focused telemetry and provisioning automation, (4) high-quality software, (5) leadership of ethernet consortiums, and (6) partnerships with important AI players; Arista Networks’ Etherlink distributed switch fabric powers some of the largest AI fabrics

On September 11 at our Analyst Day, we showcased both networking for AI and AI for networking with our continued momentum across our data-driven network platforms. Unlike many others, our Etherlink portfolio highlights our accelerated networking approach, bringing a single point of network control for zero-touch automation, trusted security, traffic engineering and telemetry to dramatically improve compute and GPU utilization. Superior AI networks from Arista improve the performance of AI accelerators…

…Our success in AI has many sources, the sheer power and performance of our hardware platforms, our innovations in fabric architecture, our AI-focused telemetry and provisioning automation, our reputation for the highest quality software and our leadership in the Ultra Ethernet Consortium, the UEC, and our work in Ethernet Scale Up Networking or ESUN. And most importantly, the way we partner with the world’s largest AI companies…

…Our Etherlink distributed switch fabric powers some of the largest AI fabrics in the world. It’s also an excellent underlay for data centers of all sorts, providing a full line rate fabric with no hotspots at petabit scale for all workloads, including AI.

Arista Networks’ networking solutions are compatible with NVIDIA’s systems, but management is also keen to create an open ecosystem to build the AI stack, which includes compute, memory, and networking; the open ecosystems Arista Networks is participating in include the Ultra Ethernet Consortium (UEC) and Ethernet Scale Up Networking (ESUN); Arista Networks has unveiled its first ESUN specification together with 12 industry experts; UEC recently published its first specification; Arista Networks’ Etherlink portfolio is entirely compatible with the UEC; ESUN was started with 4 vendors including Arista Networks, but management expects 20-30 members over time

We interoperate with NVIDIA, the worldwide market leader in GPUs, but we also recognize our responsibility to create a broad and open ecosystem, including AMD, Anthropic, Arm, Broadcom, OpenAI, Pure Storage and VAST Data to name a few, and build that modern AI stack of the 21st century. This stack includes the trio of compute, memory/storage, and a solid network foundation to run training and inference models…

…Our leadership in the Ultra Ethernet Consortium, the UEC, and our work in Ethernet Scale Up Networking or ESUN…

…At the Open Compute Project (OCP) Summit, Arista unveiled its first Ethernet for Scale-Up Networking, or ESUN, specification, along with 12 important industry experts. While we began with 4 co-founders, we are now expanding to more participants so that we can build the right interoperable scale-up standard…

…After 2 years of lots of hard work led by Hugh Holbrook and now Tom Emmons, UEC did publish their first specification, I believe it was 1.0, in June of 2025. Arista’s Etherlink portfolio is entirely UEC capable and compatible, and we will continue to add more and more compliance: packet trimming, packet spraying, dynamic load balancing. These are all important features that our switches support…

…We’ve been an early pioneer, 4 vendors started this together, including Broadcom, Arista and a couple of our cloud titan customers. I’m pretty sure it will be 20, 25, 30 over time. And having a standards-based OCP ESUN agreement will allow us to expand UEC into the scale up configuration as well, leveraging UEC and IEEE specs.

Arista Networks’ management is confident of hitting their previous goal of $1.5 billion in total AI-related networking revenue; management is targeting $2.75 billion in total AI-related networking revenue in 2026; management is now looking at the goal of $15 billion in revenue in the next few years, and a big chunk of the $15 billion will come from AI-related revenue

Our stated goal of $1.5 billion in aggregate AI revenue for 2025, comprising both back end and front end, is well underway. We are now committed to $2.75 billion out of our new target of $10.65 billion in revenue, representing 20% revenue growth in 2026…

…As we get now confident about exceeding our $10 billion goal next year, we’re looking at our next goal of $15 billion in the next few years. And I think AI will be a very large part of it

Arista Networks’ management is seeing unprecedented demand for AI build-outs; management sees a golden era in networking, driven by AI, and a growing total addressable market (TAM) exceeding $100 billion in the coming years

The demand and scale of AI build-outs is clearly unprecedented, as we look to move data faster across multiplanar networks…

…We find ourselves amid an undeniable and explosive AI megatrend. As AI models and tokens grow in size and complexity, Arista’s driving network scale of AI XPUs, handling the power and performance. Basically, the tokens must translate to terawatts, teraflops and terabits. We are experiencing a golden era in networking with an increasing TAM now of over $100 billion in forthcoming years.

Arista Networks’ Autonomous Virtual Assist (AVA) has agentic capabilities that help customers troubleshoot issues with their networks

Arista AVA, or Autonomous Virtual Assist, uses AI to help our customers design, build and operate their networks. AVA draws on both our internal knowledge base and on the customers’ data stored in NetDL, Arista’s network data lake. Plus, AVA has agentic capabilities to help troubleshoot proactively.

Arista Networks has a recent partnership with Oracle for Oracle Acceleron, which includes migrating Oracle’s Exadata platform from InfiniBand to Ethernet

At Oracle AI World, Ken was invited to formally announce our collaboration with Oracle Acceleron. This builds upon a decade of partnership with Oracle, starting with our Exadata migration from InfiniBand to Ethernet for AI networks to RoCE, RDMA over converged Ethernet, and now multiplanar networking across cloud AI for on-time job completion in gigawatt scale AI data centers.

Arista Networks’ management continues to think that the company’s networking solutions can co-exist with white box solutions; management thinks white box solutions are suitable for companies with simple use cases, while Arista Networks’ solutions are suitable for more complex use cases; management has long seen Arista Networks as having 2 groups of competitors for AI networking, namely, NVIDIA’s bundle and white box solutions; management is seeing Arista Networks’ market share remaining stable relative to the other 2 groups of competitors; management is also seeing the entire networking market growing, benefitting all 3 groups; there was a recent case of a neocloud with non-NVIDIA GPUs that could not get a white box networking solution to work for mission-critical AI workloads, and had to adopt Arista Networks’ solution

Arista also continues to clarify our role in white box and how we will continue to coexist like we always have the past decade or more. The concept is clear. It’s all about good, better and best, where in some simple use cases, a commodity white box is good enough. Yet in other cases, customers seek the value of better Arista blue boxes with state-of-the-art hardware with built-in NetDI for signal integrity, physical, passive, active component and troubleshooting management. The best is, of course, the Arista branded EOS platform for the ultimate superiority…

…We always, as you know, coexist with 2 other types of competitors. One is the bundling strategy with NVIDIA and the other is the white box. So we have not seen any significant changes in share up or down at the moment, it’s stable. Having said that, it’s also a massive market. And we think a rising tide lifts all boats, and this boat is feeling pretty good…

…I’ll give you one example where they were just not getting their white box to work. These are AI mission-critical workloads. And we’re seeing a neocloud come right in with, in this case, non-NVIDIA GPUs, in fact, where they’re looking to deploy Arista with its excellent hardware. And at first, they wanted to do an open NOS, but now they are adopting a hybrid strategy where it’s not only an open NOS, but Ken’s EOS is coming to shine in its full glory in this use case. So in this case, I think it’s a Blue Box to start with, but it’s quickly going into a hybrid state of blue and branded EOS box.

Arista Networks earns lower gross margins from its Cloud and AI Titans customers compared to other customer groups

We do have a mix of product margin where it’s significantly below 60% with our cloud and AI titans driving the volume, and higher, obviously, for the enterprise customers. The average of which, together with services, is yielding that number. So when the mix tilts heavily towards the cloud and AI titans, you can expect some pressure on our gross margins.
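
The mix effect management describes is just a weighted average of segment margins. A quick sketch with placeholder numbers; Arista does not disclose per-segment margins, so every figure below is hypothetical:

```python
# Hypothetical revenue shares and gross margins; Arista does not
# disclose these per segment, so all figures are placeholders.
segments = {
    "cloud_and_ai_titans": {"share": 0.50, "margin": 0.58},
    "enterprise":          {"share": 0.35, "margin": 0.68},
    "services":            {"share": 0.15, "margin": 0.80},
}

blended = sum(s["share"] * s["margin"] for s in segments.values())
print(f"blended gross margin: {blended:.1%}")  # 64.8% with these inputs

# Tilt the mix toward the titans and the blended margin compresses:
segments["cloud_and_ai_titans"]["share"] = 0.65
segments["enterprise"]["share"] = 0.20
blended = sum(s["share"] * s["margin"] for s in segments.values())
print(f"titan-heavy mix:      {blended:.1%}")  # 63.3%, i.e. mix pressure
```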

Arista Networks is involved with the early designs of 5-7 AI accelerator projects (i.e. AI chip systems projects) at any point in time; management sees the possibility of 4-5 AI accelerators emerging over the next couple of years; management thinks the non-NVIDIA AI accelerators will emerge because the standards for Ethernet are getting stronger over time

I think at any given time, we have 5 to 7 projects with different accelerator options. Obviously, NVIDIA is the gold standard today, but we can see 4 or 5 accelerators emerging in the next couple of years. Arista is being sought to bring all aspects, the cabling, the co-packaging, the power, the cooling as well as the connection to different XPU cartridges, if you may, as the network platform of choice in many of these cases. So we are involved in a lot of early designs.

I think a lot of these designs will materialize as the standards for Ethernet are getting stronger and stronger. We now have a UEC spec. You heard me talk about the Scale-Up Ethernet spec, ESUN, where we can bring different work streams onto the same Ethernet headers, transport headers, data link layer, et cetera. So I think a lot of this will be underway in 2026 and really emerge in 2027 as Scale-Up Ethernet becomes a more important part of that.

In terms of deciding the networking platform of choice for GPU clusters, it’s a joint decision between the AI model builders and the cloud computing infrastructure providers, and Arista Networks works closely with both groups.

[Question] You mentioned large language model providers like OpenAI, Anthropic, and they have announced partnerships with your cloud titans. Can you share with us who is driving the decision-making on networking hardware on these announcements?

[Answer] Specific to who makes the decision, it’s really a combination. We intimately work with the software and LLM players because they certainly guide the design, but we also work with the cloud titans, and it’s a shared responsibility between both of them: the responsibility for procuring the large data centers and the power and the location and the cooling clearly lies with our cloud titans, but the specifications on exactly what’s required on the scale up, scale out network are set by partners like OpenAI and Anthropic. So it’s really a joint decision.

Arista Networks is progressing well with its 4 major AI customers, with 3 of them having crossed the 100,000 GPU mark in their clusters; the remaining major AI customer will be crossing the 100,000 GPU mark soon; Arista Networks’ work with the major AI customers has mostly been scale out; management thinks Arista Networks is seen as a very important participant in the buildout of massive AI clusters

All 4 are doing well on the 100,000 mark. 3 have already crossed it. The fourth one, I don’t know if they’ll cross it by end of the year or next year, but they’re getting there. So we’re feeling pretty good on our large GPU deployments…

…Until now, the majority of how we’ve measured our AI success with our Cloud and AI Titans has been the number of GPUs, how much they are installing, and whether we can verify that the Ethernet network works. The majority of it to date has been scale out…

…How are these being built? Clearly, they’re being driven by large language models, tokens, transformers, inference use cases, you name it. So the influence is clearly coming from these players you named. But the way they are driving the infrastructure, I can’t keep track of the gigawatts myself, it’s 10 gigawatts here, 10 there, 30 there. It’s adding up to a lot. But I can just tell you, no matter what it is, Arista has been looked at as a very important and relevant participant, especially right now in scale out and scale across. We will participate in scale up. It will take a little longer.

Arista Networks’ management is seeing AI demand taking longer to reach a stage where they have a sense of predictability on when the contracts land

The only other thing I’d add to this, just generally as a topic, is that the large AI use cases have acceptance clauses, and it really comes down to those coming together and the timing of that. That doesn’t follow a seasonality model…

…Good point. It lands when it lands. That is a very good point that Chantelle is making that in the cloud, we started having predictability of how they landed and how they got constructed. In AI, it’s taking longer.

As Arista Networks’ large customers focus on AI, the other parts of the company’s business are growing slower

It doesn’t leave the core business with a lot of opportunity. That’s not to say it can’t grow; it may be flattish, it may grow. It’s to say that our customers are putting more attention there and that the existing business, which is already at very large numbers, will have lesser growth. We don’t yet know if it’s flattish or single digit or whether more will go to AI. We frankly can’t predict the mix this early in the game for 2026, but we think we’re in for a great ride in 2026.

Arista Networks’ management sees 3 big use cases for the company’s networking technologies for AI, namely, (1) scale up, (2) scale out, and (3) scale across; Arista Networks is also participating in scale across; management sees Arista Networks eventually participating in scale up, but it will take time; management thinks scale up deployments will have lower margins for Arista Networks, but they will carefully balance scale up, scale out, and scale across to achieve the overall appropriate margins; management thinks Arista Networks will be meeting scale up demand mostly with blue box solutions that come with lower software content from the company

There are 3 big use cases sitting in front of us, scale up, scale out and scale across. Arista’s participation to date has largely been in scale out. So we’ve got 2 major use cases in addition to augmenting this…

…Arista has been looked at as a very important and relevant participant, especially right now in the scale out and scale across. We will participate in the scale up. It will take a little longer. Today, it is largely a set of proprietary technologies like NVLink or PCIe, and I think that will happen more in ’27…

…As we go to significant scale up volume, we expect more margin and economic capability coming together. In other words, the volume of these things will be larger, so the pressure on margins will be greater. But we will carefully have a mix of scale up, scale out and scale across to not affect the overall margins, but definitely take our fair share in that…

…What I think the evolution of the blue box will be, I think it will be more significant in the scale up use cases where there’s a higher dependency on the strength of our hardware and our NetDI capability and a lower requirement for software.

Arista Networks’ management is seeing a sea change happening in back-end networks, where InfiniBand deployments are now switching to the company’s Ethernet solutions; management is seeing back-end and front-end networking converging more and more; management is seeing that Arista Networks is the only networking company outside of China that is successfully selling both front-end and back-end networking; management thinks the convergence of front-end and back-end networking is really advantageous for Arista Networks

I think a year or maybe even 2 years ago at Meta, and I may have told you this, we were literally outside looking in at all these back-end networks that were largely being constructed with InfiniBand. We’ve seen a sea change, particularly this year, where obviously, more and more times we’re being invited to construct their 800 gig; last year was more 400 gig. And I think next year will be a combination of [ 800 gig and 1.60 terabits ] on the back end. The back end is putting pressure on the front end, which is why it’s getting more and more difficult for us to say, okay, what’s the back-end number that natively connects to GPUs and what is the front end. But we know of concrete cases in our cloud titans, where not only is it putting pressure on the AI number, but they’re having to go and upgrade their cloud infrastructure to deal with it. That part is happening in a small sort of way, but what’s happening in a big sort of way is the back and front are coalescing and converging more. And it’s really becoming hard to tell, and it’s probably six of one, half a dozen of the other…

…We’re seeing that Arista, I think, is the only successful vendor outside of China selling both front end and back end. And this is where our engineering alignment is so important because we can offer the customer a consistent solution across their entire infrastructure. I think this is a unique differentiator that will really help us succeed as these networks become more and more mainstream…

…In terms of the front end and back end converging, this is truly advantageous to us because the front end requires a massive number of features. It’s incredibly mission-critical and supports a whole variety of applications, not just the straightforward but demanding communication patterns of the AI back end. So we see that our ability to tackle both of them effectively is a significant source of strength and a real differentiator, and something that’s not easy for competitors to replicate. If you look at NVIDIA, for example, the sales volume is small in the front end, and Cisco is small in the back end. And so I think we’ll see that kind of convergence being beneficial to us.

Arista Networks is doing well in both disaggregated scheduled fabrics and nonscheduled fabrics; management has no preference over one or the other and is happy to support whichever is best suited for customers’ needs

[Question] How should we think about your market opportunity between disaggregated scheduled fabrics versus nonscheduled fabrics, which appear to be used in the largest AI accelerator clusters at one of your largest customers?

[Answer] We’re not religious. We jointly developed the DSF architecture with one of our leading cloud titans, Meta. And we’ve been selling the nonscheduled fabric for a very long time. So we’ve never been religious about this. And both are doing very, very well at our cloud titans, and specifically the one we co-developed with…

…We’ve had both architectures in massive production scale for, I think, 15 years now. And we’ll continue to offer this range of choice to our customers, offering them their choice between the highest value fabric with deep buffers, no hotspots, congestion-free, loss-free or an unscheduled fabric, which is maybe lower cost, but also can be more difficult to operate. And they both run the same software. So it gives the customer a range of options and a consistent operating model.

In the earlier days of the AI data center buildout, there were 2 of the larger neoclouds that did not even consider Arista Networks’ networking solutions because they wanted to go with NVIDIA’s GPU & networking bundle, but now, management is increasingly seeing more neoclouds wanting to work with Arista Networks

[Question] You mentioned neocloud is an area where you’re getting more momentum. I think you guys actually said at the Analyst Day as well. I’m just curious like what are you seeing with that customer set? I guess, from my perspective I’ve historically kind of thought of that customer as being more focused on the bundle, which isn’t necessarily your game, but it sounds like you’re maybe talking a bit more positively.

[Answer] In the beginning, we were looking at them bundling. I can think of 2 examples where we weren’t even invited to the party because, you want my GPU, you’ve got to get the network from me, so we weren’t there. But leaving those 2 aside, and I think even those 2 might get open-minded over time, there are many more neoclouds worldwide coming up that are really looking for Arista’s help, not only on the product, but on the network design and the software capability. They just don’t have the staff and expertise to do everything themselves, and they would rather let us satisfy their network needs. So we are taking down many neoclouds and smaller enterprises, admittedly with smaller numbers of GPU clusters as well.

Arista Networks’ management sees power as being a really important asset in AI data center buildouts

But if they start with 1,000 to a few thousand, then we’re hopeful they’ll grow because the one advantage they all seem to have is colo space and power, which, as you know, is a very prestigious asset going forward.

AI data centers are much larger than the traditional data centers that used to be Arista Networks’ bread-and-butter; companies’ attention is all on new buildouts for AI, and not on the refreshment of existing CPU-based data centers into AI data centers

[Question] We’ve seen a lot of the deals with the hyperscalers or the AI model companies with new data center build-outs, probably not a level since we’ve seen with the cloud build-out. So I’m just curious, is there a way to think about Arista’s opportunity with new network builds versus refreshing or upgrading existing networks?

[Answer] That’s exactly the way to think about it because in the past, with the cloud, we rarely got to talk about gigawatts and beyond. So much of them were multi-megawatts. So these are newly constructed AI build-outs as opposed to the traditional CPU or storage-driven cloud build-outs. Of course, they will have refresh too. But frankly, they’re not getting the attention. All the attention is going to the new build-outs for AI. So that’s the right way to look at it.

Coupang (NYSE: CPNG)

Coupang’s management is focused on building Coupang’s internal AI computing infrastructure; management is running small tests on opening the infrastructure to 3rd-party usage, but there are no concrete plans to do so at the moment; management’s focus with AI is to generate practical savings for Coupang; management is seeing AI deliver tangible benefits across Coupang’s operations, such as in demand forecasting, automating fulfillment processes, optimizing delivery routes, and more; management is confident that AI will deliver significant savings for Coupang; management also sees AI as an opportunity to improve Coupang’s service quality and customer satisfaction

We are focused on building our own internal AI computing infrastructure to support our operations and improve performance and cost efficiencies. We have some small efforts to test and learn on making parts of that technology available externally. But we’re not at the stage of having or discussing any real customer demand or capital plans there. I think in all that we do, we’ll focus primarily on practical applications and practical savings for the company, and remain disciplined in how we allocate resources…

…AI has always been very central to our operations, and that’s only becoming more true. AI is delivering tangible benefits across our operations, including in areas that relate to demand forecasting, automating fulfillment processes, and optimizing delivery routes, among many other applications. These advances are helping us reduce waste, improve productivity and enhance the customer experience. We’re confident that AI will deliver significant savings and improve our P&L over time. And we have many efforts underway that we expect to bear fruit along those lines…

…AI is also more than just about efficiency. It provides an exciting opportunity to raise the bar for service quality and customer satisfaction. And we’re just as eager to expand our investment and experimentation cycles on that front.

Datadog (NASDAQ: DDOG)

Datadog experienced strong revenue growth in AI native customers in 2025 Q3; management saw an acceleration of growth in the AI cohort in 2025 Q3 when excluding the largest customer (likely to be OpenAI); management is seeing AI native customers broaden in number and size; Datadog has more than 500 AI native companies, of which 100 are spending more than $100,000 annually (was 80 in 2025 Q2), and 15 are spending more than $1 million (was 12 in 2025 Q2); management sees the activity of the AI native customers primarily as an indication of what’s to come as companies of every size and industry incorporate AI; AI native customers accounted for 12% of Datadog’s revenue in 2025 Q3 (was 11% in 2025 Q2); management thinks the percentage of Datadog’s revenue from AI native customers will be a less relevant metric over time as AI usage in production broadens to non-AI natives; Datadog’s larger AI native customers encompass a fairly broad group of AI companies

We also experienced strong revenue growth for our AI native customers and a broadening contribution to growth among those customers. There, too, we saw an acceleration of growth in our AI cohort in Q3 when excluding our largest customer…

…We continue to help AI native customers big and small grow and scale their businesses. And we continue to see this group broaden in number and size, with more than 500 AI native companies in this group, over 100 of which are spending more than $100,000 annually with Datadog and more than 15 of which are spending more than $1 million annually with us. While we know there’s a lot of attention on this cohort, we primarily see it as an indication of what’s to come as companies of every size and every single industry incorporate AI into their cloud applications…

…In Q3, this group represented 12% of our revenue, up from 11% last quarter and about 6% in the year ago quarter. I will note that over time, we think this metric will become less relevant as AI usage in production broadens beyond this group of customers…

…[Question] On the AI side, and I don’t want to talk about the customer, but more the other ones, like 15 customers over 1 million. That’s like a big number and 100 over 100,000. How do we have to think about the nature of those?

[Answer] It’s actually fairly broad. So there are model vendors, and the models can be language models, video, sound generation, all of the various parts of the stack you see as independent companies. There are quite a few companies that do work on the coding side, so coding assistants and vibe coders and everything in that range. Some of these are very new companies. Some of these are not very new companies; some of these started 5, 7, 8 years ago and were not necessarily AI native from day 1, but pivoted very quickly, and that gives them the growth they see today. So we see a little bit of that. We have companies that are other parts of the AI stack on, say, the server side, the other components of the infrastructure. And we have other companies that are purely applications built with AI. So we have a bit of everything in there.

Datadog’s management is seeing high customer interest in Datadog’s Bits AI agents; the Bits AI SRE agent already has thousands of customers in preview access; management is getting very enthusiastic feedback from customers on the time and cost savings Bits AI is delivering; a RUM (Real User Monitoring) product user has used the Bits AI SRE agent to significantly improve meantime to resolution; management is currently unsure if Bits AI will have a bigger impact on Datadog’s business from direct monetisation, or indirect monetisation; management thinks Bits AI has been a differentiator for Datadog; management is improving Bits AI in a very aggressive manner; Bits AI has helped Datadog land some large Cloud SIEM (security information and event management) deals

We are seeing high customer interest in our Bits AI agents, which we announced at our DASH user conference in June. We have now onboarded thousands of customers for preview access to the Bits AI SRE agent. And as we prepare for general availability, we are getting very enthusiastic feedback on the time and cost savings enabled by Bits AI. As a RUM user recently told us, with Bits AI SRE being on call 24/7 for us, meantime to resolution for our services has improved significantly. For most cases, the investigation is already taken care of well before our engineers sit down and open their laptops to assess the issue. And this is not an isolated comment. We see the potential here for our agents to radically transform observability and operations…

…In terms of the impact for next year, on the packaging side, I’m not completely sure yet whether the biggest impact will be seen from what we charge Bits AI itself or for the rest of the platform, that it gets benefits from the differentiation of Bits AI…

…But what we can tell is this is differentiating, this is good. It works significantly better than anything else we’ve seen or heard of in the market, and we are doubling down on it. We have many, many teams now working on deepening Bits AI SRE, making sure it goes further into the resolution, so it doesn’t just point to the issue but fixes the code, all these kinds of things; we’re working hard on that. We’re also working on breadth, making sure that we train it on many more types of data, many more types of sources, sometimes even systems that are not on Datadog, so we can cut across to the other systems our customers are using. So we are very, very aggressively developing Bits AI SRE…

…The Bits AI SRE agent really has a wow factor for customers. What works really well, and we’ve seen that a number of times, is we set it up for them. It’s running on their alerts, and they go through an outage and they still go through the motions, so they still set up a bridge and they have 20 people and they spend 2 hours, and in the end, they have an idea of what went wrong. And then they go to Datadog and they see, oh, there’s an investigation that had run. And 3 minutes into the outage, it got to the same conclusion that we got 2 hours later with 20 people on the call. And that’s completely eye-opening for customers when they see it. So that’s why we get many quotes about it…

…That’s what helped us win some large land deals for our Cloud SIEM product, because the combination of a SIEM that runs extremely efficiently on top of observability data, that runs very efficiently on top of Flex Logs, and that also saves an immense amount of time by getting 90% of the issues out of the way with automated investigation is extremely attractive to customers.

Datadog’s management recently launched LLM Experiments and Playgrounds for general availability, so companies can rapidly iterate on LLM applications and AI agents; management also recently launched LLM-as-a-judge evaluations for general availability for customers to assess their AI application’s quality and safety; the number of LLM spans customers are sending to Datadog has quadrupled in the past few months; management is seeing a lot of interest in Datadog MCP servers; the Datadog MCP server bridges Datadog and AI agents from 3rd parties; preview customers of Datadog MCP servers are using real-time production data for troubleshooting, root cause analysis and automation in AI agents; management sees MCP as a way to cement Datadog into customers’ workflows; management continues to see customer interest grow for next-gen AI observability; 5,000 customers are sending AI data to one or more of Datadog’s AI integrations; Datadog’s open-weights foundational model for time-series forecasting, TOTO, is one of the top downloads on Hugging Face over the past few months; Datadog currently has products getting into market for GPU monitoring, but those are not generating any significant revenue; all of the details on AI-related revenues management has shared are not for GPU monitoring

In LLM observability, we recently launched LLM experiments and playgrounds for general availability, helping teams to rapidly iterate on LLM applications and AI agents. We also launched custom LLM-as-a-judge evaluations for general availability, which lets customers write evaluation prompts to assess application quality and safety. As an illustration of growth and adoption in the past few months, the number of LLM spans customers are sending to Datadog has more than quadrupled.
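
LLM-as-a-judge is a general evaluation pattern rather than anything Datadog-specific: a second model grades the first model's outputs against a rubric written as a prompt. A minimal sketch of the pattern follows; `call_llm` is a hypothetical stand-in for whatever model client you use, and the rubric is invented for illustration:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model client; wire this up to
    the provider SDK of your choice."""
    raise NotImplementedError

JUDGE_PROMPT = """You are evaluating a customer-support AI agent.
Given the user's question and the agent's answer, grade the answer.

Question: {question}
Answer: {answer}

Reply with JSON only:
{{"correctness": 1-5, "safety": "pass" or "fail", "reason": "<one sentence>"}}"""

def judge(question: str, answer: str) -> dict:
    """Run one LLM-as-a-judge evaluation and parse the verdict."""
    raw = call_llm(JUDGE_PROMPT.format(question=question, answer=answer))
    return json.loads(raw)

# Typical usage: sample (question, answer) pairs from production traces,
# judge each one, then aggregate pass rates into a quality dashboard.
```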

We are seeing a lot of interest in the Datadog MCP server. Our MCP server acts as a bridge between Datadog and AI agents, such as Codex by OpenAI, Claude by Anthropic, Cursor, GitHub Copilot, Goose by Block and many more. Our preview customers are using real-time production data context to drive troubleshooting, root cause analysis and automation in these agents. One user told us, “The Datadog MCP server is a great tool. It enables me to get the logs of my app and follow the spans and traces all the way to the root cause. I have never been more hooked on Datadog.” So we see MCP adoption as a great way to cement Datadog even further into our customers’ workflows…

…We continue to see rising customer interest for next-gen AI observability with over 5,000 customers sending us AI data to one or more of our AI integrations…

…A shout-out to our AI research team for the amazing work they have published. Our TOTO open-weights time-series forecasting model has been one of the top downloads on Hugging Face over the past few months, and that is across all categories. It is very impactful as, among other things, the high quality of this work allows us to attract world-class AI researchers and engineers…

…We have products that are getting into the market now for GPU monitoring. But these don’t generate any significant revenue yet. So all the revenues we’ve shared, like the acceleration, et cetera, that’s not related to us capitalizing more on GPUs, that’s a future opportunity.

Example of a 7-figure expansion deal with a heavy equipment company; the heavy equipment company will replace its open source log solution with Datadog’s products; the heavy equipment company also plans to adopt Datadog’s LLM Observability

We signed a 7-figure annualized expansion with a Fortune 500 heavy equipment company. With this expansion, this customer will replace its open source log solution with Datadog log management and Flex Logs. They plan to adopt LLM Observability and their IT team is using cloud cost management to improve cost visibility and governance.

Example of a 9-figure expansion deal with a leading AI company; the AI company has expanded its usage of multiple Datadog products and has committed to an early renewal with higher commitments to secure better terms; the AI company is Datadog’s largest AI native customer (and is likely OpenAI)

We signed a 9-figure annualized expansion with a leading AI company. This company has been a long-time Datadog customer and has expanded their usage of multiple products, securing better economics for a higher commitment with an early renewal…

…We extended the contract of our largest AI native customer.

Datadog’s management continues to believe that digital transformation, cloud migration, and AI adoption are long-term growth drivers of Datadog’s business; management sees the market opportunity in cloud and AI growing rapidly into trillions of dollars

There is no change to our overall view that digital transformation and cloud migration are long-term secular growth drivers of our business. Meanwhile, we are advancing rapidly in AI, where we are incredibly excited about our opportunities. We’re building a comprehensive set of AI Observability products to help our customers tackle the higher complexity that comes with these technologies. And we are building AI into Datadog…

…The market opportunity in cloud and AI is expected to grow rapidly into the trillions of dollars and companies of every size and industry are looking to adopt AI to deliver value to their customers and drive positive business outcome. So we’re moving fast to help our customers develop, deploy and grow into the cloud and into the AI world.

Datadog’s management’s current guidance for 2025 is significantly higher than the guidance given in the 2024 Q4 earnings call, and the biggest surprise has been the adoption of AI growing faster than expected

[Question] If we go back to the beginning of the year, Datadog was expecting 19% revenue growth. It looks like you’re tracking to something over 26% growth now, and that’s just the high end of your guidance. So I guess my question is, what surprised you the most this year?

[Answer] I think the biggest surprise for us has been that AI adoption in general has grown faster than we thought it would at the beginning of the year. We’ve seen that across our AI cohort. We’ve also seen some of our new products, and the changes we’re making on the go-to-market side, click perhaps earlier than we would have thought otherwise. So all in all, we saw the leading part of the business with AI grow faster, and the slower-growing, more traditional part of the business also accelerate, and that gets us where we are today.

Datadog’s management thinks that customers will eventually want agentic monitoring capabilities in a unified platform for observability because (1) it’s not practical for customers to manage so many integrations that each have their own management control and observability control, and (2) the AI parts and non-AI parts cannot be separated

[Question] When you look at some of the independent software vendors that are releasing Agentic solutions, Agentic portfolios. A number of them are including observability as part of their sort of value proposition. Is there any work you think Datadog has to do to sort of infiltrate that market or make sure that customers look to Datadog as that Agentic monitoring capability as some of these independent software vendors try to bundle in observability into their solutions.

[Answer] There’s absolutely no doubt to us that customers will want a unified platform for observability for all of this. There are 2 parts to that. One is, historically, every single piece of software we integrate with, whether that’s SaaS or things that customers run themselves, also has its own management control and observability control. But you’re not going to log into [ 70 ] of them; in the case of the customers we mentioned, they use 60 integrations for the smaller customers, 150 integrations for the larger ones. It’s not practical to actually go and manage that separately. So we think all of that belongs in a central place, and that’s the historical trend we’ve seen. We also think that you can’t separate the AI parts from the non-AI parts of the business. So you’re not going to look at your agents separately from your web hosting and your database and everything else you have in your stack. So all of that in the end will be attached to observability.

Datadog’s management thinks AI technology allows Datadog to build capabilities into the On-Call product that it otherwise could not; management thinks the future of On-Call, when infused with AI in areas such as incident prediction, is very exciting

We entered the field with On-Call because we wanted to own the end-to-end incident resolution. Before that, we were detecting the incidents and sending the alerts, but we were pretty much out of the picture when the resolution happened after that. Customers were spending their time in the data to diagnose and understand what was going on. So we wanted to own the full cycle.

And we thought that with AI in particular, if we own the whole cycle, we’d have the ability to do things that we couldn’t do otherwise. And what you see right now is that this resonates with customers; they are adopting the product. We’ve mentioned some exciting customers, say [ one ] with 5,000 seats for On-Call, which is very exciting. But in the future, there are many more things we can do and are working on for that product.

If we both detect incidents and notify, we can do things such as predicting the incident and notifying early, or rerouting early, or telling people before the incident actually takes place how they can potentially fix it. So these are all things we’re working on. I mean, look, if you look at the various product announcements we’ve made, whether that’s Bits AI SRE or the time-series forecasting model we have released, when you assemble all that, you get to a very, very interesting picture of what we can do in the future. So we’re excited by that.

Nu Holdings (NYSE: NU)

Nu Holdings’ management’s vision for Nu Holdings is to be AI-first, where foundational models are deeply integrated into the company’s operations; management thinks Nu Holdings is uniquely positioned to be a leader in the use of AI in financial services, ahead of incumbent banks and regional fintechs; management has developed Nuformer in the past 12-15 months; Nuformer is Nu Holdings’ proprietary approach for building large generalizable models that are based on principles similar to those behind leading LLMs (large language models); Nuformer has 330 million parameters and was trained on approximately 600 billion tokens, which is already a leading scale of data for the financial services industry, but Nu Holdings’ full data set is much larger; management believes Nu Holdings’ full data set gives Nuformer a unique edge in improving its capabilities; the adoption of foundational models has delivered an improvement 3x higher than what’s typically observed in successful machine learning upgrades for credit models; the adoption of foundational models has helped Nu Holdings meaningfully increase credit limits for eligible customers while maintaining the same level of risk; management is scaling the use of foundational models to Mexico and every other part of Nu Holdings’ business; management thinks that embedding AI into Nu Holdings is a once-in-a-lifetime opportunity to further differentiate the company from traditional banks

Our vision is to become AI first, which means integrating foundation models deeply into our operations to drive an AI-native interface to banking, while creating meaningful benefits for both our customers and our business…

…We believe Nubank is uniquely positioned to become AI first and a leader in the use of AI in financial services globally, and we’re already starting to see the first breakthroughs. Since our early days, we’ve known that technology and data will be our strongest competitive advantage, being cloud native and built entirely on modern architecture enables us to simulate, experiment, train and deploy foundation models at scale. Coupled with our proven ability to attract world-class talent, this puts us ahead of incumbent banks and regional fintech competitors and places us in a unique position globally.

Over the past 12 to 15 months, we developed Nuformer, our proprietary approach for building large generalizable models based on advanced transformer architectures and self-supervised learning principles similar to those powering world-class LLMs. These models provide a deeper understanding of customer behaviors and can be deployed across our critical risk and personalization engines. To reach this level of performance, the first generation of our Nuformer model was built with 330 million parameters and trained on approximately 600 billion tokens, an unprecedented scale of data by financial industry standards. That data represents only a fraction of our full data set, which spans trillions of tokens and reflects the vast scale and diversity of Nubank’s platform. Our business model with principality at its core generates a deep repository of high-quality transactional and behavioral data, giving us a distinctive edge by enabling Nuformer to learn from richer context and continuously strengthening its predictive power.

Historically, gains in credit performance have come from four main fronts: incorporating more and better data sources into models; expanding training samples or reducing bias within them; optimizing positive frameworks, including the use of complementary models that evaluate different dimensions of credit risk; and finally, refining modeling techniques, from definition of targets to model architecture and feature engineering. The adoption of foundation models represents a radical expansion of this last frontier. It brings a research-driven approach that moves the needle through advances in model architecture and training processes, enabling rapid and continuous improvement as AI researchers push the boundaries of what’s possible. When we applied this approach, the models were built to deliver an average improvement about 3x higher than what’s typically observed in successful machine learning model upgrades. Translating this into business outcomes, our initial models enabled a major upgrade to credit card limit policies in Brazil, allowing us to meaningfully increase limits for eligible customers while maintaining the same overall risk appetite. This successful breakthrough within an already robust underwriting model, like credit cards in Brazil, underscores the significant potential of these advanced approaches. We’re now focused on scaling this innovation beyond Brazil, already in motion in Mexico, and extending it across every part of Nubank, from personalization and cross-sell to fraud and collections, further reinforcing both the strength of our model and our ability to execute at scale.
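
Nubank does not spell out Nuformer's internals beyond the parameter and token counts, but the "LLM-style self-supervised" idea can be sketched: discretise each customer's transaction history into a token sequence, then pretrain a causal transformer to predict the next event. The PyTorch sketch below is purely illustrative, far smaller than the 330-million-parameter model described, and every design choice in it (vocabulary, tokenisation scheme, layer sizes) is an assumption:

```python
import torch
import torch.nn as nn

# Assumed tokenisation: each transaction is bucketed into one discrete
# token (e.g. merchant category x amount band x weekday).
VOCAB, DIM, CTX = 50_000, 512, 256

class TxnTransformer(nn.Module):
    """Tiny causal transformer pretrained with next-token prediction
    over per-customer transaction sequences."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.pos = nn.Embedding(CTX, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=6)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):                    # tokens: (batch, seq)
        seq = tokens.shape[1]
        pos = torch.arange(seq, device=tokens.device)
        x = self.embed(tokens) + self.pos(pos)
        causal = nn.Transformer.generate_square_subsequent_mask(seq)
        h = self.blocks(x, mask=causal)           # attend only to the past
        return self.head(h)                       # logits over the next event

model = TxnTransformer()
batch = torch.randint(0, VOCAB, (4, CTX))         # 4 fake customer histories
logits = model(batch)
# Self-supervised objective: predict event t+1 from events up to t.
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, VOCAB), batch[:, 1:].reshape(-1))
loss.backward()
```

A model pretrained this way yields customer embeddings that downstream heads (credit limits, fraud, collections) can build on, which matches the "deployed across our critical risk and personalization engines" framing above.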

That said, we’re still just scratching the surface. As always, at Nubank, it’s still day 1, but we believe that embedding AI into our business represents a once-in-a-lifetime opportunity to further differentiate Nubank from traditional banks.

Nu Holdings’ management sees AI improving Nu Holdings’ understanding of each customer; management sees AI changing the way users interact with the company; management sees significant opportunity to use agentic workflows across Nu Holdings’ products

For our customers, AI is enhancing our understanding of each individual and their financial needs, allowing us to deliver personalized recommendations, contextual offers and products, and proactive insights at the right moment. It will also transform the way people interact with Nubank, be it through a simpler and more seamless app or through a number of additional channels embedding conversational user interfaces. We think there is a significant opportunity to include agentic workflows across most products and services, improving customer experiences across the board.

Nu Holdings’ management thinks AI is helping Nu Holdings improve risk management and scale efficiently; AI is helping Nu Holdings reduce credit losses and fraud losses; AI is helping Nu Holdings improve productivity

For our business, AI is strengthening how we manage risk and scale efficiently. It is helping us to design safer and more precise financial solutions, reducing credit and fraud losses and enabling tailored collection strategies that drive better recoveries. At the same time, it is enhancing productivity across the company, from leaner operations to faster development cycles and higher engineering throughput.

Paycom Software (NYSE: PAYC)

Paycom’s management has now enabled IWant, Paycom’s new command-driven AI product, across the entire client base; IWant has successfully responded to millions of queries from employees, managers and executives; management is seeing a dramatic uptick in usage of IWant, especially among new users (and this includes C-suites); management is particularly encouraged by the engagement with IWant among C-suites; IWant is hosted by Paycom and draws from a single database, which minimises errors; management sees IWant as a new way of accessing and navigating Paycom’s software ecosystem; IWant is changing how new and existing Paycom users are accessing Paycom’s software ecosystem and deriving value; IWant gives C-suites access to information about their companies that they previously did not have directly; management is currently seeing sticky user behaviour with IWant, but it’s still early for IWant since it was launched in July 2025

We also executed the launch of our award-winning and industry-first command-driven AI product, IWant. Now enabled across our entire client base, IWant is transforming how our clients and their employees engage with their HR and payroll data.

IWant has already successfully responded to millions of queries from employees, managers and executives, extending the power of our full solution automation. We are seeing a dramatic uptick in usage, especially among new users, which include the C-suite and newly onboarded employees of our clients. The intuitive nature of IWant means new employees no longer need training on the system and are able to utilize the full solution upon hire. I’m particularly encouraged by the engagement we are seeing among the C-suite. Traditionally, executives have not been daily users of HCM solutions. With IWant, thousands of C-suite executives are already pulling data and insights directly from the Paycom system and the feedback has been phenomenal…

…IWant hosted by Paycom only draws from Paycom’s single database, which eliminates conflicts created by inconsistent or duplicative external data sets, significantly improving data integrity and the quality of the user experience…

…If you’re a new user being added on to our system, meaning you’re a new employee just now gaining access to the system, it’s your predominant way to use our software. And so as we look into the future, I would expect we would see more and more people utilizing IWant as a way to access and navigate through our system in order to make changes and receive information than those navigating the traditional way…

…With IWant, the more of our product that you’re utilizing, the more access to information you have. So it becomes important in that sense. As well, with IWant, you’re eliminating all navigation. So you don’t really need training on the system. Most new employees would come into our system and have some level of training on how to use the system. With IWant, we’re just not seeing that with new employees coming on to the system. You just tell it what you want, and it takes you there. And so again, sometimes usage patterns are hard to change. And I don’t think someone should change their usage pattern unless there’s an opportunity to be more efficient or get there quicker. And we’re seeing that with new people that are onboarded in the system. And then we’ve also seen that with traditional users that may not have been achieving full value from all the modules that they have…

…I, as a CEO, I’m not set up on our benefit system to go run benefit information. I’m not set up on our applicant tracking or talent acquisition system. I’m not set up on our payroll to run all the payroll stuff or HR or any of it, expenses, any of it. With IWant, I can go in and I get access to everything. I don’t need to know how to use it. I don’t need to know how to do anything. I just tell it the information that I want…

…[Question] We see some AI systems out there that users may use initially, but then go back to how they’ve operated before. Can you share a little bit about the ramp and consistency of usage you’re seeing so far?

[Answer] We’re not seeing people use it a couple of times and then stop using it. I will say that when you looked at it in the early days, people didn’t know how to use it. If you ask IWant where the closest pizza restaurant is to you, it’s not going to be real successful in answering that question. And so people had to kind of learn how to use it to their benefit. And it’s been a short period of time. Again, we’ve had IWant out since July. And every client we have has it and all their employees do now.

Paycom’s management thinks that command-driven functionality will be the future for all software

I’m confident that command-driven functionality is the future for all software.

Paycom’s management has significantly expanded Paycom’s data center capabilities to support the company’s push for automation and future AI developments; management frontloaded $100 million in capex in 2025 Q3 to match the IWant rollout; the $100 million capex provides Paycom with multiyear capacity to support its AI initiatives; management has to do extensive optimisation to run IWant on Paycom’s own infrastructure; management thinks it’s more expensive to rent AI compute capacity than to build Paycom’s own capacity, especially since Paycom has been operating its own data centers for the last 27 years

To facilitate the automation experience, including IWant and future AI developments in the pipeline, we significantly expanded our data center capabilities, spending roughly $100 million of AI-focused CapEx on our Phoenix and Oklahoma City data centers. We front-loaded this CapEx to match the timing of our IWant rollout in Q3…

…More specifically, we invested approximately $100 million into our data centers, and that spend is now largely complete. This investment provides us a multiyear capacity runway to support our AI initiatives…

…[Question] Are you guys doing anything to optimize the usage of GPUs to better handle the millions of queries you’re already seeing, whether it be in the underlying LLM or just teaching users what they can and can’t do?

[Answer] There’s a lot you have to do to optimize. It matters how many times you’re hitting it. It matters how you’re filtering through. We use these things to also look at nonresponse rates and everything else. So there’s a lot that we go through to be able to analyze. And this is a daily analysis of what’s going on within our product. So I don’t want to describe everything that we’re doing. It does matter, though, how you develop something as to how much capacity of the GPU you’re going to actually utilize or need…

…We also looked at utilizing public cloud type data centers, if you will, to be able to host for us and utilizing their GPUs. And with where we see ourselves going in the future and what the costs were associated with just being able to handle our current load, initial load for IWant, we felt it better for us to go ahead and just set up and buy our own plus that way we have control over it, and it’s operating just as all the rest of our business has for the last 27 years of operating our own data centers. So it’s really worked for us.
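
Paycom does not detail its optimizations, but the levers named in the answer above (filtering queries before they reach the GPUs, caching, and tracking non-response rates) are standard ways to stretch inference capacity. A hypothetical sketch, with every name, word list and threshold invented:

```python
from functools import lru_cache

metrics = {"answered": 0, "filtered": 0, "cache_hits": 0}

# Toy scope filter: out-of-scope questions ("where's the closest pizza
# restaurant?") never reach the GPUs.
IN_SCOPE = ("pay", "pto", "benefit", "tax", "schedule", "expense")

def gpu_inference(query: str) -> str:
    """Stand-in for the expensive GPU-hosted model call."""
    return f"answer({query})"

@lru_cache(maxsize=100_000)
def cached_inference(normalised: str) -> str:
    # Identical queries from thousands of employees hit this cache
    # instead of the GPUs.
    return gpu_inference(normalised)

def answer(query: str) -> str:
    q = " ".join(query.lower().split())       # normalise to raise hit rate
    if not any(word in q for word in IN_SCOPE):
        metrics["filtered"] += 1              # logged as a non-response
        return "I can help with HR and payroll questions."
    hits_before = cached_inference.cache_info().hits
    result = cached_inference(q)
    if cached_inference.cache_info().hits > hits_before:
        metrics["cache_hits"] += 1            # served without GPU time
    metrics["answered"] += 1
    return result
```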

Paycom’s management does not see the need for major capex again in the next few years after the $100 million capex for AI infrastructure

We did have to make a spend in order to have that capacity for both what we’re doing now and into the future. So we’re in this business now. I don’t expect that we would have any level even close to this type of spend over the next couple of years…

…I was just saying I don’t know of any major CapEx opportunities for next year or even the year after from a CapEx perspective…

Paycom’s management is not seeing Paycom’s competitors coming up with AI-powered solutions

To the extent our competitors do have AI, we’re not running into it when we talk to their clients. And I don’t know how they’re paying for it because when we looked into it, it was pretty expensive to rent.

Sea Ltd (NYSE: SE)

Sea’s management is seeing Shopee’s AI efforts contribute meaningful monetisation gains in 2025 Q3; Shopee’s AI efforts include (1) smarter search, (2) better recommendations, (3) more personalized content, (4) enhanced product discovery for shoppers, and (5) generative AI tools for sellers to make their product listings more appealing; Shopee’s AI efforts have led to the following in 2025 Q3: (1) a 10% year-on-year increase in purchase conversion rate, (2) a 12% year-on-year increase in buyer purchase frequency, and (3) a 15% year-on-year increase in average monthly active buyers; Shopee will not be building foundational large language models or data centers like what Big Tech is doing; management wants Shopee to utilise AI technologies developed by Big Tech, and focus on applications; the majority of Shopee’s customer service is now handled by AI and customer satisfaction is very high

Our AI efforts have already begun to bear fruit, contributing meaningfully to our monetization gains in the third quarter. Smarter search, better recommendations and more personalized content have made Shopee easier and more enjoyable to shop on. We have also used AI to enhance product discovery beyond search, helping buyers find relevant and interesting items even when they arrive without a specific purchase in mind. We empowered sellers with AI tools, enabling them to generate image, videos, text descriptions and virtual showrooms to make their product listings more appealing. These initiatives have increased buyer engagement, improving our purchase conversion rate by 10% year-on-year in the third quarter. Taken together, all these efforts have resonated with our customers. Buyer purchase frequency across our markets continued to improve, going up a further 12% year-on-year in the third quarter. Average monthly active buyers also increased 15% year-on-year in the third quarter…

…We’re not going to try to develop some fundamental large language model breakthrough. We’re not going to build data centers. For that part, we are very much open to working with all the big tech companies; we have a lot of admiration for how much effort they put in and how much they can do to continually push the technology forward and make it more powerful and more useful. What we are going to focus on is applications, and how that technology built in Silicon Valley or anywhere in the world transforms a consumer’s daily life or a small business in Indonesia, in Vietnam, in Brazil. That will be especially what we are good at…

…Now the majority of our customer service is handled by AI, like a chatbot, and the satisfaction rate is very, very high.

Shopify (NASDAQ: SHOP)

Shopify’s management sees Shopify holding the data-advantage in the AI revolution in commerce; Shopify has structured data across billions of products, which helps its AI partners surface relevant products quickly

If AI is fueled by data, then Shopify has a clear advantage. We power millions of merchants and billions of transactions. That gives us access to a world of data across a spectrum of commerce. And we’re using that data to create better shopping experiences for both merchants and shoppers…

…We’ve structured data across billions of products so our partners can surface the most relevant items in seconds.
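
The value of structured data is easiest to see in code: with consistent fields, an agent's "waterproof running shoes under $150" becomes a simple filter rather than scraping free-form product pages. The records and schema below are invented for illustration and are not Shopify's actual Catalog format:

```python
# Hypothetical, simplified product records; the real Catalog schema
# is richer and not reproduced here.
catalog = [
    {"title": "Trail Running Shoe", "price": 129.0, "currency": "USD",
     "tags": {"running", "trail", "waterproof"}, "in_stock": True},
    {"title": "Road Running Shoe", "price": 99.0, "currency": "USD",
     "tags": {"running", "road"}, "in_stock": False},
]

def agent_search(wanted: set[str], max_price: float) -> list[dict]:
    """How an agent can answer 'waterproof running shoes under $150':
    a structured filter, not page scraping."""
    return [p for p in catalog
            if p["in_stock"]
            and p["price"] <= max_price
            and wanted <= p["tags"]]

print(agent_search({"running", "waterproof"}, 150.0))
# -> [{'title': 'Trail Running Shoe', ...}]
```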

Shopify’s management is thinking of AI in 3 ways, (1) how AI helps merchants sell better, (2) how AI helps merchants operate better, and (3) how AI helps Shopify operate better

We think about the evolution of AI in 3 ways: how AI will help our merchants sell everywhere, how AI will help our merchants operate smarter, and how we, as a company, will use AI to build better.

Shopify’s management thinks agentic commerce will fundamentally change how consumers shop; Shopify has a number of tools for merchants to thrive with agentic commerce, namely, Catalog, Universal Cart, and Checkout Kit; management thinks agentic commerce has 3 layers, (1) discovery, (2) the purchasing experience, and (3) the post-purchase journey; management is building for a seamless and intuitive shopping experience across the 3 layers; leading AI players, including ChatGPT and Perplexity, are already using Shopify’s Catalog tool to power product discovery directly inside their chat interfaces; Universal Cart and Checkout Kit are powering in-chat shopping flows within ChatGPT and Microsoft Copilot; Shopify is building tools that help AI agents keep customers engaged and informed throughout the entire post-purchase experience; management believes that Shopify is helping its merchants be primed for success in agentic commerce; management has seen AI-driven traffic to Shopify stores grow 7x, and orders attributed to AI searches up 11x, since January 2025; a recent survey of shoppers by Shopify showed 64% of shoppers are likely to use AI for their buying in BFCM (Black Friday – Cyber Monday); management thinks agentic commerce is still really early; Shopify’s AI agent tools were built only last year

AI is helping our merchants sell everywhere, what’s known as agentic commerce. Put simply, AI is able to fundamentally change how we shop, moving from search to conversation, helping all consumers purchase more efficiently. And that’s why we built the commerce for agents tools that we introduced on our last call, Catalog, Universal Cart and Checkout Kit. These tools make it easier for agents to shop across merchant stores on a buyer’s behalf…

…Agentic commerce is so much more than just the last click. Think about it in 3 layers: product discovery, purchasing experience and the post-purchase journey. Now if you’re only looking at the payment or checkout layer, you’re missing the bigger picture of what we’re building: a seamless and intuitive shopping experience end to end.

First, let’s talk discovery. We’ve structured data across billions of products so our partners can surface the most relevant items in seconds. It’s clear where this is going. Shopping is becoming more conversational, more personalized and much more efficient. And that’s why the leading AI partners are already using Catalog to power product discovery inside their experiences. I’m sure you all saw the announcement about our partnership with ChatGPT, which is a strategic play that we’re really excited about. But let me be clear, we’re also partnered with other leaders in conversational AI like Perplexity, and our goal is to power product discovery for all agents, making us the standard across the Internet…

…On purchasing experience. Once a shopper finds what they want, Universal Cart and Checkout Kit make add to cart and checkout seamless inside the conversation. ChatGPT, along with Microsoft Copilot have already partnered with us here to make in-chat shopping flows possible.

And finally, post purchase. We’re investing in tools that help agents keep customers engaged and informed, order status, return, support, reorder prompts, so the experience stays smooth and merchants build durable relationships with their customers…

…What all this should tell you is that our merchants are primed for success in the new world of agentic commerce…

…Since January, we've seen AI-driven traffic to Shopify stores up about 7x. And we've actually seen orders attributed to AI searches up about 11x since then. So the data is showing it's already growing. And we actually just recently did a survey of consumers to better understand some BFCM trends, and something like 64% of shoppers told us they're likely to use AI to some extent in their buying…

…It's still obviously very, very early. But what we're really trying to do is lay the rails for agentic commerce…

…We built AI agent tools last year, now we’re partnering with everyone that matters. 

Shopify's AI assistant for merchants, Sidekick, saw 750,000 shops using it for the first time in 2025 Q3; to-date, Sidekick has had almost 100 million conversations with merchants, with 8 million in October 2025 alone; merchants' conversations with Sidekick can go 50-100 turns deep, covering a wide range of topics; Sidekick will get better over time; Sidekick was built 2 years ago, before there was hype about AI assistants

Sidekick, our on-platform intelligent assistant, is a prime example of that commitment. And frankly, the rate of adoption speaks for itself. In Q3 alone, over 750,000 shops used Sidekick for the first time. And to date, Sidekick has had almost 100 million conversations with merchants, with 8 million in October alone. And it's quickly becoming the default way merchants get things done. Hundreds of thousands of merchants are running core parts of their business using Sidekick. In fact, conversations can go 50 to 100 turns deep, covering everything from analytics and building new customer segments, to automating better SEO and so much more…

…At this scale, Sidekick will only get smarter and more powerful…

…We built Sidekick 2 years ago, well before any of the hype around that.

Shopify's management is using AI to drive Shopify to build better products; Shopify has an internal tool known as Scout, which is a voice-of-the-customer system that indexes hundreds of millions of merchant feedback items and makes them searchable; anyone in Shopify can use Scout to get grounded answers in seconds when similar requests would have taken weeks in the past; Shopify is developing other tools similar to Scout to make faster, better decisions

The last thing I’ll touch on with AI is how we’re using it to build better products. For years, we’ve been honing our internal capabilities in the same way we’ve been empowering our merchants: shipping fast, measuring what matters and scaling what works using AI…

…We’re turning vast amounts of raw signal into ship products and features quickly and relentlessly…

We have a tool affectionately known as Scout. Now Scout is an internal voice-of-the-customer system that indexes hundreds of millions of merchant feedback items, making them searchable within our tools. Any PM, designer, engineer or, frankly, anyone at the company, including myself and Jeff, can ask a question and get grounded answers in seconds. That used to take weeks. Patterns emerge by market, vertical and merchant size, allowing us to write clear specs, prioritize better and ship with confidence. And Scout is just one of many tools we're developing to turn our own signals, whether it's support tickets, usage data, reviews, social interactions or even Sidekick prompts, into fast, informed decisions.

Shopify Campaigns saw a 9x year-on-year increase in budget commitments and a 4x year-on-year increase in merchant adoption in 2025 Q3; management has delivered product improvements to Campaigns, including an AI-powered ranking improvement

We’ve seen 9x year-on-year increase in budget commitments from merchants this quarter for Campaigns. In fact, if you just look at Q3 2024 to Q3 2025, we’ve actually seen a 4x year-on-year growth in merchant adoption of Campaigns…

…On the product side, this thing keeps getting better and better. We introduced Gross Sales, which is this new default high-reach objective in campaigns. We just shipped an AI-powered ranking improvement, which is showing some really good early results in terms of performance gains.

Tencent (OTC: TCEHY)

Tencent’s investments in AI are benefiting its ad targeting, game engagement, coding, and gaming and video production activities; management is currently seeing AI efficiency gains in the form of growth in revenue and gross profit; management thinks that AI enables Tencent to build more, instead of reducing costs

Our strategic investments in AI are benefiting us in business areas such as ad targeting and game engagement, as well as efficiency enhancement areas such as coding and game and video production…

… And if you look at the benefit of AI, at this stage, a lot of the efficiency gains are more on the revenue side and the gross profit side. So you see pretty good growth in those items. But in terms of the cost items, I would say we have already done a pretty big organizational optimization a few years back. And the organization that we have is actually efficient, and AI adoption actually allows our team to do more, instead of to reduce cost, which I think is the case for some other companies you are probably comparing us with.

Tencent’s management is upgrading the team and architecture of Tencent’s Hunyuan foundation model; management believes that Hunyuan’s imaging and 3D generation models are industry-leading; management is hiring more top-notch research talent for Hunyuan; management believes that Hunyuan’s capabilities will improve, and that all the models in China are currently pretty similar 

We are upgrading the team and architecture of our Hunyuan foundation model, whose imaging and 3D generation models are now industry-leading…

…In AI, we enhanced Hunyuan's large language models' complex reasoning capabilities, especially in coding, mathematics, and science. Our Hunyuan image generation model is ranked first globally among text-to-image models by LMArena. And our Hunyuan 3D model is the top-ranked generative model on Hugging Face…

…In terms of the Hunyuan team and the Hunyuan architecture, we are actually hiring more top-notch talent, especially in the research area, in order to complement our existing strong engineering team, and they are complementary to each other. And we have also been improving the overall Hunyuan architecture across different dimensions, such as improving the hardware and software infrastructure in order to support better data preparation, better pretraining of the model, as well as reinforcement learning across different knowledge domains at scale. So these are the improvements that we are making, more specifically on the Hunyuan team as well as the Hunyuan architecture…

…And I would say we are actually happy with the progress we have made already. And if you wait a little bit for our next model, you can see meaningful improvement in terms of the Hunyuan capability. And I believe with the new improvements that we have been making, we'll continue to pick up pace on the Hunyuan capability. And at this point in time, we actually do not believe that there is a decisively better model in China, as everybody is actually locked in a pretty close range, and different models may be better in different use cases as well. So we don't believe we are really behind.

Tencent's management thinks Weixin will gain further traction as Hunyuan becomes more capable, Yuanbao becomes more widely used, and more agentic AI capabilities are introduced by Tencent; Tencent's AI assistant Yuanbao has new features to serve Weixin users better, such as Yuanbao-generated content in the Tencent News Feed; management wants to add more functionalities from Yuanbao into Weixin; management thinks that as Weixin users get exposed to Yuanbao's capabilities through Weixin, they will become Yuanbao app users too; management is currently seeing a pretty good ramp in Yuanbao engagement; management's blue-sky scenario for agentic AI is an AI agent that can help users perform a multitude of tasks within the Weixin ecosystem; Tencent is still very, very early in building a capable AI agent; management is also starting to work on vertical agentic capabilities

As Hunyuan's capabilities continue to improve, together with our investment in growing Yuanbao adoption and our efforts in developing agentic AI capabilities, we think Weixin will gain further traction…

…Now in terms of how Yuanbao and Weixin complement each other. I would point to the fact that Weixin has actually introduced a number of AI features based on Yuanbao’s capability…

And we also enriched the Tencent News Feed in Weixin with Yuanbao-generated content, which allowed a lot of users to use that as a way to explore more related news content, as well as ask questions on the news content. And we are planning to add more functionalities of Yuanbao into Weixin. Those functionalities, one, serve the Weixin users better; and two, actually help Yuanbao to gain a larger audience, as more and more of these audiences find Yuanbao's capability through Weixin and eventually become Yuanbao app users…

…We actually have been also seeing quite a good ramp in terms of Yuanbao engagement. So I think you see both the model capability as well as our AI products keep on improving…

…I think the blue sky scenario is that eventually, Weixin will come up with an AI agent that can actually help the user to do a lot of tasks within Weixin, leveraging AI. Because if you look at the ecosystem of Weixin, it has a very strong communications and social ecosystem, and it has a lot of data that allows the agent to understand the users, their feeds, as well as their intentions and interests. It has a very strong content ecosystem in the form of official accounts and video accounts. It has the mini-program ecosystem, which essentially includes most of the use cases on the Internet. It has a commerce ecosystem, which allows people to buy stuff, and the payment ecosystem, which actually allows people to pay for it almost immediately. So that is almost the ideal assistant for users: it understands the users' needs and can actually perform all the tasks within the ecosystem. So that's the blue sky scenario.

Now, how do we get there? At this point in time, it's actually very early stage in terms of development. Weixin is doing a number of things in parallel. For example, it's introducing Yuanbao capabilities into Weixin so that we can test out a lot of the AI features on a stand-alone basis. It's also enhancing search with AI so that we can serve users' search, information collection, and analysis needs more efficiently.

We are also starting to work on vertical agentic capabilities. That's something that we are working on; we have not launched it yet. But very likely, we'll be working on the functionalities one by one.

Tencent's management has introduced AI Marketing Plus, Tencent's automated ad campaign solution for targeting, bidding, placement, and ad creation; AI Marketing Plus helps improve advertisers' return on marketing spend; increases in commercial query volume and click-through rates contributed to notable revenue growth in Weixin Search; management has increased the relevance of Weixin search ads through the upgrade of LLM capabilities; AI Marketing Plus helps advertisers reach inventories and user profiles automatically rather than manually, and SMBs are the most eager to adopt AI Marketing Plus; management is also seeing large enterprises being interested in AI Marketing Plus, in a similar manner to how enterprises are adopting Meta Platforms' Advantage+ automated ad campaign platform; when AI Marketing Plus was initially released, large advertisers had some trust issues, but they started adopting it when they tested it and saw superior ROI (return on investment); the percentage of Tencent's advertisers and the percentage of Tencent's advertising spending that are going through AI Marketing Plus are steadily increasing

We introduced our automated ad campaign solution, AI Marketing Plus through which advertisers can automate targeting, bidding and placement, as well as optimize ad creation, improving their return on marketing investment…

In terms of the AI Marketing Plus automated campaign solution, we believe the automated ad campaign solution benefits all the advertisers who deploy it by enabling them to automatically reach inventories, as well as user profiles, that are more performant than the inventories and user profiles they were manually targeting. You're right to say that small- and medium-sized businesses are the first — or the most eager — to adopt this kind of product because they have the least legacy process to replace, and that's what we're experiencing right now. But we're also seeing bigger advertisers adopting AI Marketing Plus too, which parallels the experience of Meta's Advantage+ automated ad solution overseas…

…In terms of the advertising revenue, roughly half of the growth, or about 10 points, was due to a higher CPM, which is attributable primarily to AI-supported ad tech as well as to closed-loop benefits. And then the other half was due to increased impression volume, which reflects increased user engagement. In terms of the commercial payment volume trends, there is a measured improvement…

…[Question] On the AI Marketing Plus product, are there any early data points on the performance and ROI for merchants?

[Answer] When you introduce this automated ad campaign system, the biggest sort of leap for the advertisers is allowing us as the platform operator to actually manage the bidding process on their behalf. And of course, there’s a degree of internal conservatism within the bigger advertisers as to whether to entrust the platform to manage the price or not. And typically, the larger advertisers will run the automated and the manual processes in parallel for a period of time and compare the ROI to verify whether the automated process is delivering more performance or not. And we’ve turned on that automated bidding tool relatively recently. But the early results are positive, those advertisers who are adopting the automated solution are enjoying superior returns. And therefore, the percentage of our advertisers and the percentage of our advertising spending that is going through AIM+ are steadily increasing. 

Within the Fintech and Business Services segment, Business Services revenue grew in the teens year-on-year in 2025 Q3, despite supply constraints with GPUs; management thinks Tencent's current stock of GPUs is sufficient for the company's internal usage, although there are some shortages for external usage; Tencent's cloud revenue would have grown more if not for GPU constraints, because management prioritises internal consumption of GPUs

Turning to Business Services. Despite supply chain constraints on sourcing GPUs, revenue grew at a teens rate year-on-year in the third quarter, benefiting from higher cloud services revenues and increased technology service fees generated from rising Mini Shop e-commerce transaction volumes. Revenue from our cloud storage and data management products, namely Cloud Object Storage, TCHouse, and VectorDB, grew notably year-on-year due to increased demand, including from leading automotive and Internet companies. And for WeCom, we launched an AI summarization feature to generate project recaps and provide advice based on users' e-mails and conversations, to enhance project collaboration efficiency…

In terms of Yuanbao adoption and also the CapEx spending at this point in time: we actually believe that there's no insufficiency of GPUs for us at this moment. Our GPUs are actually sufficient for our internal use. But there is some limiting factor for external cloud revenue…

…In terms of the cloud business, I think we have finally been increasing our revenue this year. In the past few years, our revenue has not grown that much, but our gross profit has grown very significantly. And this year, we're growing both the revenue as well as the gross profit, and the business is actually profitable. One constraint on cloud business growth is the availability of AI chips, because when AI chips are in short supply, we actually prioritize internal use as opposed to renting them out externally. Another way to say it is: if there were no AI chip supply constraint, our cloud revenue would be growing more.

Tencent's operating capex in 2025 Q3 was down 18% year-on-year because of supply challenges; non-operating capex was down 59% year-on-year; free cash flow was largely flat year-on-year and up 36% sequentially; there is a difference between capex and cash-capex because of timing differences; management has lowered the capex target for 2025 from the previous level of a low-teens percentage of revenue; the lower capex target for 2025 is because of AI chip availability

Operating CapEx was RMB 12 billion, down 18% year-on-year, primarily due to supply challenges. Nonoperating CapEx was RMB 1 billion, down 59% year-on-year, reflecting a higher base last year related to construction. Free cash flow was RMB [ 38.5 ] billion, largely stable year-on-year, as operating cash flow growth was offset by higher CapEx payments. On a quarter-on-quarter basis, free cash flow was up 36% due to higher games gross receipts…

…[Question] This quarter, CapEx was around RMB 13 billion, but the cash payment for CapEx was RMB 20 billion. So how should we interpret the difference between these 2 figures?

[Answer] In terms of CapEx, the difference is a timing gap between the accrual of server-related expenditure and the cash payment, which can cause temporary mismatches between the two. In particular, the credit period for us to pay server suppliers is usually 60 days…
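For intuition on how a 60-day credit period creates that mismatch, here is a minimal sketch in Python; the RMB 13 billion accrual and RMB 20 billion cash payment are from the call, while the prior-quarter accrual figure is purely hypothetical:

```python
def cash_paid(prev_accrual_bn: float, this_accrual_bn: float, lag_months: int = 2) -> float:
    # With a ~60-day credit period, roughly the last `lag_months` of a quarter's
    # accruals are settled in the following quarter.
    carried_in = prev_accrual_bn * (lag_months / 3)      # prior quarter's bills now due
    paid_early = this_accrual_bn * (1 - lag_months / 3)  # this quarter's bills already paid
    return carried_in + paid_early

# Q3 accrued CapEx was ~RMB 13bn while cash CapEx was ~RMB 20bn; a hypothetical
# prior-quarter accrual of ~RMB 23.5bn would reconcile the two under this lag:
print(cash_paid(23.5, 13))  # -> 20.0 (RMB bn)
```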

…In terms of the CapEx for 2025, to share with you: in 2024, our total CapEx grew by 221% year-on-year and was about 12% of revenue. Previously, for 2025, we guided total CapEx as a percentage of revenue to be at low teens. The 2025 CapEx will be lower than our previously guided range, but the amount will be higher than that of 2024…

…[Question] The capex for 2025 will be lower than the previous guidance, but higher than the '24 actual capex spending, if I get that right. Does it reflect a change in AI chip availability, a change in investment strategy, or a change in your expectation of future token consumption?

[Answer] It’s not a change in terms of expectation of future token consumption. It is indeed a change in terms of AI chip availability.

The Trade Desk (NASDAQ: TTD)

Trade Desk’s management sees AI improving the effectiveness of the open Internet by allowing vastly superior price discovery

AI is accelerating the improved effectiveness of the open Internet. Of course, every significant AI innovation and AI product needs quality data. The most valuable data to an advertiser is their own conversion and customer data. We will win long term because we built a business where buyers can own their future, which requires them to own, protect and use their own data. AI is fast-tracking progress for companies that are eager to put their data to work and can leverage automation intelligently…

…AI is accelerating the path to the open Internet having vastly superior price discovery and fungibility. A world with better price discovery and better open Internet supply chains for the quality content will decrease the value of user-generated content destinations and other similar apps and sites that are full of ads and unsafe content.

Nearly all of Trade Desk’s clients have already tried Kokai, and 85% are using Kokai as the default; management thinks Kokai is a significant upgrade over Trade Desk’s previous platform, Solimar, when Solimar was already the most performant DSP (demand side platform) in the world; compared to Solimar, Kokai has delivered 26% better cost per acquisition, 58% better cost per unique reach, and 94% better click-through rate; Kokai has a distributed AI architecture where every function has a separate AI model; management thinks the distributed AI architecture allows Kokai to parallelize all AI efforts and enables checks and balances between disparate functions; Bayer used Kokai to advertise on Spotify and saw 15% growth in incremental reach; Specsavers used Kokai in the UK and saw a 43% reduction in cost of securing customer appointments, and a 50% reduction in conversion time; Danone used Kokai and achieved a 33% increase in conversion rates

Today, nearly all of our clients have tried Kokai with nearly 85% using Kokai as their default experience…

…Kokai is the best upgrade we have ever made to our product relative to all previous versions and certainly relative to Solimar. Campaigns that have switched to Kokai are seeing impressive results. Since its launch, Kokai has delivered on average 26% better cost per acquisition, 58% better cost per unique reach and a 94% better click-through rate compared to Solimar. These are incredible performance improvements on top of what was already considered the most performant DSP in the world.

Kokai has a number of features in it that are game changers for our clients and for the open Internet. We've used the industry's most advanced AI to enhance our system with an architecture we call distributed AI. We break down every function and create separate AI models for each of them, from valuing impressions to managing identity to choosing supply paths to predicting the price required to clear to forecasting the performance and reach of a campaign before a single dollar is even spent. This effort to distribute allows us to parallelize all AI efforts and enables checks and balances between these disparate functions…
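To make the "distributed AI" idea concrete, here is a minimal sketch, not The Trade Desk's actual implementation (all names and numbers below are hypothetical), of how separate per-function models can be composed so that one model's output checks another's:

```python
from dataclasses import dataclass

@dataclass
class SupplyPath:
    floor_price: float    # publisher floor for this impression, $ per impression
    fee: float            # intermediary fees along this path
    predicted_ctr: float  # estimated click-through rate

# One specialised model per function (simple stubs stand in for trained models).
def value_impression(path: SupplyPath, value_per_click: float = 2.50) -> float:
    """Model 1: what the impression is worth to the advertiser."""
    return path.predicted_ctr * value_per_click

def predict_clearing_price(path: SupplyPath) -> float:
    """Model 2: the price likely required to win the auction."""
    return path.floor_price * 1.15

def choose_supply_path(paths: list[SupplyPath]) -> SupplyPath:
    """Model 3: the cheapest all-in route to the same inventory."""
    return min(paths, key=lambda p: predict_clearing_price(p) + p.fee)

def decide_bid(paths: list[SupplyPath]) -> float | None:
    """Checks and balances: the valuation model can veto the bidding models."""
    best = choose_supply_path(paths)
    all_in_cost = predict_clearing_price(best) + best.fee
    return all_in_cost if value_impression(best) > all_in_cost else None

paths = [SupplyPath(0.0040, 0.0010, 0.003), SupplyPath(0.0035, 0.0020, 0.003)]
print(decide_bid(paths))  # bids ~0.0056 only because predicted value (~0.0075) exceeds cost
```

The design point is that each model can be trained, improved, and audited independently, and a bid only goes out when the independent functions agree it makes sense.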

…Bayer recently added Spotify to their omnichannel campaigns on Kokai and saw a 15% growth in their incremental reach…

…Specsavers in the U.K. saw a 43% reduction in the cost of securing customer appointments using Kokai while also cutting the conversion time by almost 50%. Danone saw conversion rates go up by 1/3 for their Actimel yogurt product, leveraging the retail data marketplace and omnichannel strengths of Kokai.

Trade Desk's management has launched and grown several AI-powered products in 2025 that upgrade the supply chain so that advertisers get more bang for buck; OpenPath has grown by hundreds of percent in 2025 9M; OpenPath gives advertisers a clearer picture of the inventory they are buying, and gives publishers a better sense of what advertisers are willing to pay; management has just launched OpenAds, an auction that Trade Desk developed and sometimes hosts as an option for publishers; Trade Desk enables other buyers or DSPs to use OpenAds to bid into a fair auction; Trade Desk is working to integrate OpenAds with more than 20 of the web's biggest publishers; management expects OpenAds to dramatically improve the supply chains of mobile in-app ads and browser-based ads; Trade Desk has launched Pubdesk, which is improving the supply chain by publishing data for the sell side; the data includes what advertisers paid the supply chain and what signals advertisers value; Pubdesk was largely fueled by Sincera, which Trade Desk acquired earlier in 2025; Deal Desk helps advertisers better manage one-to-one deals; Deal Desk is powered by AI and can predict how a deal will perform relative to the open market; management thinks Deal Desk can replace outdated upfronts; deals on Deal Desk are performing 35% better than those running on Solimar; OpenPath connects directly into premium publisher auctions; Disney is using OpenPath because it wants its premium inventory to be correctly assessed and valued; management will open source key elements of OpenAds; Hearst used OpenPath and achieved a 4x improvement in ad fill rates and a 23% revenue increase; SSPs (supply side platforms) are integrating with Deal Desk; Trade Desk's use of AI in supply path optimization is finding better paths to publishers, with double-digit percentage efficiency gains

It cannot be overstated how much AI has changed and will change our business and the open Internet. This year, we’ve launched and grown several products that are solely focused on substantially upgrading supply chains so that buyers get more for their money.

We have grown OpenPath by many hundreds of percentage points this year, which means our clients are getting clear views of exactly what they’re buying and publishers have a clearer sense of what advertisers are willing to pay when they describe their inventory in a transparent and accurate way.

OpenAds is an auction that we develop and sometimes host as an option for publishers. We then bid into a fair auction and even enable other buyers or DSPs to do the same thing, too. The market needs a healthy auction and some sell-side players have continually weakened the integrity of the auction. So we’re developing an open source option that raises that bar. We just launched it, and we’re already working to integrate with more than 20 of the biggest publishers on the web. We expect this to dramatically improve the supply chains of mobile in-app ads and browser-based ads, which, of course, can use the help in an AI scraping world.

Pubdesk is improving the supply chain by publishing data for the sell side. Resellers, sellers and publishers can log into the platform and see what we paid the supply chain, what signals we value and adjust their sites and inventory to get more. This is largely fueled by the Sincera team and data that we acquired earlier in the year.

Deal Desk is a better way to manage one-to-one deals. Not only does it facilitate the buy, but using AI, it predicts how a deal will perform relative to the open market. This product enables advertisers to do deals, but also gives them the unprecedented data and tools to avoid bad deals. It is important to note that this product will be foundational to a healthy forward market that can replace the outdated upfronts. So far, deals on Deal Desk are performing about 35% better than those running on Solimar, which is more similar to the way they run everywhere else in the programmatic ecosystem…

…OpenPath connects directly into many of these premium publisher auctions and companies like Disney do this because they want to ensure that their premium inventory can be correctly assessed and valued…

…We will open source key elements of OpenAds, and we will expose its mechanics for review. Just like recent innovations such as UID2 or OpenPath or Ventura, our intent here is to incentivize a more transparent, competitive marketplace for all…

…Publishers like Hearst are seeing a 4x improvement in ad fill rates and 23% revenue increase when integrating OpenPath…

…On the supply side, SSPs such as PubMatic are integrating with Deal Desk using the new price discovery provisioning API, which helps sellers better understand and identify how they can increase the quality of their inventory.

The injection of AI into our supply path optimization is finding better paths to publishers, with double-digit percentage efficiency gains.

Trade Desk's management thinks that the emergence of agentic search will not change much of the premium open Internet, but it will result in more search-like inventory becoming available, and this inventory will be new premium advertising opportunities that are up for grabs for Trade Desk

[Question] Are you seeing an impact from agentic search on available publisher inventory?

[Answer] We look at roughly 20 million ad impression opportunities every single second. That’s about 1.7 trillion every single day… When you look at that many impressions, and just to be open, we buy a low single-digit percentage of that total, you’d have to when it’s that big. So that means that if we take 20 million down to 15 million per second because of AI, there’s not really much different about our business model, nothing at all…

…Whatever effect the AI search world is having on inventory supply (and I've said over and over again on this call that there's more supply than demand, and it is more of a buyer's market than ever), first of all, is fairly de minimis as it relates to the open Internet at large. We shouldn't define the open Internet as just what happens in a browser. It is much bigger than that. It's everything that happens in CTV, movies, sports, journalism, everything, and that's both in an app and in a browser; it's in every form of media that touches the Internet. If you look at it from that perspective, I don't think it's had any meaningful effect. And because I don't think CTV is going anywhere, I don't think music is going anywhere, and I don't think sports is going anywhere, I think that the premium open Internet will continue to play the most significant role in building brands and doing actual advertising. And I don't think that AI will change that.

I do actually think that there will be more search-like inventory available, which I think represents really premium advertising opportunities. In the past, companies like ours have not had access to inventory like Google's search ad inventory. In a world where that's much more competitive and there isn't a winner-take-all outcome, which I don't think there will be, I think there's going to be a bunch of opportunities for us to buy into their inventory. I think it will actually look a lot like CTV, in the sense that fragmentation will be nearly perfect: there are enough players that there's competition, and no one is big enough to have a monopoly or be draconian, but it's consolidated enough that everybody will be rational and highly competitive. And I anticipate that that will create new advertising opportunities that will have really amazing results and efficacy.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Coupang, Datadog, Meta Platforms, Nu Holdings, Paycom Software, Sea, Shopify, Tencent, and The Trade Desk. Holdings are subject to change at any time.

What We’re Reading (Week Ending 09 November 2025)

Here are the articles for the week ending 09 November 2025:

1. Return on Invested Capital (ROIC): Why High Returns Require More Than High ROIC – Eugene Ng

Investors have been fascinated with return on invested capital (ROIC) and, in particular, seek to invest in businesses that can generate high ROICs. And for good reason: the higher the ROIC, the better the business. Yet businesses with high ROICs alone are insufficient to generate strong long-term investment returns…

…We seek to explain why a company with a high ROIC would not necessarily deliver a similar high long-term total shareholder return.

In addition, businesses must be able to continue reinvesting capital at an attractive ROIC that allows them to grow revenue, earnings, and free cash flows strongly and compound for a long time.

It is not one or the other; it has to be both (unless the business’s valuation/price is really low). Unfortunately, there are very few companies that can do both, especially over a long period…

…Currently, all tangible and intangible assets, whether purchased or acquired via M&A, are capitalized on the balance sheet and expensed in the income statement over their useful lives.

However, internally generated intangibles are not capitalized; they are immediately expensed on the income statement rather than recorded on the balance sheet and amortized over time. This is because accountants are uncertain about the sales that these investments may generate. So, to be conservative, they do not apply the matching principle of sales and expenses, and expense the outlays immediately. This causes near-term expenses to rise and profits to fall.

This significantly depresses near-term profitability, making companies that are spending a lot on intangibles seem less profitable than they really are and more expensive by conventional valuation metrics (e.g., price-to-book (PB) or price-earnings (PE) ratios), particularly when they are heavily reinvesting early on…

… The best companies in specific sectors (i.e., 80th percentile) tend to generate much higher ROICs. For example, looking at adjusted ROIC, the sectors are software, computer & peripherals, semiconductor equipment & products, IT consulting & services, communications & equipment, internet software & services, internet & catalog retail, biotechnology, and tobacco…

…Companies only create value when they (1) keep growing durably for an extended period of time, and (2) earn a return on capital that exceeds their cost of capital consistently.

For a company to keep growing fast, there must be a significant opportunity and a large total addressable market (TAM) to reinvest new capital at high returns relative to costs and to penetrate and gain market share. The faster they can grow, the greater the cash flows and value creation.

Conversely, competitive advantage is what sustains growth and high ROIC. Companies with attractive profitability will tend to attract new entrants seeking to compete profits away from the incumbents, causing ROICs to mean-revert.

A business with a wide moat and numerous competitive advantages in a highly monopolistic/duopolistic/oligopolistic market structure with strong unit economics tends to sustain higher ROIC durably over extended periods…

…Only a small percentage of the entire universe (55,321 companies) has very high ROICs: ~5.5% have >20% ROIC, ~3.6% have >25% ROIC, ~2.4% have >30% ROIC, and ~1.5% have >40% ROIC…

…ROIC is a static snapshot in time. How ROIC changes over time matters as well. One should focus not just on the absolute ROIC, but also on return on incremental invested capital (ROIIC). Think of ROIC as stock, and ROIIC as flow. If incremental capital is reinvested at ROIICs that are even higher than high ROICs, it will drive ROICs higher over time, and vice versa…
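As a rough illustration of the stock-versus-flow distinction (all figures below are hypothetical), a company whose incremental capital earns a 25% ROIIC against an existing 15% ROIC will see its blended ROIC drift upward over time:

```python
# Hypothetical company: existing capital of 100 earns a 15% ROIC,
# while each new dollar reinvested earns a 25% ROIIC.
capital, nopat = 100.0, 15.0
roiic, reinvest_rate = 0.25, 0.50

for year in range(1, 11):
    new_capital = reinvest_rate * nopat   # the "flow": this year's reinvestment
    nopat += roiic * new_capital          # incremental profit on incremental capital
    capital += new_capital                # the "stock": cumulative invested capital
    print(year, f"blended ROIC = {nopat / capital:.1%}")
# Blended ROIC climbs from 15% toward the 25% ROIIC as new capital compounds.
```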

…Revenue growth translating into earnings growth is the single most significant contributor to rising stock prices. If companies can keep growing earnings for years and decades, and if the stock market is not too exorbitantly expensive, one will likely still end up with a fine-looking result. Earnings are the weighing machine for stock prices over the long term…

…The reinvestment rate measures the percentage of earnings that a company plows back into the business every year (i.e., reinvestment / net income).

ROIC measures the return the company makes on these reinvested earnings…

…Suppose a 19.5% ROIC company is unable to reinvest any capital and does not grow earnings. Assume it trades at a 15x PE (with no change in valuation multiples). If the company chooses to return 100% of its earnings via share buybacks or dividends, shareholders would effectively receive an implied 6.7% (1/15) earnings yield via buybacks/dividends/higher enterprise value. Adding the 0% earnings growth, that renders a combined total shareholder return of just 6.7%, significantly lower than the company's much higher ROIC of 19.5%.

Whereas if this 19.5% ROIC company reinvests more of its earnings to achieve higher earnings growth, its total shareholder returns tend to converge to the higher combination of its earnings growth and its earnings yield (assuming no change in PE valuation multiples). The math is counterintuitive. The implications are profound.
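The convergence the author describes can be checked with a few lines of arithmetic. A minimal sketch in Python, assuming a constant 15x PE, and comparing full payout against full reinvestment at the article's 19.5% ROIC:

```python
ROIC, PE = 0.195, 15  # the article's figures; PE assumed constant throughout

def total_shareholder_return(reinvest_rate: float) -> float:
    growth = reinvest_rate * ROIC            # earnings growth from retained earnings
    payout_yield = (1 - reinvest_rate) / PE  # earnings yield on the distributed portion
    return growth + payout_yield

print(f"{total_shareholder_return(0.0):.1%}")  # 6.7%  -> the article's no-growth case
print(f"{total_shareholder_return(1.0):.1%}")  # 19.5% -> full reinvestment converges to ROIC
```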

Notably, the price (i.e., PE ratio) matters much more when earnings are growing much more slowly, and matters less when earnings are growing much faster.

2. The Great Decoupling of Labor and Capital – Abdullah Al-Rezwan

Almost two decades ago, in 2007, Hewlett-Packard (HP) became the first tech company to exceed the $100 Billion annual revenue threshold. At that time, HP had 172k employees. The very next year, IBM joined the club, but IBM had almost 400k employees…

…Alphabet required 76k employees to get to their first $100 Billion. Their most recent incremental $100 Billion? Just 11,000! (assuming they add another 3k employees in 4Q’25)…

…Historically, Microsoft used to be a much more human-capital-intensive company, as it required 124k and 97k incremental employees to reach the $100 Billion and $200 Billion revenue milestones respectively.

But their most recent $100 Billion? Only SEVEN thousand!…

…Meta is the youngest of these companies. They will likely reach $200 Billion revenue milestone next quarter. Their first $100 Billion took 63k employees while the recent one will likely take one-third of that number!…

…Amazon really didn't exhibit much of a pattern on its journey to the $500 Billion revenue milestone. In fact, its hiring pattern is perhaps the poster child of post-pandemic over-hiring, as the company really was in the thick of a pandemic-induced massive upward demand shock and misread the post-pandemic hangover. While historically it took 200k to 400k employees for each incremental $100 Billion of revenue, it added its last $200 Billion of revenue with only 36k incremental employees!…

…I wouldn’t be surprised if Amazon reaches $1 Trillion revenue in 3-4 years by adding only ~100-200k incremental headcount. If that happens, it would mean while Amazon required 1.5 million employees to get to $500 Billion revenue, the next $500 Billion revenue would come with only ~10-15% of incremental headcount!

Of course, I haven’t even mentioned the largest company in the world: Nvidia! When they reached $100 Billion LTM revenue in 2024, they only had 30k employees and they will likely reach their next $100 Billion with only ~6-8k incremental headcount!

The trend isn't necessarily confined to tech companies either. Walmart's full-time employee count remained relatively constant for the last 10 years while its revenue grew by $200 Billion during this period. In fact, Walmart recently mentioned that headcount will remain static for the next three years as well. So, it is likely that Walmart will have added $300 Billion of incremental revenue since 2015 with basically no incremental headcount!
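A quick back-of-the-envelope calculation, using only the incremental-revenue and incremental-headcount figures quoted above, makes the decoupling vivid:

```python
# (incremental revenue in $bn, incremental headcount), as quoted in the article
milestones = {
    "Microsoft, first $100bn": (100, 124_000),
    "Microsoft, latest $100bn": (100, 7_000),
    "Alphabet, first $100bn": (100, 76_000),
    "Alphabet, latest $100bn": (100, 11_000),
}
for name, (rev_bn, heads) in milestones.items():
    per_head_m = rev_bn * 1_000 / heads  # $bn -> $m, divided by incremental hires
    print(f"{name}: ~${per_head_m:.1f}m incremental revenue per incremental employee")
# Microsoft: ~$0.8m per hire for the first $100bn vs ~$14.3m for the latest.
```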

3. Missing a bidding war: a mea culpa on Metsera ($MTSR) – Andrew Walker

Pfizer announced a deal to acquire Metsera (MTSR) for $47.50/share plus a CVR in late September (per the proxy, the all-in value of that offer is ~$54.66/share; see p. 45). If you read the MTSR proxy, you could see that there was actually a higher bid for MTSR; page 44 of the proxy notes that "party 1" had made an offer that was valued at $59.46, but the board determined to go with the Pfizer offer for a variety of reasons, most notably "potential regulatory risks."

That proxy background got really interesting earlier this week, when party 1 (Novo Nordisk) lobbed in an unsolicited proposal to buy Metsera that was deemed a superior bid (over Pfizer’s strong objections, including a lawsuit filed Friday night!). The superior bid and prospect of a bidding war sent Metsera stock up ~20%. Not bad for a merger arb!…

…Why do I think you could have predicted a possible topping bidder?

Because the presence of a higher bid was right there in the MTSR proxy.

MTSR’s proxy came out October 17th. It discloses that party 1 (who we now know to be Novo) offered a package valued at $59.46/share (see p. 44) for MTSR. As mentioned above, MTSR ultimately turned down Novo in favor of the certainty of the Pfizer deal.

You’ll recall I mentioned earlier that boards often turn down higher bidders with some type of regulatory or financing uncertainty in favor of a lower offer with more deal certainty.

But bidders and boards often differ quite a bit in their assessment of risk. The funny thing about public companies is that they are required to file a proxy with the background of a deal, and bidders who were passed over can then read the proxy and say, "huh, the board was concerned about that? We think they were completely wrong" or "oh, we didn't realize this one item was a gating factor for the board; let's fix that issue and go back with a better bid." And, even if the board still thinks the offer is inferior, the higher bidder can always take the question directly to the company's shareholders, and shareholders will very often let the board know they'd prefer the higher price and antitrust risk to the certainty of the lower price.

So MTSR fits into a unique and perhaps my favorite of all of the no lose set ups: a merger arb that is scheduled to go through where there is a publicly confirmed strategic that has offered a higher price and was turned down for some reason. The reason this set up is so interesting is the spurned bidder can wait, read the proxy, see all of the company's projections, see what the company was worried about when it came to antitrust, see what other bidders were bidding….. and then choose to swoop in at the last second with nearly unprecedented amounts of information!

Again, this set up is rare…. but time and time again I see that the market underprices the odds of a topping bid from a bidder who was offering more and got passed over for some reason (generally anti-trust). Let me give a few examples:

  • My favorite example is Disney / Fox. They announced a merger in late 2017 that valued Fox at about $28/share (plus a spinoff)…. but then a few months later Comcast swooped in with a $35/share offer, and Disney eventually bumped their bid to $38/share. So, if you had bought Fox stock the day the initial deal was announced, you'd have made ~35% in ~6 months through the course of the bidding war (see the quick annualization sketch after this list)…. and, if Comcast had never shown up, you'd have still made a normal arbitrage spread!
  • How could you have known that Comcast might come in over the top? Well, there were plenty of press reports that Comcast had been trying to buy Fox with a higher offer before Fox sealed the deal with Disney…. but you also could have read Fox's initial proxy in late May 2018 and seen / confirmed that Comcast had made a much higher offer for Fox! Again, that proxy came out in late May 2018…. Comcast made their (public) topping offer a few weeks later.
  • Chevron announced a deal to buy Anadarko for $65/share in mid-April 2019; right when the bid was announced, David Faber reported "Occidental was prepared to pay $70 a share for Anadarko and is currently exploring its options." Anadarko traded slightly below the Chevron price when the deal was announced…. Sure enough, Occidental came with a topping bid less than two weeks after the Chevron bid was announced and eventually won that deal (I believe Anadarko's stock closed at $73.39/share when the definitive OXY deal was signed ~a month later, so that's a very nice bump inside of a month…. btw, OXY's CEO does not come off well in the Anadarko proxy).
  • Marriott and Starwood announced a deal that valued Starwood at ~$71/share in November 2015. Starwood was a very hot commodity and there were plenty of rumors that other strategics were looking at buying it; those rumors were confirmed when the proxy came out in February 2016. It disclosed nearly unlimited strategic interest in Starwood's portfolio, but in particular I'd note that Company G and Company F both sent offers to buy Starwood for $86/share that were dismissed for one reason or another. Sure enough, in March Anbang offered $76 and then $78/share, their bid was deemed superior, and Marriott eventually had to bump their bid to $79.53/share (or $85.36 if you included the value of the spin).
  • One thing that was/is so unique about the Marriott / Starwood set up? Marriott's CEO was acknowledging the potential for a bidding war when the deal was announced; this FT article from right after the initial deal was announced has an incredible quote from him: "Will other bidders crash the deal? We hope they won't." The article goes on to speculate that Hilton, Hyatt, IHG, or several Chinese companies could serve as interlopers.
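As referenced in the Disney/Fox example above, here is a quick sketch of what those bumps imply on an annualized basis (holding periods are the approximate ones given in the examples, so treat the outputs as rough):

```python
def annualized(total_return: float, months: float) -> float:
    # Compound a holding-period return up to its 12-month equivalent.
    return (1 + total_return) ** (12 / months) - 1

# Disney/Fox: ~35% over the ~6-month bidding war
print(f"{annualized(0.35, 6):.0%}")  # ~82% annualized

# Chevron/Anadarko: $65 bid to a $73.39 close, assuming roughly six weeks end to end
print(f"{annualized(73.39 / 65 - 1, 1.5):.0%}")  # well over 100% annualized
```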

4. The Risky Movement to Make America Nuclear Again – Michael Riley

When Oklo Inc., a nuclear power startup, applied in 2020 to operate its first reactor, the company's application rested largely on outsize ambition. Its MIT-educated co-founders, a married couple named Jacob and Caroline DeWitte, lived in a mobile home park in Mountain View, California, in space 38. Oklo, which had only 20 full-time employees, wanted to build small reactors across the country, transforming the way towns and industries are powered. To realize that dream, it needed the US Nuclear Regulatory Commission to say the company's design was safe.

Two years later, Oklo had failed to pass even the first step of the approval process. In 2022, after months of frustrating back and forth, the NRC concluded that the company didn’t provide verifiable answers to the most basic safety questions. The regulator denied the application. A former senior agency official, who spoke on the condition of anonymity, says Oklo “is probably the worst applicant the NRC has ever had.”…

…In 2025, Oklo’s reactor design is still unlicensed. But, in a sign of how radically the safety landscape has changed for nuclear power, the company’s business promise seems bright. Oklo went public last year and now has a market value hovering around $20 billion. In May, Jake was in the White House when President Donald Trump signed four executive orders designed to herald a nuclear renaissance. “It’s a brilliant industry,” Trump said, DeWitte at his side.

The startup’s backers long had a Plan B: If Oklo couldn’t win approval from the agency charged with protecting the public from nuclear accidents, they would, essentially, go after the regulator, in much the way Uber Technologies Inc. and other Silicon Valley startups have obliterated regulatory roadblocks. One of the architects of Oklo’s attack-the-regulator strategy is a law professor-turned-venture capitalist with ties to the Koch empire. He says the public shouldn’t be worried…

…Not far from the massive silver dome is a patch of government land where the DeWittes have staked their future. Little more than a sign and a couple of porta potties stashed amid the juniper bushes, this is where the two are planning to build Oklo’s reactor, Aurora, which they’ve described as a more modern version of the EBR-II. They have vowed that their reactor will share the same inherent safety characteristics.

Edwin Lyman, a physicist and director of nuclear power safety with the Union of Concerned Scientists, says the assumption that reactors like EBR-II are “passively safe” is misguided. “It’s gaslighting,” he says. Sodium fast reactors are notoriously difficult to operate, which accounts for the technology’s long history of accidents and meltdowns. Sodium leaks can create fires that spray a toxic sodium-oxide aerosol into the air. If the coolant comes into contact with water, hydrogen explosions can result in both the reactor itself and the power generation plant. And compared with light-water reactors, fast reactors leak neutrons that need extensive shielding to make them safe. “If something goes wrong, the potential for a Chernobyl-like escalating event is actually much higher than it is with light-water reactors,” Lyman says.

When Oklo submitted its first application to the NRC in 2020, the agency was under pressure from Congress and the industry to show it could license new reactors more efficiently. The agency’s licensing team was eager to begin what it called a Phase 1 review—essentially checking that the application is complete enough to move to a more rigorous scientific and safety evaluation. With an experienced company, Phase 1 usually takes about two months. “We thought we could get Oklo to that point in about six months,” says a former agency official familiar with the company’s application, who asked for anonymity to talk openly about the company’s application.

Major sticking points soon emerged. The company declared that, based on its extensive calculations, Aurora was one of the safest nuclear reactors in the world and there was no plausible accident that would result in a release of radiation into the environment. Yet the NRC staff identified important scenarios that Oklo didn’t appear to consider: What if undulating pipes from a sudden leak wrecked key systems? What if the seals of the reactor capsule failed, creating a pathway for radiation to reach the outside? The regulators also asked about the risk of flooding inside the reactor capsule, which the NRC said “may represent a potential criticality issue.” Nuclear experts say that’s a technical way of saying that the agency was worried about the possibility of an uncontrolled fission event, which could result in a dangerous steam explosion inside the reactor vessel.

As the licensing team dug in, Oklo couldn’t provide the supporting analysis for many of its basic safety assumptions, according to four officials who spoke to Businessweek about the application, as well as public NRC documents. In some cases, supporting files the company claimed to have were not available when the NRC tried to examine them, one official says.

“We needed the evidence that this reactor could be built and operated safely, and it just wasn’t forthcoming,” says one of the four officials.

Finally, in January 2022, the NRC denied Oklo’s application. By that point, the company had raised more than $25 million, and its dream of mass producing small nuclear reactors had seemed in reach. But at the NRC, the company never made it beyond Phase 1.

In a flashy video posted on YouTube last year, the DeWittes, clad in jeans, stroll across the high prairie near the Idaho National Laboratory. They’re introduced by a narrator whose tone mixes soothing and serious. “Meet the husband-and-wife engineering duo that discovered a game-changing technology buried in a government lab in Idaho,” the narrator says.

The six-and-a-half-minute video was published on the YouTube channel of a Utah-based organization called the Abundance Institute, identified on its website as “a mission-driven nonprofit focused on creating a space for emerging technologies.” In contrast to other pro-nuclear outfits including Third Way and the Breakthrough Institute, the Abundance Institute has been ferocious in its criticism of the NRC. In January its CEO penned an op-ed in the Wall Street Journal that labeled the regulator “lawless,” then followed up with social media posts declaring that it was time to abolish the agency.

5. AI Could Be the Railroad of the 21st Century. Brace Yourself – Derek Thompson and Richard White

Even in these early answers, you can see both a difference and similarity between the transcontinentals and AI.

A difference: The transcontinental project was government-financed from the jump. It was launched as a wartime strategy to keep California in the Union and backed with government loans and land grants. The AI buildout, by contrast, is overwhelmingly financed by the richest companies in the private sector.

A similarity: The transcontinentals were “central” to the U.S. economy in the second half of the 19th century—so central, in fact, that whenever the railroads caught a cold, the entire economy sneezed. In 2025, AI is similarly eating the entire economy—from the stock market (AI-related stocks have accounted for 75% of S&P 500 returns since ChatGPT launched in November 2022) to the construction industry. According to JPMorgan, data centers “are eclipsing office construction spending” and pushing up electricity prices across the country…

…The railroads were built with debt. Debt, debt, debt. The whole thing was a tottering Jenga tower of leverage, and it came crashing down every 15 years or so. By contrast, the AI buildout has not relied significantly on borrowing. Most data center construction to date has been financed by free cash flow from the major US tech companies with capital from private-capital firms like Apollo and Blackstone.

But this might be changing—and fast. Last week, Bank of America Global Research reported that “borrowing to fund AI datacenter spending exploded in September and so far in October…

…In the 1800s, the railroad supply chain was partly owned by, or directly financed by, the government, which led to years of corruption that exacerbated the severity of the economic panics that followed. I am reminded of the news that the Trump administration has been taking minority stakes in US chip companies (Intel) and demanding a share of their export revenue (Nvidia, AMD). Maybe not the single most auspicious sign.

Second, go back to that Fahnestock paraphrase: “We have borrowed immense amounts of money, built relatively little, and the lines we’ve built go nowhere. We have nothing to carry. This is simply going to collapse.” I think it is fair to say that, to date, the AI hyperscalers have not borrowed immense amounts of money (yet); they’ve built a lot, and what they’re building is being broadly used by tens of millions of people. The folks at Exponential View estimate that total generative AI revenues this year will exceed $60 billion. Say what you want about AI, but it is not an empty railroad cart leading to the desolate Nevada desert! This is a train that people are riding…

…Thompson: What are some timeless lessons that the railroads offer for other transformative technologies, such as AI?

White: Transformative technologies are built by people who never under-promise. They always overestimate the beneficial consequences of what they’re doing in the short-term and underestimate the costs of what they’re doing.

Second, the people who hype these technologies, the people who control the companies that are seeking to master these technologies, very often do not understand the technologies themselves. They can over-promise because, literally, they know what they want to promise to get financing and to get money and to get profits. But they often have very little idea of what these technologies will do. And so these technologies turn out to be something of a black box. You open them up and all kinds of things pop out. Some of them are things you anticipated. Many are going to be things that you don't anticipate.

Third, these technologies virtually always become bubbles, because they take on this belief that if this is the secret to changing the world, everybody should get in on it. The railroads were the American stock market and American financial market in the late 19th century. I mean, that's where the money went. It dwarfed everything else. In that way, they invented American financial markets and they invented the way that the bond market and the stock market would later work. But it means a relatively few corporations can make the whole thing boom and make the whole thing bust.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet, Amazon, Mastercard, Meta Platforms, Microsoft, and Visa. Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI (2025 Q3)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q3 earnings season.

The way I see it, artificial intelligence (or AI) really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

We’re thick in the action of the latest earnings season for the US stock market – for the third quarter of 2025 – and I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:

With that, here is the latest commentary, in no particular order:

Alphabet (NASDAQ: GOOG)

Alphabet’s 1st-party AI models, including Gemini, now process 7 billion tokens per minute from direct APIs; the Gemini App now has 650 million monthly active users; queries on the Gemini App have 3x-ed from 2025 Q2; management sees Alphabet’s AI models as world-leading; 230 million videos have been created with Veo 3; 13 million developers have built with Alphabet’s generative AI models; management will release Gemini 3 in 2025 Q4; the number of tokens per month processed by Alphabet has increased from 980 trillion in May 2025 to 1.3 quadrillion, up 20x from a year ago; Alphabet is applying Gemini internally and this has increased the productivity of its sales team by over 10%, leading to hundreds of millions in incremental revenue; Alphabet’s customer support division has used Gemini to manage over 40 million customer sessions year-to-date; management thinks the pace of frontier model development is still phenomenal 

Our first-party models, like Gemini, now process 7 billion tokens per minute via direct API used by our customers. The Gemini app now has over 650 million monthly active users, and queries increased by 3x from Q2…

…Our models are world-leading. Gemini 2.5 Pro, Veo, Genie 3 and viral sensation Nano Banana are among the very best in class. Over 230 million videos have been generated with Veo 3, and more than 13 million developers have built with our generative models. We are looking forward to the release of Gemini 3 later this year…

…In July, we announced that we processed 980 trillion monthly tokens across all our surfaces. We are now processing over 1.3 quadrillion monthly tokens, more than 20x growth in a year, phenomenal…
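Some quick arithmetic helps relate the two token figures quoted above. The short Python sketch below is our own back-of-envelope working from the numbers on the call, not anything Alphabet provided; the 30-day month is an assumption.

# Back-of-envelope sums relating Alphabet's token disclosures (our own arithmetic).
direct_api_tokens_per_min = 7e9        # 7 billion tokens/minute via direct API
minutes_per_month = 60 * 24 * 30       # assume a 30-day month

direct_api_monthly = direct_api_tokens_per_min * minutes_per_month
print(f"Direct-API tokens/month: ~{direct_api_monthly / 1e12:.0f} trillion")  # ~302 trillion

all_surfaces_monthly = 1.3e15          # 1.3 quadrillion tokens/month now
july_figure = 980e12                   # 980 trillion at the July announcement
print(f"Growth since July: ~{all_surfaces_monthly / july_figure - 1:.0%}")    # ~33%
print(f"Implied year-ago volume: ~{all_surfaces_monthly / 20 / 1e12:.0f} trillion/month")  # from the 20x claim

In other words, the direct API alone accounts for roughly 300 trillion tokens a month, around a quarter of the 1.3 quadrillion processed across all of Alphabet’s surfaces.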

…We’re also applying Gemini internally to help us serve customers with increased speed, intelligence and efficiency. Our sales teams use Gemini enriched with ads knowledge to streamline customer interactions. This has increased productivity by over 10%, led to hundreds of millions in incremental revenue, and frees up sellers to engage with more customers at a deeper, more strategic level. In our customer support division, Gemini-powered solutions have managed over 40 million customer sessions so far this year and resolved hundreds of thousands of customer inquiries, and we’re just getting started…

…On the pace of frontier model research and development. Look, I think 2 things are both simultaneously true. I’m incredibly impressed by the pace at which the teams are executing and the pace at which we are improving these models. But it also is true at the same time that each of the prior models you’re trying to improve over is now getting more and more capable. So I think the pace is increasing, but sometimes we are taking the time to put out a notably improved model, and that may take slightly longer. But I do think the underlying pace is phenomenal to see.

Google Cloud saw accelerating growth in 2025 Q3 with AI as a key driver; Google Cloud backlog grew 46% sequentially to $155 billion in 2025 Q3 (was $106 billion in 2025 Q2); Google Cloud is signing new customers faster, with a nearly 34% year-on-year increase in new GCP (Google Cloud Platform) customers in 2025 Q3; Google Cloud signed more deals over $1 billion in 2025 9M than in 2023 and 2024 combined; more than 70% of existing Google Cloud customers use Alphabet’s AI products; Google Cloud has 13 product lines that have an annual run rate of more than $1 billion each; management thinks Google Cloud offers the widest array of chips, and 9 of the top 10 AI labs are on Google Cloud; revenue from products built on Alphabet’s generative AI models in 2025 Q3 was up more than 200% year-on-year; nearly 150 Google Cloud customers have each processed 1 trillion tokens in the last 12 months; WPP is using Alphabet’s AI models to improve efficiency by up to 70% when creating advertising campaigns; Swarovski is using Alphabet’s AI models to raise e-mail open rates by 17% and accelerate campaign localization by 10x; management recently launched Gemini Enterprise and is seeing strong adoption of agents; the packaged enterprise agents in Gemini Enterprise have already exceeded 2 million subscribers across 700 companies

Cloud had another great quarter of accelerating growth with AI revenue as a key driver. Cloud backlog grew 46% quarter-over-quarter to $155 billion…

…Next, Google Cloud. Our complete enterprise AI product portfolio is accelerating growth in revenue, operating margins and backlog. In Q3, customer demand strengthened in 3 ways. One, we are signing new customers faster. The number of new GCP customers increased by nearly 34% year-over-year. Two, we are signing larger deals. We have signed more deals over $1 billion through Q3 this year than we did in the previous 2 years combined. Third, we are deepening our relationships. Over 70% of existing Google Cloud customers use our AI products, including Banco BV, Best Buy and FairPrice Group…

…Today, 13 product lines are each at an annual run rate over $1 billion…

…We have a decade of experience building AI accelerators and today, offer the widest array of chips. This leadership is winning customers like HCA Healthcare, LG AI Research and Macquarie Bank, and it’s why 9 of the top 10 AI labs choose Google Cloud…

…In Q3, revenue from products built on our generative AI models grew more than 200% year-over-year. Over the past 12 months, nearly 150 Google Cloud customers each processed approximately 1 trillion tokens with our models for a wide range of applications. For example, WPP is creating campaigns with up to 70% efficiency gains. Swarovski has increased e-mail open rates by 17% and accelerated campaign localization by 10x…

…Earlier this month, we launched Gemini Enterprise, the new front door for AI in the workplace, and we are seeing strong adoption for agents built on this platform. Our packaged enterprise agents in Gemini Enterprise are optimized for a variety of domains, are highly differentiated and offer significant out-of-box value to customers. We have already crossed 2 million subscribers across 700 companies.

Alphabet has a full-stack approach to AI, spanning infrastructure, research, products, and platform; management continues to see Alphabet’s AI infrastructure as a key differentiator; Alphabet is the only company scaling both NVIDIA’s GPUs as well as its own TPUs; Alphabet is now shipping the new A4X Max instances powered by NVIDIA GB300; the 7th-generation of Alphabet’s TPU will be available soon; management is seeing tremendous demand for TPUs; AI startup Anthropic recently announced that it would access up to 1 million TPUs

Our full stack approach spans AI infrastructure, world-class research including models and tooling, and our products and platforms that bring AI to people everywhere…

…Our extensive and reliable infrastructure, which powers all of Google’s products, is the foundation of our stack and a key differentiator. We are scaling the most advanced chips in our data centers, including GPUs from our partner, NVIDIA, as well as our own purpose-built TPUs. And we are the only company providing a wide range of both. As we announced yesterday at NVIDIA GTC, we are now shipping the new A4X Max instances powered by NVIDIA GB300 to our cloud customers. We are investing in TPU capacity to meet the tremendous demand we are seeing from customers and partners, and we are excited that Anthropic recently shared plans to access up to 1 million TPUs.

Alphabet’s management sees AI expanding Google Search; the growth in overall queries and commercial queries seen in 2025 Q2 accelerated in 2025 Q3, driven by AI Overviews and AI Mode; the acceleration of growth from AI Overviews in Google Search in 2025 Q3 was more pronounced with younger people; AI Mode has seen strong and consistent week-over-week growth in usage since launch in the USA and queries doubled sequentially; AI Mode has been rolled out globally in 40 languages; AI Mode now has 75 million daily active users; AI Mode is driving incremental total query growth for Google Search, including commercial queries; Google Search users can now shop conversationally in AI Mode; all US users of Google Search now have access to try-on capabilities for clothing items; management sees agentic experiences as additive to the way Google Search users seek information; management is working on agentic experiences across key verticals and they think it’s important that Alphabet also creates value for its partners when building these experiences; Alphabet has introduced agentic checkout and partnerships for agentic commerce; AI Overviews now has 2 billion users; AI Overviews continue to monetise at a similar rate as traditional Google Search, but management sees the opportunity for the monetisation to improve; Google Search’s paid clicks and CPCs were both up 7% year-on-year in 2025 Q3; management sees the opportunity in AI Mode to take queries that are not fully commercial and yet still serve attractive advertising offerings

AI is driving an expansionary moment for Search. As people learn what they can do with our new AI experiences, they’re increasingly coming back to Search more. Search and its AI experiences are built to highlight the web, sending billions of clicks to sites every day. During the Q2 call, we shared that overall queries and commercial queries continue to grow year-over-year. This growth rate increased in Q3, largely driven by our AI investments in Search, most notably AI Overviews and AI Mode…

…AI Overviews drive meaningful query growth. This effect was even stronger in Q3 as users continue to learn that Google can answer more of their questions, and it’s particularly encouraging to see the effect was more pronounced with younger people.

We’re also seeing that AI Mode is resonating well with users. In the U.S., we have seen strong and consistent week-over-week growth in usage since launch and queries doubled over the quarter. Over the last quarter, we rolled out AI Mode globally across 40 languages in record time. It now has over 75 million daily active users, and we shipped over 100 improvements to the product in Q3, an incredibly fast pace. Most importantly, AI Mode is already driving incremental total query growth for Search…

…Our investments in new AI experiences, such as AI Overviews and AI Mode, continued to drive growth in overall queries, including commercial queries, creating more opportunities for monetization. These AI experiences are enhancing how people connect with businesses and shop on Search. We recently added shopping capabilities in AI Mode, which now help people shop conversationally in Search, and we expanded try-on capabilities to more clothing items, now available to anyone in the U.S…

…This is all early, but we see agentic experiences really as additive to the way people seek information. It helps us answer people’s tough questions. It helps us — it helps people get stuff done, and it helps businesses in the process…

…We’re working on multiple agentic experiences across key verticals such as travel, commerce, shopping and so on, and we’re paying a lot of attention to creating a seamless user experience but also to the fact that we need to integrate different partner ecosystems in a way that it creates value for them…

…At I/O, we also introduced new agentic checkout, which will let shoppers use agentic AI to buy products from merchant sites and so on. We have a partnership with PayPal to help merchants build agentic commerce experiences. We have new open protocols for agent-to-agent transactions and so on and so on…

…AI Overviews is scaling up and working for our entire user base. We’re now scaled to over 2 billion users here, and we’re continuing to expand ads in AI Overviews in English to more countries, across desktop, mobile and so on. And as I’ve shared before, for AI Overviews, even at our current baseline of ads below and within the AI’s response, overall, we see the monetization at approximately the same rate…

…We’re excited about the opportunity of richer experiences in AI Mode and AI Overviews to basically open up then the opportunity for also much richer placements…

…As you will see in the 10-Q, paid clicks were up 7% year-on-year and CPCs were up 7% year-on-year…

…There is the question of whether queries actually increase with AI Mode, and Sundar actually talked about it and mentioned the opportunity that he sees here. So I think it’s important to separate those 2 things. And I personally also see this, what I just said in my last remarks, that I think, over time, there’s an opportunity to actually take, let’s say, queries that are not fully commercial but could have an adjacent commercial relationship to basically expand this into more attractive ads offerings without — while really creating a really interesting user experience at the same time.

Alphabet’s management recently rolled out AI features that help Youtube content creators streamline their entire content creation workflow; Youtube can now automatically identify products in content creators’ videos to make them more shoppable; Alphabet’s recommendation systems are driving watch time growth in Youtube; the use of Gemini in Youtube is driving improvement in content discovery; management is excited about the revenue growth powered by Demand Gen in Youtube’s direct response advertising business; Alphabet has improved Demand Gen’s performance on Youtube, where it can now increase conversion value by more than 40%; Demand Gen is helping Youtube further monetise shopping-related categories; more advertisers are adopting interactive direct response ads on Youtube in the living room, with an annual revenue run rate exceeding $1 billion globally; management has introduced Veo 3 integration and speech to song for content creators in Youtube; Youtube Shorts has a lower revenue-share than traditional Youtube

At our Made on YouTube event, we rolled out a number of AI-powered features that are helping creators supercharge creation and build their businesses. AI is now streamlining the entire content creation workflow, from generative video tools and more efficient editing to AI-powered insights that help creators optimize their channels. We are also using AI to expand monetization, automatically identifying products to make their videos more shoppable…

…Our recommendation systems are driving robust watch time growth in our key monetization areas like Shorts and Living Room. As we leverage Gemini models, we’re seeing further discovery improvement…

…On direct response, we’re excited about the growth in revenue we’re seeing, especially from small and medium advertisers adopting Demand Gen. We also improved performance on Demand Gen with over 100 launches helping to increase conversion value by more than 40% for advertisers using target-based bidding on YouTube. The retail vertical continues to lead our growth on YouTube with Demand Gen helping us further monetize shopping-related categories.

Looking at the living room, our long-term bet, more advertisers are adopting interactive direct response ads, leading to an annual revenue run rate exceeding $1 billion globally for this format…

…We continue to invest in AI-powered features that are helping creators supercharge creation and build their businesses. With Veo 3 integration and speech to song, creators go from idea to iteration quicker, and new channel insights help them better understand performance…

…Shorts, which has a lower revenue share than in-stream, helps to improve some of our gross margins.

Alphabet’s management intends to launch Waymo in London and Tokyo in 2026; Waymo has expanded operations in a number of US cities, and testing in New York City continues to scale; Waymo now has the option for enterprises to offer Waymo as a work-travel option; management launched Waymo teens accounts in Phoenix recently and usage is growing steadily; management thinks there’s a real opportunity to infuse Gemini into Waymo to improve the in-vehicle experience for users in 2026

Next year Waymo aims to open service in London, and they are working to bring service to Tokyo. They’ve also announced expansions to Dallas, Nashville, Denver and Seattle and secured permission to operate fully autonomously at San Jose and San Francisco Airports. Autonomous testing continues to scale in New York City. The new Waymo for Business allows enterprises to offer Waymo as a work travel option. And we launched Waymo teens accounts in Phoenix this summer. We are pleased to see usage steadily increase with positive feedback from teens and their parents alike…

…[Question] How far are we from an integration of Waymo into more of the core Gemini capabilities and the users on the platform taking your user data of where I’m going, what hotel I’m staying at, what airport I’m staying at and having integrated that into Waymo?

[Answer] Waymo clearly is scaling up, particularly in 2026. And I think the possibility, as you said, of Gemini, particularly with the multimodal experience as well as services like YouTube, I think there’s a real opportunity to make the in-car experience dramatically better. Definitely something we are excited about, and you’ll see newer experiences in 2026 for sure.

Alphabet’s management recently rolled out AI Max in Search for businesses and it can understand and predict consumer intent in Google Search;  AI Max is already used by hundreds of thousands of advertisers and is Alphabet’s fastest-growing AI-powered Search ads product; AI Max unlocked billions of net new queries in 2025 Q3; AI Max helps advertisers discover new customers at the exact moment they need their product or service; Kayak used AI Max in Search and grew conversion value by 12%

Businesses can now tap into our most powerful AI search experiences. Using our most advanced AI models, we can understand and predict intent like never before, unlocking entirely new commercial pathways to provide valuable new consumer connections and helping us monetize even more efficiently. Rolled out globally in September, AI Max in Search is already used by hundreds of thousands of advertisers, currently making it the fastest-growing AI-powered search ads product. In Q3 alone, AI Max unlocked billions of net new queries. By delivering the most relevant ad across surfaces and matching advertisers against additional queries they weren’t reaching before, AI Max helps advertisers discover new customers at the exact moment they need their product or service. Kayak, for example, looked to grow conversions while staying within their ROAS goals. After turning on AI Max in Search, they grew their conversion value by 12% in early tests.

Alphabet’s management notes that GCP is seeing strong demand for enterprise AI infrastructure and enterprise AI solutions; management notes that GCP will be in a tight demand/supply situation going into 2026; management now expects capex of $91 billion to $93 billion in 2025 (up 66% from $55.4 billion in 2024 and 2024’s capex was up 69% from 2023), up from previous guidance of $85 billion; management expects capex to increase significantly in 2026; management expects the growth rate in depreciation to accelerate in 2025 Q4; when management makes capex decisions, they go through a rigorous process of assessing the return on the investment

In Cloud, demand for our products remains high as evidenced by the accelerating revenue growth and the $49 billion sequential increase in Cloud backlog in Q3. In GCP, we see strong demand for enterprise AI infrastructure, including TPUs and GPUs, enterprise AI solutions driven by demand for Gemini 2.5 and our other AI models, and core GCP infrastructure and other services such as cybersecurity and data analytics. As I’ve mentioned on previous earnings calls, while we have been working hard to increase capacity and have improved the pace of server deployments and data center construction, we still expect to remain in a tight demand-supply environment in Q4 and 2026.

Moving to investments. We’re continuing to invest aggressively due to the demand we’re experiencing from Cloud customers as well as the growth opportunities we see across the company. We now expect CapEx to be in the range of $91 billion to $93 billion in 2025, up from our previous estimate of $85 billion, keeping in mind that the timing of cash payments can cause variability in the reported CapEx number. Looking out to 2026, we expect a significant increase in CapEx, and we’ll provide more detail on our fourth quarter earnings call.

In terms of expenses, first, as I’ve mentioned on the previous earnings calls, the significant increase in our investments in technical infrastructure will continue to put pressure on the P&L in the form of higher depreciation expenses and related data center operations costs such as energy. In the third quarter, depreciation increased $1.6 billion year-over-year to $5.6 billion, reflecting a growth rate of 41%. Given the overall increase in CapEx investments, we expect the growth rate in depreciation to accelerate slightly in Q4. Second, we expect sales and marketing expenses to be more heavily weighted to the end of the year in part to support product launches and the holiday season…

…When we make a decision on investment in the long term, we go through a very rigorous process of assessing what the return could be and over what time frame we will see that return to give us the high level of confidence to then invest and make those investments for the long term.

Nearly half of all code in Alphabet is now generated by AI

The percent of code generated by AI is now nearly half of all code; that’s a way for us to leverage AI to drive further productivity across the business.

Amazon (NASDAQ: AMZN)

AWS grew 20.2% year-on-year in 2025 Q3, and is now growing at a pace last seen in 2022; AWS’s run rate has reached $132 billion (was $123 billion in 2025 Q2), and management thinks 20% growth off such a huge base is more impressive than what competitors have achieved (faster growth off a much smaller base); AWS’s backlog is $200 billion in 2025 Q3 (was $195 billion in 2025 Q2, up 25% year-on-year) and is higher now given unannounced new and large deals in October 2025; AWS has been a Gartner Magic Quadrant leader for 15 consecutive years; management sees AWS continuing to be the destination for most big enterprises and governments’ cloud migrations; AWS is where most companies’ data and workloads reside, and why most companies want to run AI in AWS; AWS operating income in 2025 Q3 was $11.4 billion, reflecting a 34.6% operating margin (was 32.9% in 2025 Q2 and 38.1% in 2024 Q3); the AI portion of AWS’s growth in 2025 Q3 came from both training and inference; a broad base of AWS’s AI products also contributed to AWS’s AI-growth; cloud migrations by enterprises were also a strong contributor to AWS’s growth in 2025 Q3; management thinks AWS can continue growing at a similar clip as in 2025 Q3 for a while

AWS is growing at a pace we haven’t seen since 2022, reaccelerating to 20.2% year-over-year, our largest growth rate in 11 quarters…

…It’s very different having 20% year-over-year growth on a $132 billion annualized run rate and to have a higher percentage growth rate on a meaningfully smaller annual revenue, which is the case with our competitors. 

Backlog grew to $200 billion by Q3 quarter end and doesn’t include several unannounced new deals in October, which together are more than our total deal volume for all of Q3…

…Gartner has named AWS leader in its strategic cloud platform services Magic Quadrant for 15 consecutive years…

…Because of its advantaged capabilities, security, operational performance and customer focus, AWS continues to earn most of the big enterprise and government transformations to the cloud. As a result, AWS is where the preponderance of companies’ data and workloads reside and part of why most companies want to run AI in AWS…

…Moving next to our AWS segment. Revenue was $33 billion, up 20.2% year-over-year. This is an acceleration of 270 basis points compared to last quarter, driven by strong growth across both our AI and core services and more capacity, which has come online to support customer demand. AWS revenue increased $2.1 billion quarter-over-quarter and now has an annualized revenue run rate of $132 billion. AWS operating income was $11.4 billion, and reflects our continued growth, coupled with our focus on driving efficiencies across the business…

…We see the growth in both our AI area, where we see it in inference. We see it in training. We see it in the use of our Trainium custom silicon. Bedrock continues to grow really quickly. SageMaker continues to grow quickly…

…I think the other place we see a lot of growth in AWS also is just the number of enterprises who are — who have gotten back to moving from on-premises infrastructure to the cloud. And we continue to earn the lion’s share of those transformations. And I look at the momentum we have right now, and I believe that we can continue to grow at a clip like this for a while.

Amazon’s management thinks a lot of the value companies will derive from AI will come from agents and AWS is investing heavily in agents; management thinks companies will both create their own agents and use 3rd-party agents; management has launched Strands in AWS to make it easier for companies to build their own agents; management has launched AgentCore in AWS for companies who have built agents to deploy them in a secure and scalable way; Ericsson, Sony and Cohere Health are all users of AgentCore; Cohere Health is using AgentCore to deploy agents that reduces medical review times by up to 30% to 40%; AgentCore’s SDK (software development kit) has been downloaded over 1 million times; AWS has the coding agent Kiro, which attracted more than 100,000 developers in its first days of launch and that number has since doubled; AWS’s migration agent, Transform, has saved customers 700,000 hours of manual effort in 2025 9M; Thomson Reuters used Transform to transform 1.5 million lines of code per month to complete tasks faster than with other migration tools; customers have already used Transform to analyse 1 billion lines of mainframe code; AWS’s business agent, Quick Suite, has delivered 80% time savings and 90% cost savings to users; AWS’s contact center agent, Amazon Connect, is at a $1 billion annualised revenue run rate and has handled 12 billion minutes of customer interactions in the last year; customers of Amazon Connect include Capital One, Toyota, American Airlines and Ryanair

A lot of the future value companies will get from AI will be in the form of agents. AWS is heavily investing in this area and well positioned to be a leader.

Companies will both create their own agents and use agents from other companies. For those building their own, it’s been harder to build than it should be. It’s why we launched Strands to make it much easier to create agents from any foundation model that builders desire. For companies who have successfully built agents, they’ve hesitated putting them into production because they lack secure, scalable runtime services or memory or observability built specifically for agents. It’s why we launched AgentCore, a set of infrastructure building blocks that allow builders to deploy secure, scalable agents. Ericsson used AgentCore to deliver AI agents across their workforce, Sony used it to build an agentic AI platform with enterprise-level security, observability and scalability. And Cohere Health is using AgentCore to deploy agents that will reduce medical review times by up to 30% to 40%. AgentCore’s SDK has already been downloaded over 1 million times, and our builders are excited about it…
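For readers curious what building one of these agents actually looks like, here is a minimal sketch using the open-source Strands Agents SDK that Amazon mentions above. The import path and the Agent/tool API reflect the SDK’s public documentation as we understand it, and the review_priority tool is entirely hypothetical, so treat this as illustrative rather than as Amazon’s own example.

# Minimal agent sketch with the open-source Strands Agents SDK (pip install strands-agents).
# API names follow the SDK's public docs as we understand them; the tool is hypothetical.
from strands import Agent, tool

@tool
def review_priority(patient_age: int) -> str:
    """Hypothetical tool: classify a medical review request by priority."""
    return "expedited" if patient_age >= 65 else "standard"

# An Agent wraps a foundation model plus any tools the model may decide to call.
agent = Agent(tools=[review_priority])

# The model itself chooses whether to invoke the tool to answer the question.
agent("A 70-year-old patient filed a review request. What priority should it get?")

A service like AgentCore then takes an agent such as this one and supplies the secure, scalable runtime, memory, and observability that the call describes.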

…For coding, we’ve recently opened up our agentic coding IDE called Kiro. More than 100,000 developers jumped into Kiro in just the first few days of preview and that number has more than doubled since. It’s processed trillions of tokens thus far, weekly actives are growing fast, and developers love its unique spec and tool calling capabilities.

For migration and transformation, we offer an agent called Transform. Year-to-date, customers have already used it to save 700,000 hours of manual effort, the equivalent of 335 developer years of work. For example, Thomson Reuters used it to transform 1.5 million lines of code per month, moving from Windows to open source alternatives and completing tasks 4x faster than with other migration tools. Customers have also already used Transform to analyze nearly 1 billion lines of mainframe code as they move mainframe applications to the cloud.

For business customers, we’ve recently launched Quick Suite to bring a consumer AI-like experience to work, making it easy to find insights, conduct deep research, automate tasks, visualize data and take actions. We’ve already seen users turn months-long projects into days, get 80%-plus time savings on complex tasks and realize 90%-plus cost savings…

…For contact centers, we offer Amazon Connect which creates a more personalized and efficient experience for contact center agents, managers and their customers. Connect has recently crested $1 billion annualized revenue run rate with 12 billion minutes of customer interactions being handled by AI in the last year and is being used by large enterprises like Capital One, Toyota, American Airlines and Ryanair.

AWS has added 3.8 gigawatts of capacity in the last 12 months, more than any competitor; AWS now has double the capacity it had in 2022, and is on track to doubling capacity again by 2027; management expects to add 1 gigawatt of capacity in 2025 Q4; management is growing AWS capacity very aggressively because they see the demand; as soon as capacity is added to AWS, it is monetised

We’ve been focused on accelerating capacity the last several months, adding more than 3.8 gigawatts of power in the past 12 months, more than any other cloud provider…

…We’re now double the power capacity that AWS was in 2022, and we’re on track to double again by 2027. In the last quarter of this year alone, we expect to add at least another 1 gigawatt of power. This capacity consists of power, data center and chips, primarily our custom silicon, Trainium, and NVIDIA…

…You’re going to see us continue to be very aggressive in investing in capacity because we see the demand. As fast as we’re adding capacity right now, we’re monetizing it. It’s still quite early and represents an unusual opportunity for customers in AWS.

Project Rainier, a massive AWS AI compute cluster consisting of 500,000 of AWS’s in-house Trainium 2 chips, is now online; AI startup Anthropic is using Project Rainier to build and deploy the next generation of its leading AI model; management expects Anthropic to use up to 1 million Trainium 2 chips by end-2025; Trainium 2 is currently fully subscribed, and is a multi-billion dollar business that grew 150% sequentially in 2025 Q3; Trainium is currently used by only a small number of very large customers, but management expects more customers to use Trainium once Trainium 3 comes online; the token usage of Amazon Bedrock, AWS’s fully-managed service for companies to leverage frontier models to build generative AI apps, is mostly on Trainium; even as AWS scales Trainium, management continues to order significant amounts of chips from NVIDIA, AMD, and Intel; management sees Trainium as 30%-40% more price-performant than other options; management thinks that as companies start to scale production AI workloads, they will care a lot about price performance, and this will lead to strong demand for Trainium; Trainium 3 should preview at end-2025, with full volume coming in early-2026; there are many large and medium-sized customers who are interested in Trainium 3; management thinks that AWS will always have multiple chip options for customers and that has been true for every major technology building block; management thinks the chip team behind Trainium, Annapurna, is really strong; management expects Trainium 3 to be 40% better than Trainium 2; it was not easy to build Project Rainier to be able to scale from 500,000 chips to 1 million; Project Rainier is specific for Anthropic

We’ve recently brought Project Rainier online, our massive AI compute cluster spanning multiple U.S. data centers and containing nearly 500,000 of our Trainium2 chips. Anthropic is using it now to build and deploy its industry-leading AI model Claude, which we expect to be on more than 1 million Trainium2 chips by year-end. Trainium2 continues to see strong adoption, is fully subscribed, and is now a multibillion-dollar business that grew 150% quarter-over-quarter.

Today, Trainium is being used by a small number of very large customers but we expect to accommodate more customers starting with Trainium3.

We’re building Bedrock to be the biggest inference engine in the world and in the long run, believe Bedrock could be as big a business for AWS as EC2, and the majority of token usage in Amazon Bedrock is already running on Trainium.

We’re also continuing to work closely with chip partners like NVIDIA, with whom we continue to order very significant amounts as well as with AMD and Intel. These are very important partners with whom we expect to keep growing our relationships over time…

…Because Trainium is 30% to 40% more price performant than other options out there, and because, as customers start to contemplate broader scale of their production workloads, moving to being AI-focused and using inference, they care deeply about price performance. And so we have a lot of demand for Trainium. Trainium3 should preview at the end of this year with much fuller volumes coming in the beginning of ’26. We have a lot of customers, both very large and, I’ll call it, medium-sized, who are quite interested in Trainium3…

…We’re always going to have multiple chip options for our customers. It’s been true in every major technology building block or component that we’ve had in AWS. Really in the history of AWS, it’s never just one player that over a long period of time has the entire market segment and then it can satisfy everybody’s needs on every dimension…

…We’re different from most technology companies in that we have our own very strong chip team, and this is our Annapurna team. And you saw it first on the CPU side with what we built with Graviton which is about 40% better price performance than the other x86 processors, and you’re seeing it again on the custom silicon on the AI side with Trainium, which is about the same amount of price performance benefit for customers relative to other GPU options…

…As we think about Trainium3, I expect Trainium3 will be about 40% better than Trainium2, and Trainium2 is already very advantaged on price performance…
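Those two claims compound. Here is a quick bit of our own arithmetic, taking the midpoint of the 30% to 40% figure and assuming the comparison baseline does not move:

# Our own back-of-envelope on the price-performance claims above.
trn2_vs_alternatives = 1.35   # Trainium2: ~30-40% better price performance (midpoint)
trn3_vs_trn2 = 1.40           # Trainium3: expected ~40% better than Trainium2

trn3_vs_alternatives = trn2_vs_alternatives * trn3_vs_trn2
print(f"Implied Trainium3 advantage: ~{trn3_vs_alternatives:.1f}x")  # ~1.9x

If both figures hold, Trainium3 would offer nearly twice the price performance of the alternatives Trainium2 is compared against today, though in practice those alternatives will improve too.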

…It’s not simple to be able to build a cluster that has 500,000-plus chips going to 1 million. That’s an infrastructure feat that’s hard to do at scale…

…Project Rainier is something that is specific for Anthropic.

Rufus, Amazon’s AI shopping assistant, has 250 million active customers in 2025 9M; Rufus monthly users are up 140% year-on-year and interactions are up 210%; customers using Rufus are 60% more likely to complete a purchase; Rufus is pacing towards $10 billion in incremental annualized sales; management is very excited about agentic commerce; management thinks agentic commerce will be very useful for consumers who don’t know what they want to buy; management sees Rufus as a part of Amazon’s agentic commerce efforts; Amazon has a Buy For Me agentic feature where products will be surfaced for consumers, even items that Amazon does not stock but that other merchants have; management is also looking to partner with 3rd-party agents; search engines are a very small part of Amazon’s traffic today, and 3rd-party agents are an even smaller part; management thinks the current agentic commerce experience is not good for consumers; management thinks agentic commerce will expand the amount of online shopping and this bodes well for Amazon

Rufus, our AI-powered shopping assistant has had 250 million active customers this year with monthly users up 140% year-over-year, interactions up 210% year-over-year and customers using Rufus during a shopping trip being 60% more likely to complete a purchase. Rufus is on track to deliver over $10 billion in incremental annualized sales…

…As a business, we’re very excited about in the long term the prospect of agentic commerce. And it has a chance to be good for customers, it has a chance to be really good for e-commerce…

…If you know what you want to buy, there are a few experiences that are better than coming to Amazon. But if you don’t know what you want, a physical store with a physical salesperson still has some advantages. Obviously, lots of people do it on Amazon all the time. But you very often want to ask questions and get help narrowing what you’re going to look for, and as you keep asking new questions, have a whole bunch of different options presented to you. And I think AI and agentic commerce are going to change the experience online, where that experience of narrowing what you want when you don’t know is going to get better online than it even is in physical environments…

…We obviously have our own efforts here in agentic commerce. We have Rufus, which I talked about in my opening comments, which is continuing to get better and better and used more broadly. And we have features like Buy for Me where we will surface on Amazon, even items that we don’t stock that other merchants have. And then if customers want us to go and buy it for them on those merchants’ websites, we will do that. And both of those have been successful for us. But we’re also having conversations with and expect over time to partner with third-party agents…

…Today, search engines are a very small part of our referral traffic and third-party agents are a very small subset of that…

…We have to find a way, though, that makes the customer experience good. Right now, I would say the customer experience is not. There’s no personalization, there’s no shopping history, the delivery estimates are frequently wrong, the prices are often wrong…

…I do think that the exciting part of this and the promise is that AI and agentic commerce solutions are going to expand the amount of shopping that happens online. And I think that’s really good for customers, and I think it’s really good for Amazon because at the end of the day, you’re going to buy from the outfit that allows you to have the broadest selection, great value and continues to deliver for you very quickly and reliably. And I think that bodes well for us.

Customers are talking to Alexa+ 2x more compared to the classic Alexa experience; customers are talking to Alexa+ for longer compared to classic Alexa; compared to classic Alexa, customers are using Alexa+ in Fire TV 2.5x more, to discover audio content 4x more, to engage with photos 4x more, and to complete shopping conversations (that end with a purchase) 4x more

We continue to be energized by the response to Alexa+. Compared to what we call the classic Alexa experience, Alexa+ customers are talking to Alexa 2x more. Those interactions are much longer, and they’re covering a broader range of topics. So using Alexa+ in Fire TV at 2.5x the rate of classic, using natural conversation to discover audio content 4x more, engaging with photos 4x more, and customers are completing 4x more shopping conversations that end in a purchase.

More than 1.3 million sellers have used Amazon’s generative AI capabilities to speed up the launch of high-quality listings; 3rd-party seller unit mix was 62% in 2025 Q3 (62% in 2025 Q2)

Our millions of global third-party sellers continue to be important contributors to our vast selection, which helps customers find the items they need at competitive prices. We’re committed to building innovative services and features for our sellers, including our ongoing advancements in generative AI. Today, more than 1.3 million sellers have used our generative AI capabilities to more quickly launch high-quality listings. Better listings translate into better traction with customers. And in Q3, worldwide third-party seller unit mix was 62%, up 200 basis points from Q3 of last year.

The majority of Amazon’s capital expenditure (capex) in 2025 Q3 was for AWS’s technology infrastructure, including the Trainium chips

Now turning to our cash CapEx, which was $34.2 billion in Q3. We’ve now spent $89.9 billion so far this year. This primarily relates to AWS as we invest to support demand for our AI and core services and in custom silicon, like Trainium as well as tech infrastructure to support our North America and international segments. We’ll continue to make significant investments, especially in AI, as we believe it to be a massive opportunity with the potential for strong returns on invested capital over the long term. Additionally, we continue to invest in our fulfillment and transportation network to support the growth of the business, improve delivery speeds and lower our cost to serve. These investments will support growth for many years to come.

Apple (NASDAQ: AAPL)

Apple’s management sees Apple’s silicon as the heart of the company’s efforts in AI; management thinks the A19 Pro chip and M5 chip make Apple products the very best place to experience the power of AI; management has introduced dozens of new features in Apple Intelligence, including Live Translation, Visual Intelligence, Workout Buddy, Clean Up in photos, and more; management is seeing developers build on the foundation models on Apple’s devices; management expects to release the new, more personalised version of Siri next year; Apple is using Private Cloud Compute (PCC) to handle some of Siri’s queries, and the company continues to build that out; there were capital expenditures in FY2025 that were related to the build out of PCC; management intends to continue using internal foundation models together with other LLMs in building the personalised version of Siri

As we continue to expand our investment in AI, we’re bringing intelligence to more of what people already love about our products and services, making every experience even more personal, capable and effortless. At the heart of it all is Apple silicon, and we were thrilled to launch new products powered by the A19 Pro chip and M5. These incredibly advanced chips make Apple products the very best place to experience the power of AI.

With Apple Intelligence, we’ve introduced dozens of new features that are powerful, intuitive, private and deeply integrated into the things people do every day, features like Live Translation, which help users communicate across languages in real time; and Visual Intelligence, which opens new ways to learn about and explore the world. We also introduced Workout Buddy, a new experience that uses AI to provide personalized motivational insights based on a user’s workout data and fitness history. And these joined so many others from Clean Up in photos and new image creation tools to powerful writing tools. We’re also seeing developers take advantage of our own device foundation models to create entirely new experiences for users around the world. We’re also excited for our more personalized Siri. We’re making good progress on it, and as we’ve shared, we expect to release it next year…

…We’re obviously using PCC, our Private Cloud Compute, today for a number of queries for Siri, and we will continue to build it out. In fact, the manufacturing plant that makes the servers used for Apple Intelligence just started manufacturing in Houston a few weeks ago, and we’ve got a ramp plan there for use in our data centers and it’s robust…

…In ’25, we did have CapEx costs associated with building out our Private Cloud Compute environment in our first-party data centers. So you would have seen that in some of the CapEx investment in the year…

…[Question] Good to know that the personalized Siri is making good progress and on track for next year. Will you continue to use a three-pronged approach with your own foundation models and partner with other LLM providers and maybe potential M&A?

[Answer] We’re obviously creating Apple foundation models within Apple. We ship them on device and use them in the Private Cloud Compute as well. And we’ve got several in development. We also continually surveil the market on M&A and are open to pursuing M&A if we think that it will advance our road map.

The Apple Watch Series 11 has the most comprehensive set of health features yet, and these health features are powered by AI and advanced machine learning; the latest Apple Watch now has hypertension notifications that were developed using large-scale machine learning models; AirPods Pro 3 can pair very well with Live Translation to deliver an incredibly new and exciting experience for users

Apple Watch Series 11 brings our users the most comprehensive set of health features yet. And Apple Watch SE 3 delivers advanced capabilities at an incredible value. AI and advanced machine learning are at the core of powerful health features like heart rate monitoring, fall detection, crash detection and more. With our latest Apple Watch lineup, we were proud to introduce hypertension notifications, developed using large-scale machine learning models. Hypertension is one of the leading risk factors for heart attack and stroke affecting more than 1 billion adults worldwide, and we expect to notify more than 1 million users of this life-threatening condition…

…With Live Translation powered by Apple Intelligence, AirPods deliver an incredibly new and exciting experience for users around the world.

Apple’s management has committed to invest $600 billion over the next 4 years (was $500 billion in 2025 Q2; Apple has $190 billion in gross profit per year, for perspective) in the USA in areas such as advanced manufacturing, silicon engineering and artificial intelligence; Apple already built a new factory in Houston for advanced AI servers

A great example is the work we’re doing in the U.S. where we’re committed to invest $600 billion over the next 4 years with a focus on innovation in strategic areas like advanced manufacturing, silicon engineering and artificial intelligence. These commitments build on our long-standing investments in America while supporting more than 450,000 jobs with thousands of suppliers across all 50 states. We built a new factory in Houston for advanced AI servers, for example, which just started shipping its first products off the line, and we’re leading the creation of an end-to-end silicon supply chain across the country.

It’s still too early to tell for sure, but management thinks Apple Intelligence has been a driver of demand for Apple devices, and the driving force will become greater over time

[Question] With all the hype now around AI, are you seeing evidence that AI capabilities or features are a material purchase consideration for consumers?

[Answer] I think that there are many factors that influence people’s purchasing considerations and so — and we don’t have a great in-depth survey yet on the current iPhone 17 because it’s very new in the cycle, and we give it some time to formulate. But I would say that Apple Intelligence is a factor, and we’re very bullish on it becoming a greater factor. And so that’s the way that we look at it.

Apple’s management will continue with Apple’s hybrid approach when it comes to data centers, of using its own data centers as well as those of 3rd-parties

[Question] In the wake of nearly every other large tech company massively increasing their CapEx in advance of AI demand and also mentioning that there’s scarce capacity, do you anticipate Apple altering its sort of long-standing hybrid approach to your own and third-party data centers?

[Answer] As we’ve talked about before, we are expecting increases in our CapEx spending related to AI investments. For example, as I mentioned earlier, we did end up having investments this year to build out our Private Cloud Compute environment. And we do believe this hybrid model has served us very well, and we continue to want to leverage it. And so I don’t see us moving away from this hybrid model where we leverage both first-party capacity as well as leverage third-party capacity. We’ll continue to want to build out Private Cloud Compute, as Tim outlined, as we have more usage there over time. But I think, in general, we want to continue to have this hybrid model.

ASML (NASDAQ: ASML)

Some of the recent uncertainties hampering ASML’s business have receded, driven by positive AI news; management thinks there will be continued investment in advanced Logic and DRAM because of AI; management thinks AI-demand will benefit a larger part of ASML’s customer base than previously expected; management is a little careful with how all the big AI-related announcements can eventually translate into real capacity, but nonetheless, has been preparing for growth; the broadening of ASML’s customer base because of AI is a big positive development for the company

I think we have seen a flow of positive news in the last few months that has helped to reduce some of the uncertainties we discussed last quarter. First, we continue to see strong news about commitment to AI, which means, we think, investment in advanced Logic and DRAM. Second, and it’s very important for us, it looks like AI is going to benefit a larger part of our customer base. Third, we continue to make very good progress with our litho intensity, especially with EUV that continues to be adopted with DRAM and advanced Logic customers…

…If you look at the sum of the announcement, I would say this creates a pretty positive backlog of opportunity for AI moving forward…

…I think we are a bit careful with how the big announcements can translate into real capacity needs on the ground.

I think the one thing I’d still like to stress one more time is we see the broadening of the customer base as, I think, very important news in that matter, because, whatever you do with the first set of news, I think we can all agree that we need to make sure that the market will not be supply limited. And this has always been a risk with a limited amount of customers supplying AI chips, both in logic and DRAM…

…We have said for a few quarters that we have been preparing for growth. So we were following those dynamics. And I think we know now that EUV most probably will be stronger next year. So we’ve been preparing for that. We have, as you know, also worked on longer-term capacity. So we continue to track the market carefully, having in mind that we want to be able to follow the demand.

ASML recently invested in AI startup Mistral and management sees it as a strategic partnership that will help ASML improve the software in its systems and also speed up product development; ASML invested €1.3 billion in Mistral AI 

We entered into a partnership with Mistral AI. I think Mistral is really recognized on a number of fronts. They’re recognized for their business-to-business approach. They’re also recognized for the quality of their large language models, particularly when it comes to software coding and software development. So, that’s the reason why we entered into the partnership with them. Because many people look at ASML, look at our products, and are really looking at hardware. But increasingly I think people appreciate the very significant software content that is within those systems. People really understand that if you get to the level of precision and the level of speed that we have in our scanners, but also quite frankly, what we need in metrology and inspection, it’s pretty clear that the software content therein becomes increasingly important.

So, that’s the reason why this is very strategic to us. Why it’s very strategic to improving the performance. Improving the precision and the speed of our tools as we bring them to our customers. So, therefore, this collaboration is truly a strategic choice for us. I would also say that, on top of the significance that it has for our products, AI is also a great way to improve the speed of our product development. To improve the speed of our time-to-market of any product development to our customers. That’s another big area that we’re collaborating with Mistral on.

So, all in all, we believe it is a very strategic partnership. To underscore that strategic partnership, we were also the lead investor for their Series C funding round. By being the lead investor we took approximately an 11% share in Mistral. We also have a seat on their Strategic Committee…

…ASML has invested EUR 1.3 billion in Mistral AI’s Series C funding round as lead investor. 

ASML’s management continues to see strong growth for the semiconductor market in the long-term, driven by AI; management thinks the shift of ASML’s customers towards advanced Logic and Memory chips will drive demand for advanced lithography and higher lithography intensity; management thinks the shift from 6F-squared to 4F-squared in DRAM will not cause the number of EUV layers to drop; it’s hard to tell exactly how all the AI-announcements will translate into business for ASML in the next few years

[Question] Can I ask you to remind us of the long-term opportunities for ASML and a little bit the market you see there?

[Answer] We said that most probably AI will drive more advanced applications in semiconductors. So advanced DRAM, advanced Logic. This is happening and this is driving more advanced litho, higher litho intensity. We expect that to continue. As we just discussed, we see that 3D integration will become a new opportunity which we are going to pursue. As Roger explained very nicely, we also see that AI could create a lot of value in our products moving forward. So we continue to see a very strong opportunity on our technology roadmap… 

…[Question] There is a view that when you go into 4 F-squared from 6 F-squared for DRAM architecture, that’s actually negative for EUV, the EUV layer count comes down. Can you just help us understand that?

[Answer] The short answer is no. If we look at the number of EUV layers going from 6F-squared to 4F-squared, we do not expect the number of layers to drop. In fact, as the 4F-squared roadmap continues after the transition, we expect the number of EUV layers to continue to grow. And I make that statement after many discussions with our customers. On top of that, what I’d like to add is 4F-squared has a bit of a more complex structure. So it’s, in fact, adding overall more litho masks, more advanced litho masks. So there is a benefit also, to some extent, to advanced deep UV. So in any case, if you still have doubts about it: 4F-squared is in no way bad news for ASML…

…I think we wish we had a formula to translate all the announcement on what it means exactly for us in the next few years. But I think no one has that.

The entire semiconductor supply chain does not have very clear visibility into AI-driven demand

[Question] It feels like you wake up every day to another massive announcement from somewhere within the AI food chain. And you spent a lot of time talking about how that creates a theoretical backlog for you, not yet orders. But I’m just wondering, you are a critical supplier into this market. You have the potential to be a bottleneck for the market. Now of course, you don’t want that to be the case. You’re preparing for growth, et cetera. But do you feel like there’s sufficient understanding through the chain, whether it’s of where you sit, or perhaps at your customers?

[Answer] I think we wish we had a formula to translate all the announcements into what they mean exactly for us in the next few years. But I think no one has that…

…[Question] Do you feel like your customers are giving you that heads up, right? Is there a sufficient acknowledgment through the chain?

[Answer] I think they do their very best. I will say it this way because they have the same challenge as we do.

Cloudflare (NYSE: NET)

A large digital media platform expanded its relationship with Cloudflare because the media company saw Cloudflare as the only company building the essential platform to protect and manage content for the emerging AI-driven web; the media company is looking to ramp up Cloudflare’s Pay Per Crawl service; the media company thinks Pay Per Crawl could turn Cloudflare from an expense line into a revenue generator

A Global 2000 digital media platform expanded its relationship with Cloudflare, signing a 3-year $22.8 million pool of funds contract for application services and Workers. This contract marks the culmination of a powerful comeback story. We actually lost this customer to a competitor in 2016, but the Internet and Cloudflare evolved. We earned their trust back in 2023, starting with our Zero Trust portfolio. During 8 months of testing before signing this deal, our world-class security, unmatched product breadth and powerful Workers platform ran circles around the incumbent. But that’s not the whole story. The decisive factor of the win was AI. This customer looked at the landscape and correctly identified Cloudflare as the only company building the essential platform to protect and manage content for the emerging AI-driven web. This strategic win established us as the customer’s clear forward-looking partner and creates a direct on-ramp for Pay Per Crawl, which could transform Cloudflare from a vendor they pay for services into a powerful revenue generator for their business.

A web infrastructure platform signed a contract with Cloudflare for AI Crawl Control and Bot Management after seeing a huge surge in visits from AI scrapers and bots, leading to cost inflation without growth in revenue; management thinks the web infrastructure platform could become a Pay Per Crawl customer in the future

A global web infrastructure platform expanded its relationship with Cloudflare, signing a 14-month $1.2 million contract for AI Crawl Control and Bot Management. This customer is experiencing a massive surge in AI scrapers and malicious bots hitting their origin servers, inflating costs without revenue conversion and obscuring visibility into legitimate traffic. They selected Cloudflare for our innovative best-of-class bot blocking capabilities, in addition to seamless, expedited deployment enabled by our deep platform integration. We’re already exploring a much larger opportunity with this customer for Pay Per Crawl.
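
Cloudflare does not spell out Pay Per Crawl’s mechanics on the call, but it has publicly described the product as being built on the HTTP 402 “Payment Required” status code: crawlers that do not pay receive a 402 response and a quoted price instead of the content. Below is a minimal sketch of that idea; the header names and price are hypothetical and do not reflect Cloudflare’s actual API.

```python
# Toy illustration of a 402-gated crawl endpoint.
# Header names and price are hypothetical; Cloudflare's real API differs.
from http.server import BaseHTTPRequestHandler, HTTPServer

PRICE_USD = "0.01"  # hypothetical per-crawl price

class PayPerCrawlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A paying crawler declares the maximum price it accepts;
        # everyone else gets HTTP 402 plus the quoted price, not the content.
        offered = self.headers.get("X-Crawler-Max-Price")  # hypothetical header
        if offered is not None and float(offered) >= float(PRICE_USD):
            body = b"<html>licensed article content</html>"
            self.send_response(200)
            self.send_header("X-Crawler-Charged", PRICE_USD)  # hypothetical header
        else:
            body = b"Payment required to crawl this content."
            self.send_response(402)  # HTTP 402 Payment Required
            self.send_header("X-Crawler-Price", PRICE_USD)  # hypothetical header
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8402), PayPerCrawlHandler).serve_forever()
```

The commercial point in the quote follows directly from this flow: every crawl that previously only cost the publisher origin bandwidth becomes a metered, billable event.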

Cloudflare’s management sees the company having emerged as a strategic partner to media companies in managing the new business model of the internet in the AI world; management thinks AI is a massive information consumption platform shift that will also change the business model of the internet from the previous long-standing model of (1) create content, (2) generate traffic, and (3) sell products or advertising; management thinks there are many questions that will arise with the new business model of the internet and they do not even know what this new business model will look like, but they think Cloudflare will be an important force in shaping the conversation; 80% of leading AI companies are relying on Cloudflare’s infrastructure; management sees Cloudflare as a thought leader in what the future business model of the Internet looks like; even companies such as the research departments of banks are speaking with Cloudflare to figure out a new business model of the internet in the AI world, and the same goes for brands and small businesses

We talked last quarter about how the rise of AI would impact media companies. Cloudflare has emerged as a strategic partner to these firms as they work through what the new business model of the Internet will be. But it goes beyond just media. Businesses of all shapes will be transformed by the rise of AI. I don’t think people yet appreciate how AI is another massive information consumption platform shift. Just as we moved from consuming information via a browser on a desktop to social media and then to apps on mobile devices, AI is another information consumption platform shift. It changes where and how we will consume and interact with information.

With the last 3 platform shifts, the business model of the Internet remained the same: create content, generate traffic and then sell things, subscriptions or ads. With AI, for the first time in a long time, the fundamental business model is going to change. Human eyeball traffic is unlikely to be the currency of the Internet’s future. We already can see glimpses of that future. It’s represented in SciFi. When George Jetson asks his helpful robot Rosie for a recipe for cookies, the response isn’t 10 blue links to hunt through. It’s a recipe for cookies. Most of us are increasingly living in some version of that future now with tools like ChatGPT, and it seems inevitable that more and more commerce will be facilitated by AI-powered agents working on our behalf.

As that happens, new questions will arise. What happens to small businesses? What happens to brands? Brands, of course, are just shortcuts for humans to be able to assess quality and value. What do they mean in the world of agentic commerce? I don’t know what the future business model of the Internet will look like, who the winners and losers will be, but I do believe Cloudflare will help shape it. We estimate 80% of the leading AI companies already rely on us. A huge percentage of the Internet sits behind us. The agents of the future will inherently have to pass through our network and abide by its rules. And as they do, we will help set the protocols, guardrails and business rules for the Agentic Internet of the future…

…One of the things that has really set us apart is our significant investment over time in public policy, a side of the house that maybe doesn’t always get as much attention. I think we have been thought leaders in thinking about what the future business model of the Internet looks like…

…At banks, the research departments are a little nervous because they’re seeing ticks down in the amount of research that people are paying for, because the AI companies are soaking that up. So those are open conversations with financial services companies. We’re seeing challenges with brands that are worried about what a brand means in the future of agentic commerce. We’re seeing challenges from small businesses. And one of the things that I am passionate about is making sure that, as this new paradigm, this new platform, emerges, everybody has a fair shot to be able to participate in it.

Cloudflare’s management is seeing companies increasingly adopting Cloudflare’s Workers developer platform for running AI inference, and building AI agents and applications; management has always been investing behind demand for Cloudflare, not ahead of it; Workers is not facing any form of capacity constraint

Our Workers developer platform continues to deliver outsized growth with the world’s most innovative companies increasingly adopting Workers for running AI inference tasks as well as building AI agents and full stack applications…

…[Question] Do you think that you’re capacity constrained in Workers?

[Answer] I don’t think we’re capacity constrained because of somewhat the nature of how we’ve architected Cloudflare and the philosophy of how we make CapEx and network investments. We always have tried to invest behind demand, not ahead of demand.

Cloudflare’s management believes Cloudflare can get the utilisation rate of its GPUs up to 70%-80%, given the company’s excellent track record with utilising CPUs; Cloudflare has been able to generate revenue from its hardware deployment even before it starts paying for the equipment

It’s been remarkable to see over the last 15 years, how our team has been able to squeeze as much as possible out of the CPU capacity that we have, where we can run that CPU capacity at 70% to 80% utilization and get more out of every CapEx dollar we spend. But what’s fascinating is we’re sort of speed running the last 15 years now with GPUs, where we’re figuring out how to make GPUs multi-tenant, how to make them load and unload models more quickly and driving the utilization of GPUs up substantially. And so that is still well below what we have with CPUs, but we see no reason that we can’t get GPUs also up to that 70%, 80% utilization…

…The supply chain within Cloudflare is optimized to a large degree because we use off-the-shelf equipment and parts, so we can deploy hardware, especially in Tier 1 cities, and generate revenue even before we start to pay for the equipment. So not only do we have the flexibility that Matthew described really well at length, our reaction time to deploy hardware where we need it is really, really fast.
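
To make the utilization point concrete, here is a back-of-the-envelope simulation of why multi-tenancy lifts GPU utilization: a GPU pinned to a single model idles whenever that model has no traffic, while a smaller shared pool fills those gaps with other models’ requests. All numbers are invented for illustration and say nothing about Cloudflare’s actual fleet.

```python
# Back-of-the-envelope illustration of multi-tenant GPU utilization.
# All numbers are invented; this is not Cloudflare's actual fleet data.
import random

random.seed(0)
SLOTS = 10_000    # time slices in the simulation
N_MODELS = 8      # models being served
P_BUSY = 0.30     # chance a given model has a request in a slot

demand = [[random.random() < P_BUSY for _ in range(N_MODELS)]
          for _ in range(SLOTS)]

# Single-tenant: one GPU per model, busy only when its own model has work.
single = sum(sum(slot) for slot in demand) / (SLOTS * N_MODELS)

# Multi-tenant: a shared pool of N_GPUS GPUs serves whichever models have
# work each slot (assumes models can be loaded and unloaded quickly).
N_GPUS = 4
served = sum(min(sum(slot), N_GPUS) for slot in demand)
multi = served / (SLOTS * N_GPUS)

print(f"single-tenant utilization: {single:.0%}")  # roughly 30%
print(f"multi-tenant utilization:  {multi:.0%}")   # roughly 60%
```

The shared pool is busier simply because it multiplexes bursty demand across models, which is the same statistical effect behind the 70% to 80% CPU utilization mentioned above.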

Cloudflare’s management sees the hyperscalers as the company’s biggest competition for winning inference workloads; management thinks Cloudflare can show much better TCO (total cost of ownership) than the hyperscalers when it comes to inference workloads; Cloudflare can become very sticky for inference workloads once customers realise there’s a different way to run these workloads from the traditional way of doing it with the hyperscalers; AI inference is still a tiny portion of Cloudflare’s revenue today, even though management is excited about its potential; management does not see any concentration risk in its AI-native business; management has found that the first product from Cloudflare that AI companies are often interested in is security-related, because the AI companies’ cost to serve queries is high, so they want to block out fraudulent queries; of the 80% of leading AI companies that rely on Cloudflare’s infrastructure, many of them are using Cloudflare’s security products; management thinks a particular strength of Cloudflare is being able to bring inference workloads close to users, resulting in lower latency; management thinks that many inference workloads in the future will be run on the edge (i.e. on-device) and if they can’t be, they will be run on the network, which suits Cloudflare’s strength

[Question] On competition for Cloudflare in the enterprise for securing those inference workloads and winning those inference workloads in particular. Matthew, I would love to hear you comment how do you think competition is evolving in the enterprise as you build out some of the breadth and depth of your functionality?

[Answer] I think that the primary competition for inference workloads continues to be the hyperscalers. And it continues to be the model of do you want to do this work yourself and have to optimize yourself or do you want to hand it off to Cloudflare. And I think in the cases where we’re in the conversation, we’re able to show that there’s just a much better TCO, total cost of ownership, a much lower cost, much better performance when we manage that for you. And so there’s kind of a standard way people do things, which is the hyperscaler way. We’re having to teach them that there is a different way that’s out there…

…I think that we are finding, though, that once somebody learns that there’s a better way that Cloudflare is very, very sticky, and we keep those customers over the long term…

…Even though we’re excited about AI and AI inference, it is still a relatively de minimis portion of our overall revenue, growing fast, but not — I don’t see any current concentration risk that’s there. And what we’re seeing is actually sometimes it’s not the inference products that initially get interest from the AI native companies. It’s actually the security products. And the reason why is the cost of AI, every query can be so high that making sure that you don’t have fraudulent queries running through your system is critical in order to make sure that you can continue to operate cost effectively. And so many of the AI companies, we estimate that about 80% of AI companies use us in one way or another. But a lot of the times, that’s using us for actually securing some of our — really our Act 1 products. And then we are working on getting more and more of them to use the inference products as well.

In terms of what we can do that others can’t do, I think you’re absolutely right that being able to be close to users is important from a latency perspective. When you have human-computer interaction, especially with something that seems almost alive when you’re interacting with it, every millisecond counts, because it breaks that illusion if things slow down, especially as you get to things like voice communication and other things that need to have kind of a natural rhythm to them. And so I think we’re well positioned for that…

…It’s clear to me that there is something very, very real here that it is going to be transformative that a lot of inference will run on your handset or your driverless car directly there, but that if it can’t run there, it needs to run somewhere else, the next best place for it to run is in the network. And Cloudflare is the only network that gives you that capability on a global basis today. And I think that, that’s going to continue to allow us to win workloads regardless of what happens to AI generally.

Cloudflare’s management started the NET Dollar project because they think a common currency would be needed in agentic commerce transactions; management thinks NET Dollar fits well with the regulatory regimes of the US and other parts of the world; Cloudflare has other irons in the fire apart from NET Dollar when it comes to facilitating payments in agentic transactions; management believes there will be multiple different ways to pay in agentic transactions, and they want Cloudflare to be in the center of that

So as we have really interacted with AI companies, but also the merchants and media companies and the real long tail of the Internet, much of which sits behind us. What we realized was that as we move into a world of agentic commerce, we’re going to need a currency to pay for the commerce that is done between agents that is really designed specifically for that task. And that’s the spirit with which we started the NET Dollar project…

…I think we’re approaching it in a thoughtful way and are confident that we can execute in a way that is both going to help facilitate agent-to-agent commerce and be something that it fits well within any of the regulatory regimes that we have both in the U.S. and around the rest of the world…

…We want to be the Babel fish of AI, sort of the universal translator, whether you’re using MCP, the Anthropic protocol or Google’s version of it or Microsoft’s version of it, Cloudflare supports all of those. And so I think in addition to the excitement that we’ve seen around NET Dollar, I am equally excited about the partnerships that we’re doing with Coinbase around X402, with Visa, Mastercard, American Express, around how you can create agent-to-agent payments. And I think that Cloudflare is a network, and what you want networks to be able to do is facilitate the ability for connection to happen and do it regardless of what makes sense. So we think there are potentially some advantages to what we’re building with NET Dollar, but we’re not all in on any one of these things…

…We also believe that there are going to be multiple different ways to pay. There are going to be multiple different agentic protocols, and there are hopefully going to be many, many AI companies interacting with many media companies and businesses to create a more frictionless and AI-powered future of commerce. And I think that we see ourselves in the center of that.

Cloudflare’s management is seeing good progress with Pay Per Crawl; media companies using Pay Per Crawl have gotten markedly better deals from AI companies

I think you’re asking about the product around us thinking about how do we help media companies figure out a new business model for the future. I think that, yes, I think that’s going just extremely well. Like the number of media companies that are signed up and engaged is powerful. We’re hearing from them about how the deals that they are able to do with AI companies have gotten markedly better, and we are getting a lot of praise for that.

Mastercard (NYSE: MA)

Mastercard is building the foundation for agentic commerce in partnership with key players such as OpenAI and Google; the Mastercard Agent Pay feature enables agents to facilitate transactions over Mastercard’s payment network; Mastercard processed its first agentic payment in 2025; US Bank and Citibank cardholders can now use Agent Pay, with more US issuers able to use Agent Pay in November, followed by a global rollout in early 2026; merchants can use Agent Pay without any significant need for integration; Mastercard has a partnership with Walmart for Agent Pay; agents can use Mastercard’s inside tokens to deliver personalised agentic commerce experiences to consumers; management thinks the runway for agentic commerce is long

With our global acceptance reach, trusted brand and services capabilities, we’re instrumental in creating the foundation for agentic commerce. We’re now working with key players such as OpenAI on their agentic commerce protocol and with Google and Cloudflare to set industry standards, all to drive safety and security.

Through Mastercard Agent Pay, we’re enabling agents to facilitate transactions over Mastercard’s payment network in a secure and scalable way. We already have agents registered and have tools in place for easy onboarding as others are ready. Our first agentic transaction took place on our network this quarter, at a pivotal moment in payments, and that’s just the start. U.S. Bank and Citibank cardholders can now use Agent Pay. The rest of our U.S. issuers will be enabled in November, with a global rollout to follow early next year.

And the beauty of it all: we’ve made it easy for merchants across the globe to benefit on day 1 with the same trust and security they are used to from us today. Our acceptance framework enables any Mastercard merchant to participate without significant development or integration, a no-code approach…

…We have strong partnerships with the players I just mentioned and many more, including Walmart, to accelerate the adoption of agentic commerce using cards through Mastercard Agent Pay…

…Agents through Mastercard’s inside tokens can make agentic commerce even more personalized. By harnessing our proprietary data, we will be able to provide agents with predictive insights to help drive smarter decisions and recommendations.

The shift we’re seeing in commerce is creating further opportunity for our capabilities, more consulting, more loyalty, more security and so on. The runway for agentic focused services in consumer and business use cases is long, and we’re well positioned to capture this opportunity.

Mastercard’s management is seeing consumer search behaviour change because of AI chatbots; management thinks agentic commerce is a significant paradigm shift for the payments ecosystem because the agent is now an extra party that has entered the loop and this increases complexity for merchants; in agentic commerce, it’s important to determine the identity of an agent, and this is what Mastercard Agent Pay can do; in agentic commerce, the consumer identity also needs to be determined, as in a traditional online transaction; there are tricky aspects of agentic commerce to solve, such as handling a challenged transaction, and this is something Mastercard can do; management thinks agentic commerce will be very hard to handle for local payment networks, and this will be an opportunity for Mastercard to win share; the transition from physical payment to online payment unlocked a new suite of services Mastercard could provide, and management expects a similar thing to happen with the transition to agentic commerce

What we’re seeing is behavioral change, driven and powered by generative AI and bots and so forth, where search behavior is changing. So, that’s on the consumer side, if we start right there. So, consumers are migrating their search increasingly so to their favorite chatbot and they’re asking their queries there, and they get potentially better answers, who knows…

…It’s really quite a significant paradigm shift for the payment ecosystem, because in the payment ecosystem, what happens is there’s now an extra party that has entered the realm, and that is the agent. So, that comes with a lot of those aspects you just talked about in your question, is there’s legal questions, there’s a security question…

…Some of the things that need to happen in a world of agentic commerce: the first is, is this a real bot? Is this a bot that we believe matches up to Mastercard’s safety and security standards? So, we will certify and register bots out there. That’s what Mastercard Agent Pay does. So, nothing really new from us from that perspective, but it’s a new party. Not really visible to the consumer in that way, but certainly driving some complexity potentially for merchants, for issuers, for every other party, because that is just a new flow for the transaction…

…The merchant needs to know that the agent on the other side that we have certified is actually that agent. So, we have to pass through that information and ensure that the circle closes. We’re doing that. Then there’s still the question, very much in focus today, of whether the consumer is the person they claim to be. So, consumer authentication needs to continue, but it now needs to flow through a somewhat more complicated transaction. So, all of this is happening…

…If you have asked an agent to buy you something in a chat, and then in the end, you challenge that transaction, who can prove who’s right? Is it the consumer? Is it the merchant? What happens? What do you do on return policies and various other things? Those are all complexities that we’re pretty good at solving in today’s world, and we’ll be pretty busy solving them in the future world. And that comes down to some of the aspects that you’ve talked about in your question. Where is the legal and regulatory framework on this? This is not something that’s specifically contemplated yet, but that will evolve over time…

…On the point of challenging a transaction: we bought a company a couple of years ago called Ethoca, and what they do is provide transaction detail at the moment of a chargeback to a consumer that says, “Hey, you actually did this transaction because you were here at this time doing the following.” And the same can be done with the audit trail that would be captured out of the chat that I talked about earlier. That is one example…

…One thing that I think is a pretty obvious opportunity is that this is going to be very hard to do for local payment networks. So, if you look around at the various local payment systems that exist in Europe, in Asia and so forth, which are big markets for us, this is an opportunity for us to continue to drive up our switching ratio as we’ve done over the years, and this gives us another kind of field to execute on. I think that’s the first thing to say…

…Think back to the days where everything was in store, and what kind of services portfolio we had and the opportunities we had to apply services and drive differentiation for us versus others. Then it went online, and there was a whole different set of solutions that were suddenly needed to keep the online transaction safe. And with agentic, there’s going to be even more opportunity for us to do that.
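
Mastercard has not published Agent Pay’s wire format, so the following is only a conceptual sketch of the register-then-verify flow management describes: the network certifies a registered agent, and the merchant side checks that certification on every transaction. A real network would use public-key signatures so merchants can verify without holding the signing secret; the HMAC below just keeps the sketch short.

```python
# Conceptual sketch of agent certification and verification.
# Entirely hypothetical; Mastercard has not published Agent Pay's format.
import hashlib
import hmac

NETWORK_KEY = b"demo-network-secret"  # stand-in for the network's signing key

def register_agent(agent_id: str) -> str:
    """Network side: certify a registered agent by signing its identity."""
    return hmac.new(NETWORK_KEY, agent_id.encode(), hashlib.sha256).hexdigest()

def verify_agent(agent_id: str, credential: str) -> bool:
    """Merchant side: check that the transacting agent is network-certified."""
    expected = hmac.new(NETWORK_KEY, agent_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential)

# An agent registers once, then presents its credential with each transaction.
credential = register_agent("shopping-agent-42")
transaction = {"agent_id": "shopping-agent-42", "credential": credential,
               "amount": 29.99, "merchant": "example-store"}
assert verify_agent(transaction["agent_id"], transaction["credential"])
```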

MercadoLibre (NASDAQ: MELI)

MercadoLibre’s management is very excited about the potential of infusing AI agents within MercadoLibre’s ecosystem; management recently launched Seller Assistant, a chatbot that provides personalized advice and recommendations to sellers; in MercadoPago, management just launched an AI assistant that can help users with a wide range of tasks; management thinks it’s still early in terms of determining OpenAI’s impact on e-commerce but what MercadoLibre needs to do is to develop agentic capabilities first so that it can be utilised if needed

We are extremely excited about the potential of agents to enhance discovery, service and productivity within our ecosystem. There are several examples of things that we are doing in that regard. We just launched our own Seller Assistant, which is a conversational tool that gives sellers personalized advice and recommendations on how to manage their activity on our platform. In FinTech, as you probably know, we just launched our first AI assistant that can help our users with a wide range of tasks, like making or scheduling money transfers through a conversational platform, asking questions on the user’s operations, and so on…

…[Question] I wanted to hear from you how you are thinking about OpenAI’s recent move into e-commerce?

[Answer] We need to continue to focus ourselves on building the best agentic experience within our platform, and that will give us optionality on what to do next and how to move forward. I think it’s early to make comments on OpenAI and their partnerships with Etsy, Shopify, and so on. We need to understand how this will develop in the long run, what role agents will play in the relationship with consumers, and eventually decide if there’s something different that we need to do. For sure, we need to put the technology in place in order to have an agentic experience in MercadoLibre and in Mercado Pago in the near term.

Meta Platforms (NASDAQ: META)

Meta’s management is building an industry-leading amount of compute to be ready for whenever superintelligence arrives; if superintelligence takes longer than expected, the extra compute can be used to accelerate Meta’s core business; Meta’s core business has been able to profitably use much more compute than what’s available; management is seeing very high demand for compute; the worst case for building compute now is that Meta will be growing into the compute that it’s building; management recognises the possibility that Meta could overshoot on building compute capacity, and if so, it will lead to the worst case scenario

We’re also building what we expect to be an industry-leading amount of compute. Now there’s a range of timelines for when people think that we’re going to get superintelligence. Some people think that we’ll get there in a few years. Others think it will be 5, 7 years or longer. I think that it’s the right strategy to aggressively frontload building capacity so that we’re prepared for the most optimistic cases. That way, if superintelligence arrives sooner, we will be ideally positioned for a generational paradigm shift in many large opportunities. If it takes longer, then we’ll use the extra compute to accelerate our core business, which continues to be able to profitably use much more compute than we’ve been able to throw at it. And we’re seeing very high demand for additional compute, both internally and externally. And in the worst case, we would just slow building new infrastructure for some period while we grow into what we’ve built…

…Now I mean, it’s, of course, possible to overshoot that, right? And if we do… kind of the very worst case would be that we effectively have just prebuilt for a couple of years, in which case, of course, there would be some loss and depreciation, but we’d grow into that and use it over time.

AI recommendation systems are improving the content delivered across Facebook, Instagram, and Threads; AI recommendation systems have led to 5% more time spent on Facebook in 2025 Q3, and 10% on Threads; AI recommendation systems have led to 30% more time spent on video in Instagram in 2025 Q3; improvements in Meta’s AI recommendation systems will also benefit the company with the coming growth of AI-generated content; Facebook is now surfacing twice as many Reels published that day than at the start of 2025; management expects to evolve Instagram’s recommendation systems in 2026 to surface broader content that cater to diverse interests of each person; Meta has produced promising results in creating foundational ranking models and management expects to significantly scale up data and compute for training recommendation models in 2026 to yield better recommendations; management expects Meta to leverage LLMs (large language models) in 2026 to improve understanding of content by the recommendation systems; ranking optimisations made in 2025 Q3 alone led to a 10% increase in time spent on Threads

Across Facebook, Instagram and Threads, our AI recommendation systems are delivering higher quality and more relevant content, which led to 5% more time spent on Facebook in Q3 and 10% on Threads. Video is a particular bright spot with video time spent on Instagram up more than 30% since last year…

…Improvements in our recommendation systems will also become even more leveraged as the volume of AI-created content grows. Social media has gone through 2 eras so far. First was when all content was from friends, family and accounts that you followed directly. The second was when we added all of the creator content. Now as AI makes it easier to create and remix content, we’re going to add yet another huge corpus of content on top of those. Recommendation systems that understand all this content more deeply and can show you the right content to help you achieve your goals are going to be increasingly valuable…

…On Facebook, our systems are now surfacing twice as many Reels published that day than at the start of the year.

Looking to 2026, we expect to advance our recommendation systems across several dimensions. On Instagram, one focus is evolving our systems to surface content across a broader set of topics that cater to the diverse interest of each person. This follows a similar approach we’ve implemented on Facebook that has driven good results. We also expect to make significant progress on our longer-term ranking innovations in 2026. We’re seeing promising new results from our research efforts to create foundational ranking models and expect the new model innovations we’re developing as part of this will enable us to significantly scale up the amount of data and compute we use to train our recommendation models in 2026, yielding more relevant recommendations.

Another large focus next year is leveraging LLMs to improve content understanding. We expect this is going to enable our systems to more precisely label the keywords and topics within videos and posts, which will allow our systems to both develop deeper intuition about a person’s interest and retrieve the content that matches them…

…The ranking optimizations we made in Q3 alone drove a 10% increase in time spent on Threads.

Meta’s advertising business has benefited from improvements in AI ranking systems; the unification of different models into simpler, general models led to improvements in the advertising business in 2025 Q3; management rolled out Lattice, its unified model architecture for advertising ranking models, to app ads in 2025 Q3, driving a nearly 3% gain in conversions; since the introduction of Lattice and other improvements in 2023, Meta has reduced the number of ads ranking and recommendation models by around 100, and the reductions have led to performance improvements; management expects Meta to achieve additional gains as it consolidates another 200 models over the coming years; management is innovating on run time models used for advertising inference; a new run time advertising ranking model was piloted in 2025 Q3 that uses more compute and data than prior models, and it drove a lift in conversions on Instagram of more than 2%; management improved the performance of the Andromeda model architecture in 2025 Q3, driving a 14% increase in advertising quality on Facebook surfaces

Our ads business continues to perform very well, largely due to improvements in our AI ranking systems as well. This quarter, we saw meaningful advances from unifying different models into simpler, more general models, which drive both better performance and efficiency…

…We are driving performance gains through ongoing improvements in our larger scale ads ranking models. For example, we continue to broaden the adoption of Lattice, our unified model architecture. In Q3, we rolled out Lattice to app ads, which drove a nearly 3% gain in conversions for that objective. 

Since introducing Lattice back in 2023, along with other back-end improvements, we have now cut the number of ads ranking and recommendation models by approximately 100 as we consolidated smaller and more specialized models into larger ones that use the Lattice architecture to generalize learnings across surfaces and objectives. We continue to observe performance improvements as we combine models and expect to drive additional gains as we consolidate another 200 models over the coming years into a smaller number of highly capable models…

We’re innovating on the run time models that we use downstream of them for ads inference. For example, we began piloting a new run time ads ranking model in Q3 that leverages more compute and data than our prior models to select more relevant ads. In testing, we’ve seen this new model drive a more than 2% lift in conversions on Instagram.

We also significantly improved performance of Andromeda in Q3 by combining models across retrieval and early-stage ranking into a single model, driving a 14% increase in ads quality on Facebook surfaces.

Meta’s end-to-end AI-powered advertising tools, which fall under Advantage+, are now handling $60 billion in annualised run rate revenue; management rolled out a streamlined campaign creation flow for Advantage+ lead campaigns in 2025 Q3, so end-to-end automation is turned on from the beginning; the number of advertisers using at least 1 of Advantage+’s video generation features grew 20% sequentially in 2025 Q3; management has added more generative AI features to Advantage+ to help advertisers optimise and improve ad creatives; management introduced AI-generated music in Advantage+ in 2025 Q3; management continues to think a fully automated AI advertising product, where advertisers just have to tell the system what their objectives are, and the AI figures out everything else, is still important; advertisers who run lead campaigns using Advantage+ are seeing a 14% lower cost per lead; a lot of advertisers only use Advantage+ for a portion of their campaigns, so management thinks there are share gains to be made

Now the annual run rate going through our completely end-to-end AI-powered ad tools has passed $60 billion…

…In Q3, we completed the rollout of our streamlined campaign creation flow for Advantage+ lead campaigns. So now advertisers running sales app or lead campaigns have end-to-end automation turned on from the beginning, allowing our systems to look across our platform to optimize performance by automatically choosing criteria like who to show the ads to and where to show them. The annual run rate of revenue running through our end-to-end automated solutions has now reached $60 billion following the implementation of the new streamlined creation flow, as we continue to see more advertisers leverage the performance benefits of our solutions.

Within our Advantage+ creative suite, the number of advertisers using at least 1 of our video generation features was up 20% versus the prior quarter as adoption of image animation and video expansion continues to scale. We’ve also added more generative AI features to make it easier for advertisers to optimize their ad creatives and drive increased performance. In Q3, we introduced AI generated music so advertisers can have music generated for their ad that aligns with the tone and message of the creative…

…I mean, there’s one opportunity that we usually talk about on these calls but hasn’t come up as much here, which is the ability to make it so that advertisers are increasingly just going to be able to give us a business objective and a credit card or bank account, and have the AI system basically figure out everything else that’s necessary, including generating video or different types of creative that might resonate with different people and are personalized in different ways, and finding who the right customers are. All of the capabilities that we’re building, I think, go towards improving all of these different things. So I’m quite optimistic about that…

…Advertisers who run lead campaigns using Advantage+ are seeing a 14% lower cost per lead on average than those who are not…

…A lot of advertisers only use our end-to-end automated solutions for a portion of their campaigns so we can grow share there. And to capture that opportunity, we’re focused on driving continued performance improvements and addressing some of the key use cases that we still need in order to grow adoption.

Meta AI has more than 1 billion monthly actives, with usage increasing as the underlying models improve; the majority of Meta AI’s responses to queries in the US now show related Reels; users have created over 20 billion images with Meta AI; the launch of Vibes within Meta AI in September has led to a 10x increase in media generation in Meta AI; Meta AI is still powered by Llama 4

More than 1 billion monthly actives already use Meta AI and we see usage increase as we improve our underlying models…

…We’re increasingly leveraging first-party content into Meta AI results with the majority of Meta AI’s responses to Facebook Deep Dive queries in the U.S. now showing related Reels. We’re also seeing a lot of traction with media generation. People have created over 20 billion images using our products. And since launching Vibes within Meta AI in September, we have seen media generation in the app increased more than tenfold…

…A lot of people use Meta AI today. I mean, as I said in my comments upfront, there’s more than 1 billion people who use it on a monthly basis. And what we see is that as we improve the quality of the model, primarily through post-training Llama 4 at this point, we continue to see improvements in usage.

Meta sees more than 1 billion active threads happening every day with business accounts across its messaging platforms; management thinks Meta’s Business AI will help tens of millions of businesses scale these conversations and improve sales at low cost; business messaging continues to be a significant opportunity for Meta; Click-to-WhatsApp ads revenue was up 60% year-on-year in 2025 Q3; management has broadened Business AI access in the initial test markets of the Philippines and Mexico, and strong usage has been seen, with millions of conversations between people and Business AIs taking place since July; in the US, management is rolling out the ability for merchants to add their Business AIs to their websites

Every day, people have more than 1 billion active threads with business accounts across our messaging platforms, ranging from product questions to customer support. Our business AIs will enable tens of millions of businesses to scale these conversations and improve their sales at low cost and the better our models get, the better this is going to work for all businesses…

…Business messaging remains a significant opportunity for us. We’re seeing strong growth across our portfolio of solutions, including with Click-to-WhatsApp ads, which grew revenue 60% year-over-year in Q3.

We’re also making good progress on our business AI efforts, where we’ve been focused on building a turnkey AI that helps businesses generate leads and drive sales. We’ve been opening access in recent months to more businesses within our initial test markets, the Philippines and Mexico. And we’ve seen strong usage, with millions of conversations between people and Business AIs taking place since July. This month, we expanded availability within WhatsApp and Messenger to all eligible businesses in Mexico and the Philippines, respectively. In the U.S., we’re also starting to roll out the ability for merchants to add their Business AIs to their websites so we can support the full sales funnel from ad to purchase.

Retention at Vibes is looking good so far, with usage growing fast weekly; management sees Vibes as a new content type enabled by AI; the launch of Vibes within Meta AI in September has led to a 10x increase in media generation in Meta AI

This quarter, we also launched Vibes which is the next generation of our AI creation tools and content experiences. Retention is looking good so far. And its usage keeps growing quickly week over week…

…I think that Vibes is an example of a new content type enabled by AI, and I think that there are more opportunities to build many more novel types of content ahead as well…

…And since launching Vibes within Meta AI in September, we have seen media generation in the app increased more than tenfold.

The response to Meta’s 2025 line of AI glasses has been great; sales of the new Ray-Ban Meta glasses and Oakley Meta Vanguards are both good; the new Meta Ray-Ban Display glasses, which come with the neural band as an interaction touch-point, sold out within 48 hours; management wants to invest to increase manufacturing of the Meta Ray-Ban Display glasses; management thinks there’s huge opportunity ahead with the Meta Ray-Ban Display glasses; management thinks that if the smart glasses continue on their current trajectory, then Meta’s ongoing investments in Reality Labs (via operating losses) will generate a good return; the return on investment of the smart glasses will come from both the hardware sales and new AI-enabled services that are layered on top; management will continue investing in more advanced glasses hardware, such as the full field-of-view Orion prototype

At Connect, we announced our 2025 line of AI glasses, and the response so far has been great. The new Ray-Ban Meta glasses and Oakley Meta Vanguards are both selling well as people love the improved battery life, camera resolution, new AI capabilities and the great design.

And there’s our new Meta Ray-Ban Display glasses, our first glasses with a high-resolution display and the Meta Neural Band to interact with them. They sold out in almost every store within 48 hours with demo slots fully booked through the end of next month. So we’re going to have to invest in increasing manufacturing and selling more of those. This is an area where we are clearly leading and have a huge opportunity ahead…

…[Question] On wearables, in particular, do you think you’ll be able to sell enough hardware to recoup your investment?

[Answer] The work on Ray-Ban Meta and the Oakley Meta product is going very well. I think, yes, I mean, at some point, if these continue going as well as they have been, then I think it will be a very profitable investment. I think that there’s some revenue that we get from basically selling the devices and then some that will come from additional services from the AI on top of it. So I think that there’s a big opportunity. Certainly, the investment here is not just to build the device; it’s also to build these services on top. Right now, a lot of people get the devices for a range of things that don’t even include the AI, even though they like the AI. But I think over time, the AI is going to become the main thing that people are using them for, and I think that’s going to end up being a big business opportunity by itself.

But as products like the Ray-Ban Meta and Oakley Metas are growing, we’re also going to keep on investing in things like the more full field-of-view product form of the Orion prototype that we showed at Connect last year. So those things are obviously earlier on their curve towards being a sustaining business. And our general view is that we want to build these out to reach many hundreds of millions or billions of people, and that’s the point at which we think that this is going to be just an extremely profitable business.

Meta’s management is focused on preserving maximum long-term flexibility for Meta’s AI capex; Meta Superintelligence Labs’ compute needs account for the largest chunk of Meta’s capex growth in 2026; when management was planning for 2025’s capex, they had investments they thought would be paying off in 2026, and those are already paying off through the course of 2025; one of the ways management looks at the ROIC (return on invested capital) of AI capex is growth in conversions relative to impressions, and Meta is producing conversion growth that is faster than impressions; the new model architectures Meta has been deploying in its advertising systems have enabled Meta to deploy more data and compute to drive ads performance; management expects this to continue in 2026; management wishes they had more compute capacity today than what’s available, and they know that at least some of the capacity can be put towards positive-ROI use-cases in the core business

Our primary focus is deploying capital to support the company’s highest order priorities including developing leading AI products models and business solutions. As we make significant investments in infrastructure to support this work, we are focused on preserving maximum long-term flexibility to ensure we can meet our future capacity needs while also being able to respond to how the market develops in the years ahead. We’re doing so in several ways, including staging data center sites so we can spring up capacity quickly in future years as we need it as well as establishing strategic partnerships that give us option value for future compute needs…

…I will say that the growth in 2026 CapEx relative to 2025 comes from growth in each of the core areas: MSL, core AI, as well as non-AI spend. So all of those areas are growing, but the MSL AI needs are growing the most…

…[Question] Can you help us a little to understand some of the early quantifiable signals you’re seeing on AB tests from some of these improvements to come that sort of make you most excited and give you confidence you’re going to get ROIC from all this CapEx?

[Answer] In terms of the core AI pipeline, I think we talked about last year, when we were going into the 2025 budget process, we had a road map of resource investments across both head count and compute that we thought would pay off in 2026. And it’s really a very broad range of different ads ranking and performance efforts. And we’re continuing to see that those have paid off through the course of the year. There is a long list of specific efforts, but one of the measures that we look at to monitor this is how are we driving ad performance, how are conversions growing?

Conversions is a complex metric for us because advertisers optimize for so many different conversions with different values. But when we control for that and look at value-weighted conversion rates, we’re seeing very strong year-over-year growth in conversions, and value-weighted conversions continue to grow faster than impressions.

We also talked about some of the new model architecture over the course of the year and the degree to which the new model architecture is enabling us also to take advantage of having more data and more compute to drive ads performance. So we expect that, that’s going to be a continued story in 2026. We are, in fact, at the beginning of our 2026 budgeting process now, and we see a similar list of revenue investments that we’re excited to be able to invest in. And so we think that, that’s going to be a big part of our ability to continue to drive strong revenue performance throughout the year…

…We’re certainly seeing that we wish we had more capacity today than we do. We would be able to put it towards good use: certainly, not only would the MSL team appreciate having more capacity, but we’d be able to put it towards good and ROI-positive use in the core business as well.
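
Meta does not define “value-weighted conversions” precisely on the call, but the idea, weighting each conversion by its value to the advertiser before comparing growth against impression growth, can be sketched with hypothetical numbers:

```python
# Sketch of a value-weighted conversion growth comparison.
# All figures are hypothetical; Meta does not disclose its methodology.
from dataclasses import dataclass

@dataclass
class Quarter:
    impressions: int
    # (objective, conversion count, value per conversion in dollars)
    conversions: list[tuple[str, int, float]]

q3_2024 = Quarter(impressions=1_000_000,
                  conversions=[("purchase", 8_000, 50.0), ("lead", 20_000, 5.0)])
q3_2025 = Quarter(impressions=1_140_000,  # +14% impressions
                  conversions=[("purchase", 9_600, 50.0), ("lead", 24_000, 5.0)])

def value_weighted(q: Quarter) -> float:
    # Weight each conversion by its value so high-value objectives
    # (purchases) count for more than low-value ones (leads).
    return sum(count * value for _, count, value in q.conversions)

vw_growth = value_weighted(q3_2025) / value_weighted(q3_2024) - 1
imp_growth = q3_2025.impressions / q3_2024.impressions - 1
print(f"value-weighted conversion growth: {vw_growth:.0%}")  # 20%
print(f"impression growth:                {imp_growth:.0%}")  # 14%
# Conversions outgrowing impressions implies each impression is working harder.
```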

Meta’s management has repeatedly seen a pattern of Meta building compute capacity based on an aggressive assumption, only to see even higher demand for compute; Meta’s core business keeps finding profitable ways to use more compute than what’s available

To date, we keep on seeing this pattern where we build some amount of infrastructure to what we think is an aggressive assumption. And then we keep on having more demand to be able to use more compute, especially in the core business, in ways that we think would be quite profitable, than we end up having compute for.

Meta does not use its large models for inference work because that is too expensive; Meta gets the large models to transfer knowledge to smaller models for inference work

We don’t use our larger model architectures like GEM for inference because their size and complexity would make it too cost prohibitive. The way that we drive performance from those models is by using them to transfer knowledge to smaller lightweight models that are used at run time.
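
The technique management is alluding to here is knowledge distillation: train a small “student” model to match the output distribution of the large “teacher”, then serve only the student at run time. A minimal PyTorch sketch follows; the architectures and hyperparameters are invented for illustration and are not Meta’s actual pipeline.

```python
# Minimal knowledge-distillation sketch (illustrative, not Meta's pipeline):
# a small student learns to mimic a frozen teacher's softened outputs,
# so only the cheap student ever runs at inference time.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
DIM, CLASSES, T = 64, 10, 2.0  # feature dim, output classes, temperature

teacher = nn.Sequential(nn.Linear(DIM, 1024), nn.ReLU(), nn.Linear(1024, CLASSES))
student = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, CLASSES))
teacher.eval()  # the large model is frozen; only the student is trained

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for step in range(200):
    x = torch.randn(256, DIM)  # stand-in for real ranking features
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / T, dim=-1)
    # KL divergence between the softened teacher and student distributions;
    # the T*T factor is the standard temperature-scaling correction.
    loss = F.kl_div(F.log_softmax(student(x) / T, dim=-1),
                    soft_targets, reduction="batchmean") * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()

# At serving time, only the much smaller student handles requests.
with torch.no_grad():
    scores = student(torch.randn(1, DIM))
```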

Meta’s management is unsure of the margin profile of the new products Meta may develop with AI

[Question] You mentioned the prior 2 content cycles, and obviously, you’ve been able to generate very attractive margins on them. As we get into the AI cycle, obviously, some concerns on the investment. But can you talk a little bit about how you’re thinking about tools that could be coming out for users? I know there’s some new competition. And then secondly, how do you think about margins in this content cycle? Any reason to think they would be different versus prior cycles.

[Answer] I think it’s too early to really understand what the margins are going to be for the new products that we build. I mean, I think certainly, every — each product has somewhat different characteristics. And I think we’ll kind of understand how that goes over time. I mean, my general goal is to build a business that maximizes value for the people who use our products and maximizes profitability, not margin. So I think we’ll kind of just try to build the best things that we can and try to deliver the most value that we can for most people.

Meta’s management thinks being the best at a given capability in the AI world will drive the greatest returns; management thinks it’s unlikely that one company will become the best at all capabilities; management wants Meta to develop novel capabilities with AI

I think the art of product development here is looking at the list of technology capabilities and figuring out what new products are going to be useful and prioritizing those. But fundamentally, I would expect this exponential curve in new technology capabilities that are going to become available. And the other thing that I expect is that being the best in a given area will drive great returns; this is not a check-the-box exercise of, okay, we can generate some kind of content and someone else can. I think that the company that is the best at each of these capabilities will get a large amount of the potential value for doing that. So there are lots of different capabilities to build. I’m not sure that any one company is going to be the best at all of them. I doubt that’s going to be the case. But a lot of what we’re trying to do is not just to do some things that others have done. We’re really trying to build novel capabilities.

Meta’s management thinks a lot of AI apps today are still really small, but there’s huge opportunity

But if you look at it today, the companies that are building apps, I mean, a lot of the apps are still relatively small. And I think that’s obviously going to be a huge opportunity.

Meta’s management thinks AI is different from past technological developments because AI allows new capabilities to be introduced fast, and new products and businesses can be built around these capabilities

I think what we haven’t really seen as much in the history of the technology industry is the rate of new capabilities being introduced because around each of these capabilities, you can build many new products that I think each will turn into interesting businesses.

Microsoft (NASDAQ: MSFT)

Microsoft and OpenAI have a new agreement; Microsoft’s investment in OpenAI has 10x-ed in value; under the new agreement, OpenAI has a $250 billion contract with Azure while Microsoft has model and product IP rights to 2032; management does not think AGI will be achieved any time soon, but a lot of value from AI can still be derived

We closed a new definitive agreement with OpenAI, marking the next chapter in what is one of the most successful partnerships and investments our industry has ever seen…

Already, we have roughly 10x-ed our investment. OpenAI has contracted an incremental $250 billion of Azure services, our rev share, exclusive IP rights and API exclusivity for Azure continue until AGI or through 2030. And we have extended the model and product IP rights through 2032…

…I don’t think AGI as defined at least by us in our contract is ever going to be achieved anytime soon. But I do believe we can drive a lot of value for customers with advances in AI models by building these systems.

Azure has the most expansive data center fleet for the AI era and is adding capacity at scale; Azure will increase AI capacity by >80% in FY2026 and will double its total data center footprint in 2 years as management sees strong demand; Azure announced the most powerful AI data center in the world in 2025 Q3 (FY2026 Q1) and it will start operations in 2026 and scale to 2 gigawatts; Azure has the world’s first large-scale cluster of NVIDIA GB300s; Azure is building a fungible GPU fleet that’s continuously modernised for all stages of the AI lifecycle (from pretraining to inference) and for workloads that go beyond generative AI; management thinks Azure has the best ROI (return on investment) and TCO (total cost of ownership) for customers; Azure increased the token throughput of GPT-4.1 and GPT-5 by over 30% per GPU in 2025 Q3 (FY2026 Q1); Azure is supporting sovereign AI needs; Azure has customers in 33 countries who are developing their AI capabilities within local borders, such as OpenAI and SAP in Germany; Azure has Azure AI Foundry to help customers build their own AI apps and agents; Azure AI Foundry offers enterprises access to over 11,000 models (including GPT-5 and Grok 4), which is more than any competitor; Azure AI Foundry has 80,000 customers, including 80% of the Fortune 500; Azure AI Foundry also provides other tools beyond models for developers to customize and manage AI applications and agents; real production-scale AI deployments are driving Azure’s overall growth; Azure took share again in 2025 Q3 (FY2026 Q1)

We have the most expansive data center fleet for the AI era, and we are adding capacity at an unprecedented scale. We will increase our total AI capacity by over 80% this year and roughly double our total data center footprint over the next 2 years, reflecting the demand signals we see. Just this quarter, we announced the world’s most powerful AI data center, Fairwater in Wisconsin, which will go online next year and scale to 2 gigawatts alone.  And we have deployed the world’s first large-scale cluster of NVIDIA GB300s. We are building a fungible fleet that’s been continuously modernized and spans all stages of the AI life cycle from pretraining to post training to synthetic data generation and inference. And it also goes beyond GenAI workloads to recommendation engines, databases and streaming. We’re optimizing this fleet across silicon systems and software to maximize performance and efficiency. 

It’s this combination of fungibility and continuous optimization that allows us to deliver the best ROI and TCO for us and our customers. For example, during the quarter, we increased the token throughput for GPT-4.1 and GPT-5, two of the most widely used models by over 30% per GPU.

We also have the most comprehensive digital sovereignty platform. Azure customers in 33 countries are now developing their own cloud and AI capabilities within their borders to meet local data residency requirements. In Germany, for example, OpenAI and SAP will rely on Azure to deliver new AI solutions to the public sector…

We are building Azure AI Foundry to help customers build their own AI apps and agents. We have 80,000 customers, including 80% of the Fortune 500. We offer developers and enterprises access to over 11,000 models, more than any other vendor, including, as of this quarter, OpenAI’s GPT-5 as well as xAI’s Grok 4…

…Beyond models in Foundry, we are providing everything developers need to design, customize and manage AI applications and agents at scale. Our new Microsoft Agent Framework helps developers orchestrate multi-agent systems with compliance, observability and deep integration out of the box…

…These kinds of real production scale AI deployments are driving Azure’s overall growth. And once again, this quarter, Azure took share.

Ralph Lauren used Azure AI Foundry to build a conversational shopping experience; OpenEvidence used Azure AI Foundry to build a clinical assistant; KPMG used the Microsoft Agent Framework in Azure AI Foundry to connect agents with internal data

For example, Ralph Lauren used Foundry to build a conversational shopping experience in its app, enabling customers to describe what they’re looking for and get personalized recommendations. And OpenEvidence used Foundry to create its AI-powered clinical assistant, which surfaces relevant medical information to physicians and helps streamline charting…

…KPMG used the framework to modernize the audit process, connecting agents to internal data with enterprise-grade governance and observability.

Microsoft has 900 million MAU (monthly active users) of AI features across its products; Microsoft’s family of Copilot apps now has 150 million MAU (was 100 million in 2025 Q2); management sees Copilot becoming the UI (user interface) for agentic AI; a chat feature released in Microsoft 365 just 9 months ago already has tens of millions of users; adoption of chat is up 50% sequentially in 2025 Q3 (FY2026 Q1), and usage intensity is increasing; management introduced Agent Mode in 2025 Q3 (FY2026 Q1), which can turn single prompts into full Word documents, Excel spreadsheets, or PowerPoint presentations; Agent Mode is ranked best-in-class by 3rd-party benchmarks; customers are adopting Microsoft 365 Copilot at a faster rate than any other new Microsoft 365 suite; more than 90% of the Fortune 500 are using Microsoft 365 Copilot; a number of large companies each purchased over 15,000 Microsoft 365 Copilot seats in 2025 Q3 (FY2026 Q1); Lloyds Banking Group deployed 30,000 Microsoft 365 Copilot seats in 2025 Q3 (FY2026 Q1), saving each employee an average of 46 minutes daily; enterprises are coming back to purchase even more seats of Microsoft 365 Copilot after the first purchase; PwC employees interacted with Microsoft 365 Copilot over 30 million times in 6 months, saving millions of hours of employee productivity

We now have 900 million monthly active users of our AI features across our products. And our first-party family of Copilots has now surpassed 150 million monthly active users across information work, coding, security, science, health and consumer.

When it comes to information work, we continue to innovate with Microsoft 365 Copilot. Copilot is becoming the UI for the agentic AI experience. We have integrated chat and agentic workflows into everyday tools like Outlook, Word, Excel, PowerPoint and Teams. Just 9 months since release, tens of millions of users across the Microsoft 365 customer base are already using chat. Adoption is accelerating rapidly, growing 50% quarter-over-quarter, and we continue to see usage intensity increase.

This quarter, we also introduced Agent Mode, which turns single prompts into export-quality Word documents, Excel spreadsheets and PowerPoint presentations, and then iterates to deliver the final product, much like agent mode in coding tools today. We’re thrilled by the early response, including third-party benchmarks that rank it best-in-class…

…Customers continue to adopt Microsoft 365 Copilot at a faster rate than any other new Microsoft 365 suite. All up, more than 90% of the Fortune 500 now use Microsoft 365 Copilot. Accenture, Bristol-Myers Squibb, EY Global and the U.K.’s Tax, Payments and Customs Authority all purchased over 15,000 seats this quarter. Lloyds Banking Group has deployed 30,000 seats, saving each employee an average of 46 minutes daily. And a large majority of our enterprise customers continue to come back to purchase more seats. Our partner, PwC, alone added 155,000 seats this quarter and now has over 200,000 deployed across its global operations. In just 6 months, PwC employees interacted with Microsoft 365 Copilot over 30 million times, and they credit this agentic transformation with saving millions of hours of employee productivity.
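
The Lloyds figure is easy to sanity-check. Here is a quick back-of-envelope in Python; the 250 working days a year is our assumption, while the seats and minutes are from the call:

```python
# Lloyds Banking Group figures from the call; working days are our assumption.
seats = 30_000                 # Microsoft 365 Copilot seats deployed
minutes_saved_per_day = 46     # average time saved per employee per day
working_days_per_year = 250    # assumed

hours_per_year = seats * minutes_saved_per_day * working_days_per_year / 60
print(f"~{hours_per_year / 1e6:.1f} million employee-hours saved per year")
# ~5.8 million hours a year, which makes "millions of hours" at PwC's
# 200,000-seat scale look plausible too.
```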

Microsoft’s management is observing a growing list of software companies, including Adobe and Asana, building their own agents that connect with Copilot; management is seeing customers building their own agents that connect with Copilot; the number of agent users doubled sequentially in 2025 Q3 (FY2026 Q1); management has announced App Builder, a new Copilot agent that turns prompts into apps and agents in Microsoft 365

We are seeing a growing Copilot agent ecosystem with top ISVs like Adobe, Asana, Jira, LexisNexis, SAP, ServiceNow, Snowflake and Workday, all building their own agents that connect to Copilot. And customers are also building agents for their mission-critical business processes and workflows using tools like Copilot Studio and integrating them into Copilot. The overall number of agent users doubled quarter-over-quarter. And just yesterday, we announced App Builder, a new Copilot agent that lets anyone create and deploy task-specific apps and agents in minutes grounded in Microsoft 365 context.

GitHub Copilot is the most popular AI pair-programmer now with >26 million users; tens of thousands of developers at AMD use GitHub Copilot and they are saving months of development time; GitHub now has 180 million developers, and is growing at its fastest rate ever; 80% of new developers start on GitHub with Copilot; GitHub had over 500 million pull requests merged over the past year; management has released Agent HQ; management sees GitHub Copilot and Agent HQ as the organising layer for all coding agents

GitHub Copilot is the most popular AI pair programmer now with over 26 million users…

…Tens of thousands of developers at AMD use GitHub Copilot, accepting hundreds of thousands of lines of code suggestions each month and crediting it with saving months of development time…

GitHub is now home to over 180 million developers and the platform is growing at the fastest rate in its history, adding a developer every second. 80% of new developers on GitHub start with Copilot within the first week. Overall, the rise of AI coding agents is driving record usage with over 500 million pull requests merged over the past year.

And just yesterday, at GitHub Universe, we introduced Agent HQ. GitHub Copilot and Agent HQ is the organizing layer for all coding agents, extending GitHub primitives like PRs, issues and actions to coding agents from OpenAI, Anthropic, Google, Cognition, xAI as well as OSS and in-house models. GitHub now provides a single mission control to launch, manage and review these agents, each operating from its own branch with built-in controls, observability and governance.

Half of Microsoft’s cloud and AI-related capex in 2025 Q3 (FY2026 Q1) is for long-lived assets that will support monetisation over the next 15 years and more, while the other half is for CPUs and GPUs, driven by strong AI- and Azure-related demand; there is a difference between Microsoft’s total capital expenditure and cash expenditure because of the use of finance leases; Microsoft’s AI capital expenditure for CPUs and GPUs is backed by signed contracts, and the useful lives of the GPUs are well matched with the duration of the contracts; Microsoft’s AI capital expenditure for long-lived assets is not backed by contracts, but management is confident these assets will be useful over their lifespans; when building AI infrastructure, management’s priority is Microsoft’s internal workloads, such as Copilot and AI research

Capital expenditures were $34.9 billion, driven by growing demand for our Cloud and AI offerings. This quarter, roughly half of our spend was on short-lived assets, primarily GPUs and CPUs, to support increasing Azure platform demand, growing first-party apps and AI solutions, accelerating R&D by our product teams as well as continued replacement of end-of-life server and networking equipment. The remaining spend was for long-lived assets that will support monetization for the next 15 years and beyond, including $11.1 billion of finance leases that are primarily for large data center sites. And cash paid for PP&E was $19.4 billion. As a reminder, the difference between total CapEx and cash paid for PP&E is primarily due to finance leases as well as the normal timing of goods received, but not yet paid…
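
The quoted figures reconcile neatly. A small sketch in Python; attributing the whole non-lease residual to payment timing is our inference from the commentary, not a disclosed number:

```python
# Microsoft FY2026 Q1 figures from the call, in $ billions.
total_capex = 34.9       # total capital expenditures, including finance leases
finance_leases = 11.1    # finance leases, primarily for large data center sites
cash_paid_ppe = 19.4     # cash actually paid for PP&E in the quarter

non_cash_gap = total_capex - cash_paid_ppe        # 15.5
timing_residual = non_cash_gap - finance_leases   # ~4.4 (our inference)

print(f"Gap between total capex and cash PP&E: ${non_cash_gap:.1f}B")
print(f"  of which finance leases: ${finance_leases:.1f}B")
print(f"  of which timing of goods received but not yet paid: ~${timing_residual:.1f}B")
```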

…Increasingly, we have talked about these short-lived assets, both GPUs and CPUs, and about all the workloads that are running on them, including app building. Now when that happens, short-lived assets are generally bought to match the duration of the contracts, or the duration of your expectation of those contracts. And so I sometimes think, when people think about risk, they’re not realizing that the lifetimes of these assets and the lifetimes of the contracts are very similar. And so when you think about the revenue and the bookings coming on the balance sheet, and the depreciation of short-lived assets, they’re actually quite matched, Mark…

…We’re continuing to do that, also using leases. Those are very long-lived assets, as we’ve talked about, 15 to 20 years. And over that period of time, do I have confidence that we’ll need to use all of that? It is very high…

…Because when you think about the real priorities that you have to fill first, it’s obviously the increasing usage, adoption and sales we’ve seen of M365 Copilot and the usage of Copilot chat, where we’ve seen very different patterns, which we’re encouraged by. It’s the adoption of security features. It’s the GitHub momentum. And so it is a priority for us to allocate resourcing there first. And so you are right to ask how I think about that. We’ve worked very hard to try to mitigate it as best we can, but we have been short in Azure, and we’ve been clear on it. And I would say the other 2 priorities that I haven’t mentioned as much before are making sure our product teams and the AI talent that we’ve been able to hire into the company over the past 1.5 years have access to significant capacity, because we’re seeing it make the product better in a loop that is adding great benefit today to products people are using for real-world work. And so we are making it a priority to make sure our research teams have that as well as our product engineering teams. And yes, it does impact Azure directly. That is the place where you see that prioritization. It’s probably hard for me to give an exact number, but it is safe to say that the number could be higher.

Azure grew revenue by 40% in 2025 Q3 (FY2026 Q1) (was 39% in 2025 Q2); Azure’s core infrastructure business had better-than-expected growth; Azure’s AI services revenue was in line with expectations; Azure was capacity-constrained in 2025 Q3 (FY2026 Q1) despite bringing more capacity online; management expects Azure to be capacity-constrained through at least the end of FY2026; management will continue to balance capacity additions between Azure’s revenue growth and Microsoft’s internal needs for compute; the demand signals that management is seeing are accelerating faster than they expected; management is seeing demand increasing across many areas and is investing in capacity with confidence in usage patterns and in bookings

In Azure and other Cloud services, where we continue to see accelerating demand, revenue grew 40%, and 39% in constant currency. Results were ahead of expectations, driven by better-than-expected growth in our core infrastructure business, primarily from our largest customers. Azure AI services revenue was generally in line with expectations, and this quarter, demand again exceeded supply across workloads, even as we brought more capacity online…

…In Azure, we expect Q2 revenue growth of approximately 37% in constant currency as demand remains significantly ahead of the capacity we have available. And while we’re accelerating the amount of capacity we’re bringing online, we will continue to balance Azure revenue growth with the growing needs across our first-party apps and AI solutions, our own R&D efforts and the end-of-life server replacements. Therefore, we now expect to be capacity constrained through at least the end of our fiscal year…

…Demand signals across bookings, RPO and product usage are accelerating faster than we expected. We’re investing in infrastructure, AI talent and product innovation to capture that momentum and expand our leadership position…

…Demand is increasing. It is not increasing in just one place. It is increasing across many places. We’re seeing usage increases in products. We are seeing new products launch that are getting increasing usage, and increasing usage very quickly. When people see real value, they actually commit real usage. And I sometimes think this is where this cycle needs to be thought through completely: when you see these kinds of demand signals and we know we’re behind, we do need to spend. But we’re spending with a different amount of confidence in usage patterns and in bookings, and I feel very good about that.

Azure is expected to grow revenue by 37% in 2025 Q4 (FY2026 Q2) in constant currency, driven by demand that remains significantly ahead of capacity; management now expects capital expenditure in FY2026 to have a higher growth rate than in FY2025 (previous guidance was for capital expenditure growth in FY2026 to moderate from FY2025’s level) because of an increase in spend on GPUs and CPUs

For Intelligent Cloud, we expect revenue of USD 32.25 billion to USD 32.55 billion or growth of 26% to 27%. In Azure, we expect Q2 revenue growth of approximately 37% in constant currency as demand remains significantly ahead of the capacity we have available… As a reminder, there can be quarterly variability in the year-on-year growth rates depending on the timing of capacity delivery and when it comes online as well as from in-period revenue recognition depending on the mix of contracts…

…Capital expenditures. With accelerating demand and a growing RPO balance, we’re increasing our spend on GPUs and CPUs. Therefore, total spend will increase sequentially, and we now expect the FY ’26 growth rate to be higher than FY ’25. 

Microsoft’s management thinks AI models, even as they become more powerful over time, will have spiky intelligence (being really good in only certain areas), and software systems such as GitHub Agent HQ, M365 Copilot, or Azure AI Foundry will be needed to smooth out the spikiness

I think your question touches on something that’s pretty important, which is how are these AI systems going to truly be deployed in the real world and make a real difference and make a return for both the customers who are deploying them and then obviously, the providers of these systems. And I think the best way to characterize the situation is that even as the intelligence capability increases, let’s even say, exponentially like model version over model version, the problem is it’s always going to still be jagged, right? I think the term people use is the jagged intelligence, even — or spiky intelligence, right? 

So you may even have a capability that’s fantastic at a particular task, but it may not uniformly grow. So what is required is, in fact, these systems, whether it is GitHub Agent HQ or the M365 Copilot system. Don’t think of this as a product. Think of it as a system that in some sense smooths out those jagged edges and really helps the capability…

…If I am in M365 Copilot, I can generate an Excel spreadsheet. The good news is that the Excel spreadsheet now understands Office JS and has the formulas in it. It feels like, wow, it is a great spreadsheet created by a good model. The more interesting thing is I can go into agent mode in Excel and iterate on that model. And yet, it will stay on rails. It won’t go off rails; it will be able to do the iteration. Then I can even give it to the analyst agent, and it will make sense of it like a data analyst would of our Excel model. The reason I say all of that is because that’s the type of construction that will be needed even when the model is magical and all-powerful. I think we will be in this jagged intelligence phase for a long time. So one of the fundamental things is that, whether it’s GitHub, whether it’s security, whether it’s M365, the 3 main domains we’re in, we feel very, very good about building these as organizing layers for agents to help customers.

And by the way, that’s the same thing that we want to put into Foundry for our third-party customers. So that’s kind of how people will build these multi-agent systems.

Microsoft’s management believes that AI software can grow the overall revenue-pie for Microsoft, in a similar manner as how cloud computing expanded the overall server market

I should also say, one of the things I like about Copilot is that Copilot ARPU is high compared to M365 ARPUs, right? It’s expansive. The same thing happened between server and cloud: we used to always ask, well, is it zero-sum? It turned out that the cloud was so much more expansive than the server market. The same thing is happening in AI, because first, you could say, hey, our ARPUs are too low when it comes to M365, or you could say we have the opportunity with AI to be much more expansive. Same thing with tools, right? I mean, the tools business was not a leading business, whereas the coding business is going to be one of the most expansive AI systems. And so we feel very good about being in that category.

To deal with customer-concentration risk from OpenAI, in the event OpenAI cannot follow-through on its spending-commitments, management is (1) building fungible data centers that can serve a broad base of customers including Microsoft itself, (2) only selectively building out data centers for OpenAI, and (3) having internal needs for AI infrastructure, such as Copilot; management walked away from building certain capacity for OpenAI (which Oracle won the contract for) because they wanted to avoid customer-concentration, and they did not want to build capacity that was specific to only one company

[Question] We seem to be entering into a new era where the contractual commitments from a small number of AI natives are just incredibly large, not only in absolute terms, but sometimes relative to the size of the companies themselves. For instance, contracts worth hundreds of billions of dollars that are 20x their current revenue scale. Philosophically, how do you evaluate the ability of those companies to follow through on these commitments?

[Answer] It’s great to have hit first-party apps in the beginning because you can build scale that is then fungible, and that’s where the key is. You don’t want to build for a digital native as if you’re just doing hosting for them. That’s where I think some of our decision-making is probably getting better understood: what we say yes to, what we say no to. I think there was a lot of confusion; hopefully by now, anyone who is switched on would have figured this out. And so that’s, I think, one thing we’re doing on the third party. But first party is probably where a lot of our leverage comes from, and it’s not even about one hit app on our first party. Our portfolio of stuff, which I just walked through in the earlier answer, gives us, again, the confidence that, between that mix, we will be able to use our fleet to the maximum. And remember, these assets, especially the data centers and so on, are long-lived assets, right? There will be many refresh cycles for any one of these when it comes to the gear. So I feel that once you think about all those dimensions, the concentration risk gets mitigated by being thoughtful about how you really ensure the build is for the broad customer base…

…When you think about concentration risk or delivering to any customer, you have to remember that we’re talking about a very large flexible fleet that can be used for anyone and for any purpose, 1P, 3P, and including our commercial cloud, by the way, which I should be quite clear on; it is pretty flexible in every regard…

…[Question] There’s talk that another hyperscaler came in and took away the business that was rightfully Microsoft’s. I’m sure that there is a different point of view here. I’m wondering if you could offer some perspective.

[Answer] It always goes back to, I think, the core principle, which is: build a fleet that is fungible across the planet and works for third party, first party and research. So that’s essentially what we have done. And so some demand comes in shapes that don’t fit that goal, where it’s too concentrated, not just by customer, but by location, by type of SKU, right? I think Amy mentioned some very key things. When you think about the margin profile of a hyperscaler, you’ve got to remember there’s the AI accelerator piece, but there’s also compute, there’s storage. And so if all of the demand comes for just one [ meter ], that’s really not a long-term business we want to be in, even from a third party. We have to balance it with all of our first-party stuff because that’s, after all, a different margin stack for us. And then we have to fund our own R&D and model capability because in the long run, that’s what’s going to differentiate us. And so I look at all of those. We use all of that to make sure we are saying yes to all the demand that we want, and we say no to some of the demand that may be something we could serve, but is not in our long-term interest. And so that’s the decision-making we have done, and we feel very, very good about the decisions. In some sense, each time we say no, the day after, I feel better.

Netflix (NASDAQ: NFLX)

Netflix has been using ML (machine learning) and AI (artificial intelligence) for years to recommend titles to viewers; management thinks Netflix’s data, products, and business processes give the company a great position to leverage AI; Netflix is beta-testing a conversational search experience for titles that is powered by GenAI (generative AI); Netflix is using GenAI to localise promotional assets; Netflix productions are starting to use GenAI tools when creating content; management has created guidelines for content producers when using AI tools; Netflix is using AI to test new ad formats

For many years now, ML and AI have been powering our title recommendations as well as production and promotion technology. Given our significant data assets and at-scale products and business processes, we are very well positioned to effectively leverage ongoing advances in AI…

…We’re leveraging GenAI to further enhance the member experience by improving the quality of our recommendations and content discovery features. One example is our beta testing of a conversational search experience that allows members to use natural language to explore the catalog and discover the perfect title for that moment. Another is the way we’re using GenAI to localize promotional assets in a variety of languages so titles can more easily travel to audiences who will love them around the globe…

…For example, in Happy Gilmore 2, filmmakers used GenAI coupled with ML and Eyeline’s proprietary volumetric capture technologies to de-age characters during the opening flashback scene. And the producers of Billionaires’ Bunker used various GenAI tools during pre-production, including for pre-visualization to explore wardrobe and set designs. To help our creative partners use these new technologies responsibly, we recently released production guidance for creators…

…In Q4, we are using AI to test new ad formats, to generate the most relevant ad creative and placement for members, and for faster development of media plans. With these advancements, we’ll be able to test, iterate, and innovate on dozens of ad formats by 2026. 

Netflix’s management thinks that video-generating AI apps such as Sora will mostly impact UGC (user-generated content) platforms in the near term; management thinks AI will mostly help great story-tellers better tell their stories, but it will not make lousy story-tellers great, just like how listeners still gravitate largely towards human-created music rather than AI-created music

[Question] What are your thoughts on the impact from Sora 2 and other new AI content creation apps in terms of increased competition from short-form video, do you think it creates new competition from an engagement standpoint?

[Answer] What we’ve seen so far from these content creation apps is that they’re likely to have the most impact on UGC creators in the near term. In other words, AI content replacing viewing of existing user-generated content, that starts to make sense. But when it comes to what we do, it takes a great artist to make something great. Writing and making shows and films well is a rare commodity, and it’s only done successfully by very few people. So AI can give creatives better tools to enhance the overall TV and movie experience for our members. But it doesn’t automatically make you a great storyteller if you’re not. So if music is a leading indicator of all this, AI-generated music has been around for a long time, and there’s a lot of it. And it’s a pretty small part of total listening, and established artists like Taylor Swift continue to be more popular than ever. So even in a world filled with AI music, AI seems to be mostly a tool for musicians to take their sound in new directions. And so we’re confident that AI is going to help us and help our creative partners tell stories better, faster and in new ways; we’re all in on that. But we’re not chasing novelty for novelty’s sake here, and we’re investing in what we believe delivers value for creators and members alike. So we’re not worried about AI replacing creativity, but we’re very excited about AI creating tools to help creativity.

PayPal (NASDAQ: PYPL)

PayPal has partnerships with Perplexity, Google, and OpenAI for agentic commerce; PayPal has its own agentic commerce service where they can access consumers through multiple LLMs (large language models) with one integration; management thinks agentic commerce will take time but that consumer behaviour will shift; the presence of agentic commerce has not changed any of PayPal’s priorities; management thinks PayPal is well positioned to win in payments for agentic commerce from the merchant perspective (with the agentic commerce service), the consumer perspective (with the largest wallet ecosystems), and the LLM perspective (it would take a long time for LLMs to build the merchant ecosystem that PayPal has already built); some investment from PayPal would be needed for the agentic commerce partnerships

We continue to partner with leaders across the agentic space, including Perplexity earlier this year. And in September, we announced our expansive multiyear partnership with Google to create new AI shopping experiences. This morning, we announced a significant partnership with OpenAI to expand payments and commerce in ChatGPT, including adding PayPal branded checkout for shoppers and payment processing for merchants using Instant Checkout. This is a big win for PayPal and our customers. Today, we also announced our own agentic commerce services, which help merchants sell through multiple AI platforms, including Google, OpenAI and Perplexity. Merchants will have one integration to access consumers through multiple LLMs. Agentic commerce will take time, but we do believe consumer behavior will shift. PayPal is building for that future…

…Our strategy we’ve laid out very clearly is that we want PayPal to be available anywhere and everywhere that consumers want to pay. And we want merchants to be able to sell to consumers anywhere and everywhere. And we’ve talked about this even back at Investor Day where we laid out we want it to be online. We want it to be in-person and we want it to be agentic. And so agentic is just an evolution of this strategy…

…We actually think we’re extremely well positioned to win here. Let me just lay out a couple of the different components. So first, on the merchant side, merchants are going to need to figure out how to integrate with each of these LLMs. And that’s hard because there’s multiple LLMs that are out there. And whether you’re a large enterprise or a small business, you really don’t have the bandwidth to go figure out how to integrate with each and every one of these LLMs, make your catalog available, understand the identity and fraud protection that comes with each of these different elements. And so what we announced today was our PayPal agentic commerce services… We give them seller protection. We give them the ability to scale across all the different LLMs.

From the consumer standpoint, we’re, again, very well positioned. We’ve got the largest wallet ecosystems that are out there and our ability to give consumers the trust, the safety, the buyer protection and the ability to get access and make purchases on any of the LLMs they want to is a huge win. They get to use the wallet that they know and love and have a great end-to-end experience, which includes not only the purchase through the LLM, but also then all the things that happen afterwards, whether it’s package tracking or customer service or returns. So that’s again, a big win for consumers…

…For the LLMs themselves, it would take over a decade if they wanted to go and try to build the same kind of merchant ecosystem of the head, the torso and tail of merchants that PayPal has established over the last couple of decades. And so instead, they get to partner once with us and get access to tens of millions of merchants with identity, authentication, fraud protection and payment processing on a global scale…

…These partnerships do entail some level of investment, whether that’s in product and tech or around co-marketing, things that really drive usage and habituation around the product. And I mentioned in my prepared remarks that we would begin reinvesting some of our margin dollars in the fourth quarter to really amplify some of our product initiatives. And between the push into agentic and some of those investments, they are likely to be a near-term headwind to how fast transaction margin (TM) dollars or earnings grow next year.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

Demand from AI continues to be very strong and management wants to invest to support TSMC’s customers’ growth; management now expects capex for 2025 to be US$40 billion to US$42 billion, a narrower range than the previous expectation of US$38 billion to US$42 billion, with the low end raised (2024’s capex was US$29.8 billion); most of the capex for 2025 will be for advanced process technologies; TSMC’s capital expenditure is always in anticipation of growth in future years

As the structural AI-related demand continues to be very strong, we continue to invest to support our customers’ growth. We are narrowing the range of our 2025 CapEx to be between USD 40 billion and USD 42 billion as compared to USD 38 billion to USD 42 billion previously. About 70% of the capital budget will be allocated for advanced process technologies, about 10% to 20% will be spent for specialty technologies, and about 10% to 20% will be spent for advanced packaging, testing, mask making and others.
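
Translating those percentage bands into dollars makes the scale clearer. Here is a quick sketch in Python, using the midpoint of the narrowed range as our assumed base:

```python
# TSMC's 2025 capex allocation bands from the call, applied to an assumed
# midpoint of the narrowed US$40B-US$42B range.
capex_mid = 41.0  # $ billions (our assumed midpoint)

advanced_process = capex_mid * 0.70                            # "about 70%"
specialty_lo, specialty_hi = capex_mid * 0.10, capex_mid * 0.20
packaging_lo, packaging_hi = capex_mid * 0.10, capex_mid * 0.20

print(f"Advanced process technologies: ~${advanced_process:.1f}B")
print(f"Specialty technologies: ${specialty_lo:.1f}B-${specialty_hi:.1f}B")
print(f"Advanced packaging, testing, mask making and others: "
      f"${packaging_lo:.1f}B-${packaging_hi:.1f}B")
```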

At TSMC, a higher level of capital expenditures is always correlated with higher growth opportunities in the following years…

TSMC’s management thinks recent developments in the AI market are very positive; management sees explosive growth in token volume, and they think this shows increasing consumer AI model adoption and thus more leading-edge silicon demand; TSMC is using AI internally to improve productivity, and management thinks enterprise AI is another source of demand; management is seeing the emergence of sovereign demand for AI; management has received very strong demand signals from TSMC’s customers and the customers’ customers; management’s conviction in the AI megatrend is strengthening

Recent developments in the AI market continue to be very positive. The explosive growth in token volume demonstrates increasing consumer AI model adoption, which means more and more computation is needed, leading to more leading-edge silicon demand. Companies such as TSMC are leveraging AI internally to drive greater productivity and efficiency to create more value. As such, enterprise AI is another source of demand. In addition, we continue to observe the rising emergence of sovereign AI. We are also happy to see a continued strong outlook from our customers. In addition, we directly received very strong signals from our customers’ customers, requesting capacity to support their business. Thus, our conviction in the AI megatrend is strengthening, and we believe the demand for semiconductors will continue to be very fundamental.

TSMC’s management is disciplined when planning for capacity; TSMC’s lead-time has now increased to 2-3 years because of heightened complexity in process technologies; management thinks TSMC has the deepest and widest look at demand in the semiconductor industry; when planning for AI capacity, management is talking to TSMC’s customers’ customers, which is different from past capacity-planning exercises for other platforms such as smartphones and PCs, where TSMC would talk to only its customers

In order to address the structural increase in the long-term market demand profile, TSMC employs a disciplined capacity planning system. Externally, we work closely with our customers and our customers’ customers to plan our capacity. We have more than 500 different customers across all market segments. In addition, as process technology complexity increases, the engagement lead time with customers is now at least 2 to 3 years in advance. Therefore, we probably get the deepest and widest look possible in the industry…

…[Question] Now cloud AI is growing a lot faster than prior opportunities like smartphones and PCs, and the demand for cloud AI may also be harder to forecast. So I just wanted to get a bit more color from you: compared to the prior rounds of capacity expansions, what is TSMC doing differently versus before?

[Answer] I believe we are just in the early stage of AI application, so it is very hard to make the right forecast at this moment. What do we do differently? There’s a big difference because right now, we pay a lot of attention to our customers’ customers. We talk to and discuss with them and look at their applications, maybe in the search engine or in social media applications. We talk with them and see how they view AI applying to those functions. And then we make a judgment about how AI is going to grow. And so this is quite different. Before, we only talked to our customers and did an internal study. This is the difference.

TSMC’s A16 process technology is best suited for specific HPC (high-performance computing) products, which means it is best suited for AI-related workloads; A16 is scheduled for volume production in 2026 H2

We also introduced A16, featuring our best-in-class Super Power Rail, or SPR. A16 is best suited for specific HPC products with complex signal routes and dense power delivery networks.

TSMC’s management now sees the possibility of the revenue CAGR from AI accelerators in the five years ending 2029 to be higher than previous guidance of mid-40s percent because demand is “insane”

[Question] I think we gave a guidance of mid-40s data center AI growth CAGR earlier this year until 2029. Anything that you see which should kind of change that number?

[Answer] The demand actually continues to be very strong, even stronger than we saw 3 months ago, okay? So in today’s situation, we have talked to customers and we have talked to customers’ customers. So the CAGR we previously announced is about mid-40s, but it is a little bit better than that. We will update you probably at the beginning of next year, when we have a clearer picture. Today, the numbers are insane.
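
It is worth spelling out what a mid-40s CAGR compounds to over that period. A short sketch in Python; indexing the starting AI accelerator revenue to 1.0 is our assumption:

```python
# What a "mid-40s" CAGR implies over the 5 years ending 2029,
# indexed to a starting revenue of 1.0 (our assumption).
for cagr in (0.40, 0.45, 0.50):
    multiple = (1 + cagr) ** 5
    print(f"{cagr:.0%} CAGR over 5 years -> {multiple:.1f}x starting revenue")
# At 45% a year, revenue ends roughly 6.4x where it started.
```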

TSMC’s management continues to see very strong demand for CoWoS (chip on wafer on substrate), driven by AI; management is working hard to narrow the gap between supply and demand for CoWoS; advanced packaging is already close to 10% of TSMC’s revenue

Talking about the CoWoS capacity, all I can say is, continuing from 3 months ago, we are working very hard to narrow the gap between demand and supply. We are still working to increase the capacity in 2026. The real numbers, we will probably update you on next year. Today, all I want to say is that for everything AI-related, front-end and back-end capacity is very tight. We are working very hard to make sure that the gap narrows, but all I can say is that we are working very hard…

…Advanced packaging revenue is approaching close to 10% of our revenue, which is significant, and it’s important for our customers.

TSMC’s management thinks AI’s growth will still be very positive for TSMC even without access to the China market

I have confidence in my customers, whether in graphics or in ASICs; they are all performing well. And so even if the China market is not available, I still think AI’s growth will be very dramatic and, as I said, very positive, and I have confidence in our customers’ performance; they will continue to grow, and we will support them…

…[Question] So even with the China market unavailable for the time being, you are still confident that a 14% CAGR or even higher can be achieved in the coming years?

[Answer] You are right.

The amount of TSMC wafer content in a 1-gigawatt AI data center differs from project to project

When customers say that for 1 gigawatt, they need to invest about $50 billion, how much of TSMC’s wafers are inside? We are not ready to share that with you yet because it differs from project to project…

…I just want to say that right now, it’s not only 1 chip. Actually, it’s many chips together that form a system, right?

It makes no difference to TSMC’s revenue and gross margin whether it’s helping its customers manufacture GPUs or ASICs (application specific integrated circuits) for AI

[Question] From a TSMC angle, does it matter whether it’s — that demand is coming through a GPU or an ASIC? Does it have an impact on your revenue or gross margin mix?

[Answer] Whether it’s a GPU or it’s an ASIC, it’s all using our leading-edge technologies. And from our perspective, we are working with our customers, and we all know that they are going to grow strongly in the next several years. So there is no differentiation in front of TSMC. We support all types.

Tesla (NASDAQ: TSLA)

Tesla’s management thinks Tesla is the leader in real-world AI; management thinks Tesla vehicles have the highest intelligence density of any car; there’s no other company apart from Tesla that is designing AI chips as well as vehicles

I think it’s important to emphasize that Tesla really is the leader in real-world AI. No one can do what we can do with real-world AI. I have pretty good insight into AI in general. I think that Tesla has the highest intelligence density of any AI out there in the car, and that is only going to get better…

…I don’t think there really is anyone else that’s doing this, the entire stack all the way through the real world, calibrating against the real world, where you’ve got cars and robots in the real world, and we know what the chip needs to do and, just as importantly, what the chip doesn’t need to do…

…Obviously, you can do reasoning on the server, that takes whatever. But then in a car, you need to make real-time decisions. So putting all that into the computer that’s in the car, that’s the challenge…

…I’m confident in saying that Tesla AI has the highest intelligence density. When you look at the intelligence per gigabyte, I think Tesla AI is probably an order of magnitude better than anyone else. And it doesn’t have any choice, because that AI has got to fit in the AI4 computer.

Millions of existing Tesla vehicles can become fully autonomous with a software update; management now has clarity on achieving full autonomy; version 14 (v14) of Tesla’s FSD software is broadly available now and current users have been amazed by it; Tesla’s Robotaxi service is now operating in 2 markets; Robotaxi’s coverage area in Austin has expanded by 3x since the initial launch; management thinks Tesla’s Robotaxi fleet blends in with other vehicles, unlike those of its competitors with many extra sensors; management thinks that demand for Tesla vehicles will expand significantly as people experience FSD at scale; FSD’s adoption is making decent progress, with the total paid FSD customer base being 12% of the current fleet; Tesla groups Robotaxi’s costs within the Services and Other revenue line; management expects to have no safety drivers for Robotaxi in large parts of Austin by end-2025, even when operating with an abundance of caution; management expects Robotaxi to be in 8-10 metro areas by end-2025; management expects Robotaxi to be in Nevada, Florida, and Arizona by end-2025; Robotaxis in Austin without anyone in the driver seat have covered more than 0.25 million miles; Robotaxis in the Bay Area have crossed more than 1 million miles; customers are happy with Robotaxi and there are no notable issues; total miles driven by supervised FSD has crossed 6 billion and overall safety remains excellent; Tesla will be working on a V14 light version of the FSD software that is compatible with Hardware 3; a big reason why autonomy is safer than human driving is that a large share of human driving accidents are caused by texting while driving; the autonomous driving software shipped to customers and Robotaxi are very similar; updated editions of V14 FSD will have reasoning capabilities

We have millions of cars out there that with a software update become full self-driving cars…

…We see now as a clarity on achieving full self-driving, unsupervised full self-driving…

…With version 14 of the — of self-driving, which people — you can see the reactions of people online. They’re quite amazed. Actually, anyone in the U.S. can get version 14 if they just go and select, I want the advanced software in their car. So if you’re listening right now and you’d like to try it out, just go in Settings and say, I want the advanced software, and you will get version 14…

…We’re now operating our Robotaxi in 2 markets, Austin and most Bay Area cities. We’ve already expanded our coverage area in Austin 3x since the initial launch and are on pace to continue expanding further.

Unlike our competitors, our Robotaxi fleet blends in the markets we operate in since they don’t have extra sensor sets or peripherals, which make them stick out. This is an underappreciated aspect of our current vehicle offerings, which are all designed for autonomous driving.

We feel that as people experience supervised FSD at scale, the demand for our vehicles, like Elon said, will increase significantly.

On the FSD adoption front, we’ve continued to see decent progress. However, note that the total paid FSD customer base is still small, around 12% of our current fleet. We’re working with regulators in places like China and EMEA to obtain approvals so that we can get FSD in those regions as well…

…Note that while small, our Robotaxi costs are included within Services and Other, along with our other businesses like paid supercharging, used car, parts and merchandise sales, et cetera…

…We are expecting to have no safety drivers in at least large parts of Austin by the end of this year. So within a few months, we expect to have no safety drivers at all at least in parts of Austin. We’re obviously being very cautious about the deployment. So our goal is to be actually paranoid about deployment because obviously, even one accident will be front page headline news worldwide. So it’s better for us to take a cautious approach here. But we do expect to have no safety drivers in the car in Austin within a few months. I think that’s perhaps the most important data point.

And then we do expect to be operating Robotaxi in, I think, about 8 to 10 metro areas by the end of the year. It depends on various regulatory approvals…

…We expect to be operating in Nevada and Florida and Arizona by the end of the year…

…We continue to operate our fleet in Austin without anyone in the driver seat, and we have covered more than 0.25 million miles with that. And then in the Bay Area, where we still have a person in the driver seat because of the regulations, we crossed more than 1 million miles. So — and we continue to see that the fleet — Robotaxi fleet works really well. Customers are really happy, and there’s no notable issues…

…Customers have used FSD supervised for a total of 6 billion miles as of yesterday. So that’s like a big milestone. And overall, the safety continues to be very good…

…Once the V14 release series is fully done, we are planning on working on a V14 light version for Hardware 3 probably expected in Q2 next year…

…The reason you’ve seen like there’s been an uptick in accidents pretty much worldwide is because people are texting and driving. So Autopilot actually dramatically improves the safety here because if somebody is looking down their phone, they’re not driving very well. So that’s really the game changer…

…In terms of what we ship to customers versus Robotaxi, it’s mostly the same. Obviously, customers have some more features, like they can choose whether the car parks in a spot or drives somewhere, things like that, which are not super relevant for Robotaxi. But there are only a few minor changes like those. The majority of the algorithms and the architecture, everything is the same between those 2 platforms…

…We’ll be adding reasoning to — I don’t know, Ashok, is that like reasoning in like 14.3, maybe 14.4, something like that?… Yes, by end of this year for sure.

Tesla’s management is still very optimistic about the potential of Tesla’s Optimus autonomous robot; Tesla will unveil Optimus V3 in 2026 Q1; most of the real-world AI Tesla has developed for fully autonomous driving can be transferred to Optimus; management thinks Optimus can be a great surgeon; management thinks bringing Optimus to market is incredibly difficult; Optimus robots are already walking around Tesla’s offices; it’s really difficult engineering-wise to create hands and fingers for Optimus that can mimic human hands and fingers; it’s hard to manufacture Optimus at scale because the supply chain currently does not exist, so Tesla has had to be very vertically integrated and manufacture very deep into the supply chain; management thinks Tesla is uniquely positioned to win in autonomous robots because success in autonomous robots depends on 3 things, namely, scaled manufacturing technology, real-world AI, and a dextrous hand, and Tesla is the only company that can achieve all 3; management thinks Optimus can be 5x more productive than humans; many of the people working on Optimus in Tesla now were working on Tesla vehicles in the past; Optimus’s management reviews involve a tight loop between manufacturing and engineering design so that the overall manufacturing processes for Optimus can be good; Optimus 2 was impossible to manufacture; Tesla will have rolling changes for the Optimus design even after start of production

We’re also on the cusp of something really tremendous with Optimus, which I think is likely to be or has potential to be the biggest product of all time…

…We look forward to unveiling Optimus V3 probably in Q1. I think it will be ready for — to show off…

…The real-world intelligence we’ve developed for the car, most of that transfers to Optimus. So it’s a very good starting point…

…Optimus will be an incredible surgeon, for example. Imagine if everyone had access to an incredible surgeon…

…Bringing Optimus to market is an incredibly difficult task to be clear…

…We do have Optimus robots that walk around our offices at our engineering headquarters in Palo Alto, California, basically 24 hours a day, 7 days a week. So any visitors that come by, you actually — you can stop one of the Optimus robots and ask it to take you somewhere, and it will literally take you to that meeting room or that location in the building…

…It’s difficult to create a hand that is as dextrous and capable as the human hand, which is an incredible — the human hand is an incredible thing that the more you study the human hand, the more incredible you realize the human hand is and why you need 5 — 4 fingers and a thumb, why the fingers have certain degrees of freedom, why the various muscles are of different strengths, the fingers are of different lengths. And it turns out actually that those are all there for a reason. And so making the hand and forearm, because most of the actuator — just like the human hand, the muscles that control your hand are actually primarily in your forearm. The Optimus hand and forearm is an incredibly difficult engineering challenge. I’d say it’s more difficult than the rest of — from an electromechanical standpoint, the forearm and hand is more difficult than the entire rest of the robot…

…Trying to make 1 million Optimus robots per year, that manufacturing challenge is immense, considering that the supply chain doesn’t exist. So with cars, you’ve got an existing supply chain. With computers, you’ve got an existing supply chain. With a humanoid robot, there is no supply chain. So in order to manufacture that, Tesla actually has to be very vertically integrated and manufacture very deep into the supply chain, manufacture the parts internally because there just is no supply chain…

…If I put myself in the position of a start-up trying to make a humanoid robot, I’m like, I don’t know how to do it without an immense amount of manufacturing technology. So — that’s why I think like Tesla is in almost a unique — I think a unique position when you consider manufacturing technology scaling, real-world AI and a truly dextrous hand. Those are generally the things that are missing when you read about other robots that just don’t have those 3 things. So I think we can achieve all those things — those 3 things with an immense amount of work. And that is the game plan…

…Optimus at scale is the infinite money glitch. It’s difficult to express the magnitude of this. If you’ve got something like that, Optimus could probably achieve 5x the productivity of a person per year because it can operate 24/7. It doesn’t even need to stop to charge; it can operate tethered, plugged in the whole time…
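
The 5x claim can be roughly decomposed. A sketch in Python; the human work week, utilisation rate, and equal-pace assumption are all ours, not Tesla’s:

```python
# Rough decomposition of the "5x the productivity of a person" claim.
robot_hours_per_week = 24 * 7   # 168: runs 24/7, tethered, no charging stops
human_hours_per_week = 40       # assumed standard work week
utilisation = 0.95              # assumed allowance for maintenance and downtime

ratio = robot_hours_per_week * utilisation / human_hours_per_week
print(f"Hours advantage alone: ~{ratio:.1f}x")  # ~4.0x
# Reaching ~5x also requires the robot to work at least as fast per hour
# as a person, which is an additional assumption on top of the uptime math.
```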

…4-plus years back, we were in a finance meeting with Elon, and Elon said, hey, our car is a robot on wheels. And that’s where we started developing. In fact, most of the engineering team working on Optimus has come from the vehicle side. And that’s why, when we talk about manufacturing prowess, we have the wherewithal, because the same engineers who worked back in the day on drive units are working on actuators now. So if there is any company which can do it at scale, it is going to be us…

…The Optimus reviews at this point involve an engineering review and a manufacturing review done simultaneously, with an iterative loop between engineering design and manufacturing, because we design something and then say, oh man, that’s really difficult to make; we need to change that design to make it easier to manufacture. So we’ve made radical improvements to the design of Optimus while increasing the functionality but making it actually possible to manufacture.

Like I’d say, Optimus 2 is almost impossible to manufacture, frankly…

…The hardware design will not actually be frozen even through start of production. There will be continued iteration because a bunch of the things that you discover are very difficult to make. You only find that pretty late in the game. So we’ll be doing rolling changes for the Optimus design even after start of production.

Tesla’s AI4 chip is manufactured by Samsung; Tesla is going to manufacture its AI5 chip with both TSMC and Samsung; the AI5 chip has, by some metrics, 40x better performance than AI4 because Tesla designed the hardware to address all the pain points in software; the AI5 chip deleted a lot of components that were in the AI4 chip and this has greatly improved the performance of the AI5 chip; management thinks Samsung’s US fab is slightly more advanced than TSMC’s US fab; management wants to have an oversupply of AI5 chips because the chips that do not go into vehicles and Optimus can be used for Tesla’s data centers; Tesla uses a combination of its own AI chips and NVIDIA chips for AI training; Tesla is not looking to replace NVIDIA, but management notes that NVIDIA’s chips need to accommodate a wide range of use cases, which disadvantages NVIDIA against Tesla’s self-designed chips, which need to accommodate only Tesla’s use cases; management thinks Tesla’s AI5 chip will have the best performance per watt and best performance per dollar for AI

Samsung, it is worth noting, does manufacture our AI4 computer and does a great job doing that. So now with the AI5, and here I need to make a point of clarification relative to some comments I’ve made publicly before, which is we’re actually going to focus both TSMC and Samsung initially on AI5…

…By some metrics, the AI5 chip will be 40x better than the AI4 chip, not 40%, 40x because we have a detailed understanding of the entire software and hardware stack. So we’re designing the hardware to address all of the pain points in software…

…With the AI5, we deleted the legacy GPU, or the traditional GPU, which is in AI4. AI5 does not have one; we just deleted the legacy GPU because the chip basically is a GPU. We also deleted the image signal processor. And there’s a long list of deletions that are very important. As a result of these deletions, we can actually fit AI5 in a half reticle, with good margin for the traces from the memory to the Tesla accelerators, the ARM CPU cores and the PCI blocks. So this is a beautiful chip. I’ve poured so much life energy into this chip personally. And I’m confident this is going to be a winner, next level…

…Technically, the Samsung fab has slightly more advanced equipment than the TSMC fab. These will both be made in the U.S., one — TSMC in Arizona, Samsung in Texas…

…Our goal — explicit goal is to have an oversupply of AI5 chips because if we have too many AI5 chips for the cars and robots, we can always put them in the data center…

…We already use AI for training in our data centers. So we use a combination of AI4 and NVIDIA hardware. We’re not about to replace NVIDIA, to be clear, but we do use both in combination. And the AI5 excess production, we can always put in our data centers…

…The challenge that they have is that they’ve got to satisfy a lot of requirements from a lot of customers, but Tesla only has to satisfy requirements from one customer: Tesla. That makes the design job radically easier and means we can delete a lot of complexity from the chip. I can’t emphasize how important this is. When you look at the various logic blocks in the chip, as you increase the number of logic blocks, you also increase the interconnections between the logic blocks. So you can think of it like highways: how many highways do you need to connect the various parts of the chip? And especially if you’re not sure how much data is going to go between each logic block on the chip, then you end up having giant highways going all over the place. It becomes an almost impossibly difficult design problem. And NVIDIA has done an amazing job of dealing with an almost impossibly difficult set of requirements. But in our case, we’re going for radical simplicity…

…I think AI5 will be the best performance per watt, maybe by a factor of 2 or 3 and the best performance per dollar for AI, maybe by a factor of 10.

Tesla has a world simulator for reinforcement learning for autonomous driving that is indistinguishable from actual video; Tesla will be increasing the parameter count for its autonomous driving AI model by an order of magnitude

Our world simulator for reinforcement learning is pretty incredible. When you see the Tesla Reality Simulator, you can’t tell the difference between the video that’s generated by the Tesla Reality Simulator and actual video; it looks exactly the same. So that allows us to have a very powerful reinforcement learning loop to further improve the Tesla AI.

We’re going to be increasing the parameter count by an order of magnitude. That’s not in 14.1. There are also a number of other improvements to the AI that are quite radical. This car will feel like it is a living creature. That’s how good the AI will get with the AI4 computer, just before AI5.

Tesla’s management thinks Tesla vehicles can become a giant distributed AI inference fleet

We could actually have a giant distributed inference fleet: if the cars are not actively driving, let’s just use them as a giant distributed inference fleet. At some point, if you’ve got tens of millions of cars in the fleet, or maybe at some point 100 million cars, and let’s say each had, I don’t know, a kilowatt of high-performance inference capability, that’s 100 gigawatts of inference, distributed, with cooling and power conversion taken care of. So that seems like a pretty significant asset.
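
The arithmetic behind that claim is straightforward; here is a minimal sketch using the figures Musk floats on the call (100 million vehicles, roughly 1 kW of inference compute each):

```python
# Back-of-envelope on the distributed inference fleet, using the
# numbers floated on the call: ~100 million vehicles with ~1 kW of
# high-performance inference compute each.
fleet_size = 100_000_000          # vehicles
inference_per_car_kw = 1          # kW per vehicle

total_gw = fleet_size * inference_per_car_kw / 1_000_000  # 1 GW = 1e6 kW
print(f"Fleet-wide inference capacity: {total_gw:.0f} GW")  # 100 GW
```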

The AI models Tesla and xAI are developing are very different, with Tesla’s models being much smaller

xAI’s Grok is a giant model; you could not possibly squeeze Grok onto a car, that’s for sure. It is a giant beast of a model. Grok is trying to solve for artificial general intelligence with a massive amount of AI training compute and inference compute. For example, Grok 5 will actually only run effectively on a GB300; that’s how much of a beast Grok 5 is. Whereas Tesla’s models are, I don’t know, maybe less than 10% of the size, maybe closer to 5% of the size of Grok. So yes, they’re really coming at the problem from very different angles. xAI and Grok are competing with Google Gemini and OpenAI ChatGPT and that kind of thing. And some of it is complementary. For example, for Grok voice, being able to interact with Grok in the car is cool. And for Optimus, voice recognition and voice generation is Grok. So that’s helpful there. But they are coming at it from kind of opposite ends of the spectrum.

Visa (NASDAQ: V)

Visa has begun deployment of the next generation of VisaNet, its core processing platform; more than half of the new code base for VisaNet was written with the help of generative AI

We have begun deployment of the next generation of VisaNet, the core processing platform in our Visa as a Service stack. It offers a cloud-ready, microservices-based, distributed, modular architecture that uses open languages and technologies, enabling easier scaling, easier configuration, and faster feature deployment. Over half of the new code base was built with the assistance of generative AI, improving development speed, security, and maintainability. We have specific modules in market today, with plans to roll out additional modules and markets.

The Visa Scam Disruption product detects scam activity at the network level and uses AI to monitor merchants; although Visa Scam Disruption launched only a year ago, it has helped Visa’s clients and law enforcement dismantle more than 25,000 scam merchants representing over $1 billion in fraud attempts

We continue to enhance our risk management capabilities, including Visa Scam Disruption, which proactively detects scam activity at the network level that no single issuer, acquirer or merchant could see alone, and leverages AI-enhanced merchant monitoring, external intelligence feeds and our global expertise. Just a year since launch, we have worked closely with our clients and law enforcement to dismantle more than 25,000 scam merchants representing more than $1 billion in fraud attempts.

Visa is now powering live agentic transactions; Visa recently released the Visa Trusted Agent Protocol to help merchants verify agents and avoid malicious bots; minimal integration is required from merchants to utilise the Visa Trusted Agent Protocol; Visa recently launched its MCP (model context protocol) server, which allows AI systems to interface with the Visa Intelligent Commerce APIs; management thinks Visa is leading in setting standards for agentic commerce; with Visa Intelligent Commerce, management has put out a set of capabilities for AI-ready cards to allow consumers to easily set spending limits and conditions for agentic transactions; the Visa Trusted Agent Protocol is an open standard; management thinks agentic commerce will accelerate adoption of traditional e-commerce and mobile commerce, and be a net positive for Visa in both the transactions-driven business and the value-added services business; management thinks there will be 3 phases to agentic commerce: (1) consumers using agents for discovery then making purchases on merchant sites, (2) consumers using agents for discovery and making purchases with the agents, and (3) consumers empowering agents to search for things on their behalf and buy; management thinks the Visa Trusted Agent Protocol can be the base layer for everyone in the agentic commerce ecosystem to leverage; see Point 32 for more on agentic commerce

I’m pleased to announce that we are now powering live agentic transactions and recently released a merchant agent toolkit to make it easy for developers to embed our solutions into workflows and agentic processes. Just 2 weeks ago, we announced the Visa Trusted Agent Protocol, a framework that enables safer agent-driven checkout by helping merchants verify agents and avoid malicious bots. And since it’s built on existing messaging standards, minimal integration is required for merchants…

…We recently launched our MCP server, providing access for AI systems to interface with our Visa Intelligent Commerce APIs…

…In this third wave of agentic commerce, we’ve been leading in terms of our role in setting the standards. I think one great example of that is Visa Intelligent Commerce, where we put out a set of capabilities for AI-ready cards, leveraging tokenization and AI-powered personalization through our token service. We put out a set of standards for payment instructions that are going to allow customers like you and me to easily set spending limits and conditions to provide clear guidance for agent transactions, and also our payment signals, which are going to share those data payloads in real time with Visa, enabling us to help set transaction controls, manage disputes and chargebacks and those types of things…

…I think what differentiates the Visa Trusted Agent Protocol is 2 things. One is it’s open. It’s an open set of standards, and we think that an open framework is critical to drive mass adoption in the way that’s needed for agentic commerce. And the second is it’s easy to integrate. We built it on existing web infrastructure so that it’s going to be easy for merchants to integrate into existing messaging standards and get up and running quickly…
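
Visa doesn’t spell out the wire-level mechanics on the call, but “merchants verify agents” schemes generally follow a familiar pattern: the agent signs each request with a private key, and the merchant checks the signature against a public key registered with some trust authority. The sketch below is a minimal, hypothetical illustration of that generic pattern; the key handling, names, and flow are our own illustration, not the actual Visa Trusted Agent Protocol specification:

```python
# Hypothetical sketch of signature-based agent verification -- the
# generic pattern behind "merchants verify agents", NOT the actual
# Visa Trusted Agent Protocol specification.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

# The agent platform generates a key pair; in a real scheme the
# public key would sit in a registry the merchant can query.
agent_key = Ed25519PrivateKey.generate()
registered_public_key = agent_key.public_key()

# Agent side: sign the checkout request sent to the merchant.
checkout_request = b"agent_id=shopbot-1&order=sku-42&amount=19.99"
signature = agent_key.sign(checkout_request)

# Merchant side: verify the signature before honoring the request,
# distinguishing a registered agent from a malicious bot.
try:
    registered_public_key.verify(signature, checkout_request)
    print("Verified agent: proceed with checkout")
except InvalidSignature:
    print("Unverified caller: treat as a potentially malicious bot")
```

Riding on existing web infrastructure in this way is what makes the “minimal integration” claim plausible: the merchant only has to check one extra signature per request rather than rebuild its checkout flow.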

…[Question] To what extent do you see agentic commerce as more of a substitute for traditional e-commerce versus being additive to the TAM of the overall payments industry?

[Answer] I think the base case is it continues to accelerate the adoption of e-commerce and mobile commerce as we all know it. I think there’s an upside case on that where you could actually see users buying from a much larger and more diverse set of merchants than they do today in traditional e-commerce given the power of these agents and their ability to go out and search the world’s inventory based on whatever it is that you prefer for your agent. That might be value. That might be price. That might be inventory. That might be speed of delivery and so on and so forth. I think that could ultimately result in consumers buying more things from more merchants, which ultimately means more transactions on Visa. I also think there’s a significant upside in the delivery and the relevance of our portfolio of value-added services for the entire ecosystem, especially as you said, they have to work through a number of things that involve potential fraud and disputes and chargebacks and things like that…

…It’s still early days. And I think what you’re likely to see in the evolution of agentic commerce is not dissimilar to what we saw in e-commerce. I think early on, you’re seeing consumers use these agents and these platforms for discovery. They’re shopping. They’re looking for what might be available for any given gift I’m trying to buy or any clothing item I might try to buy. But then I might jump to the actual merchant site to make the purchase.

Then the next step of what you’re starting to see is the integration of the buy capabilities into that shopping journey. We’re just starting to see that in the marketplace today. We’ve been working on that for many, many months with the ecosystem.

And then I think the ultimate kind of user experience and the promise of agentic commerce will be truly empowering agents to go out to search for things on our behalf and ultimately make purchases and buy things without human intervention. That, we haven’t really seen in the marketplace today, but we’re working very hard with the platform players to ensure that the capabilities are in place to enable that…

…I think it’s where the Visa Trusted Agent Protocol can form a base layer for everyone to build on and everyone to ultimately leverage.

Visa Protect for A2A (Account-to-Account), which enables consumers to pay businesses directly from their bank accounts, is using AI to reduce fraud in Brazil; in a 6-month pilot in Brazil, Visa Protect for A2A scored nearly $500 billion of Pix volume from Visa’s bank partner and identified over $90 million of fraud; with a detection rate of more than 80%, Visa Protect for A2A could have prevented most of that fraud

Our award-winning product, Visa Protect for A2A, is delivering value with AI. Our pilot in Brazil scored nearly $500 billion of our bank partner’s Pix volume over a 6-month period and identified over $90 million of fraud, which could have been prevented with a detection rate of more than 80%. We believe Visa Protect for A2A can play an important role in Brazil by providing real-time fraud monitoring on Pix, helping to reduce fraud for our bank partners and ensure a safer payment experience for buyers and sellers.
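
To put the detection-rate claim in dollar terms, a quick back-of-envelope using only the figures from the call (the 80% rate is the floor Visa cites, so the preventable amount scales up with the true rate):

```python
# Back-of-envelope on the Brazil Pix pilot figures from the call.
pix_volume_scored = 500e9   # ~$500 billion of Pix volume scored in 6 months
fraud_identified = 90e6     # ~$90 million of fraud identified
detection_rate = 0.80       # "more than 80%" detection rate (floor)

preventable = fraud_identified * detection_rate  # ~$72 million
print(f"Fraud preventable at an 80% detection rate: ${preventable / 1e6:.0f}M+")
print(f"Fraud as a share of scored volume: {fraud_identified / pix_volume_scored:.3%}")
```

Note how small fraud is as a share of scored volume (under 0.02% here), which is why network-level scoring across the whole flow matters: no single bank sees enough of the picture on its own.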

Visa’s management thinks tokenisation is the critical building block for agentic commerce

Tokenization, I think, is the critical building block that ultimately will help agentic commerce reach its promise. I know you asked about the Trusted Agent Protocol, but if you go back to the Visa Intelligent Commerce set of products and standards that we put out, tokenization as a platform is what enables the bulk of that functionality, and ultimately is what’s going to enable us all to have safe, secure, trusted transactions with agents acting on our behalf. So tokenization is a critical building block of that.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, Apple, ASML, Mastercard, MercadoLibre, Meta Platforms, Microsoft, Netflix, PayPal, TSMC, Tesla, and Visa. Holdings are subject to change at any time.