Shorting Stocks Is Incredibly Tough To Do

It’s not just the fundamentals of a business that you have to get right.

Occasionally, I’m reminded of the investing lesson that shorting stocks is an immensely tough way to invest in the stock market.

I published Why It’s So Difficult To Short Stocks and Shorting Stocks Is Hard, Really Hard in April 2020 and February 2024, respectively. In these articles, I shared how Luckin Coffee (OTC: LKNCY) and Herbalife (NYSE: HLF) made life treacherous for their short-sellers because their stock prices rose strongly in the interim before sinking. In particular, Luckin Coffee’s stock price rose even while management was committing fraud. I wrote in Why It’s So Difficult To Short Stocks:

“It turns out that fraudulent transactions at Luckin could have happened as early as April 2019. From 1 April 2019 to 31 January 2020, Luckin’s share price actually increased by 59%. At one point, it was even up by nearly 150%.

If you had shorted Luckin’s shares back in April 2019, you would have faced a massive loss – more than what you had put in – even if you had been right on Luckin committing fraud. This shows how tough it is to short stocks. Not only must your analysis on the fundamentals of the business be right, but your timing must also be right because you could easily lose more than you have if you’re shorting.”

I was reminded of Luckin Coffee by Intellego Technologies (SSE: INT). Based in Sweden, Intellego Technologies ostensibly sells dosimeters. When a surface requires disinfection with UV radiation, it is important to know whether the surface has received an adequate dose. This is where Intellego Technologies’ dosimeters are claimed to help: they are devices that change colour after exposure to a certain amount of UV radiation.

Earlier this week, the company’s CEO was arrested on “suspicion of gross fraud”, and SEK 100 million of its cash reserves were seized by Swedish authorities. The trading of Intellego Technologies’ shares on the Swedish stock market was also suspended.

Although not much is known yet about the apparent misdeeds of the CEO, the “gross fraud” is “related to [Intellego Technologies’] press releases and quarterly reports in 2025.”

Intellego Technologies held its IPO in June 2021. Its stock price closed at less than SEK 5 on the day of its listing. At the start of 2025, Intellego Technologies’ stock price was SEK 41. It climbed to SEK 78 at the end of June, before closing at a peak of SEK 213 in early September. Just prior to the trading suspension, Intellego Technologies’ stock price had fallen to SEK 47.

An investor who, back in June 2025, thought the company had committed fraud and thus shorted its shares at the end of that month would have had to endure a rise of roughly 170% in the stock price, from SEK 78 to SEK 213, before the collapse to SEK 47 occurred. And this happened even though the investor was most likely right about Intellego Technologies being a fraud. This echoes what happened with Luckin Coffee from April 2019 to April 2020, and reinforces the lesson for me on how incredibly tough shorting stocks is. Both your analysis of a business’s fundamentals and your timing must be right, because you could easily lose more than you have when you’re short selling.
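To make the arithmetic of that pain concrete, here is a minimal sketch in Python of the short-seller’s profit and loss, using the share prices above and a hypothetical position size (the 1,000-share position and the 100% margin assumption are illustrative, not drawn from any filing):

```python
def short_pnl(entry_price, current_price, shares):
    """Profit or loss on a short position: positive when the price falls, negative when it rises."""
    return (entry_price - current_price) * shares

# Hypothetical position: short 1,000 Intellego shares at SEK 78 at the end of June 2025,
# with SEK 78,000 of the investor's own capital posted against the position.
shares = 1_000
entry = 78.0
capital = entry * shares  # SEK 78,000

for price, label in [(213.0, "early-September peak"), (47.0, "just before suspension")]:
    pnl = short_pnl(entry, price, shares)
    print(f"{label}: price SEK {price:.0f}, P&L SEK {pnl:,.0f} ({pnl / capital:.0%} of capital)")

# At the SEK 213 peak the paper loss is SEK 135,000, about 173% of the capital posted --
# more than the investor put in -- even though the later fall to SEK 47 would eventually
# have made the short profitable.
```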


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

A Framework For Investing In Oil & Gas Companies

There’s a way to invest in oil & gas companies without having to make guesses on oil prices.

I have avoided investing in oil and gas companies for years, knowing how closely their stock prices track oil prices, a variable I cannot predict. But I still have a framework for investing in these companies: Buy them really cheaply. This framework is inspired by the successes of investors Bill Browder and the late Charlie Munger.

In Browder’s excellent 2015 book Red Notice, which I had discussed previously in this blog, he shared his experiences investing in two Russian oil & gas companies, Sidanco and Gazprom. These are the instructive excerpts (emphases are mine):

“[On Sidanco] According to his data, Sidanco had six billion barrels of oil reserves. By multiplying the price of the 4 per cent block by twenty-five I got the price of the whole company: $915 million. I divided that by the number of barrels of oil in the ground, which told me that Sidanco was trading at $0.15 per barrel of oil reserves in the ground, which was crazy because at the time the market price for a barrel of oil was $20

I pulled out a piece of paper and drew two columns. I titled the first Sidanco and the second Lukoil, and wrote down every fact about each company that I could find in the magazine. When I was done, I looked over the accumulated information. There was practically no difference between the two companies. Little infrastructure had been developed since the fall of the Soviet Union, and they both had the same rusting oil derricks and used the same leaky pipelines, and they both had the same unproductive workers who were paid the same measly salaries. The only obvious difference between them was that Lukoil was well known and had lots of broker reports written about it, whereas Sidanco had none. When we compiled the information from these reports and compared them to the information on Lukoil from the magazine, they matched up perfectly. This led me to believe that the information on Sidanco was reliable too. 

This was a remarkable discovery. Everyone knew that Lukoil was a steal, since it controlled the same amount of oil and gas as British Petroleum but was ten times cheaper. Now here was Sidanco, sitting on a bit less oil than Lukoil, but not much, only it was six times cheaper than Lukoil. In other words, Sidanco was sixty times cheaper than BP! This was one of the most obvious investment ideas I had ever seen. My fund bought 1.2 per cent of the company starting at $4 per share, spending roughly $11 million. It was the largest single investment decision I had ever been involved with in my life…

Finally, a little more than a year later, something did. On 14 October 1997, BP announced they were buying 10 per cent out of Vladimir Potanin’s 96 per cent block of Sidanco for a 600 per cent premium to the price we had paid a year earlier. It was a home run…

[On Gazprom] In terms of output and strategic significance, Gazprom was one of the world’s most important companies. Yet the entire market value of the company – $12 billion – was smaller than your average mid-size US oil and gas firm. In terms of hydrocarbon reserves, Gazprom was eight times the size of ExxonMobil and twelve times bigger than BP, the largest oil companies in the world – yet it traded at a 99.7 per cent discount to those companies per barrel of reserves

In a world where people fight tooth and nail to make 20 per cent, we’d just found something that might generate 1,000 per cent, or even 5,000 per cent. It was so obvious that the fund increased its investment in Gazprom right up to the 20 per cent limit, the largest percentage for a single stock that the fund allowed…

By 2005, Gazprom was up a hundred times from the price at which the Hermitage Fund had purchased its first shares. Not 100 per cent – one hundred times.”

Coming to Munger’s investment, it involved a company called Belridge Oil. In the late 1970s, Munger invested in Belridge Oil at US$115 per share when its market capitalisation was US$110 million. At the time, the land Belridge Oil owned was sitting on 380 million barrels of oil reserves. The company’s market capitalisation meant that its oil reserves were valued at less than US$0.30 per barrel at a time when oil prices were around US$5 to US$6 per barrel. Around two years after Munger invested in Belridge Oil, the company was acquired by Shell for around US$3,700 per share, giving him a spectacular return of more than 3,000% in a short period of time.
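For readers who want to see the valuation arithmetic behind both examples, here is a minimal sketch using only the figures quoted above, treating the company’s market value as the price paid for its reserves (and ignoring debt and other assets):

```python
def value_per_barrel(market_value, reserves_barrels):
    """Implied price the market is paying for each barrel of oil reserves."""
    return market_value / reserves_barrels

# Sidanco (Browder): ~US$915 million whole-company price against ~6 billion barrels of reserves.
sidanco = value_per_barrel(915e6, 6e9)      # ~US$0.15 per barrel, versus ~US$20 oil at the time

# Belridge Oil (Munger): ~US$110 million market capitalisation against ~380 million barrels.
belridge = value_per_barrel(110e6, 380e6)   # ~US$0.29 per barrel, versus US$5-6 oil at the time

print(f"Sidanco:  ~US${sidanco:.2f} per barrel of reserves")
print(f"Belridge: ~US${belridge:.2f} per barrel of reserves")
```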

To be clear, the Gazprom situation was hairy, and the successful outcome of Munger’s Belridge Oil investment came with a massive dollop of luck. Gazprom’s managers were stealing the company’s assets, and Browder had to rope in Russia’s government to intervene before the company’s stock price could surge. And after Munger invested in Belridge Oil, the price of oil increased to US$30 per barrel by 1980. But the core strategy in both cases was highly rational: Invest in oil & gas companies with oil reserves that are valued at massive discounts to prevailing oil prices.

I will continue to avoid investing in an oil & gas company if the investment thesis requires me to have a view on the future price of oil. But if, taking cues from Browder and Munger, I can find an oil & gas company with proven oil reserves that are valued at a tiny fraction of the prevailing price of oil, I would be very interested, as the huge discount removes the need for guesswork on oil prices.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life.  I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

Problems With Oracle’s AI Growth?

Oracle’s management is projecting a 14x increase in AI revenue over the next five years. But the picture is not as rosy as it seems.

Oracle Corporation’s (NYSE: ORCL) management stunned stock market participants earlier this week during the company’s conference call for the release of its first-quarter earnings for FY2026 (fiscal year ending 31 May 2026). Management projected stupendous future growth for Oracle’s Cloud Infrastructure business, driven by an enormous increase in RPO (remaining performance obligations) because of AI-related demand:

“We have signed significant cloud contracts with the who’s who of AI, including OpenAI, xAI, Meta, NVIDIA, AMD and many others. At the end of Q1, remaining performance obligations, or RPO, now [stand at] [US]$455 billion. This is up 359% from last year and up [US]$317 billion from the end of Q4. Our cloud RPO grew nearly 500% on top of 83% growth last year…

…The enormity of this RPO growth enables us to make a large upward revision to the Cloud Infrastructure portion of our financial plan. We now expect Oracle Cloud Infrastructure will grow 77% to [US]$18 billion this fiscal year and then increase to [US]$32 billion, [US]$73 billion, [US]$114 billion and [US]$144 billion over the following 4 years. Much of this revenue is already booked in our [US]$455 billion RPO number, and we are off to a fantastic start this year.”

For context, Oracle ended FY2025 with total revenue of US$57.4 billion, and Cloud Infrastructure revenue of merely US$10.2 billion. The newly expected windfall for Cloud Infrastructure drove Oracle’s stock price 36% higher the day after its FY2026 first-quarter earnings.

But when I looked at the details of Oracle’s RPO and financials, I found potentially serious problems with the company’s AI-growth story. 

Problem 1: Risky customer?

During the earnings conference call, Oracle’s management did not name the customers responsible for the massive increase in the company’s RPO. But a subsequent article from the Wall Street Journal revealed that OpenAI had recently signed a US$300 billion, five-year deal with Oracle – in other words, nearly 95% of Oracle’s sequential US$317 billion increase in RPO in the first quarter of FY2026 came from just OpenAI.

Intense customer-concentration alone can be a headache for any company. But when the customer is itself burning lots of cash, it can be a thunderclap headache. OpenAI’s leaders expect the company to earn around US$13 billion in revenue this year, but its deal with Oracle works out to an annual average spend of US$60 billion. Moreover, The Information reported earlier this month that OpenAI’s leaders are now forecasting significantly higher cash burn over the next few years than recently expected:

“OpenAI projected its cash burn this year through 2029 will rise even higher than previously thought, to a total of [US]$115 billion. That’s about [US]$80 billion higher than the company previously expected…

…The company projected it will burn more than [US]$8 billion this year, or roughly [US]$1.5 billion higher than its prior projection from earlier this year. Cash burn will more than double to more than $17 billion next year—[US]$10 billion higher than what the company earlier projected.

And in 2027 and 2028, the company projects to burn roughly [US]$35 billion and [US]$45 billion, respectively. In the prior projection, the company said its 2028 cash burn would be [US]$11 billion, meaning the new estimate is more than four times higher.”

OpenAI’s ability to fund its spending plans with Oracle will depend on the largesse of would-be investors and lenders, so there’s no guarantee that OpenAI will have access to the necessary funding in the future. In the meantime, Oracle will have to procure the AI hardware (mostly AI chips) ahead of time. This brings me to the second potential problem.

Problem 2: Risky finances? 

Purchasing AI hardware requires capital. Lots of capital. And Oracle’s not in the best financial shape for this.

As of 31 August 2025, Oracle had US$11.0 billion in cash and marketable securities, but a staggering US$91.3 billion in debt, giving a high net-debt position of US$80.3 billion. If Oracle’s operating lease liabilities are included, the net-debt position rises further to US$94.4 billion. Oracle’s trailing operating cash flow and net income are US$21.5 billion and US$12.4 billion, respectively. Using the lower net-debt figure gives Oracle net-debt-to-operating-cash-flow and net-debt-to-net-income ratios of 3.7 and 6.5, respectively. These ratios suggest Oracle is unable to increase its debt significantly without risking its financial health. To be clear, the ratios are high not because Oracle’s trailing operating cash flow and net income are temporarily compressed; Table 1 below shows Oracle’s operating cash flows and net incomes for FY2021-FY2025.
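The ratios above follow from simple arithmetic; here is a minimal sketch of the calculation using the balance-sheet and income figures just cited (the operating-lease figure is implied from the difference between the two net-debt numbers, an assumption on my part):

```python
# Oracle figures as of 31 August 2025, in US$ billions, from the text above.
cash_and_securities = 11.0
total_debt = 91.3
operating_lease_liabilities = 94.4 - (91.3 - 11.0)  # ~14.1, implied from the US$94.4b figure
trailing_operating_cash_flow = 21.5
trailing_net_income = 12.4

net_debt = total_debt - cash_and_securities                      # 80.3
net_debt_incl_leases = net_debt + operating_lease_liabilities    # 94.4

print(f"Net debt: US${net_debt:.1f}b")
print(f"Net debt incl. leases: US${net_debt_incl_leases:.1f}b")
print(f"Net debt / operating cash flow: {net_debt / trailing_operating_cash_flow:.1f}")  # ~3.7
print(f"Net debt / net income: {net_debt / trailing_net_income:.1f}")                    # ~6.5
```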

Table 1; Source: Oracle earnings releases

Oracle’s management was asked during the FY2026 first-quarter earnings conference call about the capital expenditures needed to fulfill the company’s RPO. Management was coy and suggested that Cloud Infrastructure’s projected growth would happen in an asset-light way: 

“As I mentioned in the prepared remarks, and as I’ve said very clearly beforehand, we do not own the property. We do not own the buildings. What we do own and what we engineer is the equipment. And that’s equipment that is optimized for the Oracle Cloud. It has extremely special networking capabilities. It has technical capabilities from Larry and his team that allows us to run these workloads much, much faster. And as a result, it’s much cheaper than our competitors, depending on the workload.

Now because of that, what we do is we put in that equipment only when it’s time and usually very quickly, assuming that our customer accepts it, we’re already generating revenue right away. The faster they accept the system and that it meets their needs, the faster they start using it, the sooner we have revenue. This is, in some ways, I don’t want to call it asset-light from the finance world, but it’s asset pretty light.”

I disagree with management’s “asset pretty light” characterisation. Earlier, I mentioned that Cloud Infrastructure’s revenue was expected to increase from US$10.2 billion in FY2025 to US$18 billion in FY2026. During the earnings conference call, management projected US$35 billion in capital expenditure in FY2026, up 65% from US$21.2 billion in FY2025. I think it’s reasonable to assume that most of the US$35 billion in expected capital expenditure for FY2026 will be for the Cloud Infrastructure business, so we’re looking at a capital-expenditure-to-revenue ratio of nearly 2 (US$35 billion over US$18 billion). That’s hardly “asset pretty light”.

Exacerbating the problem for Oracle is that its operating cash flow in FY2025 was just US$20.8 billion, meaning it had negative free cash flow during the year. Unless Oracle’s operating cash flow increases by nearly 70% in FY2026, the company will have to raise capital externally for its projected capital expenditures. I already mentioned that Oracle’s heavy net-debt position is an obstacle to any large future increases in debt. This said, issuing shares could work, given Oracle’s current market capitalisation of US$922 billion. Oracle’s high price-to-earnings (P/E) ratio of 76 also makes issuing shares a palatable option. Nonetheless, there could still be material dilution given the potentially significant capital expenditures needed to support Oracle’s RPO. Coming back to the possibility of Oracle’s operating cash flow increasing by nearly 70% in FY2026, I think it’s very, very unlikely because of the lower margin of Cloud Infrastructure, which brings me to the third potential problem. 
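As a back-of-the-envelope check on the funding gap described above, here is a minimal sketch using the FY2025 and FY2026 figures already mentioned:

```python
# Figures in US$ billions, from the text above.
fy2025_operating_cash_flow = 20.8
fy2026_planned_capex = 35.0
fy2026_cloud_infra_revenue = 18.0

capex_to_cloud_revenue = fy2026_planned_capex / fy2026_cloud_infra_revenue
ocf_growth_needed = fy2026_planned_capex / fy2025_operating_cash_flow - 1
funding_gap = fy2026_planned_capex - fy2025_operating_cash_flow

print(f"Capex / Cloud Infrastructure revenue: {capex_to_cloud_revenue:.1f}x")             # ~1.9x
print(f"Operating cash flow growth needed just to cover capex: {ocf_growth_needed:.0%}")  # ~68%
print(f"Funding gap if operating cash flow stays flat: US${funding_gap:.1f}b")            # ~US$14b
```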

Problem 3: Margin pressure?

Cloud Infrastructure has been Oracle’s fastest-growing business in the past few years. Table 2 shows the changes in Cloud Infrastructure revenue and  Oracle’s total revenue for FY2023-FY2025. 

Table 2; Source: Oracle earnings releases

Cloud Infrastructure revenue is likely all reported under Oracle’s Cloud services and license support segment. What has happened over the same period shown in Table 2 is that the Cloud services and license support segment’s operating expense has grown much faster than its revenue, as illustrated in Table 3, suggesting that Cloud Infrastructure is a lower-margin business for Oracle.  

Table 3; Source: Oracle earnings releases

This brings into question how much Oracle’s net income and cash flow can benefit from the projected rapid growth in Cloud Infrastructure revenue. If Cloud Infrastructure’s revenue indeed grows as management expects, there’s no doubt that Oracle’s net income will grow – but to what extent remains to be seen. It’s worth noting that with Oracle’s shares carrying a P/E ratio of 76 at the moment, the market is expecting stellar net income growth.

Conclusion

Larry Ellison, Oracle’s founder, chairman, and chief technology officer, once said:

 “Why do we do these things? George Mallory said the reason he wanted to climb Everest was because it’s there. I don’t think so. I think Mallory was wrong. It’s not because it’s there. It’s because we’re there, and we wonder if we can do it…

…So how do I get off this merry-go-round? How do I stop when I’m winning? It’s hard for me to quit when I’m losing, and it’s hard for me to quit when I’m winning. It’s just hard for me to quit. I’m addicted to competing.” 

I wouldn’t count out any business leader with such a ferocious competitive spirit. But there are potential problems with Oracle’s AI-growth story, namely, (1) high revenue-concentration from a risky customer in OpenAI, (2) having a debt-laden balance sheet while having to invest heavily in AI chips, and (3) margin-compression from lower-margin AI-related services. I wonder how this will all work out. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Meta Platforms. Holdings are subject to change at any time.

3 Things That May Drag a REIT’s Distribution Per Unit Lower

Look under the hood at a REIT’s financials to understand if its future distributions are at risk of a decline.

Singapore-listed REITs, or real estate investment trusts, are a favourite investment vehicle for many Singaporean investors.

REITs provide investors with the opportunity to invest in property and also provide much more liquidity than investing directly in real estate. In addition, many REITs in Singapore have juicy trailing distribution yields that can be as high as 9%. And with interest rates likely on the decline, REITs can also benefit from lower interest expenses and thus pay higher distributions to unitholders.

Given the above, I did some research on REITs recently to see if there were any that might be attractive opportunities. But as I conducted my study, I noticed some common issues with REITs that may end up being a drag on their future distributions per unit.

Here’s what I found.

Management fees that are paid in units 

A common theme I noticed about REITs in Singapore is that most of them pay the bulk of the REIT manager’s fees with units in the REIT. 

Take a look below at Keppel DC REIT’s (SGX: AJBU) financial statement for the first half of 2025. It shows the adjustments made to Keppel DC REIT’s net profit to determine the income available for distribution. One of the adjustments made is the management fees paid in units.

Source: Keppel DC REIT 2025 first-half financial statement

To prop up a REIT’s distribution per unit (DPU), a REIT’s manager can choose to receive its fees in units of the REIT instead of cash, since doing so means more cash is available for distribution to unitholders. But this will be a drag on a REIT’s DPU over the longer term for two reasons.

First, once the REIT’s manager opts to receive all or a bigger portion of its fees in cash, the amount available for distribution to unitholders will decline (all other things remaining constant). Second, because the REIT manager opted to receive its fees in units in the past, the REIT had to issue new units, which resulted in a higher unit count; future distributions are thus divided across a larger unit base.
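To see both effects at work, here is a minimal sketch with purely hypothetical numbers (the income, fee, unit count, and unit price below are illustrative and not drawn from any REIT mentioned in this article):

```python
# Hypothetical REIT: S$100m of distributable income before the manager's fee,
# a S$10m manager's fee, 1,000m units outstanding, and fee units issued at S$1.00 each.
income_before_fee = 100.0    # S$ millions
manager_fee = 10.0           # S$ millions
units_outstanding = 1_000.0  # millions of units
unit_price = 1.0             # S$ per unit

def dpu(share_of_fee_paid_in_cash):
    """DPU when a given share of the manager's fee is paid in cash.
    The rest is paid in newly issued units, which preserves cash but enlarges the unit base."""
    cash_fee = manager_fee * share_of_fee_paid_in_cash
    distributable = income_before_fee - cash_fee
    new_units = (manager_fee - cash_fee) / unit_price
    return distributable / (units_outstanding + new_units)

print(f"Fee paid 100% in units: DPU = S${dpu(0.0):.4f}")  # ~S$0.0990
print(f"Fee paid 100% in cash:  DPU = S${dpu(1.0):.4f}")  # ~S$0.0900
```

In this sketch, a switch from paying the fee in units to paying it in cash immediately knocks roughly 9% off the DPU, and the units issued in earlier periods stay outstanding, permanently enlarging the base over which future distributions are split.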

Keppel DC REIT is not the only REIT that does this. In fact, it is common practice across REITs in Singapore. 

Sasseur REIT (SGX: CRPU) is another example. Take a look at the REIT’s condensed income statement for the first half of 2025:  

Source: Sasseur REIT 2025 first-half earnings presentation slide

The base fee of Sasseur REIT’s manager that is paid in cash increased by S$0.4 million from the first half of 2024 to the first half of 2025. This is because the REIT manager opted to receive 30% of its base fee in cash in the first half of 2025, rather than the 20% it chose for the first half of 2024. This is a real case of how the DPU of a REIT can decline simply because the REIT’s manager chooses to receive more of its fees in cash.

The way I see it, the DPU of Sasseur REIT is being propped up by the REIT manager’s decision to receive some or all of its fees in units rather than cash. Once this goes away, DPU will be pressured.

Another example is Frasers Logistics & Commercial Trust (SGX: BUOU). These are the adjustments the REIT made to obtain its distributable income:

Source: Frasers Logistics & Commercial Trust’s FY2025 first-half financial statement

For Frasers Logistics & Commercial Trust, the column on the right of the table refers to the first half of FY2024 and the column on the left refers to the first half of FY2025. In both periods, the REIT added a significant amount of “management fees paid in units” to net income, which puffed up distributable income. But while 100% of the REIT’s management fees were paid in units in the first half of FY2024, only 43% were in the first half of FY2025. Because of this change, Frasers Logistics & Commercial Trust had a smaller upward adjustment to distributable income in the first half of FY2025, which was one of the reasons its DPU shrank year-on-year.

Watch for capital distributions

Another thing to look out for is whether the DPU is being bumped up by one-off or short-term capital distributions. We can return to Frasers Logistics & Commercial Trust as an example.

Source: Frasers Logistics & Commercial Trust’s FY2025 first-half earnings presentation

The image above shows that Frasers Logistics & Commercial Trust’s total distributable income was bumped up by capital distributions in both the first half of FY2025 and the first half of FY2024.

The problem is that capital distributions are one-off, or at best short-term, distributions, and they depend on gains from divestments. A REIT’s manager may decide to retain or distribute capital gains depending on the REIT’s performance for the year in order to “smoothen” out distributions. But as investors, we should note that these are not long-term solutions and eventually this capital distribution buffer may run out.

We can look at Far East Hospitality Trust* (SGX: Q5T) as an example:

Source: Far East Hospitality Trust’s 2025 first-half earnings presentation

Far East Hospitality Trust’s distribution to stapled security holders includes distributions from other gains. The stapled trust divested one of its properties in 2022 and has since been distributing the divestment gains to unitholders at around S$8 million annually; the manager of the trust had decided to distribute the divestment gains over a few years to smoothen the trust’s distribution per stapled security (DPS). But as with all capital distributions, the well will eventually run dry, and absent the capital distribution, the DPS will likely drop.

*Far East Hospitality Trust is technically not a real estate investment trust. Instead, it is a stapled trust consisting of Far East Hospitality Real Estate Investment Trust and Far East Hospitality Business Trust. But for the purposes of this article, there’s no need to split hairs.

Interest rate sensitivity

When looking at how sustainable a REIT’s DPU is, we also need to look at its interest rate sensitivity. REITs have been reporting rising finance costs in the last few years as their debt gets refinanced at higher rates or as their floating-rate debt gets repriced. The rising finance costs have been a drag on their DPU.

This is why I prefer a REIT with a high interest coverage ratio. A higher interest coverage ratio means that a change in interest rates would have a smaller impact on distributable income.

Imagine a REIT with an interest coverage ratio of 5. All else equal, a 20% increase in finance costs will only lead to a 5% decline in DPU. Comparatively, a REIT with an interest coverage ratio of 3 will suffer a roughly 10% decline in DPU.
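Here is a minimal sketch of that sensitivity, under the simplifying assumptions that distributable income is simply income before finance costs less finance costs, and that everything else stays constant:

```python
def dpu_decline(interest_coverage_ratio, finance_cost_increase):
    """Approximate fall in distributable income (and hence DPU, all else equal)
    when finance costs rise, for a REIT with the given interest coverage ratio."""
    finance_costs = 1.0                               # scale finance costs to 1
    income_before_finance = interest_coverage_ratio   # ICR = income before finance costs / finance costs
    distributable = income_before_finance - finance_costs
    new_distributable = income_before_finance - finance_costs * (1 + finance_cost_increase)
    return (distributable - new_distributable) / distributable

for icr in (5, 3):
    print(f"ICR {icr}: a 20% rise in finance costs cuts DPU by about {dpu_decline(icr, 0.20):.0%}")
# ICR 5 -> ~5%; ICR 3 -> ~10%
```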

CapitaLand Integrated Commercial Trust (SGX: C38U) has an interest coverage ratio of 3.3, shown in the table below, which looks fairly low to me and suggests that the REIT’s DPU is sensitive to interest rate hikes. But we are in a fairly high interest rate environment at the moment, so it is perhaps more common for interest coverage ratios to be on the lower end of the spectrum during this time.

Source: CapitaLand Integrated Commercial Trust 2025 first-half earnings presentation

Final Takeaways

Singaporean investors invest in REITs for steady income.

Although Singapore-listed REITs have historically performed decently, investors still need to assess if a REIT’s DPU can be sustained over the long term. To do so, they can peer under the hood and determine if the REIT’s DPU could be pressured by (1) a higher unit base, (2) the end of capital distributions, and (3) the end of fees being paid in units.

It is also important to assess how sensitive a REIT’s DPU is to interest rate spikes. Rates are likely to be on a downtrend in the near future, but there may be times ahead when interest rates spike again.

Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI (2025 Q2)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q2 earnings season.

The way I see it, artificial intelligence (or AI) really leapt into the zeitgeist in late 2022 or early 2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

We’re thick in the action of the latest earnings season for the US stock market – for the second quarter of 2025 – and I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:

With that, here is the latest commentary, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb’s management sees AI as being a critical part of Airbnb’s long-term product vision; management thinks travel planning in the future cannot be done without AI

We couldn’t talk about long-term product vision without talking about AI…

…I think you can’t do travel planning without AI going forward.

Airbnb’s management thinks they have chosen the hardest part to start with AI in the travel industry, which is customer service; customer service is the hardest part because the stakes are high; management has built a custom AI agent for Airbnb based on 13 different models; the custom AI agent has been rolled out in the US in English and it has reduced the need for human contact by 15%; management will roll out the custom AI agent in more languages in 2025 H2; the custom AI agent will become more personalised and agentic in 2026

We’ve chosen a very specific way to approach AI. A lot of companies have chosen what I would say is the lower stakes part of travel, which is travel planning and inspiration. For AI, we actually start with the hardest problem, which is customer service. Customer service is the hardest problem because the stakes are high, you need to answer this quickly and the risk of hallucination is very, very high, and you cannot have a high hallucination rate. And when people are locked out, they want to cancel reservation, they need help, you need to be accurate. And so what we’ve done is we built a custom model or we’ve built a custom agent built on 13 different models that have been tuned from tens of thousands of conversations. 

We rolled this out throughout the United States in English. And this has reduced, as I mentioned in the opening remarks, 15% of people needing to contact a human agent when they interact instead with this AI agent. We’re going to now, over the course of this year, bring this to more languages.

And throughout next year, it’s going to become more personalized and more agentic. So what this means is that when you reach out to an agent, the AI agent, it will not only tell you how to cancel your reservation, it will know which reservation you want to cancel, it [can] cancel it for you, and it can be agentic as in it can start to search and help you plan and book your next trip.

Airbnb’s management will introduce AI into travel search in 2026

Next year, we’re going to bring AI into travel search.

Airbnb’s management sees the company becoming an AI-first application over the next few years; management is seeing that in the 2-3 years since ChatGPT’s introduction, there have been no other top apps in app stores that can be considered as AI-native, Airbnb included; management thinks that in the next few years, the top apps in app stores will mostly be AI-native

Over the next couple of years, I think what you’re going to see is Airbnb becoming an AI-first application. And this leads to the bigger question around AI. Over the last almost 3 years since ChatGPT spun out, if you look at the top 50 apps in the App Store, almost none of them are AI apps. The #1 app in the App Store, I think, as we speak, is ChatGPT. And if you go through 2 through 50, maybe only 1 or 2 others are AI-native applications. So you’ve got basically AI apps and kind of non-AI native apps. And Airbnb would be a non-AI native application. Over the next couple of years, I believe that every one of those top 50 slots will be AI apps. either start-ups or incumbents that transform into being AI native apps. And I think at Airbnb, we are going through that process right now of transitioning from a pre-generative AI app to an AI native app. We’re starting to customer service. We’re bringing into travel planning. So it’s really setting the stage.

Airbnb’s management is open to the idea of opening Airbnb to 3rd-party AI agents, but it appears their preference is to be the leading destination for people to come and book travel

[Question] On the AI side, do you anticipate — there’s — it seems like there’s going to need to be a choice made whether to be open to agents and kind of agent agentic traffic and who will own that relationship versus being more of a closed platform. And given that you have much of your traffic today is direct and that you have a lot of exclusive supply, you probably have your choice in the matter.

[Answer] As far as whether or not we integrate with AI agents, I think that’s something that we’re certainly open to. Remember that to book an Airbnb, you need to have an account, you need to have a verified identity. Almost everyone who books uses our messaging platform. So I don’t think that we’re going to be the kind of thing where you just have an agent or operator book your Airbnb for you because we’re not a commodity. But I do think it could potentially be a very interesting lead generation for Airbnb. So I think it could be really interesting, but I don’t think it’s like a commodity like booking a flight.

Alphabet (NASDAQ: GOOG)

AI Mode for Search has launched in US and India and is going well; AI Overviews now has more than 2 billion monthly users; overall queries and commercial queries on Search continue to grow year-on-year, driven by AI features within Search; AI features within Search are leading users to search more, especially among younger users; AI Overviews are leading to 10% more queries globally, with the growth increasing over time; AI Overviews are now powered by Gemini 2.5, with the fastest Search response times; management is seeing strong growth in multimodal Search, especially with younger users; AI Mode now has more than 100 million monthly active users in the USA and India; management will soon introduce Deep Search into AI Mode; Search users in the USA can now access agentic AI-powered calling to local businesses; SearchLabs users can now try on clothes virtually and early results are promising, especially among Gen Z users, and management will soon roll out this feature to all US users; management does not manage Google Search based on paid clicks and CPC targets; paid clicks on Google Search was up 4% year-on-year in 2025 Q2; management continues to see monetisation of AI Overviews being similar to traditional Search

AI Mode has launched in the US and India and is going well, while AI Overviews now have over two billion monthly users across more than two hundred countries and territories and forty languages…

…Overall queries and commercial queries on Search continue to grow year over year, and our new AI experiences significantly contributed to this increase in usage. We are also seeing that our AI features cause users to search more as they learn that Search can meet more of their needs. That’s especially true for younger users…

…We know how popular AI Overviews are because they are now driving over ten percent more queries globally for the types of queries that show them, and this growth continues to increase over time. AI overviews are now powered by Gemini 2.5, delivering the fastest AI responses in the industry. We also saw strong growth in the use of multimodal search, particularly the combination of Lens or Circle to Search, together with AI overviews. This growth was most pronounced among younger users.

Our new end-to-end AI search experience, AI Mode, continues to receive very positive feedback, particularly for longer and more complex questions. It’s still rolling out but already has over one hundred million monthly active users in the US and India. We plan to keep enhancing the AI Mode experience for users by shipping great features fast. That includes our advanced research tool, Deep Search, and more personalized responses…

…Just last week, we brought a new agentic capability directly into Search for all US users with AI-powered calling to local businesses. Finally, shopping. In Q2, we introduced a virtual try-on experience for SearchLabs users in the US. Now people can try billions of clothing products on themselves virtually. Early results and engagement have been extremely positive, particularly with Gen Z users, and we’ll be bringing this functionality to all US users imminently…

…We actually don’t manage to pay clicks and CPC targets. Some of the product and policy changes we make actually drive better monetization at the expense of paid clicks. You will actually see in the 10-Q paid clicks were up 4% year on year, but a number of factors affect these metrics from quarter to quarter, such as a few examples, advertiser spending, product changes, policy changes, user engagement, and so on…

…You’re referring to the AI overview… When it comes specifically to the monetization of it, we talked about it before. We see monetization at approximately the same rate, which gives us actually a really strong base on which we can then innovate and drive actually more innovative and new and next-generation ad formats.

Alphabet’s management is using AI to improve Youtube Shorts’ content recommendation and dubbing and this helps to widen the audience-reach of creators; management is rolling out new AI tools for creators on Youtube Shorts; management is seeing the price and volume of advertising in Shorts increase, driven partly by AI-powered ad creative resizing tools, better advertising targeting, and higher viewer engagement

We now average over 200 million daily views on YouTube Shorts. AI is helping improve our recommendations and auto-dubbing, which translates to better returns for creators and brands by dramatically increasing the potential audiences they can reach. And today, we began rolling out a whole draft of new AI tools for creators on YouTube Shorts…

…We introduced Veo3, photo-to-video, and generative effects to Shorts, making content creation easier and offering unexplored avenues for creativity.

We’re seeing both the volume and the price of ads in Shorts increase, particularly in developed markets. The feed-based nature of the product allows for more ad opportunities on average, and this growth is further supported by ad formats native to Shorts, AI-powered ad creative resizing tools, improved ad targeting, and the rise in viewer engagement.

Google Cloud revenue run rate is now more than $50 billion; nearly all generative AI unicorns use Google Cloud, with some high-profile startups using TPUs specifically; Google Cloud saw strong customer demand, driven partly by its AI products; management has integrated AI agents into Google Cloud’s products and technology and traditional enterprises are using these agents; management has introduced an open-source AI agent development kit; the kit has 1 million downloads in less than 4 months; Google Cloud is now partnering with OpenAI; AI features have helped accelerate Google Cloud subscriptions

Cloud had another great quarter of strong growth in revenues, backlog, and profitability. Annual revenue run rate is now more than $50 billion…

…Nearly all Gen AI unicorns use Google Cloud, and it’s why a growing number, including leading AI research labs like SAFE Superintelligence and Physical Intelligence, use TPU specifically…

…Next, Google Cloud. We see strong customer demand driven by our product differentiation and our comprehensive AI product portfolio. Four stats show this. One, the number of deals over $250 million doubling year over year. Two, in the first half of 2025, we signed the same number of deals over $1 million that we did in all of 2024. Three, the number of new GCP customers increased by nearly 28% quarter over quarter. More than eighty-five thousand enterprises, including LVMH, Salesforce, and Singapore’s DBS Bank, now build with Gemini, driving a 35x growth in Gemini usage year over year…

…We’ve also integrated AI agents deeply into each of our cloud products. Wayfair is leveraging our databases integrated with AI to streamline data pipelines and deliver more personalized customer experiences. Mattel is leveraging our Gemini-powered data agents and BigQuery to review and act on product feedback more quickly. Target is using our Gemini-powered threat intelligence and security operations agents to improve cybersecurity. Capgemini is utilizing our AI software engineering agents to deliver higher quality software faster by automating tasks from code generation to testing. And BBVA says Gemini and Google Workspace are saving employees nearly three hours per week by automating repetitive tasks. It’s now rolling it out to one hundred thousand employees globally.

We are also focused on building a flourishing AI agent ecosystem. We introduced an open-source agent development kit, which now has over a million downloads in less than four months. We also introduced AgentSpace, an open and interoperable enterprise chat, search, and agent platform. Gordon Foodservice is bringing AgentSpace to its US employees, which is enabling better, more efficient decision-making. And over one million subscriptions have been booked for AgentSpace ahead of its general availability…

…On the second part with respect to OpenAI, we are very excited to be partnering with them on Google Cloud…

…On the first thing on subscriptions, you know, we’ve definitely, yeah. Google One has been an attractive value proposition powered by storage. But with now, our AI plans, including both Pro and Ultra, and particularly with the 2.5 series of models, they’ve definitely seen accelerated transactions.

Alphabet is expanding its Gemini 2.5 family of hybrid reasoning models; Gemini 2.5 models have industry-leading performance in nearly all major benchmarks; Alphabet recently debuted the extremely fast Flash Lite model; Gemini recently achieved a gold-medal-level performance in the International Math Olympiad; Alphabet has the best models today at every price point; 9 million developers have now built for Gemini; over 70 million videos have been generated with Veo3 since May 2025; the Gemini app has a new feature that turns photos into videos, and users love it; the photo-to-video feature on the Gemini app is now in Google Photos too; the number of tokens per month processed by Alphabet has doubled since May 2025 to 980 trillion; the Gemini app now has 450 million monthly active users (MAUs), and daily requests are up 50% from 2025 Q1; more than 50 million people used AI meeting notes in June 2025 alone in Google Meets; Google Workspace’s new video product, Google Vids, has reached nearly 1 million MAUs; AI Overviews are now powered by Gemini 2.5, with the fastest Search response times; Gemini usage in Google Cloud grew 35x year-on-year in 2025 Q2; Alphabet’s infrastructure provides the best performance and cost for both training and inference when the Gemini models are used

We continue to expand our Gemini 2.5 family of hybrid reasoning models, which provide industry-leading performance in nearly every major benchmark. In addition to improving our popular workhorse model, Flash, we debuted an extremely fast Flash Lite version. We achieved gold medal level performance in the International Math Olympiad using an advanced version of Gemini with DeepThink. We can’t wait to bring DeepThink to users soon. We have some of the best models available today at every price point. Our 2.5 models have been a catalyst for growth, and nine million developers have now built for Gemini.

I also want to mention Veo3, our state-of-the-art video generation model. It’s been a viral hit with people sharing clips created in the Gemini app and with our new AI filmmaking tool, Flow. Since May, over seventy million videos have been generated using Veo3, and we recently introduced a feature in the Gemini app to turn photos into videos, which people absolutely love. It’s also rolling out to Google Photos users starting today…

…At I/O in May, we announced that we processed four hundred and eighty trillion monthly tokens across our surfaces. Since then, we have doubled that number, now processing over nine hundred and eighty trillion monthly tokens—a remarkable increase.

The Gemini app now has more than four hundred and fifty million monthly active users, and we continue to see strong growth in engagement, with daily requests growing over fifty percent from Q1.

In June alone, over fifty million people used AI-powered meeting notes in Google Meet. And powered by Veo3, our new short video product in Workspace called Google Vids reached nearly one million monthly active users…

…AI overviews are now powered by Gemini 2.5, delivering the fastest AI responses in the industry…

…More than eighty-five thousand enterprises, including LVMH, Salesforce, and Singapore’s DBS Bank, now build with Gemini, driving a 35x growth in Gemini usage year over year. Our models are served on our AI infrastructure, which offers industry-leading performance and cost efficiency for both training and inference.

Waymo recently launched in Atlanta, doubled its Austin footprint, and expanded its Los Angeles and San Francisco Bay Area footprints by 50%; Waymo now has teen accounts in Phoenix for riders aged 14-17; Waymo has now autonomously driven more than 100 million miles on public roads

Last month, Waymo launched in Atlanta, more than doubled its Austin service territory, and expanded its Los Angeles and San Francisco Bay Area territories by approximately fifty percent. Waymo also launched teen accounts, starting with riders aged fourteen to seventeen in Phoenix…

…The Waymo driver has now autonomously driven over 100 million miles on public roads, and the team is testing across more than ten cities this year, including New York and Philadelphia.

Google Lens searches grew 70% year-on-year in 2025 Q2; most of Google Lens’ searches are incremental, and there’s healthy growth in shopping searches; Circle to Search is now on more than 300 million Android devices; gamers can now use Circle to Search while playing games

Google Lens searches are one of the fastest-growing query types on search and grew 70% since this time last year. The majority of Lens searches are incremental, and we’re seeing healthy growth in shopping queries using Lens. And you can obviously take this to the next level by moving from image to video-based capabilities like SearchLive.

Then there’s Circle to Search, which is now on over 300 million Android devices. We’ve been adding capabilities to help people explore complex topics and ask follow-up questions without switching apps. For example, gamers can now use Circle to Search while playing mobile games to see an AI Overview or answers.

Advertisers that use AI Max in Search campaigns typically see 14% more conversions; Alphabet’s latest Smart Bidding Exploration update allows advertisers to bid more often for less obvious but higher value queries; campaigns with Smart Bidding Exploration typically see 19% more conversions; Depop used DemandGen on Youtube Shorts to drive 80% brand lift and double its click-through rates; management has launched AssetStudio to help advertisers generate creatives; more than 2 million advertisers now use Alphabet’s AI-powered asset generation tools, up 50% from a year ago

Last quarter, we introduced AI Max in Search, a new suite of AI-powered features and existing search campaigns. Advertisers that activate AI Max in Search campaigns typically see 14% more conversions. On media buying, Smart Bidding Exploration, the biggest update to bidding strategy in a decade, brings better performance to advertisers by allowing them to bid on less obvious but potentially higher value queries more often. Campaigns using Smart Bidding Exploration see a 19% increase in conversions on average.

DemandGen continues to drive revenue growth and deliver measurable impact for our customers. As an example, Depop, Etsy’s resale clothing marketplace, used the Shorts-only DemandGen campaign to drive new customers to the site. Shorts drove 80% brand lift and double click-through rates versus benchmarks.

On creatives, we launched AssetStudio using our latest models to help businesses large and small generate creative assets. Small businesses benefit from top-quality assets and deployment scaling capabilities, but larger businesses can go faster from proof of concept to launch and resize at lower costs. Over two million advertisers now use Google’s AI-powered asset generation tools to run ads, a 50% increase on this time last year.

Google Cloud had 32% revenue growth in 2025 Q2 (was 28% in 2025 Q1) driven by growth in core GCP products and AI products; AI products revenue growth was at a much higher rate than Google Cloud’s overall revenue growth; Google Cloud operating margin was 20.7% (was 17.8% in 2025 Q1 and was 11.3% in 2024 Q2); even as Google Cloud’s capex ramps up, management continue to drive productivity and efficiency improvements; Google Cloud’s backlog was up 18% sequentially in 2025 Q2, and up 38% year-on-year, to $106 billion; Google Cloud still has more AI demand than capacity in 2025 Q2 (as it did in 2025 Q1)

Turning to the Google Cloud segment, which delivered very strong results this quarter. Revenues increased by 32% to $13.6 billion in the second quarter, reflecting growth in GCP across core and AI products at a rate that was much higher than cloud’s overall revenue growth, and growth in Google Workspace driven by an increase in average revenue per seat and the number of seats. Google Cloud operating income increased to $2.8 billion, and operating margin increased from 11.3% to 20.7%. 

The expansion in cloud operating margin was driven by strong revenue performance and continued efficiencies in our expense base, partially offset by higher technical infrastructure usage costs, which includes the associated depreciation. As we ramp our AI investments, we continue to focus on driving improvements in productivity and efficiency to offset growth in technical infrastructure-related expenses, particularly from higher depreciation.

Google Cloud backlog increased 18% sequentially in Q2 and 38% year over year, reaching $106 billion at the end of the quarter. This growth was driven by strong demand for our products and services from both new and existing customers…

…We have been working hard to increase capacity and have improved the pace of server deployment. We expect to remain in a tight demand-supply environment going into 2026.

Alphabet’s management thinks that AI agents are currently too slow, costly, and brittle, but Alphabet is making progress on those fronts; management thinks AI agents will be used more broadly in 2026; management has rolled out agent coding journeys for internal use and Alphabet’s software engineers are doing more agentic workflows in software engineering

The forward-looking trajectory, I think, will really unlock these agentic experiences. We see the potential. We’re able to do them, but they’re a bit slow and costly and take time and sometimes are brittle. But we’re making progress on all of that. And I think that’s what will really unlock. And I expect 2026 to be the year in which people kind of use agent experiences more broadly…

…We are now beginning to roll out agent coding journeys for our software engineers within the company. And it’s been exciting to see just over the last few months, particularly over the last few weeks, people are definitely doing more agentic workflows in software engineering as well internally.

Alphabet’s management is very excited about the potential of smart glasses as the next-generation device for AI experiences, but they think smartphones will still be central for a few more years at least

We are super excited about our investment in glasses, and found experiences have taken a dramatic step up compared to the last iteration. So I think it’ll be an exciting new emerging category. But I still expect phones to be at the center of the experience for the next two to three years at least.

Alphabet’s management sees some overlap in use cases between AI Mode and Gemini app, but there are also unique use cases to each product; for AI Mode, people are using it for searching, whereas in the Gemini app, people are using it for long conversations, sometimes in almost therapy-like sessions; management thinks of AI Overviews as more for information-retrieval and Gemini app as more of a personal assistant; management is open to the possibility of merging AI Overviews with the Gemini app in the future, but for now, they want to meet users where there are

On AI mode versus Gemini standalone app, broadly, there are some use cases where you can get a great experience in both places. But there are use cases that are very specific. I think where the queries are information-oriented, but people really wanted to rely on the information, but have the full power of AI. I think AI mode really shines in that. You can go there and you know it’s backed up. The Gemini models are using Search deeply as a tool. And so it’s on-ground and in that Search experience, and I think users are responding very positively to it. Whereas in the Gemini standalone app, you see everything from people can have a long conversation or chat just kind of pass time, in the Gemini app. You’ve seen early cases where people may get into it in a therapy-like experience…

…Search is more information-focused. And we think of the Gemini app as more your assistant, more personal, proactive, and powerful assistant for every aspect of your daily life. And so you can imagine wanting to call deeply or create a long video, etc. Like, you know, those things can be done by the Gemini app today better. Over time, like we’ve always done, we’ve gone through these evolutions before, like, as you point out. You know, we can understand user intent better and abstract some of the complexity for our users. At one point, people used to go to, you know, query separately for text differently from images, differently from videos, etc. And we kind of made it all seamless with universal search. So we have the experience of being able to bring together experiences in a way that makes sense for users. And do the heavy lifting for them. But I think, you know, when you’re in this early stage of new emerging paradigms, I think we want to make sure we can meet them where they are expecting today.

Amazon (NASDAQ: AMZN)

Amazon’s management has rolled out Deep Fleet, an AI that improves robot travel efficiency by 10%; Deep Fleet helps improve delivery times for customers while saving costs, and improves workplace safety for employees; management will be introducing a lot more in the area of robotics and generative AI in the coming years

We deployed our 1 millionth robot across our global fulfillment network and unveiled innovations in our last-mile innovation center, such as automated package sorting and a transformative technology that brings packages directly to employees in an ergonomic height. We rolled out Deep Fleet, our AI that improves robot travel efficiency by 10%. At our scale, it’s a big deal. Deep Fleet acts like a traffic management system to coordinate robots’ movements to find optimal paths and reduce bottlenecks. For customers, it means faster delivery times and lower costs. For our team members, our robots handle more of the physically demanding tasks, making our operations network even safer. This combination of robotics and generative AI is just getting started. And while we’ve made significant progress, it’s still early with respect to what will roll out in the next few years

AWS grew 17.5% year-on-year in 2025 Q2, and is now at a $123 billion annualised revenue run rate (was $117 billion in 2025 Q1); AWS continues to help organisations of all sizes transition to the cloud; AWS’s AI business continues to have a multi-billion annual revenue run rate and growth rate of triple-digits year-on-year; AWS’s AI business currently has more demand than supply; AWS has launched EC2 instances that are powered by NVIDIA’s latest chip architecture, the Grace Blackwell; AWS is starting to release powerful applications at the top layer of the AI stack; management still sees 85%-90% of global IT spend being on-premises and that the spend will flip to the cloud over the next 10-15 years, with acceleration for the flip coming from companies’ excitement over AI; management is confident that AWS is well-positioned to capture the flip from on-premises to the cloud; AWS saw growth in both its generative AI and non-generative AI offerings in 2025 Q2; management will continue to invest more capital in compute capacity for AWS as they see an unusually large opportunity in generative AI; management thinks AWS is growing slower than Azure and GCP because AWS is much larger; the supply constraints AWS is facing are mostly in power, but also in chips and components; management thinks the supply constraint will get better each quarter, but will take a few quarters to fully resolve

In Q2, AWS grew 17.5% year-over-year and now has over $123 billion annualized revenue run rate. We continue to help organizations of all sizes accelerate their transition to the cloud, signing new agreements with companies, including PepsiCo, Airbnb, Peloton, NASDAQ, London Stock Exchange, Nissan Motor, GitLab, SAP, Warner Bros. Discovery, Twelve Labs, FICO, Iberia Airlines, SK Telecom and NatWest. In the rapidly evolving world of generative AI, AWS continues to build a large, fast-growing triple-digit year-over-year percentage multibillion-dollar business with more demand than we have supply for at the moment…

…We’ve also launched Amazon EC2 instances powered by NVIDIA Grace Blackwell Super chips, AWS’ most powerful NVIDIA GPU accelerated instance…

…You’re starting to see AWS release more powerful applications at the top layer of the AI stack…

…Remember that 85% to 90% of worldwide IT spend is still on-premises versus in the cloud. In the next 10 to 15 years, that equation is going to flip, further accelerated by companies’ excitement for leveraging AI. So AWS’s significantly broader functionality, stronger security and operational performance, and much deeper experience helping enterprises modernize their infrastructure bode well for the AWS business moving forward…

…During the second quarter, we continue to see growth in both our generative AI and non-generative AI businesses as companies turn their attention to newer initiatives, bring more workloads to the cloud, restart or accelerate existing migrations from on-premises to the cloud and tap into the power of generative AI…

…We will continue to invest more capital in chips, data centers and power to pursue this unusually large opportunity that we have in generative AI…

…[Question] On AWS, we’re seeing significantly faster cloud growth among the #2 and #3 players in the space. I totally appreciate that AWS is coming off of a bigger base. But beyond that, do you think the output gap is due more to customer demand or infrastructure supply for both?

[Answer] Year-over-year percentages and growth rates are always a function of the base in which you operate. And we have a meaningfully larger business in the AWS segment than others. I think the second player is about 65% of the size of AWS. And when we look at the results over the last number of quarters, there are sometimes where — as far as we can tell, we’re growing faster than others and sometimes others are growing faster than us. But still, if you look at the second-place player you’re talking about, it’s still a pretty significant market segment leadership position that we have…

…Some of the constraints exist in multiple places; the single biggest constraint is power. But you also see constraints off and on with chips, and then with some of the components that, once you have the chips, you need to actually make the servers. Sometimes you have new generations of chips that are a little bit later than they’re supposed to be, and sometimes you get the chips and the yield you get in making servers isn’t what you expect when you get to ramp…

…I don’t believe that we will have fully resolved the amount of capacity we need for the amount of demand that we have in a couple of quarters. I think it will take several quarters. But I do expect that it’s going to get better each quarter, and I’m optimistic about that.

AWS’s in-house AI chip, Trainium 2, is landing capacity in larger quantities; Trainium 2 is the backbone for Anthropic’s newest generation Claude models and other Amazon offerings such as Amazon Bedrock; management thinks the real costs for AI in the future will be for inference, which will take up 80%-90% of AI costs at scale, and Trainium 2 has 30%-40% better price performance than GPUs for inference; management is already working on Trainium 3; management thinks a lot of AI compute and inference will ultimately run on Trainium 2, using the historical analogy of developments in CPUs, where customers want better price performance than Intel’s leading x86 CPUs and where AWS met the demand through its Graviton chips; management thinks that price performance is going to matter to companies as they scale their AI applications

Our custom AI chip, Trainium2, is landing capacity in larger quantities and has impressively emerged as the backbone for Anthropic’s newest generation Claude models and many of our most essential offerings like Amazon Bedrock…

…If you look at where the real costs are, they’re ultimately going to be in inference. Today, so much of the cost is in training because customers are really training their models and trying to figure out how to get the applications into production. But at scale, 80% to 90% of the cost will be in inference, because you only train periodically but you’re spinning out predictions and inferences all the time. And so what they’re going to care a lot about is the compute and the hardware they’re using. And we have a very deep partnership with NVIDIA and will for as long as I can foresee, but we saw this movie in the CPU space with Intel, where customers are asking for better price performance. And so, just like in the CPU space, we built our own custom silicon, Graviton, which has about 40% better price performance than the other leading x86 processors.

We’ve done the same thing on the custom silicon side in AI with Trainium, and our second version, Trainium2, has really become the backbone of Anthropic’s next Claude models that they’re training on top of, and it’s become the backbone of Bedrock and the inference that we do. For a lot of the inference, I think it’s about 30% to 40% better price performance than the other GPU providers out there right now, and we’re already working on our third version of Trainium as well. So I think a lot of the compute and the inference is going to ultimately be run on top of Trainium2…

…Price performance is going to matter to people as they get to scale. 
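To make the arithmetic in the quotes above concrete, here is a quick illustrative calculation. The numbers are my own placeholders taken from the ranges management cited (85% of AI cost in inference at scale, roughly 35% better price performance), not Amazon’s actual figures, and “price performance” is read here as work per dollar.

```python
# Illustrative only: placeholder numbers drawn from the ranges cited on the call,
# not Amazon's actual figures. "Price performance" is read here as work per dollar,
# so a 35% improvement means each unit of inference costs 1/1.35 of the baseline.

def total_cost_saving(inference_share: float, price_perf_gain: float) -> float:
    """Fraction of total AI spend saved if only inference moves to the cheaper chip."""
    inference_cost_multiplier = 1 / (1 + price_perf_gain)  # e.g. 1/1.35 ~ 0.74
    inference_saving = 1 - inference_cost_multiplier       # ~26% cheaper per inference
    return inference_share * inference_saving              # weighted by inference's share

if __name__ == "__main__":
    # At scale: 80%-90% of cost is inference (use 85%); 30%-40% better price
    # performance (use 35%).
    saving = total_cost_saving(inference_share=0.85, price_perf_gain=0.35)
    print(f"Approximate saving on total AI spend: {saving:.1%}")  # roughly 22%
```

On those assumed mid-points, moving inference alone to the cheaper chip trims total AI spend by roughly a fifth, which is why management keeps stressing price performance at scale.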

Amazon Bedrock is AWS’s fully-managed service for companies to leverage frontier models to build generative AI apps; Bedrock recently added Anthropic’s Claude 4 and it is the fastest-growing model ever; Amazon’s own frontier model, Amazon Nova, is the 2nd-most popular foundation model in Bedrock

In Bedrock, we’ve recently added Anthropic’s Claude 4, and it is the fastest-growing model ever in Bedrock. We’ve also continued to see strong adoption of Amazon Nova, our own frontier model, and it’s now the second most popular foundation model in Bedrock.

Amazon’s management is seeing that AWS customers are excited about AI agents, but lack the tools to build them; AWS released Strands, open-source software for building AI agents; Strands already has 2,500 stars on GitHub and over 300,000 downloads on PyPI; management is seeing that AWS customers are struggling to deploy AI agents securely at scale, and management recently released the Agent Core feature to solve the problem; management is seeing excitement from customers about Agent Core; AWS Transform is an AI agent that reduces mainframe modernization timelines from years to months; management recently released Kiro, an agentic integrated development environment (IDE) with a built-in coding agent; several hundred thousand developers are already using Kiro in the first couple of weeks; Kiro allows developers to do vibe coding but makes it much easier to go from prototyping to production; Kiro has event-driven hooks that help developers catch things that are easy to miss; it’s early days for Kiro, but management thinks there’s a chance for Kiro to transform how developers build software

As people have become excited about building agents, they’re realizing they lack the tools to build them. In May, we released Strands, an open-source way to more easily build agents, which has taken off with a wide range of customers, with already 2,500 stars on GitHub and over 300,000 downloads on PyPI. Customers are also struggling with deploying agents into production in a secure and scalable way. It’s holding up enterprises scaling agents. To help solve that problem, Bedrock just released Agent Core. Agent Core is a set of building blocks that gives customers the industry’s first secure serverless runtime to provide both synchronous and asynchronous execution, agent identity and boundaries, a memory service, a gateway to translate services to MCP-compatible interfaces, built-in code execution and web browser tools, and an observability service. Customers are excited about Agent Core, and it frees them up to start deploying agents more expansively…

…AWS Transform is an AI agent that dramatically reduces mainframe modernization timelines from years to months and completes VMware to EC2 conversions up to 80x faster. It makes it simple to move from .NET Windows to .NET Linux implementations, reducing licensing costs for .NET applications by up to 40%. We’ve also just released Kiro, our new agentic integrated development environment coding agent. There’s a lot of buzz around Kiro, with several hundred thousand developers using and requesting access in the first couple of weeks, and 100,000 in the first 5 days of the preview. What struck a chord for developers is that Kiro allows them to do vibe coding, where developers use natural language to chat with a coding agent to build code. But unlike other coding agents, where developers don’t really have any structure to build on top of, Kiro allows developers to use natural language to build a spec and then automatically updates that spec as they continue to vibe code or interact with Kiro. This makes it much easier to go from prototyping to production. Customers also like Kiro’s event-driven hooks that act like an experienced developer catching things developers might miss. When developers save a React component, hooks update the test file. When they modify API endpoints, hooks refresh README files. When they’re ready to commit, security hooks scan for leaked credentials. It’s still very early for Kiro, but it seems clear we’re on to something customers love, and Kiro has a chance to transform how developers build software.
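For readers unfamiliar with what these agent frameworks actually do, the sketch below is a generic, self-contained illustration of the tool-calling loop that tools such as Strands or Agent Core automate for you. It is not their actual API, and the “model” is a stub function so the example runs without any AWS service.

```python
# A generic, self-contained illustration of the tool-calling loop that agent
# frameworks such as Strands or Bedrock Agent Core automate. This is NOT their
# actual API; the "model" below is a stub so the example runs without any service.

from typing import Callable, Dict

def calculator(expression: str) -> str:
    """Toy tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS: Dict[str, Callable[[str], str]] = {"calculator": calculator}

def fake_model(prompt: str) -> str:
    """Stand-in for an LLM call; a real agent would call a hosted model here."""
    if "TOOL_RESULT" not in prompt:
        return "CALL calculator 6*7"        # the model decides a tool is needed
    return "FINAL The answer is 42."        # the model produces its final answer

def run_agent(user_request: str, max_steps: int = 5) -> str:
    """Loop: ask the model, execute any tool it requests, feed the result back."""
    prompt = user_request
    for _ in range(max_steps):
        reply = fake_model(prompt)
        if reply.startswith("FINAL"):
            return reply[len("FINAL"):].strip()
        _, tool_name, tool_input = reply.split(" ", 2)
        result = TOOLS[tool_name](tool_input)              # run the requested tool
        prompt += f"\nTOOL_RESULT {tool_name}: {result}"   # give the result to the model
    return "Stopped after max_steps without a final answer."

print(run_agent("What is 6 times 7?"))
```

The hard parts in production, which the quote alludes to, are everything around this loop: identity and permission boundaries, memory, observability, and secure execution at scale.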

Amazon’s management has seen very positive feedback in the early rollout of Alexa Plus, Amazon’s generative-AI-powered assistant, to millions of users in the US; management thinks the current Alexa Plus experience is so much better than the prior experience; Alexa Plus can take actions for users; Alexa Plus will be rolled out broadly in the US in the coming months, and internationally in the later part of 2025; usage of Alexa Plus is much more expansive than before; management thinks Alexa Plus’s economic opportunity could come in three ways, (1) driving more shopping on Amazon, (2) a surface for advertising, and (3) subscriptions

We’re excited about our progress with Alexa Plus, our next-generation assistant powered by generative AI. We’ve been rolling out early access to U.S. customers to start millions of customers have access now. We’re seeing very positive feedback, and we’ll continue to iterate on the experience…

…The Alexa Plus experience is so much better than I think our prior Alexa experience. She’s much more intelligent than her prior self. She’s much more capable, and I would say, unlike the other chatbots that are out there today, which are good at answering questions but really can’t take any action for you, Alexa Plus can take a lot of action for you, which is very compelling. So I can ask Alexa to play music for me or play video for me, to move my music from one device to another, or if I’m listening to a song that’s in a movie, I can ask Alexa Plus to actually put that movie scene of the song I’m playing on Prime Video on my Fire TV. Or if I have guests coming over, I can say, Alexa, draw the curtains, put on the lights on the porch and the driveway, increase the temperature by 5 degrees and put on music that would be great for a dinner party. And she does all that just through using natural language…

…We’ve been rolling out Alexa Plus starting in the U.S. It’s with millions of customers now. The rest in the U.S. coming in the next couple of months and it’s starting the international rollout more broadly later in the year…

…The usage is much more expansive than what they were using before and the number of calls they’re making is meaningfully higher…

…if you build the world’s best personal assistant, that has a lot of utility for customers, and therefore, it gets used a lot. So it means everything from people being excited about the devices that they can buy from us that have Alexa Plus enabled, to people doing a lot of shopping, and it’s really a delightful shopping experience that will keep getting better. I think over time, there will be opportunities, as people engage in more multiturn conversations, to have advertising play a role to help people with discovery and also as a lever to drive revenue. And I think over time, you could also imagine, as we keep adding functionality, that there could be some sort of subscription element beyond what there is today. Today, Prime members get Alexa Plus for free and non-Prime members pay $9.99 a month for Alexa Plus. So it’s still very early days, but we’re very encouraged by the experience we’re providing and you can bet we’re going to be iterating on it constantly.

AWS’s backlog is $195 billion in 2025 Q2, up 25% year-on-year (was $189 billion in 2025 Q1, up 20% year-on-year)

[Question] I’ll stick with AWS to start with. Could you just disclose the backlog number?

[Answer] I’ll just start off to give you the backlog figures. So at the end of the quarter, at June 30, that was $195 billion, so that’s up about 25% year-over-year.

Amazon’s management thinks the AI space is still very early and is currently very top-heavy, with a small number of very large frontier models being trained with very large amounts of compute, and with a small number of very large-scale AI applications, with chatbots and coding agents being the largest categories and ChatGPT being a standout by far; some of the training and the large-scale AI applications are being served by AWS; there is a long-tail of small AI applications that are in pilot mode or being developed; there are a very significant number of enterprises and startups building AI applications on AWS

I think it is so early right now in AI. If you look at what’s really happening in the space, you have — it’s very top heavy. So you have a small number of very large frontier models that are being trained that spend a lot on compute, a couple of which are being trained on top of AWS and others are being trained elsewhere. And then you also have, I would say, a relatively small number of very large-scale generative AI applications. One category would be chatbots, with the largest by a fair bit being ChatGPT, but the other category being really, I’ll call it, coding agents. So these are companies like Cursor, Vercel, Lovable and some of the companies like that. Again, several of which run significant chunks on top of AWS…

…You’ve got a very large number of generative AI applications that are in pilots or being developed as we speak, and a very substantial number of agents that people are also starting to try to build and figure out how to get into production in a broad way, but they’re all quite early. And many of the ones that are out there are significant, but they’re just smaller in terms of usage relative to some of those top-heavy applications…

…We have a very significant number of enterprises and startups who are running applications on top of AWS’ AI services.

Amazon’s management thinks that companies developing AI applications are currently not paying close attention to where their AI applications operate relative to the locations of the rest of their data and infrastructure; management thinks that companies will eventually want to run their AI applications close to where their data is, and this is a strength for AWS because far more applications and data run on AWS than anywhere else

Because we’re at a stage right now where so much of the activity is training and figuring out how to get your generative AI applications into production, people aren’t paying as close attention as they will to making sure that those generative AI applications are operating where the rest of their data and infrastructure are. Remember, a lot of generative AI inference is just going to be another building block like compute, storage and database. And so people are going to actually want to run those applications close to where their other applications are running, where their data is. There are just so many more applications and data running in AWS than anywhere else. And I’m very optimistic about what’s going to happen to AWS on the AI side as we get to a bigger scale.

Amazon’s management thinks AI is the biggest technology transformation of our lifetime; management sees AI impacting every single area within Amazon, and they want to embrace the change

I think that AI is the biggest technology transformation of our lifetime…

…It’s also going to change very substantially the way we work. And if you think about it, the way that we do coding, the way that we do analytics, the way that we do research, the way that we do finance and measure — I mean, really, the way we do business process automation, the way we do customer service. Every single area that I can think of in the way we work is likely going to be impacted in some meaningful way by AI. And I think when you have a big shift like that, you have 2 macro choices. You can either decide that you’re going to embrace it, and you’re going to help shape it and you’re going to figure out how to build the right tools to allow you to take advantage of the technology, or you can wish it away and have it shape you. And the post that you’re referencing, Ron, that I made was just really being clear with the team that we’re going to pursue that former approach. We are going to embrace it. We’re going to try and shape it.

Apple (NASDAQ: AAPL)

Apple’s management recently announced new AI capabilities, such as live translation and Workout Buddy; management opened up access to Apple’s on-device foundation models; management sees AI as a profound technology and is embedding it across Apple’s devices and platforms; management is significantly increasing Apple’s AI investments; management is integrating AI across Apple’s platforms, and has released more than 20 Apple Intelligence features; management reiterated their expectation to release a more personalised Siri in 2026

And we were excited to share some updates across our AI work. We announced even more capabilities coming later this year, including live translation and Workout Buddy. In addition to those new features, we announced new support for a number of languages, and we opened up access to the on-device foundation models at the core of Apple Intelligence…

…We see AI as one of the most profound technologies of our lifetime. We are embedding it across our devices and platforms and across the company. We are also significantly growing our investments…

…With Apple Intelligence, we’re integrating AI features across our platforms in a way that is deeply personal, private and seamless, right where users need them. We’ve already released more than 20 Apple Intelligence features, including visual intelligence, cleanup and powerful writing tools. We’re making good progress on a more personalized Siri, and as we’ve said before, we expect to release these features next year…

…We’re making good progress on a more personalized Siri, and we do expect to release the features next year, as we had said earlier. Our focus from an AI point of view is on putting AI features across the platform that are deeply personal, private and seamlessly integrated.

Apple’s chips in Apple’s devices allow users to run AI models on-device; when greater AI capabilities than the on-device models can provide are needed, the requests are routed through Apple’s private cloud compute 

Apple silicon is at the heart of all of these experiences, enabling powerful Apple Intelligence features to run directly on device. For more advanced tasks, our servers, also powered by Apple silicon, deliver even greater capabilities while preserving user privacy through our private cloud compute architecture. We believe our platforms offer the best way for users to experience the full potential of generative AI. Thanks to the exceptional performance of our systems, our users are able to run generative AI models right on their Mac, iPad and iPhone.

Apple’s capex for FY2025 year-to-date (first 9 months of FY2025) is notably higher; the higher capex is because of AI investments, which include Apple’s 1st-party data centers for private cloud compute; management expects Apple’s capex to grow substantially in the future because of AI-related investments

[Question] Just on the CapEx, it’s up notably year-to-date. Could you just comment on your capital spending plan this year and next and provide some qualitative color in terms of what’s driving that growth?

[Answer] It’s a combination of factors. I would say a pretty significant driver, as Tim talked about, is the fact we are increasing our investment significantly in AI. So that is certainly a component of it. As you know, we’ve been investing in private cloud compute, which is also in our first-party data centers. The other piece, as you know, is we do have a hybrid strategy where in some cases we use third parties to make capital investments, and we also invest in our own. So you are going to see an increase in CapEx…

…[Question] CapEx is clearly moving higher. I know you guys don’t guide specifically to that number. But just kind of qualitatively, should we — as you lean in more on AI, should we really start to see that CapEx, which is running close to about $4 billion annualized today, really start to move appreciably higher? 

[Answer] We are increasing our investment significantly in AI. You are going to continue to see our CapEx grow. It’s not going to be exponential growth, but it is going to grow substantially. And a lot of that’s a function of the investments we’re making in AI. As we mentioned, we also have other items that fall under that category, facilities and some of our retail store investments. But I would say a lot of the growth is really being driven by AI.

Arista Networks (NYSE: ANET)

Arista Networks’ management has even more conviction now with the AI and Cloud Titans opportunity and has raised company’s revenue guidance for 2025; management thinks it’s a once-in-a-lifetime opportunity with the AI and Cloud Titans; management’s goal of $750 million in back-end AI networking revenue in 2025 is well on track; back-end AI networking revenue is purely incremental revenue for Arista Networks; management expects total AI-related networking revenue to exceed $1.5 billion in 2025 and to grow for years; Arista Networks recently lost its fifth big AI customer, which was a sovereign AI customer, but management thinks the company will still be able to achieve $750 million in back-end AI networking revenue in 2025, and $1.5 billion in total AI-related networking revenue; management is seeing a lot of activity in its four big AI customers and has been surprised at the level of activity, albeit still small, in enterprises and neoclouds; management thinks the 25-30 enterprise and neocloud customers Arista Networks recently won will help the company reach its goal of $750 million in back-end AI networking revenue in 2025

Our conviction with AI and Cloud Titans and enterprise customers has only strengthened. We began the year with a pragmatic guide of 17% or $8.2 billion annual revenue. But as the year has progressed, we recognize the potential to build a truly transformational networking company, addressing a massive total available market. This feels to us like a unique once-in-a-lifetime opportunity. We, therefore, raised our 2025 annual growth to 25%, now targeting $8.75 billion in revenue, which is an incremental $550 million more due to our increased momentum that we are experiencing across AI, cloud and enterprise sectors…

…Our stated goal of $750 million back-end AI networking is well on track and gaining from nearly 0 revenue 3 years ago in 2022 to production deployments this year in 2025…

…The back-end AI is all incremental revenue and incremental market share to Arista…

…We do expect aggregate AI networking revenue to be ahead of the $1.5 billion in 2025 and growing in many years to come…

…On AI, I don’t need to tell you that despite losing one of our key anchor customers (the fifth customer was a sovereign AI customer that’s pretty much out of these numbers), we were still able to, we believe, achieve $750 million in back-end target revenue and exceed $1.5 billion for the year. Exact numbers, we’ll know when we finally ship. We can’t give you those specifics now. But despite losing one customer, we’re having a lot of activity in the four big ones. And it’s a pleasant surprise to us to see the advent of enterprise and even some neoclouds. The numbers are small. It’s not as big as the large titans, but it’s all adding up…

…To make that number, or actually to exceed that number, you may have noticed that I pointed out that we now have, in aggregate, I think last time we said 15 and now we’re saying 25 to 30 enterprise and neocloud customers. So they’re not big individually, but together, they add up to contribute as well, making up for the loss of the fifth customer and the slowness of the fourth.
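As a quick sanity check on the guidance arithmetic quoted earlier (the initial 17% guide of $8.2 billion raised to 25% growth and $8.75 billion), the sketch below simply recomputes the implied 2024 base and the new target from those stated figures; the only thing mine is the rounding.

```python
# Sanity-checking Arista's guidance arithmetic using the figures from the call.
# All inputs come straight from the quotes above; only the rounding is mine.

original_guide = 8.20                      # USD billions, initial 2025 guide (+17%)
implied_2024 = original_guide / 1.17       # back out the 2024 revenue base
new_target = implied_2024 * 1.25           # raised guidance of 25% growth

print(f"Implied 2024 revenue: ${implied_2024:.2f}B")                 # ~$7.01B
print(f"New 2025 target:      ${new_target:.2f}B")                   # ~$8.76B, i.e. ~$8.75B
print(f"Incremental revenue:  ${new_target - original_guide:.2f}B")  # ~$0.55B-$0.56B
```

The numbers tie out: raising growth from 17% to 25% on a roughly $7 billion base is what produces the incremental $550 million management highlighted.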

Arista Networks’ management sees AI data centers as consisting of all 3 of scale-out front-end networks, scale-up back-end networks and scale-out back-end networks; management sees scale-up back-end networks being built today predominantly with NVLink, but they expect a move towards Ethernet or UALink in the coming years; management sees scale-out back-end networks rapidly migrating from InfiniBand to Ethernet based on the Ultra Ethernet Consortium specification released in June 2025; management sees Arista’s portfolio of Etherlink and EOS products as important components of scale-out front-end networks; management thinks Arista Networks’ Etherlink portfolio has the most comprehensive solution for scale-out back-end and scale-out front-end networking; management thinks Arista Networks is the best AI networking platform for all kinds of AI accelerators; scale-up networks are a new and unique requirement, and will be a new incremental market for Arista Networks; management is currently unsure how big the total addressable market (TAM) will be for the new incremental market in scale-up networks; management thinks Arista Networks has the premier scale-out platform

AI centers consist of both scale-out front-end and scale-up/scale-out combination for back-end networks. 

Scale-up back-end networks consist of high-bandwidth, low-latency interconnects that tightly link multiple accelerators within a single rack as a unified compute system with workload parallelism. Today, this is predominantly constructed with NVLink as a compute-attached I/O, but we do expect a move to open standards such as Ethernet or UALink in the next few years.

Scale-out back-end network is dedicated spines interconnecting XPUs across racks, engineered for high bandwidth and minimal latency, thereby resulting in efficient parallel processing of massive training models. Here, InfiniBand is rapidly migrating to Ethernet based on the Ultra Ethernet Consortium specification released in June of 2025.

Scale-out front-end connects the back-end clusters to external clouds, compute resources, storage, wide area networks and data center interconnect to handle data ingestion, orchestration for AI and cloud traffic in a leaf-spine network topology. Arista’s flagship Etherlink and EOS are key hallmarks of scale-out networking with a wide breadth and depth of network protocol support. Introduced in 2024, Arista’s Etherlink portfolio is now 20-plus products with the most comprehensive and complete solution in the industry, especially for scale-out back-end and scale-out front-end networking…

…What is crystal clear to us and our customers is that Arista continues to be the premier and preferred AI networking platform of choice for all flavors of AI accelerators…

…Scale-up is a new and unique requirement, and it particularly is going to come in as people start building more and more AI racks, right? So when you’re building an AI rack and you want to boost the ratings and performance of an individual rack or cluster and your XPU ratings gets bigger and bigger, you often need a very simple interconnect, right? This interconnect in the past has been PCIe Express, CXL and now you’re seeing a lot of NVIDIA NVLink where you can really collapse your system board and XPU socket into an I/O. It’s almost not a network, it’s an I/O. It’s a back-end to a back-end, if I can call it that, right? And so scale-up networks will be an incremental new market as Arista pursues it…

…[Question] You talked about Scale-Up Ethernet to be incremental to your TAM. Curious if you have any sense how big this TAM is in 3 years.

[Answer] I don’t know yet. In terms of port density, in terms of units, if I look at the ratio within a rack versus outside in units, it’s quite high, 8:1, 10:1. But in terms of dollars, I don’t think it’s nearly as much because the level of functionality required is much simpler. So how about we beg that question out for September when we’ll know more?…

…Arista is the premier scale-out spine platform. The 7800 spine, our AI spine is a really flagship franchise platform. It takes advantage of all of the virtual output queuing, the congestion control, the peripheral queuing, the buffering, et cetera, in a way that nobody else in the industry has been able to demonstrate. And oh, by the way, besides being a great AI spine, it’s also a great routing platform for the WAN.

Poor networks lead to inefficient usage of GPUs; good networking is critical when building GPU clusters because 30%-50% of processing time is spent on exchanging data over networks and GPUs

Poor networks and bottlenecks lead to idle cycles on GPUs, wasting both capital GPU costs and operational expenses such as power and cooling. With 30% to 50% of processing time spent exchanging data over networks and GPUs, the economic impact of building an efficient GPU cluster with good networking, and thereby improving utilization, is paramount.
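A quick back-of-the-envelope illustration of why that matters, using the 30%-50% range quoted above (the numbers are illustrative, not Arista's): when a share of wall-clock time goes to data exchange rather than compute, the effective cost of each useful GPU-hour rises by the inverse of utilisation.

```python
# Illustrative math using the 30%-50% range quoted above: when a share of
# wall-clock time goes to exchanging data instead of computing, the effective
# cost of each useful GPU-hour rises by 1 / utilisation.

def effective_cost_per_useful_hour(hourly_gpu_cost: float, comm_fraction: float) -> float:
    """Cost per hour of actual compute when comm_fraction of time is spent waiting."""
    utilisation = 1 - comm_fraction
    return hourly_gpu_cost / utilisation

for comm in (0.30, 0.50):
    multiple = effective_cost_per_useful_hour(hourly_gpu_cost=1.0, comm_fraction=comm)
    print(f"{comm:.0%} of time on data exchange -> {multiple:.2f}x cost per useful GPU-hour")
# 30% -> 1.43x; 50% -> 2.00x
```

In other words, at the top of that range every useful GPU-hour effectively costs twice as much, which is the economic case for spending on better networking.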

Arista Networks’ management expects back-end and front-end networks in AI data centers to converge as LLMs (large language models) expand into distributed training and inference, making it increasingly difficult to differentiate between back-end and front-end networks 

As large language models continue to expand into distributed training and inference use cases, we expect to see the back-end and the front-end converge and coalesce more together. This will make it increasingly difficult to parse the back-end and the front-end precisely in the future.

Most AI accelerators today are NVIDIA GPUs, but Arista Networks is entering early pilots with alternate AI accelerators including those from hyperscalers, AMD, and startups

While the majority today is NVIDIA GPUs, we are entering early pilots connecting with alternate AI accelerators, including start-up XPUs, the AMD MI series, and AI and Titan customers who are building their own XPUs.

Arista Networks’ management is seeing enterprises and neoclouds increasingly adopt AI; one of Arista Networks’ neocloud customers is a sovereign AI working with a non-NVIDIA cluster; Arista Networks’ neocloud customers almost always adopt the company’s products for both front-end and back-end deployments

As we continue to progress with our four top AI Titan customers, AI is also spreading its wings into the enterprise and Neocloud sectors, and we are winning approximately 25 to 30 customers to date…

…In fact, one of the Neoclouds is a sovereign AI, which is a non-NVIDIA cluster that they’re working with right now that may factor in 2026…

…In terms of Neoclouds, almost always, the Neocloud is a combination of back and front. It’s never one or just the other, but definitely, the Neoclouds also have a back-end component.

Arista Networks’ management sees the rise of AI agents straining LAN and WAN traffic patterns

The rise in Agentic AI ensures any-to-any conversations with bidirectional bandwidth utilization. Such AI agents are pushing the envelope of LAN and WAN traffic patterns in the enterprise.

Arista Networks’ management is seeing a more balanced deployment of both cloud and AI now as compared to 2-3 years ago when there was raging excitement over just AI

if you recall 2, 3 years ago, maybe it’s hard to remember all of that, I was actually very worried that the cloud spending had a little bit frozen, and all of the excitement and enthusiasm was going towards GPU and how big is your GPU cluster, that kind of thing. We now see it coming back and the pendulum swinging into a more balanced deployment of both cloud and AI.

Arista Networks’ management continues to see very different data-traffic patterns between traditional cloud and AI

As a result of all these AI deployments, as I’ve often said, the traffic patterns of cloud and AI are very different. The diversity of the flows, the distribution of the flows, the fidelity of the flows, the duration, the size and intensity…

Arista Networks is progressing well with its 4 major AI customers; 2 of the customers are quickly-approaching 100,000 GPUs; 1 customer may reach 100,000 GPUs soon; the last customer will take more time to reach 100,000 GPUs; management is no longer just thinking about the number of GPUs with the AI projects of the 4 major AI customers; management expects all 4 of the major AI customers to adopt Arista Networks’ products for back-end deployments in 2026

I think two of our customers have already approached or are going to quickly approach 100,000 GPUs. But I don’t think it’s any more about just how big; we used to talk about 1 million GPUs and all that. Increasingly, what we are seeing is more and more distributed GPU clusters for training and inference. And so two customers have reached that goal. The third one might reach that goal. The fourth one, which I said we just began with, is probably too early to reach that 100,000. That’s probably a goal for next year. So that’s the composition. Two are strong, one is medium and the other still does…

…I won’t measure it anymore just on number of GPUs. I think there’s a lot more to do with locality, distribution, radix and also choice of multi-tenants, optimizations, collective libraries, level of resilience, et cetera. So we’re seeing a lot more complexity run into this than straight number of GPUs…

…[Question] You noted you are seeing good activity with the top 4 hyperscalers. While you indicated that your back-end revenue this year will be primarily driven by two of them, would you expect that all four cloud providers would adopt Arista switches for back-end deployments in 2026?

[Answer] The short answer would be yes. We’ve got some work to do, but the answer is absolutely. All four of them — two of them already have large and the other two will be deployed in the back end. It will also fuel the front end.

ASML (NASDAQ: ASML)

ASML’s management still sees AI (artificial intelligence) as the key growth driver for ASML in 2025, but sees rising uncertainty for 2026, even though the company is preparing for growth

Artificial intelligence is currently the main driver for growth for both Logic and Memory. If we look at Logic, we expect Logic to grow compared to 2024 because our customers are adding capacity in the most advanced nodes. Memory remains very strong because there also our customers are investing in their latest HBM and DDR5 products…

…Going into 2026, the fundamentals of our AI customers remain strong and we are still preparing for growth. However, as we discussed last time, the level of uncertainty is increasing, mostly due to macroeconomic and geopolitical considerations. And that includes, of course, tariffs…

…As we look ahead to 2026, we continue to see strong demand related to AI for both Logic and Memory, and we see the positive impact of a growing number of EUV layers. On the other hand, as we said before, customers are facing increasing uncertainties based on macroeconomic and geopolitical developments. Further, some customers are navigating specific challenges that might affect the timing of their capital expenditure. Against this backdrop, while we are still preparing for growth in 2026, we cannot confirm it at this stage.

ASML’s management is seeing more DRAM customers shifting towards EUV and having more EUV layers in the latest and future nodes, because of AI

Obviously AI is largely driving the latest nodes, both on Logic and on DRAM. And of course, that is a big driver for EUV. Because EUV is more and more significant on those leading nodes. For instance, if you look at DRAM, we do see that customers are more and more shifting towards EUV and have more and more layers on the latest nodes, but also on future nodes for DRAM. So that’s, of course, a positive for EUV…

…What is very positive about the last few months is we see basically this increased adoption of EUV happening, I think, especially with DRAM customers. The trend, I think, will be sustained. That’s what our customers tell us. So we see on the latest node quite a jump in EUV layers for some of the customers. And the DRAM road map, the technology road map, is so complex that EUV more and more is seen basically as a way to simplify a bit the process flow and to get to the performance needed faster. So if we look at, I would say, the next 3, 4, 5 nodes, and that includes 4F2 by the way, we see a very positive trend with our DRAM customers. And I think we were foreseeing that last year, and we now have many confirmation points of that.

ASML’s management sees strong growth for the semiconductor market in the long-term, driven by AI, although there are some short-term uncertainties; management thinks the shift of ASML’s customers towards advanced Logic and Memory chips will drive demand for advanced lithography; management thinks ASML’s EUV roadmap will enable the company to convert more multi-patterning layers to single exposure in the next few years

I think long term, the semiconductor market remains very strong. And I think a lot of people say that AI is really a great opportunity. We have seen again the fundamentals around AI to be very, very strong. Now, of course, short term, Roger talked about it. Some uncertainty, there’s a lot happening, discussion around tariffs, export control, macroeconomic uncertainties…

…The shift of our customers towards more advanced Logic, advanced Memory will also drive the need for more advanced lithography. This will basically be a good thing for litho intensity. The progress we make on our EUV roadmap with Low NA, High NA, providing the right cost of technology, will continue to allow us basically to convert more multi-patterning layers into single exposure. And we will see that happening in the course of the next few years

Cloudflare (NYSE: NET)

A rapidly-growing AI company moved all of its inference workloads from a hyperscaler to Cloudflare’s platform, choosing Cloudflare as its only inference cloud platform

A rapidly growing AI company expanded their relationship with Cloudflare, signing a 1-year $15 million pool of funds contract for Workers AI. This is the third contract signed with this customer in the last year as they moved all of their inference workloads from a hyperscaler over to make Cloudflare their single inference cloud platform. The continued expansion with this customer demonstrates not only the tremendous value they realized from the Cloudflare platform, but also the truly unmatched scalability, efficiency and speed of Workers AI. Cloudflare is increasingly the platform the most innovative companies are choosing to power the future of AI.

A rapidly-growing AI company signed a 5-year deal with Cloudflare for a number of products that will help the AI company enhance its security posture at scale

A rapidly growing AI company signed a 5-year $4.6 million contract for AI Gateway, Magic Firewall, Magic Transit and application services. As a highly technical company, this customer turned to Cloudflare as a strategic partner to enable accelerated innovation, provide enhanced security, improve performance and offer unmatched scale with our globally distributed connectivity cloud. This contract is just the beginning with this customer. They’re already kicking the tires on our firewall for AI product.

Cloudflare’s management sees publishers as having 2 key business models from the traditional internet, namely, subscriptions and advertising; management is seeing the rise of AI leading to a dramatic decline in online traffic to publishers; it has become 10x harder to get traffic from Google over the past 10 years; pure AI companies can be up to 30,000x harder for publishers to get traffic from as compared to the Google of old; management thinks the AI-driven internet will kill the subscriptions and advertising business models of yore; management thinks Cloudflare is in a unique position to establish a new business model for the internet because 20% of internet traffic runs through Cloudflare and 80% of leading AI companies are familiar with or users of Cloudflare; Cloudflare has signed deals with many leading publishers to enable publishers to charge AI companies for content; the deals Cloudflare has signed are small but management sees them as highly strategic; management thinks the same rails Cloudflare has built to power payments from AI companies to publishers can also be used to power transactions between AI agents; management is very bullish on the opportunity to help publishers empower agentic transactions; management thinks it’s too early to tell exactly what kind of business models will emerge from an agentic internet; management has been surprised at the positive reaction from AI companies to Cloudflare’s new business to empower transactions between publishers and AI companies

Historically, publishers online have made money primarily in two ways: subscriptions or ads. In either case, the key was generating traffic. In the past, one of the most effective ways to do that was through search. Over the last 25 years, publishers allowed Google and other search engines to copy their content in exchange for sending them traffic. But recently, that traffic has been falling dramatically. Based on the data that Cloudflare has observed, it’s nearly 10x harder to get traffic from Google than it was just 10 years ago. What’s changed? The interface of the web is switching from search to AI. Even at Google, which has represented the dominant interface for discovering the web, most searches now include an AI overview, which Pew Research has found significantly decreases the likelihood of someone clicking on a link and reading original content. Pew’s data aligns exactly with what we’ve observed based on our customers’ traffic. It’s even worse with pure AI companies. Every AI company we’ve tracked is worse than the Google of old with some being as much as 30,000x harder to get traffic from. As the interface of the web switches from search to AI, it’s clear more people will read derivatives of content rather than the original content itself. That means the new AI-driven web will kill the old Web’s business model.

Cloudflare is in a unique position to help. More than 20% of the web sits behind us today. But maybe as importantly, around 80% of the leading AI companies know and use us. So in Q2, we partnered with the who’s who of the publishing world from the Associated Press to Ziff Davis and nearly everyone else in between to help invent the new business model for content creators on an AI-driven web. The deals we are signing with these companies aren’t high dollars, but they are highly strategic. The response has been incredibly positive from publishers for sure, but also from the majority of AI companies who understand that original content is the fuel that powers their engines. When seismic shifts happen in ecosystems as important as the web, new business models inherently emerge. We believe we are uniquely positioned to power the business model of content creation in the coming AI-driven web, but the opportunity may actually be much larger than that.

The same rails that we are building to power payments from AI companies to publishers, we believe will be used to facilitate transactions between AI agents, whatever they happen to be doing for you online. The fact that we sit in front of so much of the web and that more than half of our dynamic traffic is already between APIs means that we are strategically positioned to deliver the agentic web of the future. For those of you who have been following us for a while, you know that we talk about our product areas in terms of acts. Act 1 are our reverse proxy products, WAF, DDoS mitigation, et cetera. Act 2 are our forward proxy products, Zero Trust, VPN, network firewall. Act 3 are our Workers developer tools. What we are doing to help publishers empower agentic transactions is a big enough deal to us that we’ve begun to refer to it internally as Act 4…

…[Question] I wanted to dig into like the business model for the Agentic Web. And maybe, Matthew, you could give us a little bit more color and visibility on what that means in reality. What are the business models that you’re looking to enable for your customers?

[Answer] I don’t think we know exactly the answer to that. And my hunch is that there will be a number of different models that emerge and, over time, consolidate. The analogy I’ve been thinking about, at the risk of hubris: when Apple rolled out $0.99 a song, that was a key turning point in the music industry, but it wasn’t the ultimate model that we ended up with. We came closer to something that was $10 a month with Spotify. And so I think that this is going to go through a number of different stages and iterations. And you could imagine something that is a fraction of $0.01 per transaction. You could imagine different sites charging different things. You could imagine sites that charge agents more or sites that actually discount for agents that are there…

…I wasn’t surprised that publishers were excited about what we were doing. And we literally haven’t encountered a publisher that wasn’t 100% all in on what we were proposing. And it’s been amazing to build those relationships. I was surprised by the reaction from the AI companies. I thought that they would kick and scream quite a bit more than they did. And quite the opposite. I think they all understand fundamentally that content, original content, valuable content is the fuel that runs their engines.

Cloudflare’s management thinks it’s important that all AI companies should have a level playing field in being able to get content

The key point, though, and I think this is what is the most important work that we have to do. The key point is that there needs to be a level playing field. It can’t be that one company has a unique advantage in getting content where others don’t. And so what we are now really working on is making sure that as we figure out what the market looks like going forward for this, that it is a level playing field, that new start-ups have an opportunity to exist that just because you’re a legacy provider doesn’t give you some unique access to content that others don’t have, that there’s a way to make sure that if you’re small, you pay less and if you’re big, you pay more.

The large AI foundation model builders use Cloudflare in 2 important ways, namely, for security, and to run inference closer to the edge; Cloudflare is not the right platform for foundation model builders to run massive models at the edge, but it is a great platform to run smaller models; management is investing to improve Cloudflare’s ability to support larger and larger models

Our best estimate is that about 80% of the major AI companies are Cloudflare customers today. And they use us across a couple of different services, and I’ll highlight two. So the first is security. The challenge, if you put up a foundational model, is that every time somebody runs a request against that model, it has real cost to you, and it’s measured not in fractions of pennies, but often in pennies. And so if somebody can find a way to run requests against your model at a very high volume, or in a way that you can’t control, or in a way that is automated and not actually what your subscriber is doing, or if they can find a way to do things like launder credit cards: the credits and the tokens on these AI models now act almost as a currency that allows people to take stolen credit cards and turn them into effectively cash. All of those are unique security threats that make Cloudflare just a great partner for those AI companies that we can sit in front of. That, I think, is where most of them start with us…

…Because of the fact that we have deployed GPUs across our entire network and made it so that we can do inference as close as possible to their users, as we all go from seeing these ChatGPT-like systems as miracles to starting to take them for granted, there’s a real need for them to get the best performance possible. And one of the most effective ways of doing that is moving the inference closer to where the user is. At the same time, increasingly, as we see regulations spring up around the world targeting AI companies, they need to keep the inference tasks as close to users as possible to meet those regulatory needs. And so Cloudflare Workers AI gives them the ability to run inference tasks as close as possible to users. We would not be today the right place for one of the really massive LLMs to run because those, in many cases, will require multiple different machines working in coordination. It is a more complicated task. But for smaller models, we’re finding that Cloudflare is the best place for anyone who’s building that to run that. And over time, we are investing in making our systems able to support larger and larger and larger models.
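For readers curious what “running inference close to the user” looks like in practice, here is a hedged sketch of calling a small model on Workers AI over Cloudflare’s REST API. It assumes the documented endpoint shape of /accounts/&lt;account_id&gt;/ai/run/&lt;model&gt;; the account ID, API token, and model name below are placeholders, and the exact request and response fields may differ from Cloudflare’s current documentation.

```python
# A hedged sketch of calling a small model on Cloudflare Workers AI via its REST API.
# Assumption: the endpoint follows the documented /accounts/<account_id>/ai/run/<model>
# shape; the account ID, API token and model name below are placeholders, and the exact
# request/response fields may differ from Cloudflare's current documentation.

import requests

ACCOUNT_ID = "YOUR_ACCOUNT_ID"                 # placeholder
API_TOKEN = "YOUR_API_TOKEN"                   # placeholder
MODEL = "@cf/meta/llama-3.1-8b-instruct"       # example of a smaller hosted model

url = f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/{MODEL}"
response = requests.post(
    url,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"messages": [{"role": "user", "content": "Summarise Workers AI in one sentence."}]},
    timeout=30,
)
# The inference itself is served from Cloudflare's edge GPUs, close to the end user.
print(response.json())
```

The point of the example is the architecture, not the call itself: the request is answered from whichever edge location is nearest the user, which is what makes smaller models on Cloudflare fast and helps with data-residency rules.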

Coupang (NYSE: CPNG)

Coupang’s management is excited by the potential of automation and AI in helping Coupang improve its customer experience and operational capabilities; management is using AI for personalised customer recommendations, dynamic pricing, inventory forecasting, route optimization and more; management sees AI as a long-term enabler of both topline growth and margin expansion for Coupang; Coupang has started using AI for software development and in early results, up to 50% of new code is written by AI; management expects Coupang’s operations to be improved in the future partly through humanoid robots

We’re also excited by the potential of automation and AI to accelerate our efforts to innovate around the customer experience and drive operational excellence. As we invest further into these capabilities, we see significant opportunities to enhance service levels while simultaneously achieving meaningful cost savings…

…AI has been core to our operations and strategy for years. We’ve leveraged these technologies to improve nearly every aspect of our customer experience and operations from personalized recommendations, dynamic pricing, inventory forecasting, route optimization to name a few. Those applications and that integration has directly contributed to the results that you’ve seen over the last few quarters and years around customer engagement and improved operational efficiency.

Looking ahead, we see AI as a long-term enabler of both top line growth and margin expansion, especially with generative AI and large language models, our focus remains on practical high-impact applications, practical applications that scale with our core offerings and enable us to deliver meaningful gains in customer experience and productivity. One example where we’re seeing immediate impact is around software development, where in our early implementations, while still early, we’re seeing up to 50% of the new code written by AI. We also expect AI to have a transformative impact on our operations over time through enhanced automation and humanoid robotics, among other things.

Coupang has been building its own AI computing infrastructure for some time for its own internal needs; the investments Coupang has been making in computing infrastructure are still relatively small; management is currently running small-scale tests on providing 3rd-party enterprises with access to the AI computing infrastructure that Coupang has built for internal use

 I think I should note that we’ve been developing our own AI computing infrastructure to service our internal needs for some time now. In addition to the capacity that we source from external providers, the bulk of the investment today, and it’s relatively small, is dedicated to building out that internal capability for higher performance and cost savings. We’re also exploring the potential to provide access to that technology and service that we’re developing internally to external enterprise customers as a test-and-learn initiative, and that’s being done on a very small scale.

Datadog (NASDAQ: DDOG)

Datadog’s management is seeing strong growth in Datadog’s AI-native cohort, with meaningful growth in the number of AI-native customers, driven by rapid usage growth in their products; there was consistent and steady usage growth in the rest of Datadog’s business

Overall, we saw trends for usage growth from existing customers in Q2 that were higher than our expectations. We experienced strong growth in our AI native cohort. The number of AI native customers are growing meaningfully with us as they see rapid usage growth with their products. Meanwhile, we saw consistent and steady usage growth in the rest of the business.

Datadog’s management has a recent AI-powered innovation in security known as Bits AI Security Analyst; Datadog’s security products can cover new AI attack vectors across the application, model, and data layers

Our security products cover new AI attack vectors across the application, model and data layers. At the AI data layer, Sensitive Data Scanner can now prevent the leakage of sensitive data and training data as well as LLM prompts and responses. At the model layer, we help secure against supply chain attacks in open source models and prevent model hijacking attacks. At the application layer, we help prevent prompt injection attacks and data poisoning in run time.

Datadog’s management has launched fully autonomous AI agents for investigating alerts and coordinating incident response, coding assistance, and triaging SIEM signals; management has launched a Datadog MCP (model context protocol) server to allow 3rd-party AI agents to interface with Datadog’s platform; management thinks Datadog’s AI agents work really well; management is busy trying to ship the AI agents to as many customers as they can and the initial response to the AI agents has been pretty positive

We launched fully autonomous AI agents, including Bits AI SRE Agent to investigate alerts and coordinate incident response, Bits AI Dev Agent, an AI-powered coding assistant to proactively fix production issues and Bits AI Security Analyst to triage Datadog Cloud SIEM signals. To further accelerate our users’ incident response, we announced AI Voice Agent for incident response, so users can quickly get up to speed and start taking action on their phones…

…We launched a Datadog MCP server to enable AI agents to access telemetry from Datadog and to act as a bridge between Datadog and MCP compatible AI agents like OpenAI Codex, Cursor and Claude Code by Anthropic. We work together with OpenAI to integrate our MCP server within the OpenAI Codex CLI, and the Datadog Cursor extension now gives developers access to Datadog tools and observability data directly within the Cursor IDE…

…The AI actually works surprisingly well… Right now, we’re busy basically shipping it to as many customers as we can and enabling the customers with it, and that’s a big area of focus in the business as well… The initial response is very positive. We’ve had customers purchase it pretty quickly in their trials, and so we feel very good about it.
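
On the MCP server mentioned above: MCP (Model Context Protocol) is a JSON-RPC-based standard that lets AI agents discover and call tools exposed by a server. To give a feel for it, here is roughly the shape of a tool-call message under that protocol as I understand the public spec; the tool name "query_logs" and its arguments are made up for illustration and are not Datadog’s actual MCP tool schema:

    import json

    # Roughly the shape of an MCP "tools/call" request that an AI agent
    # (e.g. a coding assistant) might send to an observability MCP server.
    # The tool name and arguments below are hypothetical.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "query_logs",
            "arguments": {"query": "service:checkout status:error", "limit": 20},
        },
    }

    print(json.dumps(request, indent=2))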

Datadog now has end-to-end AI and data observability capabilities, such as (1) GPU Monitoring for visibility into GPU fleets across cloud, on-prem, and GPU-as-a-service platforms, (2) LLM Observability Experiments for understanding how changes to prompts, models or AI providers influence application outcomes, and (3) Agentic Flows Visualization to understand AI agents’ decision paths

We showcased our new end-to-end AI and data observability capabilities. Engineers and machine learning teams can use GPU Monitoring to gain visibility into GPU fleets across cloud, on-prem and GPU-as-a-service platforms such as CoreWeave and Lambda Labs. With AI Agent Console, enterprises can monitor the behavior and interactions of any AI agent used by their teams. We now offer LLM Observability Experiments to help understand how changes to prompts, models or AI providers influence application outcomes. We added a new Agentic Flows Visualization to LLM Observability to capture and understand the decision path of AI agent. And last but not least, and accelerated by our recent acquisitions of MetaPlan, Datadog now offers a complete approach to data observability across the entire data life cycle from iteration to transformation to downstream usage.

Datadog’s management continues to believe that digital transformation, cloud migration, and AI adoption are long-term growth drivers of Datadog’s business; management thinks AI is a tailwind for Datadog because increased cloud consumption drives more usage of Datadog; Datadog has hundreds of AI-native customers, including 8 of the top 10 leading AI companies; of Datadog’s AI-native customers, more than a dozen are spending over $1 million per year with Datadog while more than 80 are spending more than $100,000 per year; management continues to see rising customer interest for next-gen AI observability and analysis; 4,500 Datadog customers at the end of 2025 Q2 used 1 or more Datadog AI integrations (was 4,000 in 2025 Q1); management thinks next-gen AI introduces new complexity and new observability challenges; management is incorporating AI into the Datadog platform to deliver more value to customers; Datadog has a large volume of rich, clean, and detailed data; Datadog’s access to data has enabled management to build Toto, Datadog’s foundational model for time series forecasting which shows state-of-the-art performance on all benchmarks; management believes that the growth of Datadog’s AI-native customers is an indication of future opportunity when AI is adopted more broadly; management thinks time series forecasting, the domain of Toto, has very wide applicability, which is a great sign of things to come for Datadog’s efforts in AI

There is no change to our overall view that digital transformation and cloud migration are long-term secular growth drivers of our business. As we think about AI, we are incredibly excited about our opportunities.

First, AI is a tailwind for Datadog as increased cloud consumption drives more usage of our platform. Today, we see this primarily in our AI native group of customers who are monitoring their cloud-native applications with us. There are hundreds of customers in this group. They include more than a dozen that are spending over $1 million a year with us and more than 80 who are spending more than $100,000, and they include 8 of the top 10 leading AI companies…

…We continue to see rising customer interest for next-gen AI observability and analysis. Today, over 4,500 customers use one or more Datadog AI integrations.

Second, next-gen AI introduces new complexity and new observability challenges. Our AI observability products help our customers gain visibility and deploy with confidence across their entire AI stack, including GPU Monitoring, LLM Observability, AI Agent Observability and Data Observability…

…Third, we are incorporating AI into the Datadog platform to deliver more value to our customers. As I discussed earlier, we launched Bits AI SRE Agent, Dev Agent and Security Agent. We are seeing very good results with those with more improvements and new capabilities to come.

Finally, as a SaaS platform focused on our customers’ critical workflows, we have a large volume of rich clean and detailed data, which allows us to conduct groundbreaking research. A great example of that is our Toto, foundational model for time series forecasting, which shows state-of-the-art performance on all benchmarks, even going well beyond specialized observability use cases…

…We believe that the growth of this AI native customer group is an indication of the opportunity to come as AI is adopted more broadly and customers outside the AI native group begin to operate AI workloads in production…

…We got fantastic results in our first release, research output is really like a state-of-the-art model that beats every single other model in a category that has seen quite a bit of action over the years, time series forecasting is — has very wide applicability in a lot of different domains. So I think we — it shows that we can perform at the highest level there, and I think it’s a great sign of things to come in terms of AI automation and AI agents.
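
For readers unfamiliar with the term, time series forecasting simply means predicting future values of a metric from its history. The toy Python sketch below produces a seasonal-naive forecast, which just repeats the pattern from one season earlier; it is meant only to illustrate the kind of problem Toto is built for and has nothing to do with how Toto itself works:

    # Toy seasonal-naive forecast: assume the metric repeats its pattern
    # from one "season" ago. Purely illustrative; this is not Toto.
    def seasonal_naive_forecast(history: list[float], season: int, horizon: int) -> list[float]:
        """Forecast `horizon` future points by copying values from `season` steps earlier."""
        if len(history) < season or horizon > season:
            raise ValueError("need a full season of history and horizon <= season")
        return [history[len(history) + step - 1 - season] for step in range(1, horizon + 1)]

    # Hypothetical per-minute request counts with a repeating 4-point pattern
    history = [100, 120, 150, 130, 102, 118, 149, 131]
    print(seasonal_naive_forecast(history, season=4, horizon=4))  # [102, 118, 149, 131]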

AI-native customers accounted for 11% of Datadog’s revenue in 2025 Q2 (was 8.5% in 2025 Q1); AI-native customers contributed 10 percentage points to Datadog’s year-on-year growth in 2025 Q2, compared to 2 percentage points in 2024 Q2; Datadog has revenue concentration in its cohort of AI-native customers, but even excluding its largest AI-native customer (which should be OpenAI), year-on-year revenue growth in 2025 Q2 was stable relative to 2025 Q1; management thinks AI-native customers will continue to optimise cloud and observability usage in the future; the margins on Datadog’s contracts with AI-native customers are the same as with non-AI native customers that operate with the same volume, as the margins are determined by volume; management is unable to tell when the optimisations by AI-native customers will happen, if they even do

We saw a continued rise in contribution from AI native customers in the quarter who represented about 11% of Q2 revenues, up from 8% of revenues in the last quarter and about 4% of revenues in the year ago quarter. The AI native customers contributed about 10 points of year-over-year revenue growth in Q2 versus about 6 points last quarter and about 2 points in the year ago quarter…

…We do see revenue concentration in this cohort in recent quarters. But if we look at our revenue without the largest customer in the AI native cohort, our year-over-year revenue growth in Q2 was stable relative to Q1. We remain mindful that we may see volatility in our revenue growth on the backdrop of long-term volume growth from this cohort as customers renew with us on different terms and as they may choose to optimize cloud and observability usage over time…

…This isn’t about the AI and margins, the AI cohort versus non-AI cohorts. We price based on volume and on term. So to the extent you would have an AI customer who’s doing much the same things as our other customers in the use of the product, has similar volumes and similar terms to the non-AI, it would be similar margins…

…[Question] There’s obviously been a lot of talk about AI natives around the business. I know you’ve talked about the potential for optimization for several quarters, but we continue to see really strong growth in that segment. So if you were to see optimization, when would you expect that to happen?

[Answer] If I knew when it was going to happen, I would tell you. The nature of our customers is they grow, they have their own businesses to run. They have their own constraints. We’re here to help them deliver their services, and that’s what we work on every single day. Now every now and then, there’s a renegotiation, a renewal on occasions for customers to figure out what they need to optimize and what they need to do for the future. But we never know whether it’s going to happen this quarter, next quarter, in three quarters next year, never.
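
To make the “points of year-over-year growth” framing concrete, the arithmetic is simple: a cohort’s contribution in points is the cohort’s revenue increase divided by total revenue a year ago. The figures below are hypothetical round numbers chosen only to mirror the disclosed percentages, not Datadog’s actual financials:

    # Hypothetical, illustrative figures only
    total_rev_year_ago = 100.0   # total revenue in the year-ago quarter
    total_rev_now = 128.0        # total revenue now (assumes 28% overall growth)

    ai_rev_year_ago = 0.04 * total_rev_year_ago   # AI-native cohort at ~4% of revenue a year ago
    ai_rev_now = 0.11 * total_rev_now             # ~11% of revenue now

    # Contribution to year-over-year growth, in percentage points
    contribution_points = (ai_rev_now - ai_rev_year_ago) / total_rev_year_ago * 100
    print(round(contribution_points, 1))  # roughly 10 of the 28 points of growth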

Datadog’s management sees two layers to the AI opportunity, where the first layer is composed largely of AI inference and applications that are built on mostly traditional compute, and where the second layer relates to a new opportunity for observability in understanding how non-deterministic code and AI-written code is working in production; management thinks the second layer largely consists of the AI-native companies today, but the rest of the market will be going there in the future

On the AI opportunity, so there’s really multiple layers to it. The first layer is largely what we see today, which is, companies that are running their inference stack and the application around it, in cloud environments. So that’s the case of the model makers or if you think of the companies that are doing coding agents, things like that. That is what we see today, and it looks a lot like normal compute. So you have normal machine CPUs, some GPUs, quite a few other components, databases, web servers, things like that. So that’s the bulk of what we see today. And there’s going to be more of it as the AI applications come into production. There are more specialized inference workloads and even training workloads in some situations that rely on instrumenting GPUs. And for that, we have a new product out there that does GPU monitoring that we announced at DASH. But all that I would call the infrastructure layer of AI.

Then on top of that, there’s new problems in terms of understanding what the applications themselves are doing and the applications are largely nondeterministic anymore. They either are run by a model that is nondeterministic by nature or they run in code that was not as carefully written as it used to be. It’s not completely written by humans, just largely written by AI agents, and as a result, you also need to spend a lot more time understanding how that code is working and that largely happens in production. So that’s a brand-new area of observability, which is how do you deal with applications that have not been completely defined in development and that have to be evaluated in production. And what we think is the whole market is going there, not just the AI natives. The AI natives are definitely doing that today, both applications are running on models and code that has been largely written by agents, but the rest of the market is going there, and the best proof point you see of that is the very, very broad adoption today, both of the API-gated AI models and of the coding agents, which you see in every single large enterprise today.

Datadog’s management is seeing less need to grow headcount in engineering because of the use of AI tools, but there is still a need to grow headcount in sales

[Question] Many CEOs are either holding headcount flat or down. We’ve seen Meta headcount down from 2 years ago, Microsoft headcount flat, others — Palantir saying they’re going to shrink headcount and 10x revenue. Do you believe you can become more efficient with fewer? Or do you think that, that model doesn’t apply that you’re seeing with other software companies?

[Answer] The spend is shifting a little bit on the engineering side. As I said, we compute — we consume more AI training inference, and so that’s definitely changing a bit of the balance between what you have humans do and what you offload to GPUs. That being said, we’re still completely constrained by the amount of product we can put out there. There’s a ton of opportunity in every single direction we look, whether that’s on the AI automation, whether it’s on the security side, whether that’s in the new areas, just better observability or experimentation that we’re going after, and so for us, this very strong ROI in the adds that we’re making at the moment.

Mastercard (NYSE: MA)

Mastercard’s management is seeing fraudsters use artificial intelligence to attempt mischief, while Mastercard is also using artificial intelligence to strengthen cybersecurity for its clients; Mastercard’s AI-powered Decision Intelligence Pro solution leverages data from across the internet to predict fraud; customers are happy to pay for Decision Intelligence Pro

On the cybersecurity side, the stakes are getting higher and higher. The fraudsters are using latest technology, using Artificial Intelligence, generative AI to power their solutions to break through on the fraud side, on the cybersecurity side, and we’re doing exactly the same. So I mentioned this in previous calls, Decision Intelligence Pro is leveraging data out of all sources of the Internet, putting it through a generative AI engine for us predicting frauds. Instead of preventing fraud, we’re going to predicting fraud, which is the latest stage of this. This kind of game, and this is a clear identifiable value for our customers that are very happy to pay for.

Mastercard closed the Recorded Future acquisition in 2024 Q4 (Recorded Future provides AI-powered solutions for real-time visibility into potential threats related to fraud); Recorded Future is the world’s largest threat intelligence company; Recorded Future has 1,900 customers in 75 countries, and its customers include Fortune 100 companies and governments; it’s still early days, but Mastercard is already putting out more products with Recorded Future; the combination of Recorded Future and Mastercard’s huge troves of data is the magic sauce; Recorded Future is identifying where the threat vectors are so that customers can be more targeted in their response, and this is a winning proposition for customers

On Recorded Future, if I can just remind everybody, thank you for the question, Tien-Tsin. So world’s largest threat intelligence company, 1,900 customers, 75 countries, so very significant. You see a lot of Fortune 100 companies in there as well as G20 governments…

…We’ve hit the ground running. It is still very early days, obviously, but we’re already putting out more products with them. Malware Intelligence is one that I called out in the last quarter around this. The beauty here is, they have a lot of data, which they get from all sources of the internet, as I mentioned earlier. At the same time, we have a lot of data. The combination of that is the magic sauce here…

…What Recorded Future, what Mastercard is now helping our customer with is identifying where the threat vectors actually are. So you can be much more targeted in your response. That is, first of all, more effective from reducing cybersecurity risk. At the same time, it’s more effective from a cost perspective. So that’s a really winning proposition.

MercadoLibre (NASDAQ: MELI)

MercadoLibre’s management has introduced an AI-powered search experience in e-commerce that includes infinite scroll, and it has increased navigation time in key categories

At the same time, our new AI-powered search experience – with infinite scroll – is increasing navigation time in key categories where we expect these shipping enhancements to have the greatest impact.  

MercadoLibre’s product ads business is performing well across the board on the back of improved UX and tools for sellers, including an AI-powered budget recommendation tool

Product ads is performing well across countries and sites and not only in Argentina, and this is on the back of improved UX and tools for sellers, such as a new question flow focused on benefits of advertising, smarter item selection, improved budget recommendation using AI and some of the things that I was mentioning before.

MercadoLibre’s management has integrated MercadoLibre’s AI platform, Verdi, into the CRM tool of the Acquiring business, leading to faster activation and higher TPV per new merchant; Verdi has also been used to support online payments merchants with their technical integrations with Mercado Pago and to assist in-store merchants facing device issues

Operating efficiently remains a top priority. We have integrated our AI platform, Verdi, into our CRM tool to enhance the productivity of our commercial teams, resulting in faster activation and higher TPV per new merchant. We have also deployed Verdi to support online payments merchants with their technical integrations with Mercado Pago and to assist instore merchants facing device issues. This has enabled more autonomous problem resolution and significantly reduced the number of device replacements. 

 MercadoLibre’s management thinks there is a lot of opportunity for AI to help MercadoLibre improve its marketing execution and advertising spend; management sees AI giving MercadoLibre the opportunity to produce multiple creatives for any given campaign; management is using AI to better onboard its advertising customers onto its technology stack

We definitely think there is huge room for AI to help us improve both our marketing execution and our ad spend as well. So on the marketing side, I think there are many, many dimensions in which we are testing and learning about AI. Just to bring one example out there. For instance, when we think about branding and creativities, AI brings us the opportunity to produce multiple creativities for any given campaign and start testing and learning those creativities across the board and with that, deciding who we want to show what in the online world, and that’s something we are already proving, producing content online…

…We are using AI today in order to help our sellers better understand our ad stack to have onboarded into our ad technology to optimize their bidding and so on.

Meta Platforms (NASDAQ: META)

Meta’s management has seen glimpses of AI systems improving themselves; management thinks artificial super intelligence (ASI) is now in sight; management is optimistic about ASI advancing economies and science, but management’s vision is to bring ASI to everyone to enable creativity and culture to flourish; Meta’s new Meta Superintelligence Labs consists of some of its existing AI teams and a new lab building the next generation of models; Alexandr Wang, Nat Friedman, and Shengjia Zhao will be the important leaders of Meta Superintelligence Labs; management thinks people are excited to join Meta Superintelligence Labs because the company has the ingredients required to build leading models and deliver them to billions of people; management believes that ASI will improve every aspect of Meta’s business; management has seen that the most aggressive predictions for AI timelines have been the most accurate ones; some teams in Meta have used Llama 4 to build autonomous AI agents to improve Facebook’s algorithm in a small way; management is telling the entire company to take ASI seriously; management thinks Meta is the best company in the world at building world class technology and distributing it to billions of people; it appears that Meta will be training its ASI models differently from current frontier AI models; the team-dynamics in Meta Superintelligence Labs will be different from Meta’s core AI team; management expects to continue open-sourcing Meta’s AI models, although not everything will be open-sourced, and ASI may have safety concerns related to open-sourcing

Over the last few months, we’ve begun to see glimpses of our AI systems improving themselves. And the improvement is slow for now, but undeniable and developing super intelligence, which we define as AI that surpasses human intelligence in every way, we think, is now in sight. Meta’s vision is to bring personal super intelligence to everyone, so that people can direct it towards what they value in their own lives. And we believe that this has the potential to begin an exciting new era of individual empowerment. A lot has been written about all the economic and scientific advances that Superintelligence can bring, and I’m extremely optimistic about this. But I think that if history is a guide, then an even more important role will be how Superintelligence empowers people to be more creative, develop culture and communities, connect with each other and lead more fulfilling lives…

…We’ve established Meta Superintelligence Labs, which includes our foundations, product and FAIR teams as well as a new lab that is focused on developing the next generation of our models…

…We are building an elite, talent-dense team Alexandr Wang is leading the overall team. Nat Friedman is leading our AI product in Applied Research and Shengjia Zhao is Chief Scientist for the new effort. They are all incredibly talented leaders, and I’m excited to work closely with them. and the world-class group of AI researchers and infrastructure and data engineers that we’re assembling…

…The reason that so many people are excited to join is because Meta has all of the ingredients that are required to build leading models and deliver them to billions of people. The people who are joining us are going to have access to unparalleled compute as we build out several multi-gigawatt clusters…

…We are making all these investments because we have conviction that super intelligence is going to improve every aspect of what we do…

…There are all these questions that people have about what are going to be the time lines to get to really strong AI or Superintelligence or whatever you want to call it. And I guess that each step along the way so far, we’ve observed the more kind of aggressive assumptions or the fastest assumptions have been the ones that have most accurately predicted what would happen…

…Some of the work that we’re seeing with teams internally being able to adapt Llama 4 to build autonomous AI agents that can help improve the Facebook algorithm to increase quality and engagement, or like. I mean, that’s like a fairly profound thing if you think about it. I mean it’s happening in low volume right now. So I’m not sure that, that result by itself was a major contributor to this quarter’s earnings or anything like that…

…We have this principle that we believe in across the company, which we tell people take Superintelligence seriously. And the basic principle is this idea that we think that this is going to really shape all of our systems sooner rather than later, not necessarily on the trajectory of a quarter or 2, but on the trajectory of a few years…

…When we take a technology, we’re good at driving that through all of our apps and our ad systems and all that stuff, it’s not just going to kind of sit on the line. I think that there’s no other company, I think that is as good as us at kind of taking something and kind of getting it in front of billions of people…

…There’s obviously different scaling paradigms, and I don’t want to get too much into the detail of research that we’re doing on this. But I think that for developing superintelligence at some level, you’re not just going to be learning from people because you’re trying to build something that is fundamentally smarter than people. So it’s going to need to learn how to — or you’re going to need to develop a way for it to be able to improve itself…

…I’ve just gotten a little bit more convinced around the ability for small talent-dense teams to be the optimal configuration for driving frontier research. And it’s a bit of a different setup than we have on our other world-class machine learning system. So if you look at like what we do in Instagram or Facebook or our ad system, we can very productively have many hundreds or thousands of people basically working on improving those systems, and we have very well-developed systems for kind of individuals to run tests and be able to test a bunch of different things. You don’t need every researcher there to have the whole system in their head. But I think for this — for the leading research on superintelligence. You really want the smallest group that can hold the whole thing in their head, which drives, I think, some of the physics around the team size and how — and the dynamics around how that works…

…As you approach real superintelligence, I think there is a whole different set of safety concerns that I think we need to take very seriously, that I wrote about in my note this morning. But I think the bottom line is, I would expect that we will continue open sourcing work. I expect us to continue to be a leader there. And I also expect us to continue to not open source everything that we do, which is a continuation of kind of what we’ve been kind of working on.

Meta is making good progress towards Llama 4.1 and 4.2 while also working on new models in parallel; management thinks the new models will be frontier-level when released in 2026; management has used Llama to lower top line bug reports in the US and Canada in Facebook Feed and Notifications by roughly 30% over the past 10 months; Llama is primarily used today to power Meta AI

We’re making good progress towards Llama 4.1 and 4.2, and in parallel, we are also working on our next generation of models that will push the frontier in the next year or so…

…We’re now exploring how to extend the use of LLMs in recommendation systems to our other apps. We’re leveraging Llama and several other back-end processes as well, including actioning bug reports so we can identify and resolve recurring issues more quickly and efficiently. This has resulted in top line bug reports in the U.S. and Canada in Facebook Feed and Notifications dropping by roughly 30% over the past 10 months…

…The primary way we’re using Llama in our apps today is to power Meta AI which is now available in over 200 countries and territories.

Meta’s Prometheus cluster, which management expects to be the world’s first gigawatt-plus AI compute cluster, will come online in 2026; Meta is building its Hyperion AI compute cluster, which can scale up to 5 gigawatts over a few years; Meta has a number of Titan AI compute clusters in development; management expects sufficient compute capacity to be central to Meta’s growth in the coming years; management continues to see very compelling returns in its core ads and organic engagement initiatives from its AI investments; management expects to significantly grow its AI investments in 2026

Our Prometheus cluster is coming online next year, and we think it’s going to be the world’s first gigawatt-plus cluster. We’re also building out Hyperion, which we’ll be able to scale up to 5 gigawatts over several years, and we have multiple more Titan clusters in development as well…

… We expect having sufficient compute capacity will be central to realizing many of the largest opportunities in front of us over the coming years. We continue to see very compelling returns from our AI capacity investments in our core ads and organic engagement initiatives and expect to continue investing significantly there in 2026. We also expect that developing leading AI infrastructure will be a core advantage in developing the best AI models and product experiences. So we expect to ramp our investments significantly in 2026 to support that work.

Meta’s AI investments have unlocked greater efficiency and gains in its advertising systems; management has introduced Meta’s new AI-powered recommendation model for ads to new surfaces and it has led to 5% more ad conversions on Instagram and 3% on Facebook; a meaningful percentage of Meta’s advertising revenue now comes from campaigns using one of Meta’s generative AI features, and management thinks this is especially helpful for small advertisers; management improved the Andromeda ads retrieval system in 2025 Q2, leading to nearly 4% higher conversions on Facebook Mobile Feed and Reels; management improved the GEM (Generative Ads Recommendation System) ads ranking system in 2025 Q2, which partially helped achieve the 5% more ad conversions on Instagram and 3% on Facebook seen; the introduction of new advanced sequence modeling techniques to double the length of event sequences also helped achieve the 5% more ad conversions on Instagram and 3% on Facebook seen; Meta expanded coverage of its Lattice model architecture in 2025 Q2 to earlier-stage ads ranking models, which led to a nearly 4% increase in ad conversions in Facebook Feed and Reels; Meta completed the rollout of its streamlined campaign creation flow for Advantage+ sales and app campaigns in 2025 Q2, which led to lifts in advertiser adoption; Meta will complete the rollout of the streamlined campaign creation flow for Advantage+ leads campaigns in the coming months; nearly 2 million advertisers are now using Meta’s video generation, image animation, and video expansion generative AI features within Advantage+; Meta began testing AI-powered translation of ad captions in 2025 Q2 and prelaunch tests have delivered promising performance lifts; Meta completed the global rollout of its incremental attribution feature – the only product on the market that optimizes for and reports on incremental conversions – in 2025 Q2

The strong performance this quarter is largely thanks to AI unlocking greater efficiency and gains across our ad system. This quarter, we expanded our new AI-powered recommendation model for ads to new surfaces and improved its performance by using more signals and longer context. It’s driven roughly 5% more ad conversions on Instagram and 3% on Facebook. We’re also seeing good progress with AI for ad creative with a meaningful percent of our ad revenue now coming from campaigns using one of our generative AI features. This is going to be especially valuable for smaller advertisers with limited budgets…

…The Andromeda model architecture we began introducing in the second half of 2024 powers the ads retrieval stage of our ad system, where we select the few thousand most relevant ads from tens of millions of potential candidates. In Q2, we made enhancements to Andromeda that enabled it to select more relevant and more personalized ads candidates while also expanding coverage to Facebook Reels. These improvements have driven nearly 4% higher conversions on Facebook Mobile Feed and Reels.

Our new Generative Ads Recommendation System, or GEM, powers the ranking stage of our ad system, which is the part of the process after ads retrieval where we determine which ads to show someone from candidates suggested by our retrieval engine. In Q2, we improved the performance of GEM by further scaling our training capacity and adding organic and ads engagement data on Instagram. We also incorporated new advanced sequence modeling techniques that helped us double the length of event sequences we use, enabling our systems to consider a longer history of the content or ads that a person has engaged with in order to provide better ad selections. The combination of these improvements increased ad conversions by approximately 5% on Instagram and 3% on Facebook Feed and Reels in Q2…

…We expanded coverage of our Lattice model architecture in Q2. We first began deploying Lattice in 2023 with our later-stage ads ranking efforts, allowing us to run significantly larger models that generalize learnings across objectives and surfaces in place of numerous smaller ads models that have historically been optimized for individual objectives and surfaces. In April, we began deploying Lattice to earlier-stage ads ranking models as well. This is leading not only to greater capacity and engineering efficiency but also improved performance, with the recent Lattice deployments driving a nearly 4% increase in ad conversions across Facebook Feed and Reels in Q2…
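
The retrieval-then-ranking structure described in the quotes above (Andromeda narrows tens of millions of candidate ads down to a few thousand, then GEM ranks those candidates) is a common two-stage recommender design. Here is a minimal, generic Python sketch of that pattern; the data and scoring functions are invented for illustration and are not Meta’s systems:

    import random

    # Generic two-stage recommendation: a cheap retrieval score narrows a huge
    # candidate pool, then a costlier ranking model scores the survivors.
    random.seed(0)
    candidate_ads = [{"id": i, "relevance": random.random()} for i in range(100_000)]

    def cheap_retrieval_score(ad: dict) -> float:
        # Stand-in for a lightweight model that can be run over every candidate
        return ad["relevance"]

    def expensive_ranking_score(ad: dict, history_len: int) -> float:
        # Stand-in for a heavy model that uses richer signals (e.g. longer event sequences)
        return ad["relevance"] * (1 + 0.01 * history_len)

    # Stage 1: retrieval keeps only the top few thousand candidates
    retrieved = sorted(candidate_ads, key=cheap_retrieval_score, reverse=True)[:2_000]

    # Stage 2: ranking orders the retrieved set to pick the ads actually shown
    ranked = sorted(retrieved, key=lambda ad: expensive_ranking_score(ad, history_len=512), reverse=True)
    print([ad["id"] for ad in ranked[:3]])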

…We’re seeing strong momentum with our Advantage+ suite of AI-powered solutions. In Q2, we completed the rollout of our streamlined campaign creation flow for Advantage+ sales and app campaigns, which makes it easier for advertisers to realize the performance benefits from Advantage+ by having it turned on at the beginning. We’ve seen lifts in advertiser adoption of sales and app campaigns since we’ve expanded availability, and are working to complete the rollout for leads campaigns in the coming months. Within our Advantage+ Creative Suite, adoption of GenAI and creative tools continues to broaden. Nearly 2 million advertisers are now using our video generation features, image animation and video expansion, and we’re seeing strong results with our text generation tools as we continue to add new features.

In Q2, we started testing AI-powered translation so that advertisers can automatically translate the caption of their ads to 10 different languages. While it’s early, we have seen promising performance lifts in our prelaunch tests. We’re also continuing to see strong adoption of image expansion among small- and medium-sized advertisers, which speaks to how these tools help businesses who have fewer resources to develop creative. With larger advertisers, we expect agencies will continue to be valuable partners in helping apply these new tools to drive performance…

…In Q2, we completed the global rollout of our incremental attribution feature, which is the only product on the market that optimizes for and reports on incremental conversions, which are conversions that would not have happened without a person seeing the ad.
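
Incremental conversions, as defined in the quote above, are conversions that would not have happened without the ad. The standard way to estimate them is a holdout test: compare people who were shown the ad with a comparable group deliberately not shown it. The Python sketch below uses hypothetical numbers and only illustrates the concept; it is not Meta’s methodology:

    # Hypothetical holdout-test numbers, for illustration only
    exposed_users, exposed_conversions = 1_000_000, 20_000   # people shown the ad
    holdout_users, holdout_conversions = 1_000_000, 15_000   # comparable people not shown the ad

    baseline_rate = holdout_conversions / holdout_users        # conversions that happen anyway
    expected_without_ad = baseline_rate * exposed_users         # 15,000 expected conversions
    incremental = exposed_conversions - expected_without_ad     # 5,000 conversions caused by the ad

    print(f"Incremental conversions: {incremental:,.0f}")
    print(f"Share of exposed-group conversions that were incremental: {incremental / exposed_conversions:.0%}")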

Meta’s AI investments have significantly improved its ability to show users content they would be interested in, and this led to a 6% increase in time spent on Instagram and a 5% increase on Facebook in 2025 Q2 alone; management thinks content on Meta’s platforms can get a lot better, with early progress seen with the launch of AI-powered editing tools; ongoing improvements to Meta’s ranking systems have led to video time growing more than 20% year-on-year globally for Instagram in 2025 Q2, and video time growing more than 20% year-on-year in the US for Facebook; management expects to continue delivering additional improvements in content ranking systems throughout 2025; over 2/3 of recommended content on Instagram in the US now comes from original posts; management is focused on increasing freshness of original posts on Instagram in 2025 H2; Meta is making good progress on its longer-term content ranking innovations; Meta has seen LLMs (large language models) driving a meaningful amount of ranking-related gains in time spent on Threads; management has a roadmap for Meta’s content recommendation systems for both the near-term and long-term; the near-term roadmap includes (1) making recommendations even more adaptive to a user at any point in time, (2) helping good content from small creators break out, and (3) better understanding user interests; the long-term roadmap includes (1) foundational recommendation models, and (2) deeper integration of LLMs in recommendation systems

AI is significantly improving our ability to show people content that they’re going to find interesting and useful. Advancements in our recommendation systems have improved quality so much that has led to a 5% increase in time spent on Facebook and 6% on Instagram, just this quarter. There is a lot of potential for content itself to get better too, we’re seeing early progress with the launch of our AI video editing tools across Meta AI and our new Edits app…

…We continue to see momentum with video engagement, in particular. In Q2, Instagram video time was up more than 20% year-over-year globally. We’re seeing strong traction on Facebook as well, particularly in the U.S., where video time spent similarly expanded more than 20% year-over-year. These gains have been enabled by ongoing optimizations to our ranking systems to better identify the most relevant content to show. We expect to deliver additional improvements throughout the year as we further scale up our models and make recommendations more adaptive to a person’s interest within their session…

…On Instagram, over 2/3 of recommended content in the U.S. now comes from original posts. In the second half, we’ll be focused on further increasing the freshness of original posts, so the right audiences can discover original content from creators soon after it is posted.

We are also making good progress on our longer-term ranking innovations that we expect will provide the next leg of improvements over the coming years. Our research efforts to develop cross-surface foundation recommendation models continue to progress.

We are also seeing promising results from using LLM in Threads recommendation systems. The incorporation of LLMs are now driving a meaningful share of the ranking-related time spent gains on Threads…

…There are a handful of shorter-term things that we’re focused on in the near term. One is we’re focused on making recommendations even more adaptive to what a person is engaging with during their session so that the recommendations we surface are the most relevant to what they’re interested in at that moment. And we’re making optimizations to help the best content from smaller creators break out by matching it to the right audiences sooner after it gets posted. And we’re also working on improving the ability for our systems to discover more diversified and niche interest for each person through interest exploration and learning explicit user preferences. We’re also planning to scale up our models further and incorporate more advanced techniques that should improve the overall quality of recommendations.

But we also have a lot of long-term bets in the hopper around areas like developing foundational models that will support recommendations across multiple services. Incorporating LLM more deeply into our recommendation systems. And a big focus of this work is going to be on optimizing the systems to make them more efficient. So that we can continue to scale up the capacity that we use for our recommendation systems without eroding the ROI that we deliver.

Meta’s management is starting to see product market fit for business AI agents in countries where they are tested; management is integrating business AI agents into advertising shown on Facebook, Instagram, and e-commerce websites; Meta’s click-to-message revenue grew more than 40% year-on-year in the US in 2025 Q2

I’ve talked before about how I believe every business will soon have a business AI, just like they have an e-mail address social media account and website. We are starting to see some product market fit in a number of countries where we’re testing these agents, and we’re integrating these business AIs into ads on Facebook and Instagram as well as directly into e-commerce websites…

…We’re seeing good momentum in Business messaging, particularly in the U.S., where click to message revenue grew more than 40% year-over-year in Q2. The strong U.S. growth is benefiting from a ramp in adoption of our website to message ads, which drive people to a business’s website for more information before choosing to launch a chat with the business in 1 of our messaging apps.

Meta AI has more than 1 billion monthly actives now; management continues to focus on making Meta AI the leading personal AI; management is seeing engagement on Meta AI grow as the underlying AI models improve; Llama is primarily used today to power Meta AI; Meta AI is now available in over 200 countries; Meta AI’s usage primarily comes through WhatsApp and the primary use cases are for information gathering, homework assistance and generating images; management is noticing Meta AI being complementary to the company’s content discovery engines; people are using Meta AI on Facebook to ask about and find content; management expects Meta AI to help with content discovery by automatically translating and dubbing foreign languages

Meta AI. Its reach is already quite impressive with more than 1 billion monthly actives. Our focus is now deepening the experience in making Meta AI the leading personal AI. As we continue improving our models, we see engagement grow…

…The primary way we’re using Llama in our apps today is to power Meta AI which is now available in over 200 countries and territories. WhatsApp continues to be the largest driver of queries as people message Meta AI directly for tasks such as information gathering, homework assistance and generating images. Outside of WhatsApp, we’re seeing Meta AI become an increasingly valuable complement to our content discovery engines. Meta AI usage on Facebook is expanding as people use it to ask about posts they see in feed, and find content across our platform in search. Another way we expect Meta AI will help with content discovery is through the automatic translation and dubbing of foreign language content into the audience’s local language.

Sales of the Ray-Ban Meta smart glasses are accelerating; management will launch new performance AI glasses with the Oakley Meta HSTN; the percent of people using Meta AI with the smart glasses is growing, and retention of new AI users is increasing; management continues to believe that smart glasses will be the primary form factor for people to interact with AI, especially artificial super intelligence; the demand for Ray-Ban Meta smart glasses is still higher than supply and management will ramp supply in 2025 H2; management is exploring smart glasses with different kinds of displays compared to the current iteration; management wants to continue investing heavily in smart glasses because they think it’s going to be an important part of the future

We continue to see strong momentum with our Ray-Ban Meta glasses with sales accelerating. We are also launching new performance AI glasses with the Oakley Meta HSTN’s, they have longer battery life, higher resolution camera and are designed for sports. The percent of people using Meta AI is growing, and we are seeing new users AI retention increase too, which is a good sign for that continued use. I think that AI glasses are going to be the main way that we integrate super intelligence into our day-to-day lives. So it’s important to have all of these different styles and products that appeal to different people in different settings…

…The growth of Ray-Ban Meta sales accelerated in Q2, with demand still outstripping supply for the most popular SKUs despite increases to our production earlier this year. We’re working to ramp supply to better meet consumer demand later this year…

…Right now, we’re building ones that I think are stylish, but aren’t focused on the display. I think if there’s a whole set of different things to explore with displays…

…Because we’ve been investing in this, I think we’re just several years ahead on building out glasses. And I think that, that’s something that we’re excited to keep on investing in heavily because I think it’s going to be a really important part of the future.

Meta’s management has narrowed its guidance for 2025 capex to a range of $66 billion to $72 billion, from a prior range of $64 billion to $72 billion (capex was $37 billion in 2024); management expects 2026 capex dollar growth to be similar to 2025’s capex dollar growth; management expects a greater mix of capex in 2025-2026 to be in shorter-lived assets than in prior years; most of the increased capex in 2026 will be for generative AI compute capacity, with significant capex in 2026 also going to core AI; management expects to finance most of the 2026 capex internally while exploring partnerships with financiers

We currently expect 2025 capital expenditures, including principal payments on finance leases, to be in the range of $66 billion to $72 billion, narrowed from our prior outlook of $64 billion to $72 billion and up approximately $30 billion year-over-year at the midpoint. While the infrastructure planning process remains highly dynamic, we currently expect another year of similarly significant CapEx dollar growth in 2026 as we continue aggressively pursuing opportunities to bring additional capacity online to meet the needs of our AI efforts and business operations…

…We also expect a greater mix of our CapEx to be in shorter-lived assets in 2025 and ’26 than it has been in prior years…

…On the CapEx side, the big driver of our increased CapEx in ’26 will be scaling GenAI capacity as we build out training capacity that’s going to drive higher spend across servers, networking, data centers next year. We also expect that we’re going to continue investing significantly in core AI in 2026…

…About how we expect to finance the growing CapEx next year. We certainly expect that we will finance some large share of that ourselves, but we’re also exploring ways to work with financial partners to codevelop data centers. We don’t have any finalized transactions to announce, but we generally believe that there will be models here that will attract significant external financing to support large-scale data center projects that are developed using our ability to build world-class infrastructure while providing us with flexibility should our infrastructure requirements change over time.

Meta’s AI capex for 2025-2026 is purely for internal uses; management has strong ability to measure return on investment (ROI) for Meta’s core AI capex and the ROI remains strong; it’s much harder for management to measure ROI for Meta’s generative AI capex, but they are optimistic about the monetisation opportunities; management continues to have fungibility in mind when building its AI compute capacity

[Question] Your spend is now approaching some of the biggest hyperscalers out there. Do you think of all this capacity mostly for internal uses? Or do you think there’s a way to share or even [indiscernible] with a business model, we’re leveraging that capacity for external uses.

[Answer] Right now, we are focused on ensuring that we have enough capacity for our internal use cases, which includes both all of the core AI work that we do to support the recommendation engine work on the organic content side to support all the ads ranking and recommendation work. And then, of course, to make sure that we are building the training capacity that we think we need in order to build frontier AI models. And to make sure that we’re preparing ourselves for the types of inference use cases that we think might — that we might have ahead of us as we eventually focus not only on developing frontier models, but also how we can expand into the kinds of consumer use cases that we think will be hopefully live — hopefully, widely useful and engaging for our users. So at present, we’re not really thinking about external use case on the infrastructure…

…Around the sort of ROI on CapEx, there are a couple of things. So again, on the core AI side, we continue to see strong ROI. Our ability to measure that is quite good, and we feel sort of very good about the rigorous measurement and returns that we see there. On the GenAI side, we are clearly much, much earlier on the return curve and we don’t expect that the GenAI work is going to be a meaningful driver of revenue this year or next year. But we remain generally very optimistic about the monetization opportunities that will open up, and Mark spoke to them in his script, the sort of 5 pillars, so I won’t repeat them here…

…We are building the infrastructure with fungibility in mind. Obviously, there are a lot of things that you have to build up front in terms of the data center shells, the networking infrastructure, et cetera. But we will be ordering servers, which ultimately will be the biggest bulk of CapEx spend as we need them and when we need them and making sort of the best decisions at those times in terms of figuring out where the capacity will go to use.

Microsoft (NASDAQ: MSFT)

Azure has surpassed $75 billion in annual revenue, up 34%, in FY2025; Azure took share every quarter in FY2025; Azure has more data centers than any other cloud provider; Azure stood up more than 2 gigawatts of compute capacity in the last 12 months; Azure is scaling compute capacity faster than any other competitor; all of Azure’s regions can now support liquid cooling, making them suitable for AI compute; Azure can now deliver 90% more tokens with the GPT4o family of models for the same GPU compared to a year ago through software optimisation alone; Azure grew revenue by 39% in 2025 Q2 (FY2025 Q4) (was 33% in 2025 Q1); management expects Azure to be capacity-constrained through FY2026 H1 despite more capacity being brought online

Azure surpassed $75 billion in annual revenue, up 34%, driven by growth across all workloads. We continue to lead the AI infrastructure wave and took share every quarter this year. We opened new DCs across 6 continents and now have over 400 data centers across 70 regions, more than any other cloud provider…

…We stood up more than 2 gigawatts of new capacity over the past 12 months alone. And we continue to scale our own data center capacity faster than any other competitor. Every Azure region is now AI-first. All of our regions can now support liquid cooling, increasing the fungibility and the flexibility of our fleet…

…We are driving and riding a set of compounding S curves across silicon, systems and models to continuously improve efficiency and performance for our customers. Take, for example, GPT4o family of models, which have the highest volume of inference tokens. Through software optimizations alone, we are delivering 90% more tokens for the same GPU compared to a year ago…

…In Azure and other cloud services, revenue grew 39%, significantly ahead of expectations, driven by accelerated growth in our core infrastructure business, primarily from our largest customers. As a reminder, new cloud and AI workloads are built and scaled using the breadth of our services…

…Even as we continue bringing more data center capacity online, we currently expect to remain capacity-constrained through the first half of our fiscal year…

…I talked about, my gosh, in January and said I thought we’d be in better supply demand shape by June. And now I’m saying I hope I’m in better shape by December. And that’s not because we slowed CapEx. Even with accelerating the spend and trying to pull leases in and get CPUs and GPUs in the system as quickly as we can, we are still seeing demand improve.
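
The “90% more tokens for the same GPU” figure in the quotes above translates directly into unit economics: if a GPU now serves 1.9 times the tokens it did a year ago, the GPU cost embedded in each token falls by roughly 47%. A quick back-of-the-envelope in Python, using a hypothetical GPU cost (only the 1.9x multiple comes from the call):

    # Hypothetical numbers; only the 1.9x throughput multiple comes from the call
    gpu_cost_per_hour = 2.00                  # assumed cost of running one GPU for an hour
    tokens_per_hour_last_year = 1_000_000
    tokens_per_hour_now = tokens_per_hour_last_year * 1.9   # "90% more tokens for the same GPU"

    cost_per_m_tokens_then = gpu_cost_per_hour / (tokens_per_hour_last_year / 1_000_000)
    cost_per_m_tokens_now = gpu_cost_per_hour / (tokens_per_hour_now / 1_000_000)

    print(f"A year ago: ${cost_per_m_tokens_then:.2f} per million tokens")
    print(f"Now:        ${cost_per_m_tokens_now:.2f} per million tokens")
    print(f"Reduction:  {1 - cost_per_m_tokens_now / cost_per_m_tokens_then:.0%}")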

The GPT4o family of models from OpenAI has the highest volume of inference tokens

Take, for example, GPT4o family of models, which have the highest volume of inference tokens.

Microsoft’s management thinks Foundry has best-in-class tooling, management, observability and built-in controls for developing AI applications; management sees customers increasingly wanting to use multiple AI models when building applications, and Foundry provides access to more AI models than any other hyperscaler, including models from OpenAI, DeepSeek, Meta, xAI, and more; the Foundry Agent Service is experiencing accelerated adoption and now has 14,000 customers; Nasdaq is using the Foundry Agent Service to cut prep time for board meetings by up to 25%; 80% of the Fortune 500 are using Foundry; Foundry processed more than 500 trillion tokens in FY2025 (was 100 trillion tokens in 2025 Q1), up 7x from a year ago

This year, we launched Azure AI Foundry to help customers design, customize and manage AI applications and agents at scale. Foundry features best-in-class tooling, management, observability and built-in controls for trustworthy AI. Customers increasingly want to use multiple AI models to meet their specific performance, cost and use case requirements. And with Foundry, they can provision inferencing throughput once and apply it across more models than any other hyperscaler, including models from OpenAI, DeepSeek, Meta, xAI’s Grok and, very soon, Black Forest Labs and Mistral AI. We sim-shipped 15 models from OpenAI alone on Foundry this year, providing same-day access to state-of-the-art models deeply integrated with our infrastructure and tools.

And we are seeing accelerated adoption of our new Foundry Agent Service, which is now being used by 14,000 customers to build agents that automate complex tasks. For example, Nasdaq is using foundry to build agents that help customers prepare for Board meetings, cutting prep time by up to 25%. All up, 80% of Fortune 500 already use Foundry. And when we look narrowly at just the number of tokens served by Foundry APIs, we processed over 500 trillion this year, up over 7x. This is a good indicator of true platform diffusion beyond a few head apps and services.

Microsoft’s family of Copilot apps has surpassed 100 million MAUs (monthly active users)

Our family of Copilot apps has surpassed 100 million monthly active users across commercial and consumer.

Across the entire Microsoft product suite, there are 800 million monthly active users of AI features

When you take a broader look at the engagement of AI features across our products, we have over 800 million monthly active users.

Customers are adopting Microsoft 365 Copilot at a faster rate than any other new Microsoft 365 suite, with strong usage intensity; in 2025 Q2 (FY2025 Q4), Microsoft saw the largest quarter of seat adds since launch for Microsoft 365 Copilot; Barclays, UBS, Adobe, KPMG, Pfizer, and Wells Fargo are recent examples of large organisations that have expanded or bought new Microsoft 365 Copilot seats; the Researcher and Analyst deep reasoning agents have been used by tens of thousands of organisations in their first weeks of availability; hundreds of partners have built 3rd-party AI agents that integrate with Copilot; management is seeing more customers build their own AI agents with Copilot Studio; 3 million agents were created by Microsoft’s customers in FY2025; customers can use Copilot Tuning to create agents fine-tuned on their company’s data, workflow and style

Customers continue to adopt Copilot at a faster rate than any other new Microsoft 365 suite, with strong usage intensity as shown by our week-over-week retention. And we saw the largest quarter of seat adds since launch with a record number of customers returning to buy more seats. Barclays, for example, will roll out Microsoft 365 Copilot to 100,000 employees globally following a successful initial deployment of 15,000. UBS is expanding its deployment to all of its employees after initially rolling it out to 55,000 of them. And Adobe, KPMG, Pfizer, Wells Fargo all purchased over 25,000 seats this quarter.

Tens of thousands of organizations have already used our Researcher and Analyst deep reasoning agents in the first weeks of availability. And we have introduced group-level agents in Teams like Facilitator and Interpreter, which generate real-time translation and notes in meetings.

Hundreds of partners like Adobe, SAP, ServiceNow and Workday have built their own third-party agents that integrate with Copilot and Teams. We are also seeing more customers use Copilot Studio to extend Microsoft 365 Copilot and build their own agents. This year, customers created 3 million agents using SharePoint and Copilot Studio. And with Copilot Tuning, they can easily create agents fine-tuned on their company’s data, workflow and style that reflect their unique tone, language and expertise.

GitHub Copilot’s Agent Mode and Coding Agent have great momentum in IDEs (integrated development environments); GitHub Copilot has 20 million users; GitHub Copilot enterprise customers increased 75% sequentially in 2025 Q2; 90% of the Fortune 100 use GitHub Copilot; AI has led to explosive growth in GitHub usage, with AI projects on GitHub more than doubling from a year ago; vibe coding projects and AI coding agents are generating more pull requests and repos on GitHub; the Code Review Agent is performing millions of code reviews monthly in GitHub

GitHub Copilot continues to have great momentum in IDE with Agent Mode and new form factors like Coding Agent which is capable of asynchronously executing developer tasks. We have 20 million GitHub Copilot users. GitHub Copilot enterprise customers increased 75% quarter-over-quarter as companies tailor Copilot to their own codebases, and 90% of the Fortune 100 now use GitHub Copilot. More broadly, GitHub usage and repos are seeing explosive growth because of AI. AI projects on GitHub more than doubled over the last year. The surge in vibe coding projects and AI coding agents, whether it is Claude Code, Codex, Cursor or GitHub Copilot, are generating more pull requests and more repos on GitHub. And our Code Review Agent is being used heavily across the platform, performing millions of code reviews each month.

More than half of Microsoft’s cloud and AI-related capex in 2025 Q2 (FY2025 Q4) is for long-lived assets that will support monetisation over the next 15 years and more, while the remaining spend is primarily for servers, both CPUs and GPUs, driven by strong demand signals; management feels good about the ROI (return on investment) on Microsoft’s capital expenditure; Microsoft’s capital expenditure is correlated to the company’s contracted backlog; management does not want to focus too much on when capex growth will be slower than revenue growth because doing so could cause Microsoft to be too conservative in winning market share

Capital expenditures were $24.2 billion, including $6.5 billion of finance leases where we recognize the full value at the time of lease commencement. Cash paid for PP&E was $17.1 billion. The difference is primarily due to finance leases. More than half our spend was on long-lived assets that will support monetization over the next 15 years and beyond. The remaining spend was primarily for servers, both CPUs and GPUs, and driven by strong demand signals…

…When you think about the full year comments I’ve made on CapEx as well as the Q1 guidance of over $30 billion, you first have to ground yourself in the fact that we have $368 billion of contracted backlog we need to deliver, not just across Azure but across the breadth of the Microsoft Cloud. So in terms of feeling good about the ROI and the growth rates and the correlation, I feel very good that the spend that we’re making is correlated to basically contracted on the books business that we need to deliver and we need the teams to execute at their very best to get the capacity in place as quickly and effectively as they can…

…At its core, our investments, particularly in short-lived assets like servers, GPUs, CPUs, networking storage, is just really correlated to the backlog we see and the curve of demand…

…I am not as focused, Kash, on trying to pick a date at which revenue growth and CapEx growth will meet and cross. I’m focused on building backlog, building business and delivering capacity, which we are seeing has a good ROI today in terms of our ability to get that done. So I don’t want people to get overly focused on a pivot point. Because when you’re in sort of these expansive moments, picking a data point usually means you’re going to pick to be too conservative in terms of market share gain and in terms of winning
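
For readers who want to trace the arithmetic in the capital expenditure quote above, here is a minimal sketch using only the figures management quoted ($24.2 billion of capex including $6.5 billion of finance leases, and $17.1 billion of cash paid for PP&E); the small residual is simply whatever timing and other items were not itemised on the call.

```python
# Reconciling Microsoft's FY2025 Q4 capex figures quoted above (in $ billions).
capex_incl_finance_leases = 24.2  # total capex, with finance leases recognised at lease commencement
finance_leases = 6.5              # portion of capex that is finance leases
cash_paid_for_ppe = 17.1          # cash actually paid for property, plant & equipment

# Management says the gap between total capex and cash PP&E is "primarily" finance leases.
gap = capex_incl_finance_leases - cash_paid_for_ppe  # 7.1
residual = gap - finance_leases                      # ~0.6, timing/other items not broken out

print(f"Gap between total capex and cash PP&E: ${gap:.1f}B")
print(f"Explained by finance leases: ${finance_leases:.1f}B; residual: ${residual:.1f}B")
```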

Microsoft’s management expects to deliver double-digit revenue and operating income growth in FY2026; management expects to continue investing in cloud and AI initiatives; management expects capital expenditure growth in FY2026 to moderate from FY2025’s level; management expects the capital expenditure growth rate in FY2026 H1 to be higher than in FY2026 H2; management expects operating margin to be relatively unchanged in FY2026

Building on the strong momentum we saw this past year, we expect to deliver another year of double-digit revenue and operating income growth in FY ’26. We will continue to invest against the expansive opportunity ahead across both capital expenditures and operating expenses given our leadership position in commercial cloud, strong demand signals for our cloud and AI offerings, and significant contracted backlog. Capital expenditure growth, as we shared last quarter, will moderate compared to FY ’25 with a greater mix of short-lived assets. Due to the timing of delivery of additional capacity in H1, including large finance lease sites, we expect growth rates in H1 will be higher than in H2. We remain focused on delivering revenue growth and increasing our operational agility. And as a result, we expect operating margins to be relatively unchanged year-over-year.

Microsoft’s management is not worried about some of its largest customers – mostly AI companies – becoming competitors as long as there’s broad diffusion happening behind what the lead companies are building

[Question] You guys have always had software start-ups as customers and potentially emerging competitors. But the AI labs now feel different…. It seems like there’s a lot of potential opportunity in supporting those businesses, but also it’s not certain that they’re going to stay your customers as they scale. They could in-source some of that infrastructure. And they very likely emerge as potential competitors.

[Answer] There’s always been, I’ll call it, head apps or head — new companies that emerge, that in fact are very needed in order to birth a new platform… Then broadly, they — or rather over time, there will be broad diffusion. In fact, one of the things that Amy and I track is not just the head app usage, but also what’s the sort of all the Tier 2 applications that are being built. So that sort of — that speaks a little bit, Keith, to I think your question, is as long as we have head apps shaping the platform and then, after that, we have the broad diffusion happen, which in some sense both of those is what we are seeing. So I feel very good about our being in decent standing going forward.

Microsoft’s management sees that every GPU requires accompanying storage and compute, and this ratio points to exponential infrastructure growth

One of the other things we track is every GPU requires storage and compute. That ratio is another thing that is really exponential for infrastructure growth.

Microsoft’s management thinks AI software will be monetised via a combination of per-seat fees and usage fees

[Question] What do you think is the best way that software companies are going to be able to monetize AI for SaaS?

[Answer] We’re seeing very similar monetization tools exist in this transition too, right? There’s a per user logic, there’s tiers of per user. Sometimes those tiers relate to consumption, sometimes there’s pure consumption models. I think you’ll continue to see a blending of these. Especially as the AI model capability grows, you’ll end up with ways that teams are going to want to throttle that usage, use the best models for the best job. And I think the blending of these models will continue to be something we see on a go-forward basis.
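
To make the “blending” of per-seat and consumption pricing concrete, here is a minimal, purely hypothetical sketch; the seat price, included usage allotment and overage rate are illustrative assumptions I have made up, not Microsoft’s (or anyone’s) actual pricing.

```python
# Hypothetical blended AI software bill: per-seat fee plus usage beyond an included allotment.
def blended_monthly_bill(seats: int, tokens_used_m: float,
                         price_per_seat: float = 30.0,             # assumed $/seat/month
                         included_tokens_m_per_seat: float = 1.0,  # assumed millions of tokens included per seat
                         overage_per_m_tokens: float = 2.0) -> float:  # assumed $ per extra million tokens
    seat_fee = seats * price_per_seat
    included = seats * included_tokens_m_per_seat
    overage = max(0.0, tokens_used_m - included) * overage_per_m_tokens
    return seat_fee + overage

# Example: 1,000 seats using 1.5 million tokens per seat on average.
print(blended_monthly_bill(seats=1_000, tokens_used_m=1_500))  # 30,000 seat fee + 1,000 overage = 31,000
```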

Microsoft’s management is noticing that the development of AI applications is becoming more sophisticated than just calling APIs from an AI model; management analogises the current increase of sophistication in the development of AI applications to the historical example of the time it took for ERP (enterprise resource planning) systems to emerge after relational databases were created, but notes that the increase of sophistication in AI application development is much faster

I think what we are noticing in our own build-out of these AI applications and in general is the platform is becoming more than, “Here is the model and here is an API. Make some calls,” right? I mean that, in some sense, was a bit of the state-of-the-art maybe even a year ago. Whereas now you have essentially these very stateful app patterns that are emerging that require quite a bit of rethinking of even the app stack. I mean take even the storage tier stuff, right, the degree of sophistication you have, and hey, how much of an index do you really want to build by preprocessing so that your prompt engineering, or context engineering as I call it, can be better and higher quality? So I think all of that is emerging…

…I always go back and say, hey, when, I don’t know, relational database came out, it took a while for people to build an ERP system, let’s say. And this thing, we’re kind of building pretty sophisticated applications at a very, very fast clip based on, I think, the degree of maturity that’s emerging.

Netflix (NASDAQ: NFLX)

Netflix’s management continues to think that AI will help creators make better content and save costs; Netflix’s creators are already seeing the benefits of AI in production, especially in visual effects; the Netflix series El Eternaut used generative AI to help create a sequence that was (a) completed 10x faster than with traditional VFX tools and workflows, and (b) not previously feasible from a budget perspective; the AI-produced sequence in El Eternaut is the very first generative AI footage to appear in Netflix’s content

We remain convinced that AI represents an incredible opportunity to help creators make films and series better, not just cheaper…

…Our creators are already seeing the benefits in production through pre-visualization and shot planning work and certainly, visual effects. It used to be that only big budget projects would have access to advanced visual effects like de-aging…

…This year, we had El Eternaut. It’s a very big hit show for us from Argentina. And in that production, we leveraged virtual production and AI-powered VFX. And there was a shot in the show that the creators wanted to show building collapsing of Buenos Aires. So our Eyeline team partnered with their creative team. Using AI powered tools, they were able to achieve an amazing result with remarkable speed and in fact, that VFX sequence was completed 10x faster than it could have been completed with visual — traditional VFX tools and workflows. And also, the cost of it just wouldn’t have been feasible for a show on that budget. So that sequence actually is the very first GenAI final footage to appear on screen in a Netflix original series or film. So the creators were thrilled with the result. We were thrilled with the result. And more importantly, the audience was thrilled with the result…

…I probably should clarify, that Eyeline is our production innovation group inside of our VFX house at Scanline, and they’re doing a lot of this work with our creators.

Netflix’s management thinks AI can be used to improve the member experience; Netflix is testing an AI-powered user interface where members can have a conversational experience to find content 

The member experience is a place where we feel like there’s tons of opportunity to leverage these new generative technologies to improve the experience. We’ve been in the personalization and recommendation business for 2 decades, but yet we see a tremendous room and opportunity to make it even better by leveraging some of the more newer generative techniques.  We’re also rolling out, have piloted right now a conversational experience that uses, allows our members to basically have a sort of natural language discussion with our user interface thing. I want to watch a film from the ’80s that’s a dark psychological thriller, get some results back, maybe iterate through those in a way that you just couldn’t have done in our previous experiences. So that’s super exciting and we see that all of the work that we do there essentially is a force multiplier to that large content investment that we’re making.

Netflix’s management thinks generative AI can be beneficial for Netflix’s advertising business by lowering the hurdle to create brand-appropriate advertising content that’s relevant in the particular title the advertisement is being shown in

Advertising is another really great area. We’ve seen — it’s a high hurdle to create a brand forward spot in a creative universe of one of the titles that we’re currently carrying. But it’s very compelling for both watchers and for those brands, and we think these generative techniques can decrease that hurdle iteratively over time and enable us to do that in more and more spots.

Paycom Software (NYSE: PAYC)

Paycom’s management has introduced a new command-driven AI product called IWant; management thinks IWant is Paycom’s most significant product release to date; IWant allows employees, managers, administrators, and executives to use natural language to ask for any information about their company; IWant’s command-driven feature means nobody needs to be trained on how to use Paycom software; IWant pulls data from Paycom’s single database, so there are no problems associated with inconsistent or duplicative data sets; early customer feedback on IWant has been phenomenal; management expects IWant to increase usage of Paycom’s software among non-daily users and to increase customer satisfaction and ROI (return on investment); management expects to activate IWant for all customers over the remainder of 2025 Q3

I’ll focus my comments on our second quarter achievements and highlight our latest AI command-driven product, IWant…

…We recently released IWant, the most significant product in our company’s history. We already have the most automated solution in the industry, and IWant delivers even more value to our clients through AI and automation…

…Hopefully, everyone has seen the demo we linked in today’s earnings press release issued at the close of the market. If you did, you saw numerous use cases for it on the employee, manager, administrator and executive side of the software. You also saw how IWant eliminates the need for a Paycom user to be trained on our software. With IWant’s command-driven AI users either type in or leverage voice-activated functionality to command the system, and IWant is designed to immediately provide the answer with accurate results. This means that navigation and asking others for system information is rendered obsolete.

A critical component of AI is the data it pulls from. And because IWant pulls from Paycom’s single database, it eliminates problems created by inconsistent or duplicative data sets.

On the manager side, IWant supports HR teams and organization leaders with instant employee information. For example, a manager can use IWant to pull data on when an employee returns from vacation, see who’s clocked in for the day or analyze an employee’s pay history…

…Today, in IWant’s executive mode executives using Paycom now have the information they need at their fingertips, enabling them to be daily users of our solution without ever having to be trained on the system. Just tell it what you want and IWant delivers, making executives even smarter and more effective. Now I can quickly find any information about my staff available in our single database because we track the entire employee life cycle and have data from applicant tracking, onboarding, Paycom Learning, expenses, benefits, time and attendance, payroll, schedules, surveys and more, all accessible through IWant.

Early feedback has been phenomenal with clients calling this a total game changer.

IWant’s command-driven AI engine will increase usage among non-daily users in our system. And I fully expect IWant to increase satisfaction and client ROI…

…We’ve turned on 10% of our clients so far this week. I would say by the end of this week, we’re at 15% to 20% activated… we do expect to be able to activate all of our clients throughout the remainder of this quarter…

…The more you add, the more functionality you have in these types of systems and enterprise-type systems, it does require a level of training for someone to really to be able to deploy it. Even some employees require some level of training. This removes all of it. And so it’s the biggest innovation that we’ve ever done at our company since its founding just because of the impact that it has.
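
As a purely illustrative sketch of the command-driven pattern described above (a natural-language command goes in, an answer comes back from a single data set), here is a toy example; the employee records, the keyword matching, and the function name are all invented for illustration and have nothing to do with Paycom’s actual implementation.

```python
# Toy illustration of a command-driven query over a single employee data set.
# Everything here (records, keywords, matching) is hypothetical, for illustration only.
employees = [
    {"name": "Ana", "clocked_in": True,  "returns_from_vacation": None},
    {"name": "Ben", "clocked_in": False, "returns_from_vacation": "2025-08-18"},
]

def iwant_style_query(command: str) -> str:
    command = command.lower()
    if "clocked in" in command:
        names = [e["name"] for e in employees if e["clocked_in"]]
        return f"Clocked in today: {', '.join(names) or 'nobody'}"
    if "vacation" in command:
        lines = [f"{e['name']} returns {e['returns_from_vacation']}"
                 for e in employees if e["returns_from_vacation"]]
        return "; ".join(lines) or "No one is on vacation"
    return "Sorry, I can't answer that yet"

print(iwant_style_query("I want to see who's clocked in for the day"))
print(iwant_style_query("I want to know when Ben returns from vacation"))
```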

Paycom’s management thinks that voice-activated, command-driven software is the way of the future

Voice-activated command-driven functionality is the future for all software and Paycom’s future started last week…

…This is a different way to utilize software. I’m unfamiliar with any other SaaS company that has a command-driven navigation throughout their system. And so I do think this is going to be a thing for not only our industry, but any type of software where users are currently navigating.

Paycom’s management expects IWant to drive more full-solution deployment of Paycom’s products across the company’s client base; management thinks IWant will increase Paycom’s customer retention rate; there’s no requirement for a customer to get BETI in order to use IWant; management does not want to directly monetise IWant; management thinks that Paycom’s competitive environment has gotten a lot better with the release of IWant; management thinks IWant will have a noticeable positive impact on Paycom’s new logos, retention, and new product adoption

If I’m asking IWant — if one of our clients is asking IWant for resume information, or if they asked them for prior work history information, and they’re not on our applicant tracking system, they’re not going to have success pulling that information. And so — and one way it will help us is I do think there’ll be more full solution deployments across our client base so that you get access…

…I do think it’s going to, over time, impact our retention as these clients become more engaged in the software and get the full value available to them. IWant removes all the impediments to value. So now you just get, you didn’t have to work for it as much…

…As far as implications for BETI adoption, it’s not required that you’ve implemented BETI to get value out of IWant. I do think that the more Paycom’s products that you use, which would include it, the greater the value you’re going to get from it. And the more questions that we’ll answer for you, the more insight it will give you. And so I do think IWant makes it easier to use all that additional functionality, but there’s not a requirement that someone would have BETI…

…[Question] Given how useful IWant looks and how intuitive it is, why not more directly monetize it on a [indiscernible] basis or a usage basis versus indirectly monetizing it through better sales and driving attach of the modules?

[Answer] I believe that every client should access their data this way, and we’ve had clients that have been with us a long time, and there’s no reason to make them pay to get the value that’s available for them, where I really think that this is just going to take off for us. So I really just don’t think we need to do that plus. I don’t want to spend a lot of time having to go out and sell clients and charge them on things that I can really get them to use the full utilization of the system…

…From a change in the competitive market, I think they all got a lot less competitive a couple of weeks ago, to be honest with you. And this is going to be a thing. I mean you guys kind of see this will be a thing moving forward. I mean our client feedback has been really good. I think that I know competitors will say they have the most automated, the most this, the most that. But if you can’t talk to it, it’s not the most automated, it’s not the most modern…

…[Question] When you talk about IWant taking off for you, where do you think it shows up most? Is that new logos? Is it retention? Is it new product adoption?

[Answer] I think it’s going to start showing up in all those areas. I mean I’m very bullish on it showing up in all those areas. Obviously, new sales new logo app has always been the largest opportunity. We have to increase and drive revenue growth. So I would definitely expect that to be probably the largest bucket of that. But I will also tell you, I expect to have a huge impact on our retention over time as people are using it becoming more acclimated to it. And I also think it’s going to have an impact on our CRRs being able to go out there and be able to talk to someone about if you want to be able to pull data from the complete employee life cycle. And if you want your employees to actually be able to leverage all this, it’s really important that you have these other modules that we have. And so I also think it’s going to make an impact there.

Paycom has to spend more on capital expenditure as it builds AI-powered products, but management believes the capex is front-loaded

We’ve always developed and hosted our own platforms. And as we move into AI, it does require a certain level of spend. So as we look at that, I do believe it to be more transitory in nature. But as we look at that, that’s going to be front-end loaded for us right now, and that’s really what we’re looking at. And a lot of that’s going to be through CapEx.

PayPal (NASDAQ: PYPL)

PayPal’s management sees agentic AI rapidly changing the landscape for commerce; management gave a reminder that in 2025 Q1, PayPal launched the payments industry’s first remote MCP (Model Context Protocol) server to enable AI agent frameworks to integrate with PayPal APIs; major players in AI have been working with PayPal in the last few months to create agentic commerce experiences; management will continue to build PayPal’s capabilities in agentic commerce

Agentic AI is rapidly changing the commerce landscape and PayPal is at the forefront. We were an early mover, launching the first remote MCP servers for commerce earlier this year. Now we’re helping merchants and developers meet the moment as consumers begin to purchase goods and services through AI agents. As you’ve seen through our announcements over the last few months, the major players in AI, including Perplexity, Anthropic and Salesforce are working with PayPal to create powerful new agentic commerce experiences. These new experiences will enable customers to find the right products, check out directly within the AI client, track purchases and much more. We have differentiated KYC and KYB expertise, access to the largest ecosystem of payment-ready wallets with PayPal World, and we’ll continue to build our capabilities in this nascent space, so that we strengthen our position as the go-to partner for agentic commerce.
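
For context on what an MCP integration involves at the wire level, the sketch below shows the general shape of the JSON-RPC messages the protocol uses to list and call tools; the tool name and arguments are invented for illustration and are not PayPal’s actual MCP tool schema.

```python
import json

# Model Context Protocol (MCP) exchanges JSON-RPC 2.0 messages. An agent framework first asks a
# server what tools it exposes ("tools/list"), then invokes one ("tools/call").
# The tool name and arguments below are hypothetical, purely for illustration.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "create_order",                              # hypothetical tool name
        "arguments": {"amount": "19.99", "currency": "USD"}, # hypothetical arguments
    },
}

# These payloads would be sent over whatever transport the remote server supports.
print(json.dumps(call_tool_request, indent=2))
```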

Shopify (NASDAQ: SHOP)

Shopify’s management has often been ahead of the curve in providing solutions for Shopify merchants to meet important changes in the commerce landscape; the latest important change is agentic commerce and management has been building a suite of Shopify products for merchants, ranging from discovery to checkout, to thrive in agentic commerce; management is seeing AI platforms become the new way consumers discover products by having conversations with agents; management launched Catalog in 2025 Q2, which provides real-time access to millions of products from the company’s merchants through a single API (application programming interface) or MCP (model context protocol) server; management recently launched Universal Cart in early-access, and it holds items from multiple stores in one cart within an agentic chat; management launched a new version of Checkout Kit that is being used by Microsoft Copilot and it lets partners embed a merchant’s checkout in an AI agent; Shopify is powering conversation-driven product recommendations for consumers; management has observed that with agentic commerce, it’s not the largest product-company that wins, but the product that best serves the consumer; management has no viewpoint on whether agentic commerce is taking share away from search-based commerce, they just want to get Shopify’s merchants ready to handle any shift; management wants Shopify to be the best partner for AI companies to work with

We were ahead of the curve with social commerce, building early integrations for Instagram and YouTube. We saw the opportunity for commerce to meet culture, so we built a Spotify integration. And more recently, we predicted the rise of shopping in the metaverse with a Roblox integration that’s already growing quickly…

…Shopify has been building infrastructure to power agentic commerce. As AI platforms become the new way people discover products, consumers are not just searching, they’re having conversations with agents to find what they need, but powering seamless shopping across millions of brands is a massive technical challenge. And that’s where Shopify comes in. We’ve built a suite of products that make it easy for AI platforms to bring shopping to their agents from discovery to checkout, and our merchants are front and center…

…We launched Catalog in Q2 to give AI partners and shopping apps real-time access to millions of products from across our global merchant network, all through a single connection available as an API or an MCP server. Shopify catalog simplifies the process for apps and AI agents to search and pull product data so the results are clear, accurate and up to date…

…Let’s also talk about Universal Cart, which literally launched yesterday in early access. Universal Cart holds items from multiple stores all in one spot so that shoppers can easily track all their items they want to buy within the chat. And when it comes time to purchase, we’ve built a new and improved version Checkout Kit, and it’s already being used by Microsoft Copilot, a huge player in the AI space. Checkout Kit lets partners embed the merchant’s checkout right in their agent. Now we’re also giving partners the power to theme the Checkout Kit, so it matches their applications look and feel, creating this seamless experience and they don’t have to worry about payments, taxes or regulations…

…For shoppers, we’re powering conversation-driven product recommendations from all of their favorite brands…

…Catalog, which was launched in Q2, that’s already out there. That really helps agents to search, but also to surface exactly what customers want in seconds. And so it uses these very specialized large language models to categorize to enrich, but also to standardize product data at these massive volumes…

……So this is another surface area where there is a very serious potential where commerce could be taking place, whether it takes some of the market share away from search-based commerce or not, we want to be prepared for that….

…One thing that we do think though is really interesting about agentic commerce, in particular, is it’s not necessarily based on who is the largest company, it’s based on what consumers are looking for…

…The reason that you’re hearing about all these new innovative things we’re doing, whether it’s catalog or Universal Cart or Checkout Kit is because we want to make sure that we become the best partner for these AI companies to work with and these agents to work with.

When a consumer asks an AI agent for the best travel bag, Catalog kicks in and the consumer adds a bag into Universal Cart; the consumer can carry on shopping within the AI agent and complete the checkout later without leaving the chat

When a shopper asks an agent for the best travel bag, it instantly searches Shopify’s catalog and shows the top products, live prices, descriptions and inventory. The shopper adds their choice to the cart. They don’t have to check out right away. They can keep shopping. Everything they want is pulled into a single cart. And when they’re ready, the shopper completes their checkout without ever having to leave the chat. Now this unlocks a whole new kind of commerce.
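
The quote above describes a discovery-to-checkout flow. Here is a minimal, purely hypothetical sketch of that flow in code; the function names (search_catalog, add_to_universal_cart, start_checkout) and the data are illustrative assumptions, not Shopify’s actual Catalog, Universal Cart, or Checkout Kit APIs.

```python
# Hypothetical sketch of the agentic shopping flow described above:
# search a merchant catalog, add items from multiple stores to one cart, then check out in-chat.
# All function names and data are invented for illustration.

def search_catalog(query: str) -> list[dict]:
    # Stand-in for a real-time catalog lookup returning live price and inventory.
    return [{"store": "TravelGearCo", "product": "Carry-on travel bag", "price": 149.00, "in_stock": True}]

def add_to_universal_cart(cart: list[dict], item: dict) -> list[dict]:
    # One cart can hold items from multiple stores.
    cart.append(item)
    return cart

def start_checkout(cart: list[dict]) -> str:
    # Stand-in for handing the cart to a checkout embedded inside the chat.
    total = sum(item["price"] for item in cart)
    return f"Checkout started for {len(cart)} item(s), total ${total:.2f}"

cart: list[dict] = []
for result in search_catalog("best travel bag"):
    if result["in_stock"]:
        add_to_universal_cart(cart, result)
print(start_checkout(cart))
```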

Shopify’s management sees Sidekick as Shopify’s most exciting AI product for merchants; Sidekick has unique data analysis capabilities that deliver insights rapidly; a kids clothing merchant used Sidekick for actionable insights that they used to spend hours searching for; a skin care merchant used Sidekick to know exactly where they were experiencing customer churn; Sidekick has many other capabilities besides unique data analysis

Let’s talk about our most exciting AI product offering for our merchants, Sidekick. Sidekick’s unique ability for data analysis continues to shine through, helping merchants address their toughest business challenges. For example, a merchant in the kids clothing category recently shared with me that Sidekick delivers the kind of actionable insights they used to spend hours searching for. Questions like how can I optimize my inventory to avoid sellouts and boost cash flow? Or why am I seeing more customer churn from subscriptions in the last 3 months? Or even help me compare results from our last 3 BFCM campaigns and suggest improvements for the next one. They are all answered, explained and visualized in seconds…

…A skin care merchant recently told us that in real time, Sidekick helped them pinpoint exactly where they were experiencing customer churn down to the cohort, city and even purchase behavior in seconds…

…As I’ve talked about on previous calls, that’s on top of all the other ways Sidekick helps merchants like writing product descriptions, generating logos and images, streamlining workflows and customizing their storefronts and so much more.

Shopify’s management launched an AI store builder in 2025 Q2 that can create a custom online store in seconds

This quarter, we also launched an AI store builder that can create a custom online store in seconds, literally in seconds from a single phrase. Now all you need is an idea and a description of the product you want to sell like stylish athleisure apparel for women, and Shopify will do the rest.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s management thinks demand for semiconductors will continue to be robust; management thinks that AI’s long-term demand outlook is very positive, given the explosive growth in token volume; management expects CoWoS (chip on wafer on substrate) demand to remain strong, driven by AI; management is trying to narrow the gap between supply and demand for CoWoS; export restrictions for NVIDIA’s H20 chip were recently lifted by the US government and TSMC’s management thinks this is good news, although they have yet to hear from NVIDIA, so TSMC is not ready to increase its forecast for CoWoS growth; the rapid development of AI data centers is driving high demand for TSMC’s leading edge nodes and management has not seen such strong demand for a long time; management is working hard to support the demand

We believe the demand for semiconductors is very fundamental and will continue to be robust. Recent developments are also positive to AI’s long-term demand outlook. The explosive growth in token volume demonstrate increasing AI model usage and adoption, which means more and more computation is needed, leading to more leading-edge silicon demand. We also see AI demand continuing to be strong, including the rising demand from sovereign AI…

…Demand from the AI getting stronger and stronger, if you pay attention to what the four-trillion company the CEO said. And so the megatrend for the AI continue to be strong and so is the CoWoS. And so now we are — again, we are in a mode trying to narrow the gap. I don’t want to use the balance. The last time you guys misunderstood what I said is — sorry it’s bad worded. So I will say we try to narrow the gap…

…[Question] H20 chip shipping to China. I remember 3 months ago, there was another question on this matter, right, meaning that back then, I believe that chip was suspended, but you’re still very confident about your mid-40% CAGR for CoWoS growth in the coming 5 years. Right now China becomes your addressable market again, do you think that mid-40% CAGR target can be revised up?

[Answer] The H20 now is again, according to the trading companies CEO, we did not receive the signal yet. So it’s too early to give you an estimate. But certainly, this is a good news, right? I mean that’s — China is a big market and my customer can still continue to supply the chip to the big market. And it’s a very positive news for them. And in return, it’s a very positive news to TSMC. Whether we are ready to increase our forecast, not yet. Another quarter probably will be more appropriate to answer your question…

…We saw a lot of announcement of the AI data center all over the world and the demand on 3-nanometer, actually on 5-nanometer, 3-nanometer and the future 2-nanometer are very high. And we did not see this kind of strong demand for a long time, but will be enough to support them, I still want to use my word, say that we try very hard to narrow the gap between the demand and the supply. We’re working very hard.

TSMC’s 3rd fab in Arizona will utilise N2 and A16 process technologies and construction has already begun, and management is looking into speeding up the production schedule based on strong AI-related demand from customers; after all of TSMC’s Arizona facilities, including the advanced packaging fabs and R&D center, are completed, 30% of TSMC’s 2nm and more advanced capacity will be located in Arizona, creating an independent leading-edge semiconductor manufacturing cluster in the USA

With a strong collaboration, and support from our leading U.S. customers and the U.S. federal state and city government, we announced our intention to invest a total of USD 165 billion in advanced semiconductor manufacturing in the United States. This expansion includes plans for 6 advanced wafer manufacturing fab in Arizona, 2 advanced packaging fabs and a major R&D center to support the strong multiyear demand from our customers.

Our first fab in Arizona has already successfully entered into high-volume production in 4Q 2024, utilizing N4 process technology with a yield comparable to our fab in Taiwan. The construction of our second fab, which will utilize 3-nanometer process technology is already complete. We are seeing strong interest from our leading U.S. customers and are working on speeding up the volume production schedule by several quarters to support their need. Construction of our third fab, which will utilize 2-nanometer and A16 process technologies has already begun, and we will look into speeding up the production schedule as well based on the strong AI-related demand from our customers. Our fourth fab will utilize N2 and A16 process technology and our fifth and sixth fab will use even more advanced technology. The construction and ramp schedule for those fabs will be based on our customers’ needs. Our expansion plan will enable TSMC to scale up to a giga fab cluster in Arizona to support the needs of our leading-edge customers in smartphone, AI and HPC applications.

We also plan to build 2 new advanced packaging facilities and establish an R&D center to complete the AI supply chain. After completion, around 30% of our 2-nanometer and more advanced capacity will be located in Arizona, creating an independent leading-edge semiconductor manufacturing cluster in the U.S. Thus, TSMC will continue to play a critical and integral role in enabling our customers’ success.

TSMC’s A16 process technology has performance and power benefits over N2P; A16 is best suited for specific HPC (high-performance computing) products, which means it is best suited for AI-related workloads; A16 could be the first node where TSMC’s AI customers adopt the leading edge at launch, when historically only smartphone customers did so, because AI customers require chips with the best power efficiency

We also introduced A16 featuring our best-in-class Super Power Rail or SPR. Compared with N2P, A16 provides a further 8% to 10% speed improvement at the same power or 15% to 20% power improvement at the same speed and additional 7% to 10% chip density gain. A16 is best suited for specific HPC products with complex signal routes and dense power delivery network. Volume production is on track for second half 2026…

……[Question] You highlighted A16, which will be very applicable for high-performance compute. Is that the node where AI and HPC would actually be at par with smartphone as an end market that would drive demand for the most leading-edge nodes?… So far, AI has been N+1, N+2. Is that A16 the first node where AI would move to the leading edge?

[Answer] Usually, the HPC’s customers are always one step behind using N+1 or N+2 technologies. Now because of AI demand is so strong, that’s one thing. But the most important thing is we need some kind of performance, but the power consumption is very, very important. And when we talk about A16, we have another power efficiency improvement close to 20%. That’s a big value for all the AI data center applications. So that help my customer moving faster because of — every time when we talk about the AI data center, if you notice that the first thing they talk about is power supply, electricity, right? So they did not tell you say the power efficiency is very important, but they tell you that we have to build a very big electricity power plant to support the AI data centers. So that tells you how important it is. And TSMC is the technology, by the way. A16 is a further improvement of the N2 node. So it’s not a surprise for TSMC to expect for those people in AI data centers industry, they want to use in A16.

TSMC’s management sees that momentum is still building for on-device AI and edge AI; the increase in the number of units is mild, but die sizes are growing faster, by about 5% to 10%; management thinks another 6 months to a year is needed before an explosion in demand appears

[Question] You talked about on-device AI as a potential future driver. Are you seeing more development on the on-device AI part? Is it better compared to maybe 3, 6 months back?

[Answer] I say that it takes 1 to 2 years for my customer to complete their new design on the product. The momentum is still going. They are still continue to — as time goes by, as I said, the increase on the edge device, the number of the units is actually mild. But then the die size increase. We continue to see that. And the die size increased by about 5% to 10%. And that kind of trend continued. Okay? So you have to wait another probably 6 months or 1 year to see an explosion.

TSMC’s management thinks it’s too early to look at the market opportunity for humanoid robots, but TSMC’s customer (probably alluding to Tesla) thinks humanoid robots will be a massive economic opportunity

[Question] We have learned that humanoid robot started to contribute to TSMC and it is gaining momentum as the next frontier of the AI hardware. How does TSMC evaluate the market size of humanoid robot in the semiconductor and in terms of the potential market TAM, compute and also sensor requirements?

[Answer] It’s too early to say the humanoid robot will play a role in this year. Next year, probably still too early because it’s so complicated. You know that humanoid robot will be most of the time will be used. I think the first one will be used in the medical industry to taking care of the people getting over like me. And I probably someday I need some humanoid robot to help me. But it’s very complicated because it’s not — we are talking about the brand only. Actually, you are talking about a lot of sensor — sensor technology, the image sensor, the pressure sensor, the temperature sensor and all the feedback to the CPU. And so it’s very complicated. And since it’s dealing with human being directly, has to be very, very careful. But then once you start to fly, it was a big, big plus. I talked to one of my customers and he say that the EV car is nothing — is robot will be 10x of that. I’m waiting for that.

Tesla (NASDAQ: TSLA)

Tesla has successfully launched robotaxi in Austin; management has already expanded robotaxi’s service area in Austin since launch, and is looking to expand it further, by up to 10x; management is getting regulatory permission to launch robotaxi in other parts of the US; management thinks it’s likely half of the US population can access robotaxi by the end of 2025; management is being very cautious with the rollout of robotaxi; Tesla’s robotaxi fleet has driven more than 7,000 miles in Austin so far, with only a handful of vehicles; there have been no notable safety-critical incidents for the robotaxi so far; management thinks robotaxi has the potential to bring the cost of transport down to less than $0.30 per mile, partly because the robotaxi vehicle (Cybercab) is designed from the ground up to be optimised for autonomy

We were able to successfully launch robotaxi, so providing our first drives with no one in the driver seat with paying customers in Austin. And as some may have noted, we’ve already expanded our service area in Austin. It’s bigger and longer. And it’s going to get even bigger and longer. We were expecting to really greatly increase the Austin service area to well in excess of what competitors are doing. And that’s hopefully in a week or so, 2 weeks…

…We’re getting the regulatory permission to launch in the Bay Area, Nevada, Arizona, Florida, and a number of other places. So as we get the approvals and we prove out safety, then we will be launching autonomous ride-hailing in most of the country. And I think we’ll probably have autonomous ride-hailing in probably half the population of the U.S. by the end of the year. That’s at least our goal, subject to regulatory approvals…

…We are being very cautious. We don’t want to take any chances…

…We’ll continue to expand in Austin to probably more than 10x our current operating region…

…We have more than 7,000 miles operating in Austin area. It’s just because service is new, we have a handful of vehicles right now, but then we are trying to expand the service in terms of both the area and also the number of vehicles, both in Austin and other locations. So far, there’s no notable safety critical incidents…

…The Cybercab, which is really optimized for autonomy, that, I think, has got probably sub-$0.30 per mile potential over time, maybe $0.25. It’s really — like if you design a car from scratch to be a cost-optimized robotic taxi like Cybercab — like, for example, we’re not trying to make its cornering like incredibly well like a Model 3 would or Model S would or even a Model Y, like it’s got — all of our cars that are driven by people are super fun to drive. They’ve got incredible acceleration, incredible cornering capability. But we’re confident that very few people in a Cybercab want to be hurtling around. So we’ve reduced the top-end speed, which means we can use more efficient tires. We don’t need as much acceleration. We don’t need as much — take breaks to sort of — we want stopping distance, but we’re not expecting it to be heavily used. It’s a gentle ride. Essentially, if you design it for a gentle ride and then you have a much more optimized design point. So that’s why it seems probable we could achieve that. Especially, Optimus is serving, cleaning up the car and doing maintenance and stuff. And doing automatic charging…

There will be a step-change improvement coming soon for the FSD software for US users; management will soon be increasing the parameter count for FSD by nearly 10x; a Tesla car was recently delivered autonomously directly from the factory to the customer’s home; all of Tesla’s vehicles in its current lineup are capable of autonomy and this is a big differentiator for Tesla from the competition; Tesla cars on FSD are 10x safer than cars that are not on FSD; management is seeing a recent uptick in FSD adoption in the USA; since FSD transitioned to version 12, adoption rates have increased; management thinks Tesla vehicles can be delivered autonomously, by default, in the Greater Austin and Bay Area, by end-2025; there has been a 25% increase in the penetration rate of FSD subscriptions since the introduction of version 12 and a reduction in subscription pricing; the vast majority of people are not aware of FSD’s existence, and around half of Tesla owners who could use FSD have never tried it

Within the U.S., as we get confident about safety in different geographic areas, we’ll loosen up on how much somebody has to be laser-focused — to have their eyes laser-focused on the road. That’s been a common complaint. In fact, it does create an odd safety issue where people will sometimes disengage autopilot, then do something, change the radio or maybe look at the phone, drive with their knee and then reengage autopilot, which obviously is less safe than simply keeping autopilot on. So anyway, that experience will improve in the next several weeks. Because of our focus on Austin with no one in the driver seat, the production release of autopilot is actually several months behind what people experience on a robotaxi in Austin. So now we have the robotaxi launched, we’ll be adding back those elements so that there will be a step-change improvement in the autopilot experience for people outside of Austin…

…We’re continuing to make significant improvements just with the software. So we’re expecting to increase the parameter count. Actually, at this point, we think we can probably 10x the — almost 10x the parameter count…

…We rolled out our robotaxi service in Austin and delivered a car completely autonomously directly from the factory to the customer’s home…

…All our vehicles in the lineup are capable of autonomy. This is by far the biggest differentiator between us and the competition…

…We published our vehicle safety report earlier today. And you can see a car on FSD is 10x safer than a car not on FSD. We’ve started seeing an uptick in FSD adoption in North America in recent months, which is a very promising trend. And just to give you a perspective, since the launch of — since we moved to version 12 of FSD, we’ve seen the adoption rates really increase…

…I think we’ll end up delivering cars in the Greater Austin area and the Bay Area by default from the factory by the end of this year. A car will deliver itself to where you are, unless you say you don’t want them…

…Since we have launched version 12 of FSD in North America, we’ve definitely seen a marked improvement in the FSD adoption. And the other thing which we had also done last year is we did bring down the pricing and we’ve made subscription much more affordable. So we have seen a 25% increase since that time, which is an encouraging trend…

…The vast majority of people don’t know it exists. And it’s still like half of Tesla owners who could use it haven’t tried it even once…

…The 25% comment was 25% increase in the penetration rate since we’ve seen the release of V12 and V13 in North America.

Optimus is in version 2.5 at the moment, and Tesla is working on version 3, which management thinks has a great design; management still thinks Optimus will be Tesla’s biggest product; every component of Optimus had to be designed in-house by Tesla; management will train Optimus to use its limb sensors with a neural net, applying the same techniques used for FSD in Tesla’s vehicles; management thinks there will be Optimus 3 prototypes by the end of 2025, and production of the robot will start scaling in 2026; management wants to scale the production of Optimus rapidly, and thinks reaching 1 million units a year within 5 years is achievable; it’s difficult to predict the production ramp of Optimus because there are many parts of its supply chain that are new

We’re in Optimus version 2 right now, sort of 2.5. Optimus 3 is an exquisite design, in my opinion, and will be — as I’ve said many times before, I predict it will be the biggest product ever. It’s a very hard problem to solve. You have to design every part of it from physics first principles. There’s nothing that’s off the shelf that actually works. So you got to design every motor, gearbox, power electronics, control electronics, sensors, the mechanical elements. We also got to train Optimus to use its limb sensors with a neural net. But we’ll be applying the same techniques that we applied for our car, which is essentially a 4-wheel robot. And Optimus is a robot with arms and legs. So put the same principles that apply to optimizing AI inference on the car, apply it to Optimus because they’re both really robots in different forms…

…There’s no significant flaws with the Optimus 3 design. But we are going to retool a bunch of things. So there will probably be prototypes of Optimus 3 end of this year and then scale production next year. We’re going to try to scale Optimus production as fast as it’s humanly possible to do, so we’ll try to get to 1 million units a year as quickly as possible. We think we can get there in less than 5 years, it’s my sort of — I guess. That’s a reasonable aspiration, 1 million units a year, 5 years, it seems like an achievable target…

…The production ramp — it’s always difficult to predict the S curve of your production ramp when something has got an entire — when everything is new because the rate of production will move as fast as the least lucky and least confident element of the entire supply chain as well as internal processes. So the more new stuff that is in a product, the slower the ramp could be because of unexpected supply chain interruptions or mistakes made internally.

Tesla’s management thinks Tesla is the best company in the world at real-world AI; management thinks Tesla has the best inference efficiency, measured by intelligence per gigabyte

It is important to note that Tesla is by far the best in the world at real-world AI. Like a clear proof point for that would be — if you compare, say, Tesla to Waymo, Waymo has got — the car is festooned with God knows how many sensors. And yet, isn’t Google good at AI? Yes, but they’re not good at real-world AI. Thus far, they have — Tesla is actually much better than Google by far and much better than anyone at real-world AI…

…Tesla has the best inference efficiency. Like I think a key figure of merit for AI is what is the intelligence per gigabyte. And people talk about parameters, blah, blah, blah, but I think we’ll — stop talking about parameters and talk about per gigabytes because with the parameters, you can have 4-bit parameters, 8-bit parameters, 16-bit parameters. But the actual constraints in the hardware are how many gigabytes of RAM and how many gigabytes per second can you transfer from RAM. Therefore, it is not a parameter constraint. It is a byte constraint. And Tesla has the highest intelligence density of any AI by far. And I have a lot of insight into this with xAI. xAI is — Grok is the smartest AI overall, but it’s — Grok 4 is a giant beast sort of at the terabyte level. And so kind of important to note, Tesla has the best intelligence density.
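
The “byte constraint” point above is just arithmetic: a model’s weight footprint is its parameter count multiplied by the bytes used per parameter. A quick sketch, where the 7-billion-parameter figure is an arbitrary illustration rather than any actual Tesla or xAI model size:

```python
# Memory footprint of a model's weights = parameter count x bytes per parameter.
# The parameter count below is an arbitrary illustration, not any specific model.
params = 7_000_000_000  # 7B parameters, chosen purely for illustration

for bits in (16, 8, 4):
    gigabytes = params * (bits / 8) / 1e9
    print(f"{bits}-bit parameters: ~{gigabytes:.1f} GB of weights")

# 16-bit: ~14.0 GB, 8-bit: ~7.0 GB, 4-bit: ~3.5 GB. Same parameter count, very different byte
# footprint, which is why Musk frames the constraint as gigabytes of RAM (and GB/s of bandwidth)
# rather than parameter count.
```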

Tesla’s management is targeting Dojo 2, Tesla’s AI-training supercomputer, to be operating at scale sometime in 2026; Tesla’s AI5 chip for inference could be in volume production around end-2026; management is thinking of converging Dojo 3 and AI6 into the same chip; there’s no other AI chip that Tesla’s management is willing to place in Tesla vehicles; management thinks the AI5 chip will be a game changer and it’s so powerful that it has to be nerfed for Tesla’s markets outside of the US because of chip-export restrictions; the AI models that xAI (Elon Musk’s AI startup) is building are very different – much larger – than what Tesla is building

We expect to have Dojo 2 operating at scale sometime next year, with scale being somewhere around 100,000 H100 equivalents. And then AI5, which is really spectacular, too — and I don’t use those words lightly, spectacular, too. The AI5 chip will hopefully be in volume production around the end of next year…

…Thinking about Dojo 3 and the AI6 inference chip, it seems like intuitively, we want to try to find convergence there where it’s basically the same chip, but it’s used where, say, 2 of them in a car or an Optimus and maybe a larger number on a board, kind of 5, 12 on a board or something like that, if you want high-bandwidth communication between the chips, for serving — doing inference serving. That sort of seems like intuitively the sensible way to go…

…There’s still not a chip that exists that we would prefer to put in our car, that is, an AI chip that we would prefer to put in the car over our own, even though it’s been now out for several years. And we’re confident that the AI5 chip will be a profound game changer. In fact, it’s so powerful that we’ll have to nerf it, to some degree, for markets outside of the U.S. because it flows way past the export restrictions. So unless the export restrictions change, we actually will have to nerf our AI5 chip, which is kind of weird. Hopefully, we keep raising the bar on export restrictions…

…xAI is doing like terabyte-scale models and multi-terabyte-scale models. Tesla is 100x smaller models. So one is real-world AI and one is kind of, I guess, artificial superintelligence type of thing.

The Trade Desk (NASDAQ: TTD)

Kokai is powered by Koa, which management thinks is the digital advertising industry’s most advanced AI; AI has been infused throughout Kokai and driven huge performance improvements; Samsung used Kokai to achieve a 43% improvement in reaching its target audience in Europe; Cashrewards used Kokai to achieve a 73% improvement in cost per acquisition in Asia; campaigns that run on Kokai have an average 20% improvement in key KPIs (see Point 28 for more on how AI unlocks the 20% improvement); clients who transitioned the majority of their spend to Kokai are growing their spending on Trade Desk at least 20% faster than those who have not; around 3/4 of all client-spend is now run through Kokai (was 2/3 in 2025 Q1); management expects to transition all clients to Kokai by end-2025; Kokai is able to decide for clients which supply path gives the best ad impression out of the same impression from hundreds of supply path; Kokai helps deliver one of the promises of live sports in a biddable CTV environment, which is the ability for advertisers to target key moments in a game when the audience is most leaned in; Kokai has the industry’s most advanced retail media marketplace (see Point 8 for more on retail media); Koa is able to answer many important questions about digital advertising, such as the value of an impression to a brand, and the price of an inventory-auction; management sees many tasks where AI agents can improve the performance of Kokai because they are always on

Kokai gives advertisers unprecedented power to drive precision and relevance in everything they do, all powered by the industry’s most advanced AI technology, Koa. We have injected AI into so many parts of the system that clients that have adopted Kokai have seen tremendous performance improvements. 

Samsung was able to drive a 43% improvement in reaching its target audience for an omnichannel campaign in Europe. Cashrewards saw a 73% improvement in cost per acquisition for campaigns in Asia using Kokai. In the aggregate, we are seeing more than a 20-point improvement across key KPIs for campaigns running in Kokai. What’s even more encouraging is the clients who have transitioned the majority of their spend on Kokai are increasing their overall spend on The Trade Desk by more than 20% faster than those who have not. This is precisely what we believed was possible when we launched Kokai. Advertisers are getting meaningfully better returns on their ad dollars, and they are doubling down on the open Internet and on us as a result. Around 3/4 of all client spend is now running through Kokai, and we expect all of our clients to be using Kokai by the end of this year…

…We might see the same ad impression from hundreds of supply paths. We don’t want to burden our clients with figuring out which one is best, and it is not efficient to manage that challenge by defaulting to deals. Instead, Kokai does that work for our clients, leveraging AI and data from sources like Sincera, so advertisers can obsess about buying the right impression rather than the delivery mechanism…

…One of the promises of live sports in a biddable CTV environment is that advertisers can target key moments like overtime in an NBA game or the PKs at the end of a soccer game when the audience has most leaned in. Well, now we will be offering this capability with new tooling in Kokai and partnerships with companies such as Disney, Sky TV and Omnicom, which we announced at Cannes a few weeks ago…

…Kokai already has the industry’s most advanced retail media marketplace…

…There are so many specific tasks where AI can massively level up the status quo. What is an impression worth to a specific brand? What is the price that this auction is likely to clear at? What is the best supply chain to maximize transparency and minimize unnecessary costs? These applications of AI are already in our product. Koa is what powers Kokai’s forecasting, which is predicting the reach and performance of a campaign before a single dollar is spent. Distributed AI is foundational in Kokai, and this is only the beginning. There are many tasks where agents can improve performance in part because they’re always on.
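
As a purely illustrative sketch of the supply-path problem described in the quotes above (the same impression reachable through many paths, with the platform choosing for the buyer), consider the toy example below; the paths, fee rates and scoring rule are invented for illustration and are not Kokai’s actual logic.

```python
# Toy illustration: the same ad impression is available through several supply paths,
# each with different intermediary fees. Pick the path that leaves the most working media.
# All numbers and the scoring rule are invented for illustration only.
supply_paths = [
    {"path": "SSP A", "fee_rate": 0.12},
    {"path": "SSP B", "fee_rate": 0.18},
    {"path": "Direct", "fee_rate": 0.05},
]

bid = 10.00  # hypothetical CPM the buyer is willing to pay

def working_media(path: dict, bid: float) -> float:
    # Portion of the bid that actually reaches the publisher after intermediary fees.
    return bid * (1 - path["fee_rate"])

best = max(supply_paths, key=lambda p: working_media(p, bid))
print(f"Chosen path: {best['path']} "
      f"({working_media(best, bid):.2f} of a {bid:.2f} CPM reaches the publisher)")
```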

Deal Desk is one of the major final pieces of Kokai; Deal Desk uses AI forecasting to help advertisers and publishers understand how deals are performing, how they are pacing, whether the right impressions are being delivered and more; Deal Desk helps underperforming deals get back on track; management is seeing very strong appetite for Deal Desk from both advertisers and publishers; Disney is one of the first publishers to use Deal Desk 

One additional innovation that will help accelerate our supply chain work is Deal Desk. It is one of the major final pieces of Kokai, and it is in beta now. Deal Desk leverages AI, especially AI forecasting, to reshape how we think about deals between advertisers and publishers and intermediaries such as SSPs. It helps advertisers and publishers understand how deals are performing, how they are pacing, whether the right impressions are being delivered and so on. But perhaps just as important, when deals are underperforming, Deal Desk will help those deals get back on track, and it will showcase open market and premium Internet alternatives. We are seeing very strong appetite for Deal Desk across both advertisers and publishers…

…Disney is one of the first publishers to lean into Deal Desk.

Trade Desk’s management sees AI having a profound impact on digital advertising; management thinks the quality of AI will depend on the quality of data; management thinks AI-driven buying requires objectivity; Trade Desk does as many transactions in 30 seconds as Visa and Mastercard do in a year combined, and this gives Trade Desk a massive data advantage when it comes to AI

AI is changing everything and creating new opportunities. Quality AI requires quality data, and to trust AI-driven buying long term requires objectivity. A black box that just sells owned and operated media will struggle far beyond what ad networks have struggled with for decades…

…We sit on top of one of the most underappreciated data assets on the Internet and frankly, in the world. And given that we do in 30 seconds as many transactions as Visa and Mastercard do in a year, if you add them together, and that quality data is now feeding an AI engine that helps the biggest buyers in the world sort out the most complex supply chain they’ve ever faced in advertising, that means our data plus AI creates an amazing opportunity for us, for the open Internet and for the biggest brands in the world.

Trade Desk’s management  thinks Kokai’s ability to drive an average 20% improvement in key KPIs for campaigns is merely scratching the surface of what is possible over time; the 20% improvement is driven by AI; the 20% improvement sometimes can be found immediately, and in other cases, it takes time to show up

As it relates to the 20% improvement, let me answer the last part of your question first, which is that I believe that, that is merely scratching the surface of what is possible over time. So the unlock that AI can bring to campaign optimization is really just beginning. Whether that is slow or fast largely depends on how campaigns are constrained today. So while there is more supply than there is demand, there is often a bunch of settings on any individual campaign that make it so it really can’t select from all the options that are the very best to help that perform. So essentially, what we’re creating is a dialogue between man and machine to make it easier for people to see what is constraining their campaign and what would be the unlock…

…Sometimes the 20%, if you will, can be found immediately. And sometimes, it just takes a little bit of a ramp.

Visa (NASDAQ: V)

Visa has a solution, Visa Intelligent Commerce, that enables consumers to shop and buy with AI agents; there are more than 30 partners currently testing Visa Intelligent Commerce in a sandbox; management thinks the first live transaction pilot for Visa Intelligent Commerce will soon happen, with general availability to come later this year

Another way that we are advancing a more digital future is with Visa Intelligent Commerce, which enables consumers to shop and buy with AI agents. It combines a suite of integrated APIs, including AI-ready cards with tokenization and authentication, together with a commercial partner program for AI platforms, enabling developers to deploy Visa’s AI commerce capabilities securely and at scale. We are excited to announce that we have more than 30 partners testing in our live sandbox, and we will soon enter the live transaction pilot phase, with general availability to follow later this year as we see agentic commerce becoming a reality.

Wix (NASDAQ: WIX)

Wix’s management is seeing AI make creation on the internet easier, driving demand for AI-powered creation

We’re seeing a fundamental shift in how people create, discover and interact online. AI-driven advancements are lowering the barriers to digital creation. This is allowing more people to turn their ideas into more sophisticated and higher quality projects with greater speed and ease. Demand for AI-powered online creation is growing faster than ever, as AI is undoubtedly bringing more people online in new ways and rapidly expanding the world of what is possible.

Wix’s management recently built algorithms to help Wix users’ content show up in AI-generated responses; Wix is the first CMS (content management system) to offer AI visibility tools; organic search traffic is declining for websites, so management sees the need for Wix’s customers to appear in AI-generated answers

Recently, we developed proprietary algorithms that help our users’ content surface prominently in AI-generated responses with our generative engine optimization offering. This empowers users to understand, monitor and actively improve how their brand appears in LLM-based search engines. Wix is the first CMS to offer this kind of AI visibility natively, setting a new benchmark for AI search optimization tools within website platforms and demonstrating our first-mover advantage, as we transform our core website building offering to align with the next area of Internet…

…When it comes to organic search traffic, we do see a decline. It’s still very small, but we do see a decline. However, there is a new universe now that people have to think about and work very hard to do that, and this is how to appear and be visible on the LLMs themselves, right? And that is actually at least as complicated as being found on Google. As a result, again, I think we need to supply our customers with the best tool and the best technologies to be visible and to be found on LLMs

Wix acquired BASE44 in June 2025 (Base44 is an AI-powered platform that allows users to build web applications using natural language prompts); management thinks the acquisition of Base44 will unlock a new vibe coding addressable market for Wix; management thinks vibe coding is going to be a major growth driver for Wix in the future; Base44’s business is growing very fast; management thinks there are synergies between Wix and Base44, such as Wix providing hosting capabilities, security frameworks, GDPR compliance, payment processing, marketing automation, and more for Base44 users; management thinks vibe coding will be a complement to Wix’s existing core offerings; management thinks vibe coding is complementary to the drag-and-drop way of building websites rather than replacing it; management believes vibe coding is good for building business applications, but it’s not good for building websites; management does not think vibe coding will replace Wix; management intends to keep Base44 separate from Wix Studio for now; Base44’s product is aimed at non-developers

We are also unlocking completely new markets such as vibe coding… 

…We are making big leaps with our June acquisition of BASE44. BASE44 gives us immediate access to a completely new audience. This includes developers, design and product teams, enterprises building internal tools and DIY users building applications, not just websites…

…Vibe coding, whether through BASE44 or native capabilities, yet to come, is going to be a major growth driver in 2026 and beyond. We’re already seeing the fruits of this investment today. With just a few million of ARR at the time of our acquisition, BASE44 is now on track to generate $40 million to $50 million of ARR by the end of this year. This is a supersonic level of growth in just a matter of weeks, and we don’t expect this momentum to slow as we accelerate towards the $100 million ARR milestone. More importantly, there is opportunity to generate long-term synergy between Wix and BASE44. Wix can provide the robust infrastructure that vibe coding platforms need to scale. This includes hosting capabilities, security framework, GDPR compliance, payments processing, marketing automation and more. BASE44 brings the application layer to empower rapid development of ideas while Wix can supply the business and online platform…

…Long term, I strongly believe vibe coding is a natural complement to our existing core offering…

…[Question] As vibe coding grows the way it’s growing, do you think this model of building replaces the drag-and-drop editor?

[Answer] Well, I think it’s complementary, right? I think that if you look at the history, we’ve done the first version of something that is very similar to vibe coding where you type what you want, and we actually build the website around it. We started in 2016. Of course, we continue to improve it, and we’ll continue to improve it. And I think for websites, it’s very hard just with a text interface to move things around and design them the way you want them. But — and we can actually see already the tools that they do, just vibe coding, already started to add a very weak, but existing visual editing elements. So obviously, the solution in the future will be a combination. 

I think the vibe coding has tremendous potential when it comes to building applications. So that way I think it’s very interesting because a lot of the business logic is extremely hard, and that’s where vibe coding shines. I want to point out again that if you build a website with the standard vibe coding tools today, you actually end up with a website that is very poor in terms of a lot of the quality that is needed or required by law.

For example, you don’t have support for GDPR. You don’t have support for accessibility. You don’t have support for cookie banners. You don’t have support to tons of other things that you want to have. So I think the combination should be that vibe coding allow you to start very quickly and then switch between designs very quickly for websites. And of course, for application, allow it to build the logic of the applications with the text interface. For website, it’s a bit different. It’s very hard to write the text of a full-blown e-commerce package, the prompt…

…You can already see some of the signs of that on Wix itself, right? We accelerated, not decelerated. So in theory, if there was a huge amount of competition out there, it would have decelerated and not accelerated. However, I do think that if you look at the 3 different needs that you have mostly for website builder and application builders, it will either be website building, application building and prototype building, okay? So I think for prototype building and application building, you see tremendous use of vibe coding now, and I expect that to continue to go. And I’m sure that we can enjoy at Wix a lot of the new capabilities of AI in order to enhance our offering, which is something that we’ve always been doing…

…[Question] On Base44, just wondering, as you guys work it into the Wix platform, is this a business that you guys intend to kind of run separately? Is it just going to be part of the core Wix’ ADI studio platform?

[Answer] We’re going to keep it separately, at least for the current future that we can foresee. I believe, again, that those are very different needs. People don’t do the same thing on Base44 that they do on Wix. And I think that vibe coding is a great way to build prototypes and applications and not necessarily the best solution for website…

When you look at something like Windsurf or Cursor, they are aimed at developers, right? So the whole experience is very different than the experience that you have in Base44. Base44 is aimed for mostly people that are not a developer or that are developers and do not want to develop and to do something very quick and then continue to innovate on top of it, again, without coding.

Wix’s management believes that the infusion of AI into websites will make it even harder for people to move off Wix as there are fewer platforms that offer all of the necessary capabilities

[Question] Do the barriers to change websites change as we think about more text website capabilities? How do we think about the kind of the component of churn within websites as new capabilities kind of lower the barrier to creation?

[Answer] Well, it’s always been easy to change a website, right? I think that the content has always been owned by the user, and you can always move between different platforms. I think that the reason that we see so many people staying with Wix is because we offer them a better platform for many of the things that they need. I do believe that the more AI capabilities, advanced AI capabilities exist, it’s actually going to be harder to change a website, not easier. I think that there’s going to be less platforms that offer all those capabilities. As a result, the amount of platform we can change between would actually grow down, and we can already see that. 

Wix’s management does not see the back-end of commercial transactions being done on LLMs themselves any time soon

I don’t see any time in the near future where the back end of the transactions will be done on the LLMs themselves. Let me explain. For example, let’s say that you have a yoga studio and you want — and somebody want to go to an LLM and actually order a class video to join a seminar, right? For that, the LLM has to know the seminar exists, how many seats are there, what is the price of a ticket, what are the tax rules, what is the reimbursement rules, what are the refund rules, what kind of coupons go together, how does it all combine to the membership card that you have, do you need a membership card or you don’t need a membership card. All of those things require very complicated back end, which is a very complicated database and a lot of rules on top of that. I don’t see, and currently, all the signs point to the other direction, that LLM’s providers will not develop those, but actually interface with the existing website.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, Apple, ASML, Coupang, Datadog, Mastercard, MercadoLibre, Meta Platforms, Microsoft, Netflix, Paycom Software, PayPal, Shopify, TSMC, Tesla, The Trade Desk, Visa, and Wix. Holdings are subject to change at any time.

Potential Bargains In A Niche Corner Of The US Stock Market (Part 3)

Earlier this year, I published two articles on investing in thrift conversions in the US stock market titled Potential Bargains In A Niche Corner Of The US Stock Market and Potential Bargains In A Niche Corner Of The US Stock Market (Part 2). In them, I described what thrift conversions are and why fully-converted thrifts and first-step thrift conversions could both be huge potential bargains.

I focused on first-step conversions in Potential Bargains In A Niche Corner Of The US Stock Market (Part 2). In it, I referenced an article from the experienced US-community-bank investor Phil Timyan on Rhinebeck Bancorp and used the same bank to explain first-step thrift conversions, how such thrifts can be acquired, and their potential for generating good returns for shareholders. Timyan’s article briefly mentioned two examples of completed or ongoing acquisitions of first-step thrift conversions, and I will be delving into their details in this article you’re now reading.

Wake Forest Bancshares, which was the owner of the operating bank Wake Forest Federal Savings & Loan Association, is one of them. In January 2024, Wake Forest Bancshares (shortened to WAKE from here on) was acquired by Piedmont Financial Holding Company for US$34 per share in cash. Before the acquisition, Wake Forest Bancorp MHC owned 0.635 million of the 1.070 million WAKE shares that were outstanding in total. Wake Forest Bancorp MHC was a mutual holding company, so it had no shareholders. At the point of the acquisition by Piedmont Financial Holding Company, Wake Forest Bancorp MHC’s 0.635 million WAKE shares were cancelled, which resulted in 100% of the economics of Wake Forest Federal Savings & Loan Association belonging to WAKE’s remaining shareholders.

Based on the latest financials that I could find for WAKE* prior to the acquisition, it had stockholders’ equity of US$26.507 million, which translates to a book value per share of US$61 based on the 0.435 million shares of WAKE remaining after the cancellation of Wake Forest Bancorp MHC’s stake. At a stock price of US$34 for WAKE, Piedmont Financial Holding Company paid a P/B ratio of just 0.56. But public shareholders of WAKE still enjoyed substantial gains, as WAKE’s stock price was significantly lower than US$20 for months prior to the acquisition. If WAKE’s stock price was, say, US$17 before the acquisition, it would have had an optical P/B ratio of 0.69 but a true P/B ratio of just 0.28.

CFSB Bancorp, the owner of the operating bank Colonial Federal Savings Bank, is another instance. CFSB Bancorp (shortened to CFSB from here on) completed its first-step conversion process in January 2022. As of 31 March 2025, CFSB has: 

  • 6.549 million outstanding shares, of which 3.587 million belong to 15 Beach MHC, the mutual holding company – again, with no shareholders – that owns a portion of CFSB. 
  • Stockholders’ equity of US$75.715 million, which gives CFSB a book value per share of US$26 if 15 Beach MHC’s shares are cancelled.

Hometown Financial Group announced on 20 May 2025 that it will be acquiring CFSB for US$14.25 per share, subject to regulatory approval. If the acquisition is successful, it will be a mutually beneficial situation for both Hometown Financial Group and public shareholders of CFSB. Hometown Financial Group will be buying CFSB at an effective P/B ratio of just 0.55, while CFSB’s public shareholders get to earn a healthy return, seeing that the thrift’s stock price was only US$8.19 just prior to the deal’s announcement. For perspective, a US$8.19 stock price for CFSB translates into an optical P/B ratio of 0.70 but a true P/B ratio of just 0.32.
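
For readers who want to trace the arithmetic, the optical-versus-true P/B calculation can be written out in a few lines of Python. This is a minimal sketch using the WAKE and CFSB figures quoted above; the helper function is mine, purely for illustration.

def pb_ratios(price, equity_m, total_shares_m, mhc_shares_m):
    # Optical P/B: price against book value spread across ALL shares outstanding
    optical_pb = price / (equity_m / total_shares_m)
    # True P/B: assumes the MHC's shares get cancelled in an acquisition,
    # so the full equity belongs to the remaining public shares
    true_pb = price / (equity_m / (total_shares_m - mhc_shares_m))
    return optical_pb, true_pb

# WAKE just before the Piedmont deal (figures as quoted above)
print(pb_ratios(price=17.00, equity_m=26.507, total_shares_m=1.070, mhc_shares_m=0.635))
# -> roughly (0.69, 0.28)

# CFSB just before Hometown Financial Group's offer (figures as quoted above)
print(pb_ratios(price=8.19, equity_m=75.715, total_shares_m=6.549, mhc_shares_m=3.587))
# -> roughly (0.71, 0.32), in line with the 0.70 and 0.32 above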

In Potential Bargains In A Niche Corner Of The US Stock Market, I shared the traits I looked out for and they apply to first-step thrift conversions too. In fact, CFSB ticks most of the boxes against my criteria for investing in thrifts:

  • The equity-to-assets ratio: As of 31 March 2025, CFSB has total assets of US$366.2 million and total stockholders’ equity of US$75.715 million, giving it a high equity-to-assets ratio of 20.7%
  • The P/B ratio: Earlier, I mentioned that CFSB’s true P/B ratio was just 0.32 before Hometown Financial Group jumped into the scene
  • Share buybacks: CFSB announced a plan on 5 April 2024 to repurchase up to 0.152 million shares (around 5% of its outstanding shares then); as of the first quarter of 2025, CFSB has bought back more than half of the number of shares under the plan
  • Non-performing assets as a percentage of total assets: CFSB had no non-performing assets in its fiscal years ended 30 June 2024 and 30 June 2023
  • Net income: CFSB was profitable in each of its fiscal years ended 30 June 2022, 30 June 2023, and 30 June 2024, but made a small loss of US$0.16 million in the nine months ended 31 March 2025 (the loss is immaterial against the bank’s total stockholders’ equity)
  • Change in control provisions: CFSB’s CEO, Michael McFarland, can receive up to three times the average of his effective annual compensation in the five years prior to a change in control 
  • Management’s compensation: McFarland controlled 61,549 CFSB shares as of 4 October 2024; the shares were worth slightly more than US$0.5 million at the stock price of US$8.19 before Hometown Financial Group’s involvement, and the value of the shares was also higher than McFarland’s annual compensation of US$0.35 million for the fiscal year ended 30 June 2024; it’s worth noting too that McFarland is already 71 this year, so there is even more incentive for him to cash out from CFSB

I also cautioned in Potential Bargains In A Niche Corner Of The US Stock Market that “not every thrift conversion [referring to standard conversions or thrifts that have completed the second-step of the two-step conversion process] leads to a happy ending.” I think this absolutely stands with first-step thrift conversions too. 

If any of you reading this letter is interested in having deeper conversations about investing in thrifts, please reach out – I would love to engage. 

*Publicly-available historical financials for WAKE are currently scarce and the latest we could find was for the fiscal year ended September 2021 (fiscal 2021). Despite the time-gap between WAKE’s acquisition in January 2024 and the financials we could find, we think the numbers are still relevant. This is because WAKE’s total assets just prior to its acquisition and at the end of fiscal 2021 were US$121 million and US$110.5 million, respectively.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

The Federal Reserve Is Not All-Powerful

I’ve noticed that many financial market participants tend to think of the Federal Reserve, the USA’s central bank, as an all-powerful entity that controls all aspects of the US financial markets. For example:

  • A Reuters journalist wrote in November 2023: “If investors failed to heed the ‘don’t fight the Fed’ mantra this year, they should be doubly cautious about ignoring it again next year….betting against the Fed is risky, no matter where the economic or policy cycles are”
  • Two journalists from Reuters commented in September 2024: “The Federal Reserve cut U.S. short-term borrowing costs on Wednesday… a lower policy rate should translate to cheaper borrowing costs for most kinds of loans”
  • Howard Marks, who is the co-founder of the distressed debt investment firm Oaktree Capital and an investor I respect deeply, shared the following in a June 2025 podcast with The Motley Fool: “You go through these periods of time, like 2017, 18. I would go travel the country… and speak to audiences or clients even. I would get one question. What month will the Fed raise interest rates? That’s all they ask”

The quote from Marks is one of the best at showing just how important the Federal Reserve looks in the eyes of most financial market participants.

But when it comes to interest rates, the truth of the matter is that the Federal Reserve controls only one interest rate, which is the federal funds rate. The federal funds rate is the interest rate that banks charge each other for overnight loans. 

Most types of loans that consumers and businesses interact with are not pegged to the federal funds rate. In addition, many types of corporate bonds and government bonds have interest rates that are set by market forces, not the Federal Reserve.

Figure 1; Source: St Louis Federal Reserve

Figure 1 above shows the monthly percentage change for a few different interest rates over the past two years. There’s the federal funds rate, which is the blue line; there’s the interest rate for 1-year US Treasuries, which is the orange line; there is the interest rate for 10-year US Treasuries, which is the brown line; and lastly, there is the interest rate for 20-year US Treasuries, which is the red line. The monthly percentage changes for these four interest rates do not move in lock-step. In fact, in the green circle, all three Treasuries saw a monthly increase in their interest rates around October 2024 when the federal funds rate declined. This would not have happened if the Federal Reserve was all-powerful.

As for the stock market, the Federal Reserve’s impact on stocks is unclear, outside of severe crises where the central bank can play a role in stabilising asset prices – as it did during the 2008 financial crisis.

Table 1 shows a few time periods in the past where the interest rate on the 3-month Treasury bill had increased significantly. It’s important to note that the 3-month Treasury bill is a close proxy for the federal funds rate, so the time periods when the interest rate on the 3-month Treasury bill increased would also be times when the Federal Reserve had raised interest rates.

Time period | Change in yield of 3-month Treasury bill | S&P 500 annualised return
1954 – 1964 | 1.2% → 4.4% | 21%
1960s | 4% → 8% | 7.7%
1970s | 8% → 12% | 6%
Table 1; Source: Ben Carlson

It turns out that the three time periods of rising interest rates actually saw the S&P 500 produce annualised returns ranging from a decent 6% to an outstanding 21%. So, there have been past episodes where US stocks have done well over the long run even when the Federal Reserve was raising the federal funds rate.

Table 2 shows a few dates in the past when the Federal Reserve cut the federal funds rate and how US stocks performed over the next 12 months. It turns out that US stocks have done very well, and also very poorly, in the 12 months after the Federal Reserve lowered interest rates.

Date of Federal Reserve rate cut | Return of US stocks in the next 12 months after rate cut
October 1957 | 17%
October 1973 | -36%
February 1982 | 32%
September 2007 | -24%
Table 2; Source: Joshua Brown

So the reality of the situation, when it comes to the Federal Reserve, is that it is far from being all-powerful. There are many aspects of the US financial markets where the central bank has little to no control. 

This is also why I spend very little time thinking about or keeping track of what the Federal Reserve is doing when making investment decisions. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

Does Grab Holdings’ Recent Convertible Note Offering Make Sense?

Management teams that can make use of opportune pricing of stocks and debt can greatly increase the returns of shareholders.

Grab Holdings (NASDAQ: GRAB) recently announced that it would be raising cash through a convertible note offering. This came as a surprise to investors as Grab still has lots of cash on its balance sheet.

But if you look at recent history, it is not uncommon to see companies raise cash when the cost of capital is relatively low, even when they have sufficient cash on their balance sheets.

Companies such as Zoom Communications (NASDAQ: ZM) and Tesla (NASDAQ: TSLA) raised cash through secondary offerings in 2020 and 2021 when their stock prices went hyperbolic. 

With stock prices rising to new all-time highs again, we could potentially see more companies taking advantage of favourable market conditions to raise cheap capital. With that in mind, I thought it would be a good time to share some quick thoughts on such capital raises.

Understanding cost of capital

The question of whether a company should raise capital boils down to whether the returns earned on the capital exceed the cost of capital.

But there is a lot of confusion over what the cost of capital is. For debt issuance, the cost of capital is simply the interest that is paid on the debt. For equity issuance, the cost of capital is a lot more complicated.

There are a few schools of thought when it comes to calculating an equity’s cost of capital. I like to keep things simple – and the simplest way to think about it is by assessing the impact on future returns to shareholders on a per share basis. For instance, if a company with 1,000 shares outstanding needs to issue 300 new shares, I treat the cost of that capital as 30% – the new shares as a proportion of the existing share count.

So for a company that is projected to return $1 per share to shareholders for eternity – $1,000 a year in total across its 1,000 shares – the cash raised must grow that total stream by at least 30%, to $1,300 a year (or $1.30 for every original share), just to keep the per-share return from falling after the share count expands by 30%.
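
Here is a minimal sketch, in Python, of the simple dilution arithmetic described above; the 1,000-share company and $1-per-share payout are the hypothetical numbers from the example, and the function name is mine, just for illustration.

def required_growth_in_total_cash(existing_shares, new_shares):
    # Under this simple framework, the "cost" of an equity raise is the new
    # shares as a fraction of the existing share count. Total cash returned
    # to shareholders must grow by at least this fraction just to keep the
    # per-share figure from falling.
    return new_shares / existing_shares

existing, new = 1_000, 300
growth_needed = required_growth_in_total_cash(existing, new)   # 0.30, i.e. 30%

# Check: $1 per share on 1,000 shares is $1,000 in total. Grow that by 30% to
# $1,300 (or $1.30 per original share) and spread it over 1,300 shares --
# the per-share return stays at $1.
total_cash_after = 1_000 * (1 + growth_needed)
print(total_cash_after / (existing + new))   # 1.0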

Does Grab’s issuance make sense?

With this in mind, let’s take a look at Grab’s recent note offering. Last month, Grab announced that it would be raising US$1.5 billion through a zero coupon convertible note offering. 

Convertible note offerings are debt offerings in the sense that the money needs to be paid back. But because they are convertible, these notes can potentially be turned into equity. In Grab’s convertible note offering, the debt can be turned into equity if its stock price trades above the conversion price of US$6.55. If the conversion happens, Grab does not need to pay back cash to the note holders, but the new shares will dilute existing shareholders.

Let’s assume that all these notes will be turned into equity. As of end-2024, Grab had a fully diluted share count of 4.3 billion shares (including warrants, unvested restricted stock units, and options). The note offering, if converted to shares, will result in 229 million new shares being created, which means 5.3% dilution. In other words, for the convertible note offering to make sense for Grab, it needs to use the proceeds to increase its future cash returned to shareholders by at least 5.3% per share. 

This can be done in two ways: (1) grow the cash generated by the company by more than 5.3%, or (2) decrease the share count by more than 5.03% (a 5.03% reduction in share count leads to 5.3% per-share growth – you can do the math).

Grab mentioned that it could potentially use the cash to buy back shares. If it manages to do so at the current share price of around US$4.70, the company will be able to buy back 330 million shares or around 7.3% of its fully diluted share count (this includes the 5.3% dilution from the conversion of the convertible notes). This would be a massive win for the company. In essence, Grab would be able to reduce its share count even after the conversion of the convertible notes to shares, simply by using the cash raised to buy back its shares at current prices.
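
A rough sketch of the Grab arithmetic above, with all inputs taken from the article; note that the exact buyback count depends on the actual repurchase price – at US$4.70 the maths works out to roughly 319 million shares, while a price closer to US$4.55 would give roughly the 330 million shares mentioned above.

# Figures as quoted above (shares in millions, money in US$ millions)
fully_diluted_shares = 4_300
note_proceeds = 1_500
conversion_price = 6.55

new_shares_on_conversion = note_proceeds / conversion_price            # ~229 million shares
dilution = new_shares_on_conversion / fully_diluted_shares             # ~5.3%

# If the proceeds are used to buy back stock at the prevailing price:
buyback_price = 4.70                                                   # "around US$4.70" per the article
shares_bought_back = note_proceeds / buyback_price                     # ~319 million shares
buyback_pct = shares_bought_back / (fully_diluted_shares + new_shares_on_conversion)

print(round(dilution, 3), round(buyback_pct, 3))
# -> 0.053 and ~0.07: the buyback more than offsets the dilution from conversion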

Grab doing a massive buyback may not be far-fetched, as the company simultaneously announced, alongside its note offering, that it is buying back around US$273.5 million of its shares at US$4.68 each from buyers of the notes.

Bottom line

As investors, it is challenging to assess whether equity raises make sense or not. The theory mentioned above may be simple but in most cases, there are many moving parts.

Cash is also fungible, and we usually do not know where the capital went. If the company had not raised cash, which aspect of its expenses or investments would it have cut?

As an investor, instead of trying to assess where the money went, one thing that we can do is to dissect whether management teams are raising capital at opportune times; opportune times are when stock prices are high or when interest rates are low. This is when the cost of capital is the cheapest. Likewise, companies should be buying back stock when stock prices are low and holding back on debt issues when interest rates are high.

As investors, owning a strong business is one thing, but we also need management teams that are savvy with capital allocation and capital raising. Management teams that can make use of opportune pricing of stocks and debt can greatly increase the returns of shareholders.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have a vested interest in Tesla Inc. Holdings are subject to change at any time.

Can The (Micro)Strategy Bitcoin Playbook Last Forever?

Strategy’s amazing financial engineering.

Strategy (recently renamed from MicroStrategy) is one of the top performing companies in the US stock market in recent years. The stock price of the highly controversial “Bitcoin holding company” is up 210% in the last year alone and up a staggering 3,300% in the last five years.

One reason why Strategy has done so well is because it is one of the best at raising cheap capital. How does this work?

Self-fulfilling cycle

Strategy’s Bitcoin playbook is pretty simple and yet quite ingenious. The “Bitcoin holding company” basically takes advantage of its stock price trading at a premium to book value by selling new shares for cash. 

Imagine a company that has a book value of $1 million and has 1 million shares. Each share, hence, has a book value of $1. But let’s say that for some reason, someone is willing to buy the shares at $2 each. The company can take advantage of this and sell new shares to this buyer. Let’s say the company sells 1 million new shares for $2 million. After the share issuance, the company now has 2 million shares outstanding and $3 million in book value. The book value per share is also now magically $1.50. The process can become a self-fulfilling cycle where the company raising shares above book value actually leads to the book value per share increasing.
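
A quick sketch of the toy example above, so you can see where the $1.50 comes from; the numbers are the hypothetical ones from the paragraph, not Strategy’s actual figures.

book_value = 1_000_000     # $1 million of book value
shares = 1_000_000         # 1 million shares -> $1.00 of book value per share
issue_price = 2.00         # buyers are willing to pay 2x book value per share
new_shares = 1_000_000     # sell 1 million new shares at that price

book_value += new_shares * issue_price   # $3 million of book value after the raise
shares += new_shares                     # 2 million shares outstanding
print(book_value / shares)               # 1.5 -- book value per share rises purely from issuing above book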

This is exactly what Strategy has done. Its book value per share has risen by using this simple financial engineering trick. But Strategy then also uses proceeds from its share issuance to buy Bitcoin. If Bitcoin’s price rises, Strategy’s book value per share will increase yet again.

In 2023, Strategy raised US$2.0 billion from issuing shares. In 2024, the company raised an even larger sum of US$16.3 billion from ordinary share sales. As of its last quarterly earnings update for the first quarter of 2025, it has raised another US$5.7 billion through sales of common shares and preferred shares.

But Strategy has gone yet one step further. The company has also raised capital through debt markets to buy more Bitcoin, in effect leveraging up its balance sheet and increasing its exposure to Bitcoin. Strategy’s total debt has increased from US$2.2 billion in 2023 to US$7.2 billion in 2024, and US$8.1 billion in the first quarter of 2025.

What the bulls believe

Investors who are bullish on Strategy believe that this virtuous cycle can continue forever. They believe that Strategy’s premium to book value will exist for many years as there are sufficient buyers of the stock who believe in this self-fulfilling cycle. 

If true, Strategy will become a compounding machine simply by issuing new shares at a premium and juicing its book value per share. There are also the Bitcoin purchases, which add another growth factor for Strategy’s book value per share.

But as I mentioned earlier, there’s also leverage at play with Strategy because the company has used debt to buy more Bitcoin than it could otherwise afford. Strategy’s book value will therefore swing more than Bitcoin’s price. If Bitcoin’s price rises, Strategy’s book value will go up faster. 

When will the party end?

I applaud Strategy’s playbook. But there are some risks that shareholders need to be wary of. The obvious one is if Bitcoin’s price falls. When this happens, Strategy’s book value per share will fall faster because of the leveraged nature of the company’s balance sheet. As of 31 March 2025, Strategy had US$43.5 billion worth of Bitcoin but only US$32.2 billion in equity. If Bitcoin’s price falls by 50%, Strategy’s book value would drop to US$10.5 billion, or roughly a two-thirds fall. For Strategy to enter negative book value territory, Bitcoin will need to fall by around 74% from the 31 March Bitcoin price. 
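
To see how the leverage cuts both ways, here is a minimal sketch of the sensitivity described above, using the 31 March 2025 figures quoted (US$43.5 billion of Bitcoin against US$32.2 billion of equity). The function is purely illustrative: it assumes losses on Bitcoin flow straight through to book value and ignores everything else on the balance sheet.

def equity_after_btc_drop(btc_holdings, equity, drop):
    # Assumes the mark-to-market loss on Bitcoin hits book value one-for-one
    return equity - btc_holdings * drop

btc, equity = 43.5, 32.2   # US$ billions, as of 31 March 2025 per the article

remaining = equity_after_btc_drop(btc, equity, 0.50)
print(remaining, 1 - remaining / equity)
# -> ~10.5 and ~0.68: a 50% fall in Bitcoin wipes out roughly two-thirds of book value

breakeven_drop = equity / btc
print(breakeven_drop)      # ~0.74: a ~74% fall in Bitcoin takes book value to zero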

The other major risk is if stock market participants decide that Strategy’s stock price simply does not deserve to trade at a premium to book value. In other words, buyers of the stock only want to pay book value to buy shares. This throws Strategy’s ability to raise capital cheaply out the window. But it also means that Strategy’s shareholders who first invested at a premium to book value could face a heavy loss.

As of Bitcoin’s price at the time of writing, Strategy’s book value is worth around US$38 billion. But based on the company’s current stock price, its market capitalisation is around US$108 billion, or a 180% premium to its book value. Even if Bitcoin’s price remains stable, a reversion of Strategy’s stock price to no premium over book value would still mean a painful 64% fall in the stock price.
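
And a similar minimal sketch of the premium-reversion arithmetic, again using the round numbers above:

book_value = 38    # US$ billions, approximate
market_cap = 108   # US$ billions, approximate

premium = market_cap / book_value - 1        # ~1.8, i.e. the ~180% premium above
drop_to_book = 1 - book_value / market_cap   # ~0.65, i.e. roughly the 64% fall mentioned above
print(premium, drop_to_book)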

For now, momentum and the current environment suggest that market participants are unlikely to bid down Strategy’s stock price so drastically so soon. But things can change during “risk-off” environments and when market participants become more cautious.

A double whammy for Strategy shareholders can happen if both Bitcoin’s price falls and Strategy’s premium to book value narrows.

The bottom line

Whatever you think about Michael Saylor and his Bitcoin views, he certainly has mastered the dark arts of financial manoeuvring. In most assets, fundamentals drive price. Saylor has managed to turn the script around, making price drive fundamentals.

But this comes with risks. If Strategy’s stock price collapses, the virtuous engine stops running. Saylor seems to be wary of these risks. While Strategy continues to issue shares to buy Bitcoin, Saylor is constantly selling his Strategy shares.

Despite the risks, market participants seem hungry for more of such companies. Besides Strategy, there are now a number of copycats around the world, such as Metaplanet in Japan, which has seen a meteoric rise in its share price this year. Its stock price is at an eye-popping 7 times book value.

For such companies, the party will end when there are no more greater fools to sell to (both for Bitcoin and for new shares of the company). Whether – or more likely, when – that happens is anybody’s guess. Just be careful not to be the last one holding the bag.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

Potential Bargains In A Niche Corner Of The US Stock Market (Part 2)

An optically expensive thrift can look really cheap under the hood.

In February this year, I wrote Potential Bargains In A Niche Corner Of The US Stock Market where I discussed thrift conversions and why they could be potential bargains. In the article, my focus was mostly on thrifts that have undergone the standard conversion, or the second-step of the two-step conversion process. This was because I thought that only such thrifts could be acquired and most of a thrift’s economic value gets unlocked for shareholders when it is acquired.

Earlier today, courtesy of an article from the experienced US-community-bank investor Phil Timyan, and after more investigation, I learnt that thrifts that have undergone just the first-step conversion process can also be acquired in what’s known as a remutualisation. In this article you’re reading now, I will attempt to explain first-step conversions and remutualisations – and their potential for generating good returns for shareholders – by using Rhinebeck Bancorp (NASDAQ: RBKB) as an example. Rhinebeck Bancorp, which from here on will be referred to as RBKB, was also the subject of the Phil Timyan article I mentioned. 

How a first-step conversion works:

  • RBKB is a publicly-listed company that owns 100% of Rhinebeck Bank. Rhinebeck Bank is the operating bank that was established in 1860.
  • 57.1% of RBKB is owned by Rhinebeck Bancorp MHC. Rhinebeck Bancorp MHC is a non-stock corporation, so it has no shareholders. 42.9% of RBKB is owned by public shareholders.
  • In January 2019, Rhinebeck Bank completed its first-step conversion process. During the conversion process, 4,787,315 shares of RBKB were sold. Crucially, 6,345,975 shares were also issued to Rhinebeck Bancorp MHC but these shares were never sold, and Rhinebeck Bancorp MHC has no shareholders, as mentioned earlier.
  • Effectively, the 6,345,975 shares of RBKB held by Rhinebeck Bancorp MHC are not trading and can’t claim the economics of Rhinebeck Bank until Rhinebeck Bancorp MHC chooses to convert from its mutual ownership structure to one where it also has stockholders; this is known as the second-step conversion.

How a remutualisation works:

  • A remutualisation occurs when RBKB is acquired by another mutual bank. What happens at the point of acquisition is that the shares of RBKB owned by Rhinebeck Bancorp MHC get cancelled, so 100% of the economics of Rhinebeck Bank then belongs to shareholders of RBKB, instead of the initial 42.9%.
  • As of 31 March 2025, RBKB has total shares outstanding of 11,094,828. After deducting the 6,345,975 shares of RBKB owned by Rhinebeck Bancorp MHC and 302,784 shares of RBKB from unearned ESOP (employee stock ownership plan) shares, the number of RBKB shares that will be left in a remutualisation is 4,446,069.
  • As of 31 March 2025, RBKB’s stockholders’ equity is US$125.975 million. RBKB’s stock price is US$12.12 as of 12 June 2025. If the acquiring mutual bank decides to pay, say, US$20 per share for RBKB, it only has to cough up US$88.921 million (US$20 multiplied by 4,446,069 shares) for US$125.975 million in stockholders’ equity. So both the acquiring mutual bank and existing shareholders of RBKB win big.
  • On the surface, RBKB has a book value per share of US$11.35 (US$125.975 million divided by 11,094,828 shares), which gives it a P/B ratio of 1.06. But if the remutualisation math is used, RBKB’s true book value per share is US$28.33 (US$125.975 million divided by 4,446,069 shares), which gives it a P/B ratio of just 0.43 (see the worked sketch after this list).
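
For completeness, here is a minimal sketch of the remutualisation arithmetic laid out in the list above; all figures are the ones quoted, and the US$20-per-share acquisition price is the same hypothetical used earlier.

total_shares = 11_094_828
mhc_shares = 6_345_975           # held by Rhinebeck Bancorp MHC, cancelled in a remutualisation
unearned_esop_shares = 302_784
equity = 125_975_000             # stockholders' equity in US$, as of 31 March 2025
price = 12.12                    # stock price as of 12 June 2025

remaining_shares = total_shares - mhc_shares - unearned_esop_shares   # 4,446,069

optical_pb = price / (equity / total_shares)       # ~1.06
true_pb = price / (equity / remaining_shares)      # ~0.43
cost_to_acquirer_at_20 = 20 * remaining_shares     # ~US$88.9 million for ~US$126 million of equity

print(optical_pb, true_pb, cost_to_acquirer_at_20)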

Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.