All articles

What We’re Reading (Week Ending 24 August 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 24 August 2025:

1. The deep transformation of China’s consumption structure: a complex picture beyond “downshifting” – Robert Wu and Dongfan Ma

From a macro and traditional industry perspective, China’s consumer market does show signs of weakness:

Growth slowdown: Over the past three years, the annualized growth of total retail sales of consumer goods has fallen significantly compared to the ~10% seen between 2010 and 2020, highlighting weaker macro consumption momentum.

Pressure on traditional sectors: In 2024, the catering industry in Beijing and Shanghai saw profit declines of 80–90%. Hotel average daily rates kept falling, and airline ticket prices dropped consistently between 2024–2025. Together, these figures underpin the concerns about sluggish consumption.

Yet, another set of data paints a very different picture.

Entertainment boom: The concert economy remains in an extremely overheated state, with shows across genres selling out instantly — acting as the “contrarian” force in the consumption market.

Non-essential consumption growth: Products like Pop Mart’s designer toys or Lao Pu Gold’s jewelry — both considered non-essentials — are seeing robust growth, defying the conventional wisdom that such categories should be hit hardest during consumption downgrades.

Segment upgrades: Pet-related spending remains strong, with treats and premium pet food turning into hotspots, suggesting stable or even rising purchasing power among certain groups.

Lower-tier market vitality: Categories like household goods in third- and fourth-tier cities continue to show resilient demand for quality.

This contradiction makes clear that a single pessimistic lens is no longer sufficient to describe the reality of China’s consumer market. At its core lies a deeper structural transformation…

…What China’s consumer market is undergoing is not a simple story of expansion or contraction, but a profound structural transformation characterized by multiple forces:

Channel: Social and livestream commerce is displacing offline and traditional e-commerce.

Supply: Flexible chains and rapid product iteration are overtaking traditional production models.

Market: Downward tier integration is reshaping consumption layers.

Corporate Strategy: A shift from “ad-driven + distributor networks” to “private domain operations + digital reach.”

If we focus only on traditional offline retail, distributor-based brands, or oversupplied catering chains, the picture appears bleak — a “consumption winter.” But if we turn to social commerce (already nearly 10% of retail, still growing at 30% annually), new brand growth, and supply chain-enabled rapid iteration, we see instead a “consumption spring.”

2. AI x Commerce – Justine Moore and Alex Rampell

The internet’s most profitable business model has always been simple: running search ads on monetizable queries. When you search “how many protons are in a cesium atom,” Google makes no money. When you search “best tennis racket,” it prints cash…

…Google could lose 95% of search volume and still grow revenue – as long as it retains the valuable queries, which are largely commerce related…

…The nature of an impulse buy means that you won’t be doing research in advance or consulting with an expert, so there’s limited opportunity for AI agents to play a role. However, the algorithms that guide your attention will continue to improve, enabling advertisers to target you with the right product at the right time. And it will be easier for brands to create hyper-personalized marketing materials that draw you in…

…You probably already have brands and SKUs that you know and love when it comes to everyday essentials, so an AI research agent won’t be particularly helpful unless you’re adding a new product to the lineup (like if you get a dog and need to pick their food). But AI should play a role when it comes to sourcing and purchasing items. For example, if you regularly get the same laundry detergent, your AI agent could monitor and buy on your behalf if the price dips below a certain level…

…Lifestyle purchases – when you’re purchasing items that you don’t buy regularly (especially if they’re a bit more spendy, like a luxury handbag), you’re likely going to want to evaluate various options to make sure you’re picking the best one. But researching and aggregating the choices, and ranking them across various criteria, is time-consuming. Imagine deputizing an AI agent to do the grunt work for you and come back with a recommendation that explains why a specific SKU is the perfect choice for you based on your past purchases, what it knows about your preferences, and even things like your body type and what colors look best with your eyes…

…Functional purchases – these items are important because they are typically (1) a meaningful financial investment, and (2) a product you’ll use every day, likely over several years. This means that you want to feel very confident that the product meets your needs and will hold up over time. You may feel comfortable purchasing a product that your AI research agent recommends. But you’ll likely want to have a more in-depth conversation with a subject-matter expert (an AI “consultant”) about different options…

… Life purchases – there are only a few “life purchases” you’ll make (e.g. a home, car, wedding, or college education). These are expensive and meaningful, so you’ll likely spend months – if not years – evaluating options. You’ll do your own research online, but there’s a decent chance that you’ll also speak with experts and try out the options (e.g. touring wedding venues or homes, test driving a car, visiting a college). It’s hard to imagine people fully outsourcing these decisions to AI…

…As agents become the new interface for buying, both platforms are well-positioned — Amazon with end-to-end control, Shopify perhaps more so with distributed ownership across millions of stores and growing consumer touchpoints. It doesn’t matter if a consumer search starts with Google or ChatGPT if the destination merchant is hosted by Shopify…

…AI’s potential is first and foremost bottlenecked by content, not compute. Most product reviews are noisy, gamed, or overly polarized. Agents need access to structured, trustworthy, real-time feedback. Let’s say you’re looking for the “best” blender. In a perfect world, your AI would order every blender, test them all for a week in your kitchen (with your home robot!), decide which one you like best, and then send the rest back. But today AI just summarizes the web, and cannot turn shilled junk into honest analysis…

…The best AI-native experiences will capture data directly in the user journey that contributes to better recommendations. Imagine an AI agent that infers information about what to recommend to you (or others) from data that’s not typically present on product description pages or reviews. This could be direct (e.g. next time you open the app, it asks you a few specific questions about your last purchase), or more passive (e.g. it looks at how long you linger on a specific item or feature and maybe even asks follow-ups if you’re hesitating).

Until these foundations are in place, LLMs will remain clever summarizers — not true commercial agents. But this is happening fast.

3. Why zero-click panic is overblown – Mike Elgan

The idea is that when you want information, you go to an AI chatbot like GPT-5, ask a question, get an answer, and move on with your life without clicking through to the websites that monetize with advertising or subscriptions. And even when you “Google it,” Google’s direct answers, knowledge panels, and AI overviews often give users a zero-click answer.

The crisis: AI companies are getting rich by giving away other people’s content for free. Every time someone gets an answer from a chatbot instead of visiting a website, that’s money being transferred from content creators to AI companies. The media ecosystem will be strangled by this “zero-click crisis.”

But the trend might not turn out as bad as some think.

The reason is that while most people might turn out to be zero-clickers, a minority of people are likely to keep on clicking…

…Most importantly for people who care about quality information — AI provides a narrow, generic and average worldview.

In other words, on that last point, getting your information about the world from AI will make you average, not exceptional. And some people will want to be exceptional.

Many, but certainly not most, information-seeking people will continue to click through to original sources, seek out original sources, follow original sources, pay for original sources and patronize advertising…

…Let’s take a look at the advertising that everyone points to when gnashing teeth about the zero-click crisis.

Well over 99% of Google users who click through to content websites never buy anything from the ads they see on those sites.

Far less than 1% of Google users (between 0.3% and 0.6%) do sometimes buy something after seeing an ad.

That tiny minority pays for all the content that every Google user sees. More than 99% get a free ride, subsidized by the people who buy the ads…

…For the past century, advertiser-supported content has been paid for entirely by a small minority of people with the means and desire to buy the advertised products.

I suspect our zero-click future will look a lot like our most-people-don’t-buy-the-advertised-product past.

In other words, the zero-click people are the same majority of people who used to click through to ad-supported or subscription-supported content sites and then never buy or subscribe to anything.

If a non-contributor stays on the ChatGPT website and never pays for the content, or if a non-contributor clicks through to an ad-supported website and never buys the advertised products — what’s the difference?

Content supporters — people who buy ads and especially people who pay subscriptions — will continue to support quality content with their wallets.

The minority who want exceptional, rather than average, information will have to seek out that exceptional information, subscribe to it and (as people who buy things) will be seen as extremely valuable to advertisers.

4. Bitcoin treasuries – Oliver Sung

In case you’ve missed the financial news, Bitcoin treasuries (some call them “digital asset treasuries,” or “DATs”; others dub them “crypto holdcos”; still others abbreviate them to “BTCOs”) are simply companies that buy Bitcoin and park it on their balance sheet. Any company could do this, but the point is that a pure-play Bitcoin treasury shouldn’t have much of an operating business attached, making the entity a vehicle to “invest in” (or rather “hold”) Bitcoin through a corporate wrapper…

…The whale of Bitcoin treasuries is Strategy—formerly MicroStrategy—led by Michael Saylor. He pioneered the model, having now amassed 630k Bitcoin (as of Q2 2025), or 3% of all Bitcoin ever to be in existence…

…With help from ZIRP and a volatile stock, Saylor discovered he could issue 0% (or close to it) convertible bonds to fund further Bitcoin purchases. If you ask why Saylor wouldn’t just issue equity instead, the answer is that the convertibles were issued at a premium and wouldn’t dilute the share count before they came in-the-money. That’s when he found his masterstroke. To keep raising money to fuel his newly discovered perpetual motion machine by marketing newly issued Strategy securities at premiums to the share price, he, ironically, had to borrow a term from conventional finance which Bitcoin certainly lacked: yield.

“Bitcoin yield” is not to be confused with the yield earned on your cash flow-generating assets. No, Bitcoin yield is the period-to-period percentage change in the ratio between the company’s Bitcoin holdings and its diluted shares. In other words, it’s the change in Bitcoin per share. But it’s a smokescreen—another way to say that new investors fund “yield” for old investors. The yield that reaches old investors comes straight from newcomers’ pockets. Because the “Ponzi” label has been thrown around Bitcoin forever, this is easily brushed off by Bitcoiners. But here, the label fits, just not on Bitcoin itself. Ponzi, in this case, describes how Strategy and other Bitcoin treasuries operate: publicly boasting Bitcoin yield as shareholder value, while obfuscating the fact that the yield stems not from any operations but from new investors hoping to get a high Bitcoin yield themselves…
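(An editorial aside, not from the article: the “Bitcoin yield” metric described above is easy to compute yourself. A minimal sketch, using purely hypothetical figures for a treasury that grows its Bitcoin stack faster than it dilutes its share count:)

```python
def bitcoin_yield(btc_start, shares_start, btc_end, shares_end):
    """Period-over-period percentage change in Bitcoin per diluted share.

    Note: this measures dilution-adjusted accumulation, not cash flow.
    All inputs used below are hypothetical, purely for illustration.
    """
    per_share_start = btc_start / shares_start  # BTC per share at period start
    per_share_end = btc_end / shares_end        # BTC per share at period end
    return (per_share_end / per_share_start - 1) * 100

# 100k BTC / 200m shares = 0.0005 BTC per share at the start;
# 150k BTC / 250m shares = 0.0006 BTC per share at the end.
print(round(bitcoin_yield(100_000, 200_000_000, 150_000, 250_000_000), 1))  # 20.0
```

A positive number here says nothing about where the value came from, which is the author’s point: the per-share gain is funded by whoever bought the newly issued shares.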

…Many of the zombie companies, persuaded by the promise of easy money and good ol’ wealth transfer, pulled it off—perhaps to their own surprise—enriching insiders in the process.

Metaplanet, formerly known as Red Planet Japan, is a former budget hotel operator in Japan turned aggressive Bitcoin treasury. Since pivoting in 2024, it has expanded its share count by some 400%, with the market cap reaching almost $7bn at its peak from $13mn, currently priced at 2x its Bitcoin holdings. Metaplanet counts Eric Trump, the son of the US president, as strategic adviser.

While The Smarter Web Company, a web designer, isn’t the only UK-listed company to do this (there are about a dozen), it certainly was a pioneer. Shortly after its shares were admitted to trading on the Aquis Stock Exchange in April this year, the company announced a 10-year Bitcoin treasury plan. From a market cap of GBP3.7mn at the time of listing, shares of SWC quickly exploded past GBP1bn (now sitting at GBP550mn).

And unsurprisingly, the POTUS jumped on the bandwagon too. After minting a monumental amount of money and legalized bribes from launching $Trump coin three days before inauguration, the President wasn’t done squeezing crypto. Trump Media recently raised $2.4bn to buy Bitcoin, modelled after Saylor’s blueprint (and personally recommended to the Trumps by Saylor himself), which followed the President’s establishment of a US Strategic Bitcoin Reserve that currently holds 200k Bitcoins. The President owns 40% of Trump Media with an implied market value of ~$2bn…

…As for Saylor’s Bitcoin treasury valuation model illustrated above (Bitcoin NAV + Bitcoin $ gain x multiple), it’s absurd. The premise—that the appreciation of Bitcoin should be treated like recurring profit and capitalized accordingly—is lunacy. It’s like saying that because you expect the $500k house you live in (let’s say it’s your entire net worth) to appreciate to $550k next year, your net worth is not $500k, and not $550k, but a whole $2mn with a 30x multiple on the appreciation. It doesn’t surprise me that Saylor believes this nonsense, since he, having missed Econ 101 as evidenced by this clip, thinks that cash, which is priced at the risk-free rate, carries a cost of capital of 15% (then proceeding to botch basic math by saying 12% of $325bn is $32bn).

I wish the world would allocate its precious resources and brainpower to more productive pockets of the economy than what we discussed today. I know that’s wishful thinking. Stuff like this happens all the time, but speculation has clearly raised the stakes since the pandemic. The writing on the wall hasn’t dried yet. Saylor et al’s vision for Bitcoin treasuries is that the scheme runs far enough that Bitcoin approaches “hyperbitcoinization”: the point where sponsors believe the price stabilizes (some peg it at $10-20mn per coin). The pools of fiat are so vast that the sponsors aren’t anywhere close to running out of convincing new buyers of these products, and so are willing to floor the pedal to make these things more ingrained in the financial system. (I think you know what that implies.) It sure helps keep the scheme going when people—usually Gen Zs—run around hyping Strategy as an “infinite money glitch” and Saylor himself calling it a “quadratically reflexive engineered instrument”. (You can’t make this stuff up.)

The whole thing raises an odd paradox: How are all of the Bitcoin treasuries going to buy more Bitcoin if every big holder of Bitcoin can cash in bigger by launching their own Bitcoin treasuries? If there’s a massive wealth transfer to be taken simply by moving Bitcoins onto public markets, then everyone with a pile of Bitcoins will want that premium for themselves.

Now for what you’ve been waiting for: how do you bank on this? The answer is, I won’t. I wouldn’t short any type of absurdity in a million years—not even with long-dated options…

…And if you’re already long Strategy or any shiny new Bitcoin treasury, the best action you can take is to copy what the insiders and promoters are doing: sell.

“On the one hand, we’ve capitalized on the most innovative technology and capital asset in the history of mankind. On the other hand, we’re possibly the most misunderstood and undervalued stock in the US and potentially in the world.”—Michael Saylor

5. Constraints, and challenges of value capture in the AI race – Abdullah Al-Rezwan

Another bit that I thought was interesting in the Acquired interview was their point about how they think about creating leverage through AI:

…we always like to say the way we think about an AI first company is we’re building a machine to produce happy customers…And I think that’s important because it’s like if something comes off the assembly line of machine that’s malformed, you don’t just fix that thing. You say what part of the machine broke to produce the malformed item.

And so just as it relates to, for example software engineering, we have this philosophy like when cursor, which is the most popular co-pilot for software engineers to like write code and now having some sort of more agentic flavors of it, if it produces incorrect code, our philosophy is don’t fix the code, fix the context that cursor had that produced the bad code. And I think that’s a big difference when you’re trying to make like a company driven by AI. So essentially, if you just fix the code, you’re not adding leverage. If you go back and say, what context did this coding AI not have that had it had it, it would have produced the correct code. So I don’t want to pretend we’re perfect here, but that’s the way we think about it. I really like thinking of our business as a machine…

…The Information pointed out yesterday how the token price seems to be stable in recent months compared to the last couple of years. The subscription model just doesn’t seem appropriate in many of the use cases. For example, this Reddit post points out how one dev basically consumed $50k worth of tokens while paying $200 for the monthly subscription. This is, of course, a business model problem…

…It may be tempting to think it won’t be that difficult to capture value over time. While I have no doubt that SOTA model developers will get better at it, there is a long list of revolutionary technologies that had a hard time capturing value. Let me share a personal example. Recently, I opted for a “ChatGPT Pro” subscription ($200/month) just to see if there is a noticeable difference between the Plus and Pro subscriptions. One of my family members asked me to run a query that had important career implications for her. After I sent ChatGPT Pro’s response, she was really glad and told me that it would probably have cost her $1,000 to get such information if not for ChatGPT. At first, I thought even $200/month could be considered incredible value if it can solve at least one such problem every couple of months. The only problem is that when I ran the same query on Gemini 2.5 Pro, for which I pay $20/month, it also came up with a very, very good response. ChatGPT Pro was slightly better in some marginal details, but now I was starting to feel $200/month wasn’t worth it for those marginal improvements.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google and Gemini), Amazon, and Shopify. Holdings are subject to change at any time.

An Investing Legend’s Thoughts on Investing in Thrift Conversions

Notes from an investing legend’s book on how we can research and invest in thrift conversions 

Earlier this year, I had written a number of articles on The Good Investors on investing in thrift conversions (see here, here, and here). An important part of my learning process on thrifts came from investing legend Peter Lynch, who is revered for his track record when managing the Fidelity Magellan Fund. From 1977 to 1990, Lynch generated an annualised return of 29%, nearly double that of the S&P 500 over the same period.

Although his investing book One Up on Wall Street is well-known and highly popular, Lynch actually wrote a few other lesser-known books on investing including Beating The Street. The latter is the source of what I learnt about investing in thrift conversions from Lynch. 

Because Beating The Street is not widely known, and because I find studying thrift conversions as potential investments to be a fascinating activity, I thought it would be useful to share my notes from Beating The Street on how Lynch thought about investing in thrift conversions. 

What’s shown between the two horizontal lines below, besides the section-headers, are direct quotes from Lynch’s book. Do note that the emphases are mine.


On investing in S&Ls (Savings & Loans institutions)

Prior to the 1980s, Golden West was one of the few S&Ls that was a public company. Then in a rash of stock offerings in mid-decade, hundreds of the formerly private thrifts, operating as “mutual savings banks,” went public more or less simultaneously. I acquired many of these for the Magellan Fund. I was so selective in my purchases during this period that anything that had the word “first” or “trust” in it, I bought. Once, I confessed to the Barron’s panel that I’d invested in 135 of the 145 thrifts whose prospectuses had landed on my desk. The response from Abelson was typical: “What happened to the others?”

There are two explanations for my indiscriminate and sometimes fatal attraction for S&Ls. The first is that my fund was so big and they were so small that to get enough nourishment out of them I had to consume large quantities, like the whales who are forced to survive on plankton. The second is the unique way that S&Ls came public, which made them an automatic bargain from the start. (To learn how you, too, can get something for nothing, turn to page 215.)

On acquisition statistics for S&Ls

The experts at SNL Securities in Charlottesville, Virginia, who keep tabs on all the thrifts in existence, recently provided me with an update on what happened to the 464 S&Ls that came public after 1982. Ninety-nine of these were subsequently taken over by bigger banks and S&Ls, usually at a large profit to the shareholders. (The watershed example is the Morris County [New Jersey] Savings Bank. The initial offering price in 1983 was $10.75 a share, and Morris was bought out three years later for $65.) Sixty-five of the publicly traded S&Ls have failed, usually at a total loss to the shareholders. (I know this from personal experience because I owned several in this category.) That leaves 300 still in business.

On how to study an S&L

If you decide to pursue the subject of undervalued S&Ls – which to me is much more exciting than any trip to Hawaii – you’d be well advised to seek out the latest copy of The Thrift Digest at the local library or to borrow one from your broker. I borrowed mine from Fidelity. 

I spent so much time with my nose in this book before dinner, during dinner, and after dinner that Carolyn began to refer to it as the Old Testament. The Old Testament in hand, I devised my own S&L scorecard, listing 145 of the strongest institutions by state and jotting down the following key details. This, in a nutshell, is everything you need to know about an S&L:

Current price

Self-explanatory.

Initial offering price

When an S&L is selling below the price at which it came public, it’s a sign that the stock may be undervalued. Other factors, of course, must be considered.

Equity-to-assets ratio 

The most important number of all. Measures financial strength and “survivability.” The higher the E/A, the better. E/As have an incredible range, from as low as 1 or 2 (candidates for the scrap heap) to as high as 20 (four times stronger than J.P. Morgan). An E/A of 5.5 to 6 is average, but below 5, you’re in the danger zone of ailing thrifts. 

Before I invest in any S&L, I like to see that its E/A ratio is at least 7.5. This is not only for disaster protection, but also because an S&L with a high E/A ratio makes an attractive takeover candidate. This excess equity gives it excess lending capacity that a larger bank or S&L might want to put to use.

Dividend

Many S&Ls pay better-than-average dividends. When one of them meets all the other criteria and also has a high yield, it’s a plus.

Book Value

Most of the assets of a bank or an S&L are in its loans. Once you assure yourself that an S&L has avoided high-risk lending (see below), you can begin to have confidence that its book value, as reported in the financial statements, is an accurate reflection of the institution’s true worth. A lot of the most profitable Jimmy Stewarts are selling at well below book value today.

Price-Earnings ratio

As with any stock, the lower this number, the better. Some S&Ls with annual growth rates of 15 percent a year have p/e ratios of 7 or 8, based on the prior 12 months’ earnings. This is very promising, especially in the light of the fact that overall p/e of the S&P500 was 23 when I did this research. 

High-Risk Real-Estate Assets

These are the common problem areas, especially commercial loans and construction loans, that have been the ruination of so many S&Ls. When high-risk assets exceed 5-10 percent, I begin to get nervous. All else being equal, I prefer to invest in an S&L that has a small percentage of its assets in the high-risk category. Since it’s impossible for the casual investor to analyse a commercial lending portfolio from afar, the safest course is to avoid investing in S&Ls that made such loans.

Even without The Thrift Digest, it’s possible to do your own calculation of high-risk assets. Check the annual report for the dollar value of all construction and commercial real estate lending, listed under “assets.” Then find the dollar value of all outstanding loans. Divide the latter into the former, and you’ll arrive at a good approximation of the high-risk percentage.
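(An editorial aside, not from Lynch’s book: his back-of-the-envelope calculation above can be sketched in a few lines, with hypothetical figures for a made-up S&L:)

```python
def high_risk_percentage(construction_loans, commercial_re_loans, total_loans):
    """Approximate share of an S&L's loan book in high-risk assets:
    (construction + commercial real estate lending) / total outstanding loans.
    The figures used below are invented, purely for illustration.
    """
    return (construction_loans + commercial_re_loans) / total_loans * 100

# Hypothetical S&L: $30mn of construction lending plus $45mn of
# commercial real estate lending, out of $900mn in total loans.
pct = high_risk_percentage(30_000_000, 45_000_000, 900_000_000)
print(f"{pct:.1f}%")  # 8.3%
```

By Lynch’s rule of thumb in the passage above, a reading in the 5-10 percent range is where he begins to get nervous.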

90-Day Non-performing assets

These are the loans that have already defaulted. What you want to see here is a very low number, preferably less than 2 percent of the S&L’s total assets. Also you’d like this number to be falling and not rising. An extra couple of percentage points’ worth of bad loans can wipe out an S&L’s entire equity.

Real Estate Owned

This is property on which the S&L has already foreclosed. The REO category, as it’s called, is an index of yesterday’s problems, because whatever shows up here has been written off as a loss on the books. 

Since this financial “hit” has already been taken, a high percentage of real estate owned isn’t as worrisome as a high percentage of non-performing assets. But it’s worrisome when REO is on the rise. 

S&Ls aren’t in the real-estate business, and the last thing they want is to repossess more condos or office parks that are expensive to maintain and hard to sell. In fact, where there’s a lot of REO, you have to assume that the S&L is having trouble getting rid of it.

Why larger banks want to acquire S&Ls

An S&L with excess equity, excess lending capacity, and a loyal depositor base is a prize that commercial banks covet. Commercial banks can take in deposits only in their home states (this rule is changing, to some degree), but they can lend money anywhere. This is what makes taking over an S&L a very tempting proposition.

If I were the Bank of Boston, for instance, I’d be sending love notes to Home Port Bancorp of Nantucket, Massachusetts. Home Port has a 20 percent equity-to-assets ratio, making it perhaps the strongest financial institution in the modern world. It also has a captive island market with crusty New England depositors, who aren’t about to change their banking habits and run off to a new-fangled money-market fund. 

Maybe the Bank of Boston doesn’t want to make loans on Nantucket, but once it acquires Home Port’s equity and its deposit base, it can use the excess lending capacity to make loans in Boston, or anywhere else around the country.

During 1987-90, a terrible period for S&Ls, more than 100 were acquired by larger institutions that saw the same sort of potential the Bank of Boston ought to see in Home Port. Banks and thrifts will continue to consolidate at a rapid rate, and with good reason. Currently, the U.S. has more than 7,000 banks, thrifts, and other assorted deposit takers – which is about 6,500 too many.

How an S&L’s business model works

An S&L needs loyal depositors to keep money in their savings and checking accounts. It needs to make money on that money by lending it out – but not to borrowers who default. And it needs low operating expenses in order to maximise its profits. Bankers like to live on threes and sixes: borrow money at 3, lend money at 6, play golf at 3.

Examples of S&Ls that Lynch recommended

GLACIER BANCORP

I’d opened my Glacier Bancorp file. The stock was selling for $12 a share, a 60 percent gain over the year before. This was a 12-15 percent grower selling at 10 times earnings – not a spectacular bargain, but there wasn’t much risk in it either.

Glacier Bancorp used to be called the First Federal Savings and Loan of Kalispell, and I wish they’d kept the old name. It sounded antiquated and parochial, which to me is always reassuring. I’d rather have antiquated and parochial than trendy and sophisticated, which usually means a company is desperate to improve its image.

I like companies that stick to business and let the images take care of themselves. There is this unfortunate tendency among financial institutions to take the “bank” out of their names and replace it with “bancorp.” I know what a bank is, but “bancorp” makes me nervous.

Anyway, whoever answered the phone at Glacier Bancorp in Kalispell told me they were having a retirement party for one of the officers, but they’d inform chairman Charles Mercord that I called. They must have dragged him out of the party, because a few minutes later Mercord called me back.

Asking a president or a CEO about a company’s earnings is a ticklish proposition. You’re not going to get anywhere by blurting out, “What are you going to earn next year?” First you have to establish rapport. We chatted about the mountains. I said that the entire Lynch family had been to all the Western states to see the national parks, and that we loved Montana…

…Then I began to slip in more serious investment-type questions, such as “What’s the population out there?” and “What’s the elevation of the town?”, leading up to the more substantive “Are you adding any new branches or standing pat with what you’ve got?” I was trying to get a sense of the mood at Glacier.

“Anything unusual in the third quarter?” I continued. “You made thirty-eight cents, I see.” It’s best to pepper these inquiries with bits of information, so that your source thinks you’ve done your homework. 

The mood at Glacier Bancorp was upbeat. Non-performing loans were almost nonexistent. In all of 1991, this bancorp had had to write off only $16,000 in bad loans. It had raised its dividend for the 15th year in a row. It had just bought out two other thrifts with wonderful names: the First National Banks of Whitefish and Eureka.

This is how many of the stronger S&Ls are going to speed up growth in the next few years. They are acquiring the valuable deposits of troubled and defunct S&Ls. Glacier can fold the First National of Whitefish into its own system and make more loans with the additional Whitefish deposits. It can also do more administrative cost-cutting, since two S&Ls together can live more cheaply than one. 

“You’re building up a nice asset here,” I said, introducing the Whitefish subject. “I’m sure it’s a good move, accountingwise.” My only worry was that Glacier may have overpaid for its acquisition, a topic I approached obliquely. “I assume you had to pay way over book value for this,” I said, inviting Glacier’s president to admit the worst. But no, Glacier hadn’t overpaid.

We talked about Glacier’s commercial loans, which at 9.2 percent of the portfolio were the sole troubling statistic I’d gleaned from The Thrift Digest. If this had been a New England thrift, that high number would have scared me away, but Montana wasn’t Massachusetts. The Glacier president assured me that his S&L wasn’t lending money to developers of empty office towers or unsalable vacation condos. Glacier’s commercial loans were mostly in multifamily housing, which was in great demand. Montana’s population was growing. Every year, thousands of escapees from California smog and taxes were taking up residence in the Big Sky, small-government state.

SOVEREIGN BANCORP 

In the November 25, 1991, issue of Barron’s, I came across an article entitled “Hometown Lender to the Well-Heeled.” It described how Sovereign Bancorp serves a wealthy element in southeastern Pennsylvania from its headquarters in Reading. I liked the part about how a bell goes off in a Sovereign branch every time a mortgage loan is approved.

This was not the only time in my career I was introduced to a stock by a weekly magazine. I checked the annual and the quarterlies. In every important category, Sovereign got good marks. Nonperforming loans were 1 percent of assets. Commercial and construction lending was 4 percent. Sovereign had set aside sufficient reserves to cover 100 percent of its nonperformers.

Sovereign had acquired two New Jersey thrifts from the Resolution Trust Corporation, which boosted its deposits and eventually would boost its earnings. To review some of the details, I called Jay Sidhu, Sovereign’s Indian-born president. We chatted about Bombay and Madras, which I’d visited the year before on a charity trip.

When we got around to serious subjects, Mr. Sidhu said that management was determined to “grow” the business by at least 12 percent a year. Meanwhile, based on the latest analysts’ estimates for 1992, the stock was selling at a p/e ratio of 8. 

The only negative detail was that Sovereign had sold an additional 2.5 million shares in 1991. We’ve already discussed how it’s usually a good thing when a company buys back its shares, as long as it can afford to do so. Conversely, it’s a bad thing when a company increases the number of shares. This has the same result as a government printing more money: it cheapens the currency.

At least Sovereign wasn’t squandering the proceeds from its stock sale. It was using the proceeds to buy more troubled thrifts from the Resolution Trust.

Mr. Sidhu’s model for success, I was pleased to discover, was Golden West. Basically, he wanted to copy the penurious Sandlers by increasing loan originations and cutting expenses. With the payroll that Sovereign inherited from its recent acquisitions, the overhead was 2.25 percent, much higher than Golden West’s 1 percent, but Mr. Sidhu seemed devoted to bringing that down. The fact that he owned 4 percent of the stock gave him a considerable incentive to carry out this plan.

Instead of holding on to the mortgages as many thrifts do, Sovereign had decided to specialize in making loans and then selling them to packagers such as Fannie Mae or Freddie Mac. This strategy enabled Sovereign to get its money back quickly and plow it into new mortgages, profiting from the points and other upfront fees. The risk of owning the mortgages was transferred to others.

Even so, Sovereign was being very conservative in the kinds of loans it would approve. It was devoted to residential mortgages. It hadn’t made a single commercial loan since 1989. Its average residential loan didn’t exceed 69 percent of the value of the property on which the loan was made. The few bad loans were thoroughly investigated so that Sovereign could learn who or what went wrong and not repeat its mistakes.

As often happens in my conversations with companies, I learned something new from Sidhu. He described a sneaky method by which unscrupulous banks and S&Ls camouflage their problem loans. If a developer, say, asks to borrow $1 million for a commercial project, the bank offers him $1.2 million on the basis of an inflated appraisal. The extra $200,000 is held in reserve by the bank. If the developer defaults on the loan, the bank can use this extra money to cover the developer’s payments. That way, what has turned into a bad loan can still be carried on the books as a good loan—at least temporarily.

I don’t know how widespread this practice has become, but if Sidhu is right, it’s another reason to avoid investing in banks and S&Ls with large portfolios of commercial real estate.

Why thrift conversions are such good bargains

Imagine buying a house and then discovering that the former owners have cashed your check for the down payment and left the money in an envelope in a kitchen drawer, along with a note that reads: “Keep this, it belonged to you in the first place.” You’ve got the house and it hasn’t cost you a thing. 

This is the sort of pleasant surprise that awaits investors who buy shares in any S&L that goes public for the first time. And since 1,178 S&Ls have yet to take this step, there will be many more chances for investors to be surprised.

I learned about the hidden cash-in-the-drawer rebate early in my career at Magellan. This explains why I bought shares in almost every S&L and mutual savings bank (another name for the same sort of institution) that appeared on my Quotron.

Traditionally, the local S&L or mutual savings bank has no shareholders. It is owned cooperatively by all the depositors, in the same way that rural electric utilities are organized as co-ops and owned by all the customers. The net worth of a mutual savings bank, which may have been built up over 100 years, belongs to everyone who has a savings account or a checking account in one of the branches. 

As long as the mutual form of ownership is maintained, the thousands of depositors get nothing for their stake in the enterprise. That and $1.50 will get them a glass of mineral water.

When the mutual savings bank comes to Wall Street and sells stock in a public offering, a fascinating thing happens. First of all, the S&L directors who put the deal together and the buyers of the stock are on the same side of the table. The directors themselves will buy shares. You can find out how many in the offering circular that accompanies the deal. 

How do directors price a stock that they themselves are going to buy? Low. 

Depositors as well as directors will be given the opportunity to buy shares at the initial offering price. The interesting thing about this is that every dollar that’s raised in the offering, minus the underwriting fees, will end up back in the S&L’s vault. 

This is not what happens when other kinds of companies go public. In those cases, a sizable chunk of the money is carted away by the founders and original shareholders, who then become millionaires and buy palazzi in Italy or castles in Spain. But in this case, since the mutual savings bank is owned by the depositors, it would be inconvenient to divvy up the proceeds from a stock sale to thousands of sellers who also happen to be buyers. Instead, the money is returned to the institution, in total, to become part of the S&L’s equity. 

Say your local thrift had $10 million in book value before it went public. Then it sold $10 million worth of stock in the offering—1 million shares at $10 apiece. When this $10 million from the stock sale returns to the vault, the book value of this company has just doubled. A company with a $20 book value is now selling for $10 a share.
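The cash-in-the-drawer arithmetic above can be checked in a few lines. The figures are the passage's own hypothetical thrift; underwriting fees are ignored, as in the example:

```python
# The thrift-conversion arithmetic from the passage, as a sketch.
book_value_before = 10_000_000   # thrift's equity before going public
shares_sold = 1_000_000
offer_price = 10                 # dollars per share

proceeds = shares_sold * offer_price             # $10m raised in the IPO...
book_value_after = book_value_before + proceeds  # ...returns to the vault

book_value_per_share = book_value_after / shares_sold
print(book_value_per_share)  # 20.0: a $20 book value selling for $10 a share
```

The oddity is that the buyers' own money becomes part of the equity they are buying, which is why the shares start out trading at half of book value before anything else happens.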

This doesn’t guarantee that what you’re getting for free will necessarily turn out to be a good thing. You could be getting a Jimmy Stewart S&L, or it could be a lemon S&L with inept management that’s losing money and eventually will lose all its equity and go bankrupt. Even in this can’t-lose situation, you ought to investigate the S&L before you invest in it.

The next time you pass a mutual savings bank or an S&L that’s still cooperatively owned, think about stopping in and establishing an account. That way, you’ll be guaranteed a chance to buy shares at the initial offering price. Of course, you can always wait until after the offering to buy your shares on the open market, and you’ll still be getting a bargain. 

But don’t wait too long. Wall Street seems to be catching on to the cash-in-the-drawer trick, and the increase in stock prices of mutual savings banks and savings and loans that have converted to public ownership since 1991 is nothing short of remarkable. It’s been a bonanza almost anywhere you look, from one end of the country to the other.

In 1991, 16 mutual thrifts and savings banks came public. Two were taken over at more than four times the offering price, and of the remaining 14, the worst is up 87 percent in value. All the rest have doubled or better, and there are four triples, one 7-bagger, and one 10-bagger. Imagine making 10 times your money in 32 months by investing in Magna Bancorp, Inc., of Hattiesburg, Mississippi. 

In 1992, another 42 mutual thrifts came public. The only loser in this group has been First FS&LA of San Bernardino, and it’s down a modest 7.5 percent. All the rest have advanced—38 of them by 50 percent or more, and 23 by 100 percent or more. These gains have come in 20 months! 

Table 13-1. MUTUAL THRIFT AND SAVINGS BANK IPOs COMPLETED IN 1991

Table 13-2. THE 10 BEST AND 10 WORST RESULTS: MUTUAL THRIFT AND SAVINGS BANK IPOs COMPLETED IN 1992 

Table 13-3. THE 10 BEST AND 10 WORST PERFORMING MUTUAL THRIFT AND SAVINGS BANK IPOs COMPLETED IN 1993 THROUGH 9/30/93

There are two quadruples in the group—Mutual Savings Bank of Bay City, Michigan, and United Postal Bancorp in St. Louis. A portfolio of the five top performers taken together has produced a 285 percent return. Even a person who was unlucky enough to have chosen the five worst-performing thrifts that came public in 1992 has made 31 percent on his money through September 1993. Investing in the five worst has beaten the S&P 500 and most of the equity mutual funds. 

Through the first nine months of 1993, another 34 mutual thrifts have come public, and in this shorter period the worst is up 5 percent, 26 are up 30 percent or better, 20 are up 40 percent or better, and 9 are up 50 percent or better. (All the above numbers were provided by the skillful crunchers at SNL Securities.) 

From Asheboro, North Carolina, to Ipswich, Massachusetts, on the East Coast; from Pasadena, California, to Everett, Washington, on the West; from Stillwater, Oklahoma, to Kankakee, Illinois, to Rosenberg, Texas, in the middle, neighborhood S&Ls have been the best investments that hundreds of thousands of people have ever made. This is the ultimate example of how individual investors can succeed by ignoring companies that are widely held by institutions and by investigating what’s close to home. What could be closer to home than the local thrift where you keep your safety deposit box and your checking account? 

An account in any one of these thrifts or savings banks entitles you to participate in the IPO if and when it happens, but you certainly aren’t required to do so. You can go to the meeting where the deal is explained to potential shareholders, see whether the insiders are buying the shares, read the prospectus to find out the book value, the p/e ratio, what the earnings are, the percentage of nonperforming assets, the quality of the loan portfolio, etc., and thus get all the information you need to make an informed decision. It’s an opportunity to take a close look at a local company—and it’s free. If you don’t like the deal, the organization, or the management, you simply don’t invest.

There are still 1,372 mutual savings banks that have not yet come public. Check to see whether any of these are located in your area. By opening a savings account in any of them, you’ll have the right to participate in the IPO when it happens. Sit back and await developments. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 17 August 2025)


Here are the articles for the week ending 17 August 2025:

1. Beyond the “Search” Box – Abdullah Al-Rezwan

Semrush tracked 260 billion rows of clickstream data on U.S. desktop users who began using ChatGPT in Q1 2025, comparing their Google Search sessions in the 90 days before and after adoption to a control group that never used ChatGPT. This setup allowed them to isolate whether ChatGPT adoption caused changes in traditional search behavior compared to natural trends over time.

The overall result of the study shows that after adopting ChatGPT, users increased their Google Search sessions from 10.5 to 12.6 per week while also adding about 5 ChatGPT sessions weekly, suggesting ChatGPT use complemented rather than replaced Google Search.

Semrush also shared cohort-level data by month, all of which show that despite sustained ChatGPT usage after adoption, Google Search usage remained resilient.

One may wonder whether using ChatGPT for longer than a year eventually changes your Google usage. That doesn’t quite seem to be the case yet either: a 500-day Semrush study of users who began using ChatGPT in January 2024 found that Google Search activity remained steady even as ChatGPT usage stayed consistent after adoption.

2. Podcast: Amazon’s advertising strategy (with Adam Epstein) (Transcript here) – Eric Benjamin Seufert and Adam Epstein

Adam Epstein: I’ve been working in ad tech for seven-plus years, and people have been decrying the end of the agency for as long as I can remember, first through the use of simple, automated software. But AI adds a new layer of complexity to everything, and complexity is particularly good for agencies. I’m not sure who coined the phrase, but they basically said agencies are cockroaches. And I believe that to probably be the case.

At least for the next three to five years, I actually don’t even think agentic AI will be a headwind for agencies—I think it will be a tailwind on two dimensions. First, the most scaled agencies in the world have been able to scale themselves not through data and technology, but through scaled processes, standard operating procedures, training collateral and docs to create expertise and uniform level of service across all their clients and team members. Well, guess what’s really good for training an LLM? Literally all of those documents.

Every agency I’ve talked to for seven years comes to me and says, “How is your off-the-shelf ad tech different than the off-the-shelf ad tech that you’re going to sell to the next agency tomorrow?” And the answer has always been it hasn’t been any different—it’s been exactly the same. But with agentic AI, you now no longer buy software—you hire software. You hire software, and you train software, and you develop a new teammate that you train and mold exactly as you would a new team member.

Agencies want this level of customization. They’re actually in a perfect position to do so because they’ve invested in collateral that allows them to train an LLM in a very efficient manner. We’re just catalyzing them and giving them the tools to do exactly that.

The other interesting thing with services businesses is that you typically need to linearly scale headcount as you scale customer and revenue growth. But I believe agentic AI will bring in a world for media agencies in particular where they’ll be able to exponentially increase customers and revenue while maintaining a flat headcount. Agentic AI will take all the operational work that teams are currently running and allow these agencies to scale in ways they’ve never been able to scale before. It’ll be a massive tailwind from an operating margin perspective, and I think people will actually start to value agencies on a different multiple than what they have in the past, given the fundamentally different margin profile.

3. Robotaxis & AI | Uncharted Territories Magazine | Tech Update Summer 2025 – Tomas Pueyo

Waymo is destroying the competition. It has surpassed Lyft in rides in SF, and is on track to surpass Uber within 8 months or so.

And this is with Waymo taking 2x longer and costing 70% more than Lyft! That’s how much better the Waymo experience is: people really care about not having a driver!…

…Uber said ride-hailing could grow by 25x if its price dropped under $1/mile…

…Uber couldn’t make it happen. But in Austin, Tesla now charges $1 per mile.

As a comparison, ride hail customers are currently paying nearly $3/mile.

If Tesla maintains this type of pricing, it won’t make sense for drivers to continue their job, and Uber and Lyft will crash.

8% of US workers are professional drivers…

…I didn’t realize how important this is until I read this article:

Something like 40,000 people die in traffic accidents in the US every year. The number is over one million per year globally.

There are over 5 million non-fatal injuries from car crashes each year that require medical attention in the US.

In 2010, the total cost of these events was $836 billion, or ~$2,700 per American per year.

But these costs are just the tip of the iceberg because most of the cost of transportation, at >$2 trillion per year, comes from adjusting to human inadequacies.

Wait, what? Car accidents are costing trillions to the world economy? How?

  • A big share of the materials in cars is there for safety. Without accidents, you could strip them out and save that cost. Austin Vernon calculates we could make cars 10x lighter.
  • Automobile shapes today trade off safety against aerodynamics. Without the safety constraint, cars could become more aerodynamic and move faster at lower cost.
  • Cheaper transportation massively improves the economy.
  • Lower weights on roads mean less road wear, and hence lower maintenance costs.

4. What If Money Expired? – Jacob Baynham

More than a century ago, a wild-eyed, vegetarian, free love-promoting German entrepreneur and self-taught economist named Silvio Gesell proposed a radical reformation of the monetary system as we know it. He wanted to make money that decays over time. Our present money, he explained, is an insufficient means of exchange. A man with a pocketful of money does not possess wealth equivalent to that of a man with a sack of produce, even if the market agrees the produce is worth the money.

“Only money that goes out of date like a newspaper, rots like potatoes, rusts like iron, evaporates like ether,” Gesell wrote in his seminal work, “The Natural Economic Order,” published in 1915, “is capable of standing the test as an instrument for the exchange of potatoes, newspapers, iron and ether.”…

…Gesell believed that the most-rewarded impulse in our present economy is to give as little as possible and to receive as much as possible, in every transaction. In doing so, he thought, we grow materially, morally and socially poorer. “The exploitation of our neighbor’s need, mutual plundering conducted with all the wiles of salesmanship, is the foundation of our economic life,” he lamented.

To correct these economic and social ills, Gesell recommended we change the nature of money so it better reflects the goods for which it is exchanged. “We must make money worse as a commodity if we wish to make it better as a medium of exchange,” he wrote.

To achieve this, he invented a form of expiring money called Freigeld, or Free Money. (Free because it would be freed from hoarding and interest.) The theory worked like this: A $100 bill of Freigeld would have 52 dated boxes on the back, where the holder must affix a 10-cent stamp every week for the bill to still be worth $100. If you kept the bill for an entire year, you would have to affix 52 stamps to the back of it — at a cost of $5.20 — for the bill to still be worth $100. Thus, the bill would depreciate 5.2% annually at the expense of its holder(s). (The value of and rate at which to apply the stamps could be fine-tuned if necessary.)
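The Freigeld depreciation schedule described above reduces to simple arithmetic, verified here as a sketch using only the figures given in the passage:

```python
# Gesell's Freigeld stamp arithmetic from the passage.
face_value = 100.00   # dollars, the bill's face value
stamp_cost = 0.10     # dollars, one stamp affixed per week
weeks_held = 52       # one dated box per week for a full year

annual_stamp_cost = stamp_cost * weeks_held         # $5.20 a year
depreciation_rate = annual_stamp_cost / face_value  # 5.2% per year
print(round(annual_stamp_cost, 2), round(depreciation_rate, 3))
```

Since the holder pays for the stamps, the 5.2% per year acts like a negative interest rate on cash, which is exactly the hoarding penalty Gesell was after.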

This system would work the opposite way ours does today, where money held over time increases in value as it gathers interest. In Gesell’s system, the stamps would be an individual cost and the revenue they created would be a public gain, reducing the amount of additional taxes a government would need to collect and enabling it to support those unable to work.

Money could be deposited in a bank, whereby it would retain its value because the bank would be responsible for the stamps. To avoid paying for the stamps, the bank would be incentivized to loan the money, passing on the holding expense to others. In Gesell’s vision, banks would loan so freely that their interest rates would eventually fall to zero, and they would collect only a small risk premium and an administration fee.

With the use of this stamp scrip currency, the full productive power of the economy would be unleashed. Capital would be accessible to everyone. A Currency Office, meanwhile, would maintain price stability by monitoring the amount of money in circulation. If prices go up, the office would destroy money. When prices fall, it would print more.

In this economy, money would circulate with all the velocity of a game of hot potato. There would be no more “unearned income” of money lenders getting rich on interest. Instead, an individual’s economic success would be tied directly to the quality of their work and the strength of their ideas. Gesell imagined this would create a Darwinian natural selection in the economy: “Free competition would favor the efficient and lead to their increased propagation.”…

…Although many dismissed Gesell as an anarchistic heretic, his ideas were embraced by major economists of the day. In his book “The General Theory of Employment, Interest and Money,” John Maynard Keynes devoted five pages to Gesell, calling him a “strange and unduly neglected prophet.” He argued the idea behind a stamp scrip was sound. “I believe that the future will learn more from the spirit of Gesell than from that of Marx,” Keynes wrote…

…That very year, the owner of a dormant coal mine near the Bavarian town of Schwanenkirchen tried in vain to get a loan from a bank to begin mining again. Stymied by the representatives of traditional finance, he went to the Wära Exchange Association, a group that was created to put Gesell’s ideas into practice. The group agreed to give the mine owner 50,000 Wära, a depreciating currency equivalent to 50,000 Reichsmarks.

The mine owner then gathered the unemployed miners and asked if they would go back to work, not for legal tender, but for this new currency. They agreed that any money was better than no money. The mine owner purchased food, clothing and household goods from warehouses that were already using the Wära currency. The miners, now back digging coal, used their wages to buy these goods from the mine owner. Soon, other businesses in town wanted to use the currency to benefit from the sudden influx of cash. Because the currency depreciated at 1% per month, everyone was eager to part with it and it circulated rapidly throughout the economy. Soon, in whole districts, the Wära currency replaced the Reichsmark, which alarmed the bigger banks and the government. Finally, the Reichsbank ended the experiment by banning the currency.

Two years later, in the Austrian town of Wörgl, Gesell’s ideas came to life again. In 1932, Wörgl’s mayor, a socialist locomotive engineer, desperately wanted to get his constituents back to work. A supporter of Gesell’s ideas, he devised a plan where Austrian schillings would be replaced with Work Certificates that depreciated at 1% per month.

The mayor hired townspeople, paid in Work Certificates, to improve roads, install streetlights and build a concrete bridge. Work Certificates circulated rapidly from merchants to tenants, to landlords, to saving accounts. People paid their taxes early to avoid paying for stamps. In one year, the Work Certificates traded hands 463 times, creating goods and services worth almost 15 million schillings. By contrast, the ordinary schilling was exchanged only 21 times.
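As a back-of-envelope check on the Wörgl figures: dividing the value of goods and services created by the turnover count implies how small the circulating stock of certificates must have been. The implied stock and the velocity ratio below are my inferences from the passage's numbers, not figures given in the text:

```python
# Back-of-envelope velocity arithmetic from the Wörgl passage.
transactions_value = 15_000_000   # schillings of goods/services in one year
certificate_turnover = 463        # times each Work Certificate changed hands
schilling_turnover = 21           # times an ordinary schilling changed hands

# Implied circulating stock: total transaction value / turnover.
implied_stock = transactions_value / certificate_turnover
print(round(implied_stock))   # roughly 32,000 schillings' worth

# Supporting the same spending with ordinary schillings would have
# required roughly 22x as much money in circulation.
print(round(certificate_turnover / schilling_turnover, 1))
```

This is the heart of the "miracle": a tiny stock of depreciating money, spent quickly, did the work of a much larger stock of hoarded money.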

The experiment was called the Miracle of Wörgl. Vienna newspapers took notice. The government of France expressed interest. Two hundred mayors in Austria devised similar programs in their communities. Again, however, the financial authorities grew uneasy, arguing that these local stamp scrips undermined the currency-issuing power of the national bank. By the fall of 1933, the Austrian Supreme Court had prohibited their circulation.

Gesellian experiments happened in the U.S. and Canada too, inspired by the Great Depression. In 1932, in Hawarden, Iowa, a limited amount of stamp scrip was put into circulation to pay for public works. The same year, a similar program was deployed in Anaheim, California. In 1933, Oregon attempted to print $80 million in stamp scrip, but the U.S. Treasury stopped it. The government of Premier William “Bible Bill” Aberhart in Alberta, Canada, introduced depreciating “prosperity certificates” (which people quickly renamed “velocity dollars”) in 1936.

That decade in the U.S., 37 cities, eight counties and some business groups attempted to issue almost 100 different types of stamp scrip. All these experiments were local, small in scope and short-lived. In 1933, the economist Irving Fisher, who called himself “a humble student of Silvio Gesell,” tried to persuade President Franklin Delano Roosevelt to adopt a national stamp scrip, and even convinced an Alabama senator to introduce a bill that would have issued up to $1 billion in depreciating currency. It never came to a vote. Roosevelt, who was preparing to take the country off the gold standard, worried that any further economic innovations would be too destabilizing…

…Gesell’s idea for depreciating money “runs counter to anything we’ve ever learned about the desirable properties of money,” David Andolfatto, a former senior vice president of the Federal Reserve Bank of St. Louis and the chair of the economics department at the University of Miami, told me recently. “Why on Earth would you ever want money to have that property?”

But during the economic downturn that followed the Covid pandemic, Andolfatto recognized the potential value of an expiring money in times of crisis. The relief checks that the government sent out to U.S. households didn’t immediately have their desired effect of stimulating the economy because many people saved the money rather than spend it. This is the paradox of thrift, Andolfatto explained. What’s good for the individual is bad for the whole.

“Well, what if we gave them the money with a time fuse?” Andolfatto remembers wondering. “You’re giving them the money and saying look, if you don’t spend it in a period of time, it’s going to evaporate.”

In a paper he wrote for the Fed in 2020, Andolfatto called this concept “hot money credits.” He pointed out that when the economy goes into a funk, there is a “coordination failure” where people stop spending and others stop earning. Withholding money in times of fear creates a self-fulfilling prophecy by further stifling the economy. So, could Gesell’s idea of expiring money be the cure?

“The desirability depends on the diagnosis,” Andolfatto told me. “It’s like a doctor administering a drug to a healthy person and a sick person. You administer the drug, and it has some side effects. If the person is healthy, you’re not going to make them any better. You might make them even worse. If they’re sick, it might make them better.”

The problem, Andolfatto said, is that issuing pandemic checks with an expiration date would hurt those with little savings. People with money in the bank would use their expiring money just like normal money. People with no savings, on the other hand, might find that expiring money forced them to spend and did little to stabilize their financial situations…

…Keynes believed Gesell’s expiring money amounted to “half a theory” — it failed, Keynes argued, to account for people’s preference for liquid assets, of which money is just one example. “Money as a medium of exchange has to also be a store of value,” Willem Buiter, a former global chief economist at Citigroup, told me. In a Gesellian economy, he continued, the affluent would simply store their wealth in another form — gold bars, perhaps, or boats — which could be converted into money when they wanted to transact.

Buiter doesn’t believe Gesellian money can really address serious social inequality, but he did note times when it was advantageous for a central bank to drop interest rates below zero, like when inflation and market interest rates are low and should go lower to maintain full employment and utilization of resources. Positive or negative interest rates could easily be applied to digital money in a cashless economy, for which Buiter and others have advocated. But it’s hard to imagine how a government today could practically implement a Gesellian tax on hard currency. “You’d have to be able to go out and confiscate money if it’s not stamped,” Buiter said. “It would be rather brutal.”

5. Intel’s One True Stakeholder is Here – Doug O’Laughlin

There is a rumor that the Trump administration could be taking a stake in Intel…

…And it’s no surprise that the future of American semiconductors has Intel written all over it. But there’s no other way than forward, and I think it’s time to consider what needs to happen realistically, and that’s the death of the Intel we once knew to make room for what’s next. The key is that while CPUs don’t matter, the only American leading-edge foundry left making them is critical.

The problem is that the company that funds it might run out of money, which is why they need to publicly threaten to stop financing the future of the foundry: it's a problem they can't solve alone. That is why I believe they so publicly announced the ending of future nodes past 14A…

…The calculus for America is pretty simple. In my view, there is very little strategic importance to the Intel CPU business. The x86 ecosystem was once the most incredible compute ecosystem, but AMD designs better chips than Intel could; Intel has the one thing that AMD does not, a Fab. The fabless business at Intel has a real issue in that making a CPU is becoming a relatively commoditized business. ARM has made it possible for almost any hyperscaler to have its ARM-based CPU, while AMD continues to outdesign Intel at its core job, and that’s not even discussing the longer-term RISC-V ecosystem.

Adding up the CPU side, I see a business with massive competition and Intel not at the top of the stack. Intel has to deal with increasing competition in its core profit center while at the same time covering the increasingly heavy burden of a leading-edge fab. There is only one leading-edge foundry (TSMC), and a second American option is the single highest value-added project of all time…

…We cannot rely on Taiwan for the future of semiconductors. The more capacity we get from TSMC, the more we remain reliant on R&D in Taiwan rather than the US. Intel must be standalone and must have the capabilities to do the two things the US critically needs: high-end logic and military capabilities. I'd argue the second is chiefly met, but on the first, Intel is hopelessly behind.

What’s worse is that Intel has a bad customer, itself. Intel needs a good customer to be the anchor, and sadly, the core customer is a CPU company that is struggling to find its way in an accelerated compute world…

…Trump can bully Broadcom, Nvidia, Qualcomm, Apple, and AMD to put orders towards Intel, while possibly forcing Amazon, Microsoft, Google, and others to make a large investment in the fab itself (or push orders). Additionally, forcing semicap companies like KLAC, Applied Materials, and Lam Research to invest and give resources in exchange for approved licenses is another example of a carrot and a stick. I think Trump could forge this giant partnership, but then execution is all up to Intel. And LBT is once again qualified for the job.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google and Waymo), Amazon, Apple, Microsoft, Tesla, and TSMC. Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI (2025 Q2)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q2 earnings season.

The way I see it, artificial intelligence (or AI), really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

We’re thick in the action of the latest earnings season for the US stock market – for the second quarter of 2025 – and I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:

With that, here is the latest commentary, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb’s management sees AI as being a critical part of Airbnb’s long-term product vision; management thinks travel planning in the future cannot be done without AI

We couldn’t talk about long-term product vision without talking about AI…

…I think you can’t do travel planning without AI going forward.

Airbnb’s management thinks they have chosen the hardest part to start with AI in the travel industry, which is customer service; customer service is the hardest part because the stakes are high; management has built a custom AI agent for Airbnb based on 13 different models; the custom AI agent has been rolled out in the US in English and it has reduced the need for human contact by 15%; management will roll out the custom AI agent in more languages in 2025 H2; the custom AI agent will become more personalised and agentic in 2026

We’ve chosen a very specific way to approach AI. A lot of companies have chosen what I would say is the lower-stakes part of travel, which is travel planning and inspiration. For AI, we actually start with the hardest problem, which is customer service. Customer service is the hardest problem because the stakes are high, you need to answer quickly, and the risk of hallucination is very, very high, and you cannot have a high hallucination rate. And when people are locked out, they want to cancel a reservation, they need help, you need to be accurate. And so what we’ve done is we’ve built a custom agent built on 13 different models that have been tuned from tens of thousands of conversations.

We rolled this out throughout the United States in English. And this has reduced, as I mentioned in the opening remarks, 15% of people needing to contact a human agent when they interact instead with this AI agent. We’re going to now, over the course of this year, bring this to more languages.

And throughout next year, it’s going to become more personalized and more agentic. So what this means is that when you reach out to the AI agent, it will not only tell you how to cancel your reservation, it will know which reservation you want to cancel, it can cancel it for you, and it can be agentic in the sense that it can start to search and help you plan and book your next trip.

Airbnb’s management will introduce AI into travel search in 2026

Next year, we’re going to bring AI into travel search.

Airbnb’s management sees the company becoming an AI-first application over the next few years; management is seeing that in the 2-3 years since ChatGPT’s introduction, there have been no other top apps in app stores that can be considered as AI-native, Airbnb included; management thinks that in the next few years, the top apps in app stores will mostly be AI-native

Over the next couple of years, I think what you’re going to see is Airbnb becoming an AI-first application. And this leads to the bigger question around AI. Over the last almost 3 years since ChatGPT spun out, if you look at the top 50 apps in the App Store, almost none of them are AI apps. The #1 app in the App Store, I think, as we speak, is ChatGPT. And if you go through 2 through 50, maybe only 1 or 2 others are AI-native applications. So you’ve got basically AI apps and kind of non-AI native apps. And Airbnb would be a non-AI native application. Over the next couple of years, I believe that every one of those top 50 slots will be AI apps, either start-ups or incumbents that transform into being AI native apps. And I think at Airbnb, we are going through that process right now of transitioning from a pre-generative AI app to an AI native app. We’re starting with customer service. We’re bringing it into travel planning. So it’s really setting the stage.

Airbnb’s management is open to the idea of opening Airbnb to 3rd-party AI agents, but it appears their preference is to be the leading destination for people to come and book travel

[Question] On the AI side, do you anticipate — there’s — it seems like there’s going to need to be a choice made whether to be open to agents and kind of agent agentic traffic and who will own that relationship versus being more of a closed platform. And given that you have much of your traffic today is direct and that you have a lot of exclusive supply, you probably have your choice in the matter.

[Answer] As far as whether or not we integrate with AI agents, I think that’s something that we’re certainly open to. Remember that to book an Airbnb, you need to have an account, you need to have a verified identity. Almost everyone who books uses our messaging platform. So I don’t think that we’re going to be the kind of thing where you just have an agent or operator book your Airbnb for you because we’re not a commodity. But I do think it could potentially be a very interesting lead generation for Airbnb. So I think it could be really interesting, but I don’t think it’s like a commodity like booking a flight.

Alphabet (NASDAQ: GOOG)

AI Mode for Search has launched in US and India and is going well; AI Overviews now has more than 2 billion monthly users; overall queries and commercial queries on Search continue to grow year-on-year, driven by AI features within Search; AI features within Search are leading users to search more, especially among younger users; AI Overviews are leading to 10% more queries globally, with the growth increasing over time; AI Overviews are now powered by Gemini 2.5, with the fastest Search response times; management is seeing strong growth in multimodal Search, especially with younger users; AI Mode now has more than 100 million monthly active users in the USA and India; management will soon introduce Deep Search into AI Mode; Search users in the USA can now access agentic AI-powered calling to local businesses; SearchLabs users can now try on clothes virtually and early results are promising, especially among Gen Z users, and management will soon roll out this feature to all US users; management does not manage Google Search based on paid clicks and CPC targets; paid clicks on Google Search was up 4% year-on-year in 2025 Q2; management continues to see monetisation of AI Overviews being similar to traditional Search

AI Mode has launched in the US and India and is going well, while AI Overviews now have over two billion monthly users across more than two hundred countries and territories and forty languages…

…Overall queries and commercial queries on Search continue to grow year over year, and our new AI experiences significantly contributed to this increase in usage. We are also seeing that our AI features cause users to search more as they learn that Search can meet more of their needs. That’s especially true for younger users…

…We know how popular AI Overviews are because they are now driving over ten percent more queries globally for the types of queries that show them, and this growth continues to increase over time. AI overviews are now powered by Gemini 2.5, delivering the fastest AI responses in the industry. We also saw strong growth in the use of multimodal search, particularly the combination of Lens or Circle to Search, together with AI overviews. This growth was most pronounced among younger users.

Our new end-to-end AI search experience, AI Mode, continues to receive very positive feedback, particularly for longer and more complex questions. It’s still rolling out but already has over one hundred million monthly active users in the US and India. We plan to keep enhancing the AI Mode experience for users by shipping great features fast. That includes our advanced research tool, Deep Search, and more personalized responses…

…Just last week, we brought a new agentic capability directly into Search for all US users with AI-powered calling to local businesses. Finally, shopping. In Q2, we introduced a virtual try-on experience for SearchLabs users in the US. Now people can try billions of clothing products on themselves virtually. Early results and engagement have been extremely positive, particularly with Gen Z users, and we’ll be bringing this functionality to all US users imminently…

…We actually don’t manage to paid clicks and CPC targets. Some of the product and policy changes we make actually drive better monetization at the expense of paid clicks. You will actually see in the 10-Q that paid clicks were up 4% year on year, but a number of factors affect these metrics from quarter to quarter, such as, to give a few examples, advertiser spending, product changes, policy changes, user engagement, and so on…

…You’re referring to the AI overview… When it comes specifically to the monetization of it, we talked about it before. We see monetization at approximately the same rate, which gives us actually a really strong base on which we can then innovate and drive actually more innovative and new and next-generation ad formats.

Alphabet’s management is using AI to improve Youtube Shorts’ content recommendation and dubbing and this helps to widen the audience-reach of creators; management is rolling out new AI tools for creators on Youtube Shorts; management is seeing the price and volume of advertising in Shorts increase, driven partly by AI-powered ad creative resizing tools, better advertising targeting, and higher viewer engagement

We now average over 200 million daily views on YouTube Shorts. AI is helping improve our recommendations and auto-dubbing, which translates to better returns for creators and brands by dramatically increasing the potential audiences they can reach. And today, we began rolling out a whole raft of new AI tools for creators on YouTube Shorts…

…We introduced Veo3, photo-to-video, and generative effects to Shorts, making content creation easier and offering unexplored avenues for creativity.

We’re seeing both the volume and the price of ads in Shorts increase, particularly in developed markets. The feed-based nature of the product allows for more ad opportunities on average, and this growth is further supported by ad formats native to Shorts, AI-powered ad creative resizing tools, improved ad targeting, and the rise in viewer engagement.

Google Cloud revenue run rate is now more than $50 billion; nearly all generative AI unicorns use Google Cloud, with some high-profile startups using TPUs specifically; Google Cloud saw strong customer demand, driven partly by its AI products; management has integrated AI agents into Google Cloud’s products and technology and traditional enterprises are using these agents; management has introduced an open-source AI agent development kit; the kit has 1 million downloads in less than 4 months; Google Cloud is now partnering with OpenAI; AI features have helped accelerate Google Cloud subscriptions

Cloud had another great quarter of strong growth in revenues, backlog, and profitability. Annual revenue run rate is now more than $50 billion…

…Nearly all Gen AI unicorns use Google Cloud, and it’s why a growing number, including leading AI research labs like Safe Superintelligence and Physical Intelligence, use TPUs specifically…

…Next, Google Cloud. We see strong customer demand driven by our product differentiation and our comprehensive AI product portfolio. Four stats show this. One, the number of deals over $250 million doubling year over year. Two, in the first half of 2025, we signed the same number of deals over $1 million that we did in all of 2024. Three, the number of new GCP customers increased by nearly 28% quarter over quarter. Four, more than eighty-five thousand enterprises, including LVMH, Salesforce, and Singapore’s DBS Bank, now build with Gemini, driving a 35x growth in Gemini usage year over year…

…We’ve also integrated AI agents deeply into each of our cloud products. Wayfair is leveraging our databases integrated with AI to streamline data pipelines and deliver more personalized customer experiences. Mattel is leveraging our Gemini-powered data agents and BigQuery to review and act on product feedback more quickly. Target is using our Gemini-powered threat intelligence and security operations agents to improve cybersecurity. Capgemini is utilizing our AI software engineering agents to deliver higher quality software faster by automating tasks from code generation to testing. And BBVA says Gemini and Google Workspace are saving employees nearly three hours per week by automating repetitive tasks. It’s now rolling it out to one hundred thousand employees globally.

We are also focused on building a flourishing AI agent ecosystem. We introduced an open-source agent development kit, which now has over a million downloads in less than four months. We also introduced AgentSpace, an open and interoperable enterprise chat, search, and agent platform. Gordon Foodservice is bringing AgentSpace to its US employees, which is enabling better, more efficient decision-making. And over one million subscriptions have been booked for AgentSpace ahead of its general availability…

…On the second part with respect to OpenAI, we are very excited to be partnering with them on Google Cloud…

…On the first thing on subscriptions, you know, we’ve definitely, yeah. Google One has been an attractive value proposition powered by storage. But with now, our AI plans, including both Pro and Ultra, and particularly with the 2.5 series of models, they’ve definitely seen accelerated transactions.

Alphabet is expanding its Gemini 2.5 family of hybrid reasoning models; Gemini 2.5 models have industry-leading performance in nearly all major benchmarks; Alphabet recently debuted the extremely fast Flash Lite model; Gemini recently achieved a gold-medal-level performance in the International Math Olympiad; Alphabet has the best models today at every price point; 9 million developers have now built for Gemini; over 70 million videos have been generated with Veo3 since May 2025; the Gemini app has a new feature that turns photos into videos, and users love it; the photo-to-video feature on the Gemini app is now in Google Photos too; the number of tokens per month processed by Alphabet has doubled since May 2025 to 980 trillion; the Gemini app now has 450 million monthly active users (MAUs), and daily requests are up 50% from 2025 Q1; more than 50 million people used AI meeting notes in June 2025 alone in Google Meets; Google Workspace’s new video product, Google Vids, has reached nearly 1 million MAUs; AI Overviews are now powered by Gemini 2.5, with the fastest Search response times; Gemini usage in Google Cloud grew 35x year-on-year in 2025 Q2; Alphabet’s infrastructure provides the best performance and cost for both training and inference when the Gemini models are used

We continue to expand our Gemini 2.5 family of hybrid reasoning models, which provide industry-leading performance in nearly every major benchmark. In addition to improving our popular workhorse model, Flash, we debuted an extremely fast Flash Lite version. We achieved gold medal level performance in the International Math Olympiad using an advanced version of Gemini with DeepThink. We can’t wait to bring DeepThink to users soon. We have some of the best models available today at every price point. Our 2.5 models have been a catalyst for growth, and nine million developers have now built for Gemini.

I also want to mention Veo3, our state-of-the-art video generation model. It’s been a viral hit with people sharing clips created in the Gemini app and with our new AI filmmaking tool, Flow. Since May, over seventy million videos have been generated using Veo3, and we recently introduced a feature in the Gemini app to turn photos into videos, which people absolutely love. It’s also rolling out to Google Photos users starting today…

…At I/O in May, we announced that we processed four hundred and eighty trillion monthly tokens across our surfaces. Since then, we have doubled that number, now processing over nine hundred and eighty trillion monthly tokens—a remarkable increase.

The Gemini app now has more than four hundred and fifty million monthly active users, and we continue to see strong growth in engagement, with daily requests growing over fifty percent from Q1.

In June alone, over fifty million people used AI-powered meeting notes in Google Meet. And powered by Veo3, our new short video product in Workspace called Google Vids reached nearly one million monthly active users…

…AI overviews are now powered by Gemini 2.5, delivering the fastest AI responses in the industry…

…More than eighty-five thousand enterprises, including LVMH, Salesforce, and Singapore’s DBS Bank, now build with Gemini, driving a 35x growth in Gemini usage year over year. Our models are served on our AI infrastructure, which offers industry-leading performance and cost efficiency for both training and inference.

Waymo recently launched in Atlanta, doubled its Austin footprint, and expanded its Los Angeles and San Francisco Bay Area footprints by 50%; Waymo now has teen accounts in Phoenix for riders aged 14-17; Waymo has now autonomously driven more than 100 million miles on public roads

Last month, Waymo launched in Atlanta, more than doubled its Austin service territory, and expanded its Los Angeles and San Francisco Bay Area territories by approximately fifty percent. Waymo also launched teen accounts, starting with riders aged fourteen to seventeen in Phoenix…

…The Waymo driver has now autonomously driven over 100 million miles on public roads, and the team is testing across more than ten cities this year, including New York and Philadelphia.

Google Lens searches grew 70% year-on-year in 2025 Q2; most of Google Lens’ searches are incremental, and there’s healthy growth in shopping searches; Circle to Search is now on more than 300 million Android devices; gamers can now use Circle to Search while playing games

Google Lens searches are one of the fastest-growing query types on search and grew 70% since this time last year. The majority of Lens searches are incremental, and we’re seeing healthy growth in shopping queries using Lens. And you can obviously take this to the next level by moving from image to video-based capabilities like SearchLive.

Then there’s Circle to Search, which is now on over 300 million Android devices. We’ve been adding capabilities to help people explore complex topics and ask follow-up questions without switching apps. For example, gamers can now use Circle to Search while playing mobile games to see an AI Overview or answers.

Advertisers that use AI Max in Search campaigns typically see 14% more conversions; Alphabet’s latest Smart Bidding Exploration update allows advertisers to bid more often for less obvious but higher value queries; campaigns with Smart Bidding Exploration typically see 19% more conversions; Depop used DemandGen on Youtube Shorts to drive 80% brand lift and double its click-through rates; management has launched AssetStudio to help advertisers generate creatives; more than 2 million advertisers now use Alphabet’s AI-powered asset generation tools, up 50% from a year ago

Last quarter, we introduced AI Max in Search, a new suite of AI-powered features for existing search campaigns. Advertisers that activate AI Max in Search campaigns typically see 14% more conversions. On media buying, Smart Bidding Exploration, the biggest update to bidding strategy in a decade, brings better performance to advertisers by allowing them to bid on less obvious but potentially higher value queries more often. Campaigns using Smart Bidding Exploration see a 19% increase in conversions on average.

DemandGen continues to drive revenue growth and deliver measurable impact for our customers. As an example, Depop, Etsy’s resale clothing marketplace, used a Shorts-only DemandGen campaign to drive new customers to the site. Shorts drove 80% brand lift and doubled click-through rates versus benchmarks.

On creatives, we launched AssetStudio using our latest models to help businesses large and small generate creative assets. Small businesses benefit from top-quality assets and deployment scaling capabilities, while larger businesses can go faster from proof of concept to launch and resize at lower costs. Over two million advertisers now use Google’s AI-powered asset generation tools to run ads, a 50% increase on this time last year.

Google Cloud had 32% revenue growth in 2025 Q2 (was 28% in 2025 Q1) driven by growth in core GCP products and AI products; AI products revenue growth was at a much higher rate than Google Cloud’s overall revenue growth; Google Cloud operating margin was 20.7% (was 17.8% in 2025 Q1 and was 11.3% in 2024 Q2); even as Google Cloud’s capex ramps up, management continues to drive productivity and efficiency improvements; Google Cloud’s backlog was up 18% sequentially in 2025 Q2, and up 38% year-on-year, to $106 billion; Google Cloud still has more AI demand than capacity in 2025 Q2 (as it did in 2025 Q1)

Turning to the Google Cloud segment, which delivered very strong results this quarter. Revenues increased by 32% to $13.6 billion in the second quarter, reflecting growth in GCP across core and AI products at a rate that was much higher than cloud’s overall revenue growth, and growth in Google Workspace driven by an increase in average revenue per seat and the number of seats. Google Cloud operating income increased to $2.8 billion, and operating margin increased from 11.3% to 20.7%. 

The expansion in cloud operating margin was driven by strong revenue performance and continued efficiencies in our expense base, partially offset by higher technical infrastructure usage costs, which includes the associated depreciation. As we ramp our AI investments, we continue to focus on driving improvements in productivity and efficiency to offset growth in technical infrastructure-related expenses, particularly from higher depreciation.

Google Cloud backlog increased 18% sequentially in Q2 and 38% year over year, reaching $106 billion at the end of the quarter. This growth was driven by strong demand for our products and services from both new and existing customers…

…We have been working hard to increase capacity and have improved the pace of server deployment. We expect to remain in a tight demand-supply environment going into 2026.

Alphabet’s management thinks that AI agents are currently too slow, costly, and brittle, but Alphabet is making progress on those fronts; management thinks AI agents will be used more broadly in 2026; management has rolled out agent coding journeys for internal use and Alphabet’s software engineers are doing more agentic workflows in software engineering

The forward-looking trajectory, I think, will really unlock these agentic experiences. We see the potential. We’re able to do them, but they’re a bit slow and costly and take time and sometimes are brittle. But we’re making progress on all of that. And I think that’s what will really unlock. And I expect 2026 to be the year in which people kind of use agent experiences more broadly…

…We are now beginning to roll out agent coding journeys for our software engineers within the company. And it’s been exciting to see just over the last few months, particularly over the last few weeks, people are definitely doing more agentic workflows in software engineering as well internally.

Alphabet’s management is very excited about the potential of smart glasses as the next-generation device for AI experiences, but they think smartphones will still be central for a few more years at least

We are super excited about our investment in glasses, and the experiences have taken a dramatic step up compared to the last iteration. So I think it’ll be an exciting new emerging category. But I still expect phones to be at the center of the experience for the next two to three years at least.

Alphabet’s management sees some overlap in use cases between AI Mode and Gemini app, but there are also unique use cases for each product; for AI Mode, people are using it for searching, whereas in the Gemini app, people are using it for long conversations, sometimes in almost therapy-like sessions; management thinks of AI Overviews as more for information-retrieval and the Gemini app as more of a personal assistant; management is open to the possibility of merging AI Overviews with the Gemini app in the future, but for now, they want to meet users where they are

On AI mode versus Gemini standalone app, broadly, there are some use cases where you can get a great experience in both places. But there are use cases that are very specific. I think where the queries are information-oriented, where people really want to rely on the information but have the full power of AI, I think AI Mode really shines. You can go there and you know it’s backed up. The Gemini models are using Search deeply as a tool. And so it’s grounded in that Search experience, and I think users are responding very positively to it. Whereas in the Gemini standalone app, you see everything from people having a long conversation or chatting just to kind of pass the time. You’ve seen early cases where people may get into it in a therapy-like experience…

…Search is more information-focused. And we think of the Gemini app as more your assistant, more personal, proactive, and powerful assistant for every aspect of your daily life. And so you can imagine wanting to call deeply or create a long video, etc. Like, you know, those things can be done by the Gemini app today better. Over time, like we’ve always done, we’ve gone through these evolutions before, like, as you point out. You know, we can understand user intent better and abstract some of the complexity for our users. At one point, people used to go to, you know, query separately for text differently from images, differently from videos, etc. And we kind of made it all seamless with universal search. So we have the experience of being able to bring together experiences in a way that makes sense for users. And do the heavy lifting for them. But I think, you know, when you’re in this early stage of new emerging paradigms, I think we want to make sure we can meet them where they are expecting today.

Amazon (NASDAQ: AMZN)

Amazon’s management has rolled out Deep Fleet, an AI that improves robot travel efficiency by 10%; Deep Fleet helps improve delivery times for customers while saving costs, and improves workplace safety for employees; management will be introducing a lot more in the area of robotics and generative AI in the coming years

We deployed our 1 millionth robot across our global fulfillment network and unveiled innovations in our last-mile innovation center, such as automated package sorting and a transformative technology that brings packages directly to employees in an ergonomic height. We rolled out Deep Fleet, our AI that improves robot travel efficiency by 10%. At our scale, it’s a big deal. Deep Fleet acts like a traffic management system to coordinate robots’ movements to find optimal paths and reduce bottlenecks. For customers, it means faster delivery times and lower costs. For our team members, our robots handle more of the physically demanding tasks, making our operations network even safer. This combination of robotics and generative AI is just getting started. And while we’ve made significant progress, it’s still early with respect to what will roll out in the next few years

AWS grew 17.5% year-on-year in 2025 Q2, and is now at a $123 billion annualised revenue run rate (was $117 billion in 2025 Q1); AWS continues to help organisations of all sizes transition to the cloud; AWS’s AI business continues to have a multi-billion annual revenue run rate and growth rate of triple-digits year-on-year; AWS’s AI business currently has more demand than supply; AWS has launched EC2 instances that are powered by NVIDIA’s latest chip architecture, the Grace Blackwell; AWS is starting to release powerful applications at the top layer of the AI stack; management still sees 85% of global IT spend being on-premises and that the spend will flip to the cloud over the next 10-15 years, with acceleration for the flip coming from companies’ excitement over AI; management is confident that AWS is well-positioned to capture the flip from on-premises to the cloud; AWS saw growth in both generative AI business and non-generative AI offerings in 2025 Q2; management will continue to invest more capital in compute capacity for AWS as they see an unusually large opportunity in generative AI; management thinks AWS is growing slower than Azure and GCP because AWS is much larger; the supply constraints AWS is facing are mostly in power, but also in chips and components; management thinks the supply constraint will get better each quarter, but will take a few quarters to fully resolve

In Q2, AWS grew 17.5% year-over-year and now has over a $123 billion annualized revenue run rate. We continue to help organizations of all sizes accelerate their transition to the cloud, signing new agreements with companies including PepsiCo, Airbnb, Peloton, NASDAQ, London Stock Exchange, Nissan Motor, GitLab, SAP, Warner Bros. Discovery, 12 Labs, FICO, Iberia Airlines, SK Telecom and NatWest. In the rapidly evolving world of generative AI, AWS continues to build a large, fast-growing (triple-digit year-over-year percentage) multibillion-dollar business with more demand than we have supply for at the moment…

…We’ve also launched Amazon EC2 instances powered by NVIDIA Grace Blackwell Super chips, AWS’ most powerful NVIDIA GPU accelerated instance…

…You’re starting to see AWS release more powerful applications at the top layer of the AI stack…

…Remember that 85% to 90% of worldwide IT spend is still on-premises versus in the cloud. In the next 10 to 15 years, that equation is going to flip, further accelerated by companies’ excitement for leveraging AI. So AWS’s significantly broader functionality, stronger security and operational performance, and much deeper experience helping enterprises modernize their infrastructure bode well for the AWS business moving forward…

…During the second quarter, we continued to see growth in both our generative AI and non-generative AI businesses as companies turn their attention to newer initiatives, bring more workloads to the cloud, restart or accelerate existing migrations from on-premises to the cloud, and tap into the power of generative AI…

…We will continue to invest more capital in chips, data centers and power to pursue this unusually large opportunity that we have in generative AI…

…[Question] On AWS, we’re seeing significantly faster cloud growth among the #2 and #3 players in the space. I totally appreciate that AWS is coming off of a bigger base. But beyond that, do you think the output gap is due more to customer demand or infrastructure supply for both?

[Answer] Year-over-year percentages and growth rates are always a function of the base in which you operate. And we have a meaningfully larger business in the AWS segment than others. I think the second player is about 65% of the size of AWS. And when we look at the results over the last number of quarters, there are sometimes where, as far as we can tell, we’re growing faster than others, and sometimes others are growing faster than us. But even if you look at the second-place player you’re talking about, we still have a pretty significant market segment leadership position…

…The constraints exist in multiple places. The single biggest constraint is power. But you also see constraints off and on with chips, and then with some of the components you need, once you have the chips, to actually make the servers. Sometimes new generations of chips are a little bit later than they’re supposed to be, and sometimes you get the chips and the yield you get in making servers isn’t what you expect when you get to ramp…

…I don’t believe that we will have fully resolved the amount of capacity we need for the amount of demand that we have in a couple of quarters. I think it will take several quarters. But I do expect that it’s going to get better each quarter, and I’m optimistic about that.

AWS’s in-house AI chip, Trainium 2, is landing capacity in larger quantities; Trainium 2 is the backbone for Anthropic’s newest generation Claude models and other Amazon offerings such as Amazon Bedrock; management thinks the real costs for AI in the future will be for inference, which will take up 80%-90% of AI costs at scale, and Trainium 2 has 30%-40% better price performance than GPUs for inference; management is already working on Trainium 3; management thinks a lot of AI compute and inference will ultimately run on Trainium 2, using the historical analogy of developments in CPUs, where customers want better price performance than Intel’s leading x86 CPUs and where AWS met the demand through its Graviton chips; management thinks that price performance is going to matter to companies as they scale their AI applications

 Our custom AI chip, Trainium2 is landing capacity in larger quantities and has impressively emerged as the backbone for Anthropic’s newest generation Claude models and many of our most essential offerings like Amazon Bedrock…

…If you look at where the real costs are, they’re ultimately going to be in inference. Today, so much of the cost is in training because customers are really training their models and trying to figure out how to get their applications into production. But at scale, 80% to 90% of the cost will be in inference because you only train periodically, but you’re spinning out predictions and inferences all the time. And so what they’re going to care a lot about is the compute and the hardware they’re using. And we have a very deep partnership with NVIDIA and will for as long as I can foresee, but we saw this movie in the CPU space with Intel, where customers were looking for better price performance. And so, just like in the CPU space, we built our own custom silicon, Graviton, which has about 40% better price performance than the other leading x86 processors.

We’ve done the same thing on the custom silicon side in AI with Trainium, and our second version, Trainium2, has become the backbone of Anthropic’s next Claude models that they’re training on top of, and it’s become the backbone of Bedrock and the inference that we do. It’s about 30% to 40% better price performance than the other GPU providers out there right now, and we’re already working on our third version of Trainium as well. So I think a lot of the compute and the inference is going to ultimately be run on top of Trainium2…

…Price performance is going to matter to people as they get to scale. 
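
The blended-savings arithmetic behind this inference argument can be sketched in a few lines. The midpoint figures below are illustrative assumptions taken from the ranges quoted above (80%-90% of cost in inference, 30%-40% better price performance); they are not AWS pricing.

```python
# Back-of-envelope sketch of the inference-cost argument above.
# All figures are illustrative midpoints, not actual AWS numbers.

def blended_savings(inference_share, inference_price_perf_gain):
    """Fraction of total AI spend saved if only the inference portion
    moves to hardware with the given price-performance advantage."""
    return inference_share * inference_price_perf_gain

# At scale: ~85% of cost is inference, and a custom chip is quoted
# at ~35% better price performance than GPUs for inference.
saving = blended_savings(0.85, 0.35)
print(f"Blended savings on total AI spend: {saving:.0%}")  # -> 30%
```

The point of the sketch is that even a chip that only helps inference moves the whole bill materially once inference dominates the cost mix.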

Amazon Bedrock is AWS’s fully-managed service for companies to leverage frontier models to build generative AI apps; Bedrock recently added Anthropic’s Claude 4 and it is the fastest-growing model ever; Amazon’s own frontier model, Amazon Nova, is the 2nd-most popular foundation model in Bedrock

In Bedrock, we’ve recently added Anthropic’s Claude 4, and it is the fastest-growing model ever in Bedrock. We’ve also continued to see strong adoption of Amazon Nova, our own frontier model, and it’s now the second most popular foundation model in Bedrock.

Amazon’s management is seeing that AWS customers are excited about AI agents, but lack the tools to build them; AWS released Strands, open-source software for building AI agents; Strands already has 2,500 stars on GitHub and 300,000 downloads on PyPI; management is seeing that AWS customers are struggling to deploy AI agents securely in a scaled way, and management recently released the Agent Core feature to solve the problem; management is seeing excitement from customers about Agent Core; AWS Transform is an AI agent that reduces mainframe modernization timelines from years to months; management recently released Kiro, an agentic integrated development environment coding agent; several hundred thousand developers are already using Kiro in the first couple of weeks; Kiro allows developers to do vibe coding but makes it much easier to go from prototyping to production; Kiro has event-driven hooks that help developers catch things that are easy to miss; it’s early days for Kiro, but management thinks there’s a chance for Kiro to transform how developers build software

As people have become excited about building agents, they’re realizing they lack the tools to build them. In May, we released Strands, an open-source way to more easily build agents, which has taken off with a wide range of customers, with already 2,500 stars on GitHub and over 300,000 downloads on PyPI. Customers are also struggling with deploying agents into production in a secure and scalable way. It’s holding up enterprises scaling agents. To help solve that problem, Bedrock just released Agent Core. Agent Core is a set of building blocks that gives customers the industry’s first secure serverless runtime providing both synchronous and asynchronous execution, agent identity and boundaries, a memory service, a gateway to translate services to MCP-compatible interfaces, built-in code execution and web browser tools, and an observability service. Customers are excited about Agent Core, and it frees them up to start deploying agents more expansively…

…AWS Transform is an AI agent that dramatically reduces mainframe modernization timelines from years to months, completes VMware to EC2 conversions up to 80x faster, and makes it simple to move from .NET Windows to .NET Linux implementations, reducing licensing costs for .NET applications by up to 40%. We’ve also just released Kiro, our new agentic integrated development environment coding agent. There’s a lot of buzz around Kiro, with several hundred thousand developers using and requesting access in the first couple of weeks, and 100,000 using it in the first 5 days of the preview. What struck a chord for developers is that Kiro allows them to do vibe coding, where developers use natural language to chat with a coding agent to build code. But unlike other coding agents, where developers don’t really have any structure to build on top of, Kiro allows developers to use natural language to build a spec and then automatically updates that spec as they continue to vibe code or interact with Kiro. This makes it much easier to go from prototyping to production. Customers also like Kiro’s event-driven hooks that act like an experienced developer catching things developers might miss. When developers save a React component, hooks update that test file. When they modify API endpoints, hooks refresh README files. When they’re ready to commit, security hooks scan for leaked credentials. It’s still very early for Kiro, but it seems clear we’re on to something customers love, and Kiro has a chance to transform how developers build software.
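
The event-driven hooks described above can be thought of as a table of (file pattern, event) rules mapped to follow-up actions. The sketch below is a generic illustration of that idea; the rule table and function names are invented and are not Kiro’s actual API.

```python
# Hypothetical sketch of event-driven hooks: match file events
# against glob-pattern rules and return the triggered actions.
# These rules are invented for illustration, not Kiro's real config.
import fnmatch

HOOKS = [
    ("*.tsx", "save",   "regenerate the component's test file"),
    ("api/*", "modify", "refresh the README"),
    ("*",     "commit", "scan the diff for leaked credentials"),
]

def fire_hooks(path, event):
    """Return the list of actions triggered for a given file event."""
    return [action for pattern, ev, action in HOOKS
            if ev == event and fnmatch.fnmatch(path, pattern)]

print(fire_hooks("Button.tsx", "save"))
# ["regenerate the component's test file"]
```

The appeal of the pattern is that the checks run automatically on the event (save, modify, commit) rather than relying on the developer to remember them.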

Amazon’s management has seen very positive feedback in the early rollout of Alexa Plus, Amazon’s generative-AI-powered assistant, to millions of users in the US; management thinks the current Alexa Plus experience is so much better than the prior experience; Alexa Plus can take actions for users; Alexa Plus will be rolled out broadly in the US in the coming months, and internationally in the later part of 2025; usage of Alexa Plus is much more expansive than before; management thinks Alexa Plus’s economic opportunity could come in three ways, (1) driving more shopping on Amazon, (2) a surface for advertising, and (3) subscriptions

We’re excited about our progress with Alexa Plus, our next-generation assistant powered by generative AI. We’ve been rolling out early access to U.S. customers to start; millions of customers have access now. We’re seeing very positive feedback, and we’ll continue to iterate on the experience…

…The Alexa Plus experience is, I think, so much better than our prior Alexa experience. She’s much more intelligent than her prior self. She’s much more capable, and unlike the other chatbots that are out there today, which are good at answering questions but really can’t take any action for you, Alexa Plus can take a lot of action for you, which is very compelling. So I can ask Alexa to play music for me, or play video for me, or to move my music from one device to another. Or if I’m listening to a song that’s in a movie, I can ask Alexa Plus to actually put on the movie scene of the song I’m playing, and it will put it on Prime Video on my Fire TV. Or if I have guests coming over, I can say, Alexa, draw the curtains, put the light on the porch and the driveway, increase the temperature by 5 degrees, and put on music that would be great for a dinner party. And she does all that just through natural language…

…We’ve been rolling out Alexa Plus starting in the U.S. It’s with millions of customers now. The rest in the U.S. coming in the next couple of months and it’s starting the international rollout more broadly later in the year…

…The usage is much more expansive than what they were using before and the number of calls they’re making is meaningfully higher…

…if you build the world’s best personal assistant, that has a lot of utility for customers, and therefore, it gets used a lot. So it means everything from people are excited about the devices that they can buy from us that has Alexa Plus enabled in it. People do a lot of shopping and it’s really — it’s a delightful shopping experience that will keep getting better. I think over time, there will be opportunities as people are engaging more multiturn conversations to have advertising play a role to help people find discovery and also as a lever to drive revenue. And I think over time, you could also imagine, as we keep adding functionality that there could be some sort of subscription element beyond what there is today. Today, Prime members get Alexa Plus for free and non-Prime members pay $9.99 a month for Alexa Plus. So I think it’s very — it’s still very early days, but we’re very encouraged by the experience we’re providing and you can bet we’re going to be iterating on it constantly.

AWS’s backlog is $195 billion in 2025 Q2, up 25% year-on-year (was $189 billion in 2025 Q1, up 20% year-on-year)

[Question] I’ll stick with AWS to start with. Could you just disclose the backlog number?

[Answer] I’ll just start off to give you the backlog figures. So at the end of the quarter, at June 30, that was $195 billion, so that’s up about 25% year-over-year.

Amazon’s management thinks the AI space is still very early and is currently very top-heavy, with a small number of very large frontier models being trained with very large amounts of compute, and with a small number of very large-scale AI applications, with chatbots and coding agents being the largest categories and ChatGPT being a standout by far; some of the training and the large-scale AI applications are being served by AWS; there is a long-tail of small AI applications that are in pilot mode or being developed; there are a very significant number of enterprises and startups building AI applications on AWS

I think it is so early right now in AI. If you look at what’s really happening in the space, it’s very top-heavy. So you have a small number of very large frontier models that are being trained, that spend a lot on compute, a couple of which are being trained on top of AWS and others elsewhere. And then you also have, I would say, a relatively small number of very large-scale generative AI applications. One category would be chatbots, with the largest by a fair bit being ChatGPT, but the other category being really, I’ll call it, coding agents. These are companies like Cursor, Vercel, Lovable and some of the companies like that. Again, several of which run significant chunks on top of AWS…

…You’ve got a very large number of generative AI applications that are in pilot mode or are being developed as we speak, and a very substantial number of agents that people are also starting to try to build and figure out how to get into production in a broad way. But they’re all quite early. And many of the ones that are out there are significant, but they’re just smaller in terms of usage relative to some of those top-heavy applications…

…We have a very significant number of enterprises and startups who are running applications on top of AWS’ AI services.

Amazon’s management thinks that companies that are developing AI applications are currently not paying close attention to where their AI applications are operating relative to the locations of the rest of their data and infrastructure; management thinks that companies will eventually want to run their AI applications close to where their data is, and this is a strength for AWS because so many applications and data are on AWS than anywhere else

Because we’re at a stage right now where so much of the activity is training and figuring out how to get your generative AI applications into production, people aren’t paying as close attention as they will to making sure that those generative AI applications are operating where the rest of their data and infrastructure are. Remember, a lot of generative AI inference is just going to be another building block like compute, storage and database. And so people are going to actually want to run those applications close to where their other applications are running, where their data is. There are just so many more applications and data running in AWS than anywhere else. And I’m very optimistic about what’s going to happen to AWS on the AI side as we get to a bigger scale.

Amazon’s management thinks AI is the biggest technology transformation of our lifetime; management sees AI impacting every single area within Amazon, and they want to embrace the change

I think that AI is the biggest technology transformation of our lifetime…

…It’s also going to change very substantially the way we work. And if you think about it, the way that we do coding, the way that we do analytics, the way that we do research, the way that we do finance and measurement, really, the way we do business process automation, the way we do customer service. Every single area that I can think of in the way we work is likely going to be impacted in some meaningful way by AI. And I think when you have a big shift like that, you have two macro choices. You can either decide that you’re going to embrace it, help shape it, and figure out how to build the right tools to allow you to take advantage of the technology, or you can wish it away and have it shape you. And the posting that you’re referencing, Ron, was just really being clear with the team that we’re going to pursue the former approach. We are going to embrace it. We’re going to try and shape it.

Apple (NASDAQ: AAPL)

Apple’s management recently announced new AI capabilities, such as live translation and Workout Buddy; management opened up access to Apple’s on-device foundation models; management sees AI as a profound technology and is embedding it across Apple’s devices and platforms; management is significantly increasing Apple’s AI investments; management is integrating AI across Apple’s platforms, and has released more than 20 Apple Intelligence features; management reiterated their expectation to release a more personalised Siri in 2026

And we were excited to share some updates across our AI work. We announced even more capabilities coming later this year, including live translation and Workout Buddy. In addition to those new features, we announced new support for a number of languages, and we opened up access to the on-device foundation models at the core of Apple Intelligence…

…We see AI as one of the most profound technologies of our lifetime. We are embedding it across our devices and platforms and across the company. We are also significantly growing our investments…

…With Apple Intelligence, we’re integrating AI features across our platforms in a way that is deeply personal, private and seamless, right where users need them. We’ve already released more than 20 Apple Intelligence features, including visual intelligence, cleanup and powerful writing tools. We’re making good progress on a more personalized Siri, and as we’ve said before, we expect to release these features next year…

…We’re making good progress on a more personalized Siri, and we do expect to release the features next year, as we had said earlier. Our focus from an AI point of view is on putting AI features across the platform that are deeply personal, private and seamlessly integrated.

Apple’s chips in Apple’s devices allow users to run AI models on-device; when greater AI capabilities than the on-device models can provide are needed, the requests are routed through Apple’s private cloud compute 

Apple silicon is at the heart of all of these experiences, enabling powerful Apple Intelligence features to run directly on device. For more advanced tasks, our servers, also powered by Apple silicon, deliver even greater capabilities while preserving user privacy through our private cloud compute architecture. We believe our platforms offer the best way for users to experience the full potential of generative AI. Thanks to the exceptional performance of our systems, our users are able to run generative AI models right on their Mac, iPad and iPhone.

Apple’s capex for FY2025 year-to-date (FY2025 9M) is notably higher; the higher capex is because of AI investments, which include Apple’s 1st-party data centers for private cloud compute; management expects Apple’s capex to grow substantially in the future because of AI-related investments

[Question] Just on the CapEx, it’s up notably year-to-date. Could you just comment on your capital spending plan this year and next and provide some qualitative color in terms of what’s driving that growth?

[Answer] It’s a combination of factors. I would say a pretty significant driver, as Tim talked about, is the fact that we are increasing our investment significantly in AI. So that is certainly a component of it. As you know, we’ve been investing in private cloud compute, which is also in our first-party data centers. The other piece, as you know, is we do have a hybrid strategy where in some cases we use third parties to make capital investments, and we also invest in our own. So you are going to see an increase in CapEx…

…[Question] CapEx is clearly moving higher. I know you guys don’t guide specifically to that number. But just kind of qualitatively, should we — as you lean in more on AI, should we really start to see that CapEx, which is running close to about $4 billion annualized today, really start to move appreciably higher? 

[Answer] we are increasing our investment significantly in AI. You are going to continue to see our CapEx grow. It’s not going to be exponential growth, but it is going to grow substantially. And a lot of that’s a function of the investments we’re making in AI. As we mentioned, we also have other items that fall under that category, facilities and some of our retail store investments. But I would say a lot of the growth is really being driven by AI.

Arista Networks (NYSE: ANET)

Arista Networks’ management has even more conviction now with the AI and Cloud Titans opportunity and has raised the company’s revenue guidance for 2025; management thinks it’s a once-in-a-lifetime opportunity with the AI and Cloud Titans; management’s goal of $750 million in back-end AI networking revenue in 2025 is well on track; back-end AI networking revenue is purely incremental revenue for Arista Networks; management expects total AI-related networking revenue to exceed $1.5 billion in 2025 and to grow for years; Arista Networks recently lost its fifth big AI customer, which was a sovereign AI customer, but management thinks the company will still be able to achieve $750 million in back-end AI networking revenue in 2025, and $1.5 billion in total AI-related networking revenue; management is seeing a lot of activity in its four big AI customers and has been surprised at the level of activity, albeit still small, in enterprises and neoclouds; management thinks the 25-30 enterprise and neocloud customers Arista Networks recently won will help the company reach its goal of $750 million in back-end AI networking revenue in 2025

Our conviction with AI and Cloud Titans and enterprise customers has only strengthened. We began the year with a pragmatic guide of 17% or $8.2 billion annual revenue. But as the year has progressed, we recognize the potential to build a truly transformational networking company, addressing a massive total available market. This feels to us like a unique once-in-a-lifetime opportunity. We, therefore, raised our 2025 annual growth to 25%, now targeting $8.75 billion in revenue, which is an incremental $550 million more due to our increased momentum that we are experiencing across AI, cloud and enterprise sectors…

…Our stated goal of $750 million back-end AI networking is well on track and gaining from nearly 0 revenue 3 years ago in 2022 to production deployments this year in 2025…

…The back-end AI is all incremental revenue and incremental market share to Arista…

…We do expect an aggregate AI networking revenue to be ahead of the $1.5 billion in 2025 and growing in many years to come…

…On AI, I don’t need to tell you that we lost one of our key anchor customers; the fifth customer was a sovereign AI customer that’s pretty much out of these numbers. Despite that, we believe we were still able to achieve the $750 million back-end target revenue and exceed $1.5 billion for the year. Exact numbers we’ll know when we finally ship; we can’t give you those specifics now. But despite losing one customer, we’re seeing a lot of activity in the four big ones. And it’s been a pleasant surprise to us to see the advent of enterprise and even some neoclouds. The numbers are small, it’s not as big as the large titans, but it’s all adding up…

…To make that number, or actually to exceed that number, you may have noticed that I pointed out that we now have, in aggregate (I think last time we said 15, and now we’re saying 25 to 30), enterprise and neocloud customers. So they’re not big individually, but together they add up to compensate for the loss of the fifth customer and the slowness of the fourth.

Arista Networks’ management sees AI data centers as consisting of all 3 of scale-out front-end networks, scale-up back-end networks and scale-out back-end networks; management sees scale-up back-end networks being built today predominantly with NVLink, but they expect a move towards Ethernet or UALink in the coming years; management sees scale-out back-end networks rapidly migrating from Infiniband to Ethernet based on the Ultra Ethernet Consortium specification released in June 2025; management sees Arista’s portfolio of Etherlink and EOS products as important components of scale-out front-end networks; management thinks Arista Networks’ Etherlink portfolio has the most comprehensive solution for scale-out back-end and scale-out front-end networking; management thinks Arista Networks is the best AI networking platform for all kinds of AI accelerators; scale-up networks are a new and unique requirement, and will be a new incremental market for Arista Networks; management is currently unsure how big the total addressable market (TAM) will be for the new incremental market in scale-up networks; management thinks Arista Networks has the premier scale-out platform

AI centers consist of both scale-out front-end and scale-up/scale-out combination for back-end networks. 

Scale-up back-end networks consist of high-bandwidth, low-latency interconnects that tightly link multiple accelerators within a single rack as a unified compute system with workload parallelism. Today, this is predominantly constructed with NVLink as a compute-attached I/O, but we do expect a move to open standards such as Ethernet or UALink in the next few years.

Scale-out back-end network is dedicated spines interconnecting XPUs across racks, engineered for high bandwidth and minimal latency, thereby resulting in efficient parallel processing of massive training models. Here, InfiniBand is rapidly migrating to Ethernet based on the Ultra Ethernet Consortium specification released in June of 2025.

Scale-out front-end connects the back-end clusters to external clouds, compute resources, storage, wide area networks and data center interconnect to handle data ingestion, orchestration for AI and cloud traffic in a leaf-spine network topology. Arista’s flagship Etherlink and EOS are key hallmarks of scale-out networking with a wide breadth and depth of network protocol support. Introduced in 2024, Arista’s Etherlink portfolio is now 20-plus products with the most comprehensive and complete solution in the industry, especially for scale-out back-end and scale-out front-end networking…

…What is crystal clear to us and our customers is that Arista continues to be the premier and preferred AI networking platform of choice for all flavors of AI accelerators…

…Scale-up is a new and unique requirement, and it is particularly going to come in as people start building more and more AI racks, right? So when you’re building an AI rack and you want to boost the radix and performance of an individual rack or cluster, and your XPU radix gets bigger and bigger, you often need a very simple interconnect, right? This interconnect in the past has been PCI Express and CXL, and now you’re seeing a lot of NVIDIA NVLink, where you can really collapse your system board and XPU socket into an I/O. It’s almost not a network, it’s an I/O. It’s a back-end to a back-end, if I can call it that, right? And so scale-up networks will be an incremental new market as Arista pursues it…

…[Question] You talked about Scale-Up Ethernet to be incremental to your TAM. Curious if you have any sense how big this TAM is in 3 years.

[Answer] I don’t know yet. In terms of port density, in terms of units, if I look at the ratio within a rack versus outside, in units, it’s quite high, 8:1, 10:1. But in terms of dollars, I don’t think it’s nearly as much, because the level of functionality required is much simpler. So how about we hold that question for September, when we’ll know more?…

…Arista is the premier scale-out spine platform. The 7800 spine, our AI spine is a really flagship franchise platform. It takes advantage of all of the virtual output queuing, the congestion control, the peripheral queuing, the buffering, et cetera, in a way that nobody else in the industry has been able to demonstrate. And oh, by the way, besides being a great AI spine, it’s also a great routing platform for the WAN.

Poor networks lead to inefficient usage of GPUs; good networking is critical when building GPU clusters because 30%-50% of processing time is spent exchanging data between GPUs over the network

Poor networks and bottlenecks lead to idle cycles on GPUs, wasting both capital GPU costs and operational expenses such as power and cooling. With 30% to 50% of processing time spent exchanging data over networks between GPUs, the economic impact of building an efficient GPU cluster with good networking, which improves utilization, is paramount.
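
A minimal sketch of why that 30%-50% communication-time figure dominates cluster economics: if the communication portion of each training step is sped up by a faster network while compute time is unchanged, the same GPUs do proportionally more work. The "2x faster network" assumption below is illustrative, not an Arista benchmark.

```python
# Illustrative model: a training step = compute time + communication
# time. Speeding up only the communication portion raises effective
# throughput of the same GPU fleet. Figures are assumptions.

def throughput_gain(comm_fraction, network_speedup):
    """Relative throughput gain if the communication fraction of each
    step is sped up by network_speedup (compute time unchanged)."""
    compute = 1.0 - comm_fraction
    new_step = compute + comm_fraction / network_speedup
    return 1.0 / new_step

# 40% of step time in data exchange, network made 2x faster:
gain = throughput_gain(0.40, 2.0)
print(f"{gain:.2f}x effective throughput")  # 1 / (0.6 + 0.2) = 1.25x
```

In other words, under these assumptions the cluster delivers 25% more work from the same capital and power spend, which is the economic argument the paragraph is making.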

Arista Networks’ management expects back-end and front-end networks in AI data centers to converge as LLMs (large language models) expand into distributed training and inference, making it increasingly difficult to differentiate between back-end and front-end networks 

As large language models continue to expand into distributed training and inference use cases, we expect to see the back-end and the front-end converge and coalesce more together. This will make it increasingly difficult to parse the back-end and the front-end precisely in the future.

Most AI accelerators today are NVIDIA GPUs, but Arista Networks is entering early pilots with alternate AI accelerators including those from hyperscalers, AMD, and startups

While the majority today is NVIDIA GPUs, we are entering early pilots connecting with alternate AI accelerators, including start-up XPUs, the AMD MI series, and AI Titan customers who are building their own XPUs.

Arista Networks’ management is seeing enterprises and neoclouds increasingly adopt AI; one of Arista Networks’ neocloud customers is a sovereign AI working with a non-NVIDIA cluster; Arista Networks’ neocloud customers almost always adopt the company’s products for both front-end and back-end deployments

As we continue to progress with our four top AI Titan customers, AI is also spreading its wings into the enterprise and Neocloud sectors, and we have won approximately 25 to 30 customers to date…

…In fact, one of the Neoclouds is a sovereign AI, which is a non-NVIDIA cluster that they’re working with right now that may factor in 2026…

…In terms of Neoclouds, almost always, the Neocloud is a combination of back and front. It’s never one or just the other, but definitely, the Neoclouds also have a back-end component.

Arista Networks’ management sees the rise of AI agents straining LAN and WAN traffic patterns

The rise in Agentic AI ensures any-to-any conversations with bidirectional bandwidth utilization. Such AI agents are pushing the envelope of LAN and WAN traffic patterns in the enterprise.

Arista Networks’ management is seeing a more balanced deployment of both cloud and AI now as compared to 2-3 years ago when there was raging excitement over just AI

If you recall 2, 3 years ago, maybe it’s hard to remember all of that, I was actually very worried that the cloud spending had frozen a little bit, and all of the excitement and enthusiasm was going towards GPU and how big is your GPU cluster, that kind of thing. We now see it coming back and the pendulum swinging into a more balanced deployment of both cloud and AI.

Arista Networks’ management continues to see very different data-traffic patterns between traditional cloud and AI

As a result of all these AI deployments, as I’ve often said, the traffic patterns of cloud and AI are very different. The diversity of the flows, the distribution of the flows, the fidelity of the flows, the duration, the size and intensity…

Arista Networks is progressing well with its 4 major AI customers; 2 of the customers are quickly-approaching 100,000 GPUs; 1 customer may reach 100,000 GPUs soon; the last customer will take more time to reach 100,000 GPUs; management is no longer just thinking about the number of GPUs with the AI projects of the 4 major AI customers; management expects all 4 of the major AI customers to adopt Arista Networks’ products for back-end deployments in 2026

I think two of our customers have already approached or are going to quickly approach 100,000 GPUs. But I don’t think it’s any more about just how big; we used to talk about 1 million GPUs and all that. Increasingly, what we are seeing is more and more distributed GPU clusters for training and inference. And so two customers have reached that goal. The third one might reach that goal. The fourth one that I said we just begin with is probably too early to reach that 100,000. That’s probably a goal for next year. So that’s the composition. Two are strong, one is medium and the other still does…

…I won’t measure it anymore just on number of GPUs. I think there’s a lot more to do with locality, distribution, radix and also choice of multi-tenants, optimizations, collective libraries, level of resilience, et cetera. So we’re seeing a lot more complexity run into this than straight number of GPUs…

…[Question] You noted you are seeing good activity with the top 4 hyperscalers. While you indicated that your back-end revenue this year will be primarily driven by two of them, would you expect that all four cloud providers would adopt Arista switches for back-end deployments in 2026?

[Answer] The short answer would be yes. We’ve got some work to do, but the answer is absolutely. All four of them — two of them already have large deployments and the other two will be deployed in the back end. It will also fuel the front end.

ASML (NASDAQ: ASML)

ASML’s management still sees AI (artificial intelligence) as the key growth driver for ASML in 2025, but sees rising uncertainty for 2026, even though the company is preparing for growth

Artificial intelligence is currently the main driver for growth for both Logic and Memory. If we look at Logic, we expect Logic to grow compared to 2024 because our customers are adding capacity in the most advanced nodes. Memory remains very strong because there also our customers are investing in their latest HBM and DDR5 products…

…Going into 2026, the fundamentals of our AI customers remain strong and we are still preparing for growth. However, as we discussed last time, the level of uncertainty is increasing, mostly due to macroeconomic and geopolitical considerations. And that includes, of course, tariffs…

…As we look ahead to 2026, we continue to see strong demand related to AI for both Logic and Memory, and we see the positive impact of a growing number of EUV layers. On the other hand, as we said before, customers are facing increasing uncertainties based on macroeconomic and geopolitical developments. Further, some customers are navigating specific challenges that might affect the timing of their capital expenditure. Against this backdrop, while we are still preparing for growth in 2026, we cannot confirm it at this stage.

ASML’s management is seeing more DRAM customers shifting towards EUV and having more EUV layers in the latest and future nodes, because of AI

Obviously AI is largely driving the latest nodes, both on Logic and on DRAM. And of course, that is a big driver for EUV. Because EUV is more and more significant on those leading nodes. For instance, if you look at DRAM, we do see that customers are more and more shifting towards EUV and have more and more layers on the latest nodes, but also on future nodes for DRAM. So that’s, of course, a positive for EUV…

…What is very positive about the last few months is we see basically this increased adoption of EUV happening, I think, especially with DRAM customer. The trend, I think, will be sustained. That’s what our customers tell us. So we see on the latest node quite a jump on EUV layer for some of the customer. And the DRAM road map, the technology road map is so complex that EUV more and more is seen basically as a way to simplify a bit the process flow and to get to the performance needed faster. So if we look at, I would say, the next 3, 4, 5 nodes, and that includes Four-Square by the way, we see a very positive trend with our DRAM customer. And I think we were foreseeing that last year, and we now have many confirmation points of that.

ASML’s management sees strong growth for the semiconductor market in the long-term, driven by AI, although there are some short-term uncertainties; management thinks the shift of ASML’s customers towards advanced Logic and Memory chips will drive demand for advanced lithography; management thinks ASML’s EUV roadmap will enable the company to convert more multi-patterning layers to single exposure in the next few years

I think long term, the semiconductor market remains very strong. And I think a lot of people say that AI is really a great opportunity. We have seen again the fundamentals around AI to be very, very strong. Now, of course, short term, Roger talked about it. Some uncertainty, there’s a lot happening, discussion around tariffs, export control, macroeconomic uncertainties…

…The shift of our customers towards more advanced Logic, advanced Memory will also drive the need for more advanced lithography. This will basically be a good thing for litho intensity. The progress we make on our EUV roadmap with Low NA, High NA, providing the right cost of technology, will continue to allow us basically to convert more multi-patterning layers into single exposure. And we will see that happening in the course of the next few years

Cloudflare (NYSE: NET)

A rapidly-growing AI company moved all of its inference workloads from a hyperscaler to Cloudflare’s platform, choosing Cloudflare as its only inference cloud platform

A rapidly growing AI company expanded their relationship with Cloudflare, signing a 1-year $15 million pool of funds contract for Workers AI. This is the third contract signed with this customer in the last year as they moved all of their inference workloads from a hyperscaler over to make Cloudflare their single inference cloud platform. The continued expansion with this customer demonstrates not only the tremendous value they realized from the Cloudflare platform, but also the truly unmatched scalability, efficiency and speed of Workers AI. Cloudflare is increasingly the platform the most innovative companies are choosing to power the future of AI.

A rapidly-growing AI company signed a 5-year deal with Cloudflare for a number of products that will help the AI company enhance its security posture at scale

A rapidly growing AI company signed a 5-year $4.6 million contract for AI Gateway, Magic Firewall, Magic Transit and application services. As a highly technical company, this customer turned to Cloudflare as a strategic partner to enable accelerated innovation, provide enhanced security, improve performance and offer unmatched scale with our globally distributed connectivity cloud. This contract is just the beginning with this customer. They’re already kicking the tires on our firewall for AI product.

Cloudflare’s management sees publishers as having 2 key business models from the traditional internet, namely, subscriptions and advertising; management is seeing the rise of AI leading to a dramatic decline in online traffic to publishers; it has become 10x harder to get traffic from Google over the past 10 years; pure AI companies can be up to 30,000x harder for publishers to get traffic from as compared to the Google of old; management thinks the AI-driven internet will kill the subscriptions and advertising business models of yore; management thinks Cloudflare is in a unique position to establish a new business model for the internet because 20% of internet traffic runs through Cloudflare and 80% of leading AI companies are familiar with or users of Cloudflare; Cloudflare has signed deals with many leading publishers to enable publishers to charge AI companies for content; the deals Cloudflare have signed are small but management sees them as highly strategic; management thinks the same rails Cloudflare has built to power payments from AI companies to publishers can also be used to power transactions between AI agents; management is very bullish on the opportunity to help publishers empower agentic transactions; management thinks it’s too early to tell exactly what kind of business models will emerge from an agentic internet; management has been surprised at the positive reaction from AI companies to Cloudflare’s new business to empower transactions between publishers and AI companies

Historically, publishers online have made money primarily in two ways: subscriptions or ads. In either case, the key was generating traffic. In the past, one of the most effective ways to do that was through search. Over the last 25 years, publishers allowed Google and other search engines to copy their content in exchange for sending them traffic. But recently, that traffic has been falling dramatically. Based on the data that Cloudflare has observed, it’s nearly 10x harder to get traffic from Google than it was just 10 years ago. What’s changed? The interface of the web is switching from search to AI. Even at Google, which has represented the dominant interface for discovering the web, most searches now include an AI overview, which Pew Research has found significantly decreases the likelihood of someone clicking on a link and reading original content. Pew’s data aligns exactly with what we’ve observed based on our customers’ traffic. It’s even worse with pure AI companies. Every AI company we’ve tracked is worse than the Google of old, with some being as much as 30,000x harder to get traffic from. As the interface of the web switches from search to AI, it’s clear more people will read derivatives of content rather than the original content itself. That means the new AI-driven web will kill the old web’s business model.

Cloudflare is in a unique position to help. More than 20% of the web sits behind us today. But maybe as importantly, around 80% of the leading AI companies know and use us. So in Q2, we partnered with the who’s who of the publishing world from the Associated Press to Ziff Davis and nearly everyone else in between to help invent the new business model for content creators on an AI-driven web. The deals we are signing with these companies aren’t high dollars, but they are highly strategic. The response has been incredibly positive from publishers for sure, but also from the majority of AI companies who understand that original content is the fuel that powers their engines. When seismic shifts happen in ecosystems as important as the web, new business models inherently emerge. We believe we are uniquely positioned to power the business model of content creation in the coming AI-driven web, but the opportunity may actually be much larger than that.

The same rails that we are building to power payments from AI companies to publishers, we believe will be used to facilitate transactions between AI agents, whatever they happen to be doing for you online. The fact that we sit in front of so much of the web and that more than half of our dynamic traffic is already between APIs means that we are strategically positioned to deliver the agentic web of the future. For those of you who have been following us for a while, you know that we talk about our product areas in terms of acts. Act 1 are our reverse proxy products, WAF, DDoS mitigation, et cetera. Act 2 are our forward proxy products, Zero Trust, VPN, network firewall. Act 3 are our Workers developer tools. What we are doing to help publishers empower agentic transactions is a big enough deal to us that we’ve begun to refer to it internally as Act 4…

…[Question] I wanted to dig into like the business model for the Agentic Web. And maybe, Matthew, you could give us a little bit more color and visibility on what that means in reality. What are the business models that you’re looking to enable for your customers?

[Answer] I don’t think we know exactly the answer to that. And my hunch is that there will be a number of different models that emerge and over time, consolidate. The analogy I’ve been thinking about, at the risk of hubris: when Apple rolled out $0.99 a song, that was a key turning point in the music industry, but it wasn’t the ultimate model that we ended up with. We came closer to something that was $10 a month with Spotify. And so I think that this is going to go through a number of different stages and iterations. And you could imagine something that is a fraction of $0.01 per transaction. You could imagine different sites charging different things. You could imagine sites that charge agents more or sites that actually discount for agents that are there…

…I wasn’t surprised that publishers were excited about what we were doing. And we literally haven’t encountered a publisher that wasn’t 100% all in on what we were proposing. And it’s been amazing to build those relationships. I was surprised by the reaction from the AI companies. I thought that they would kick and scream quite a bit more than they did. And quite the opposite. I think they all understand fundamentally that content, original content, valuable content is the fuel that runs their engines.

Cloudflare’s management thinks it’s important that all AI companies should have a level playing field in being able to get content

The key point, though, and I think this is what is the most important work that we have to do. The key point is that there needs to be a level playing field. It can’t be that one company has a unique advantage in getting content where others don’t. And so what we are now really working on is making sure that as we figure out what the market looks like going forward for this, that it is a level playing field, that new start-ups have an opportunity to exist that just because you’re a legacy provider doesn’t give you some unique access to content that others don’t have, that there’s a way to make sure that if you’re small, you pay less and if you’re big, you pay more.

The large AI foundation model builders use Cloudflare in 2 important ways, namely, for security, and to run inference closer to the edge; Cloudflare is not the right platform for foundation model builders to run massive models at the edge, but it is a great platform to run smaller models; management is investing to improve Cloudflare’s ability to support larger and larger models

Our best estimate is that about 80% of the major AI companies are Cloudflare customers today. And they use us across a couple of different services, and I’ll highlight two. So the first is security. The challenge if you put up a foundational model is every time that somebody runs a request against that model, it has real cost to you and it’s measured in not fractions of pennies, but often in pennies. And so someone could find a way to run requests against your model at a very high volume, or in a way that you can’t control, or in a way that is automated and not actually what your subscriber is doing, or they could find a way to do things like launder credit cards: the credits and the tokens on these AI models now act almost as a currency that allows people to take stolen credit cards and turn them into effectively cash. All of those are unique security threats that make Cloudflare just a great partner for those AI companies that we can sit in front of. That, I think, is where most of them start with us…

…Because of the fact that we have deployed GPUs across our entire network and made it so that we can do inference as close as possible to their users as we are all going from seeing these ChatGPT-like systems as miracles and starting to take them for granted, there’s a real need for them to get the best performance as possible. And one of the most effective ways of doing that is moving the inference closer to where the user is. At the same time, increasingly, as we see regulations spring up around the world, targeting AI companies, they need to keep the inference tasks as close to users as possible to meet those regulatory needs. And so Cloudflare Workers AI gives them the ability to run inference tasks as close as possible to users. We would not be today the right place for one of the really massive LLMs to run because those, in many cases, will require multiple different machines working in coordination. It is a more complicated task. But for smaller models, we’re finding that Cloudflare is the best place for anyone who’s building that to run that. And over time, we are investing in making our systems able to support larger and larger and larger models.

Coupang (NYSE: CPNG)

Coupang’s management is excited by the potential of automation and AI in helping Coupang improve its customer experience and operational capabilities; management is using AI for personalised customer recommendations, dynamic pricing, inventory forecasting, route optimization and more; management sees AI as a long-term enabler of both topline growth and margin expansion for Coupang; Coupang has started using AI for software development and in early results, up to 50% of new code is written by AI; management expects Coupang’s operations to be improved in the future partly through humanoid robots

We’re also excited by the potential of automation and AI to accelerate our efforts to innovate around the customer experience and drive operational excellence. As we invest further into these capabilities, we see significant opportunities to enhance service levels while simultaneously achieving meaningful cost savings…

…AI has been core to our operations and strategy for years. We’ve leveraged these technologies to improve nearly every aspect of our customer experience and operations from personalized recommendations, dynamic pricing, inventory forecasting, route optimization to name a few. Those applications and that integration has directly contributed to the results that you’ve seen over the last few quarters and years around customer engagement and improved operational efficiency.

Looking ahead, we see AI as a long-term enabler of both top line growth and margin expansion, especially with generative AI and large language models, our focus remains on practical high-impact applications, practical applications that scale with our core offerings and enable us to deliver meaningful gains in customer experience and productivity. One example where we’re seeing immediate impact is around software development, where in our early implementations, while still early, we’re seeing up to 50% of the new code written by AI. We also expect AI to have a transformative impact on our operations over time through enhanced automation and humanoid robotics, among other things.

Coupang has been building its own AI computing infrastructure for some time for its own internal needs; the investments Coupang has been making for computing infrastructure is still relatively small; management is currently running small-scale tests on providing 3rd-party enterprises with access to the AI computing infrastructure that Coupang has built for internal use 

I think I should note that we’ve been developing our own AI computing infrastructure to service our internal needs for some time now. In addition to the capacity that we source from external providers, the bulk of the investment today, and it’s relatively small, is dedicated to building out that internal capability for higher performance and cost savings. We’re also exploring the potential to provide access to that technology and service that we’re developing internally to external enterprise customers as a test-and-learn initiative, and that’s being done on a very small scale.

Datadog (NASDAQ: DDOG)

Datadog’s management is seeing strong growth in Datadog’s AI-native cohort, with meaningful growth in number of AI-native customers, driven by rapid usage growth in their products; there was consistent and steady usage growth in the rest of Datadog’s business

Overall, we saw trends for usage growth from existing customers in Q2 that were higher than our expectations. We experienced strong growth in our AI native cohort. The number of AI native customers is growing meaningfully with us as they see rapid usage growth with their products. Meanwhile, we saw consistent and steady usage growth in the rest of the business.

Datadog’s management has a recent AI-powered innovation in security known as Bits AI Security Analyst; Datadog’s security products can cover new AI attack vectors across the application, model, and data layers

Our security products cover new AI attack vectors across the application, model and data layers. At the AI data layer, Sensitive Data Scanner can now prevent the leakage of sensitive data and training data as well as LLM prompts and responses. At the model layer, we help secure against supply chain attacks in open source models and prevent model hijacking attacks. At the application layer, we help prevent prompt injection attacks and data poisoning in run time.

Datadog’s management has launched fully autonomous AI agents for investigating alerts and coordinating incident response, coding assistance, and triaging SIEM signals; management has launched a Datadog MCP (model context protocol) server to allow 3rd-party AI agents to interface with Datadog’s platform; management thinks Datadog’s AI agents work really well; management is busy trying to ship the AI agents to as many customers as they can and the initial response to the AI agents has been pretty positive 

We launched fully autonomous AI agents, including Bits AI SRE Agent to investigate alerts and coordinate incident response, Bits AI Dev Agent, an AI-powered coding assistant to proactively fix production issues and Bits AI Security Analyst to triage Datadog Cloud SIEM signals. To further accelerate our users’ incident response, we announced AI Voice Agent for incident response, so users can quickly get up to speed and start taking action on their phones…

…We launched a Datadog MCP server to enable AI agents to access telemetry from Datadog and to act as a bridge between Datadog and MCP compatible AI agents like OpenAI Codex, Cursor and Claude Code by Anthropic. We work together with OpenAI to integrate our MCP server within the OpenAI Codex CLI, and the Datadog Cursor extension now gives developers access to Datadog tools and observability data directly within the Cursor IDE…

…The AI actually works surprisingly well… Right now, we’re busy basically shipping it to as many customers as we can and enabling the customers with it, and that’s a big area of focus in the business as well… The initial response is very positive. We’ve had customers purchase it pretty quickly in their trials, and so we feel very good about it.

Datadog now has end-to-end AI and data observability capabilities, such as (1) GPU Monitoring for visibility into GPU fleets across, cloud, on-prem, and GPU-as-a-service platforms, (2) LLM Observability Experiments for understanding how changes to prompts, models or AI providers influence application outcomes, and (3) Agentic Flows Visualization to understand AI agents’ decision paths

We showcased our new end-to-end AI and data observability capabilities. Engineers and machine learning teams can use GPU Monitoring to gain visibility into GPU fleets across cloud, on-prem and GPU-as-a-service platforms such as CoreWeave and Lambda Labs. With AI Agent Console, enterprises can monitor the behavior and interactions of any AI agent used by their teams. We now offer LLM Observability Experiments to help understand how changes to prompts, models or AI providers influence application outcomes. We added a new Agentic Flows Visualization to LLM Observability to capture and understand the decision paths of AI agents. And last but not least, accelerated by our recent acquisition of Metaplane, Datadog now offers a complete approach to data observability across the entire data life cycle from iteration to transformation to downstream usage.

Datadog’s management continues to believe that digital transformation, cloud migration, and AI adoption are long-term growth drivers of Datadog’s business; management thinks AI is a tailwind for Datadog because increased cloud consumption drives more usage of Datadog; Datadog has hundreds of AI-native customers, including 8 of the top 10 leading AI companies; of Datadog’s AI-native customers, more than a dozen are spending over $1 million per year with Datadog while more than 80 are spending more than $100,000 per year; management continues to see rising customer interest for next-gen AI observability and analysis; 4,500 Datadog customers at the end of 2025 Q2 used 1 or more Datadog AI integrations (was 4,000 in 2025 Q1); management thinks next-gen AI introduces new complexity and new observability challenges; management is incorporating AI into the Datadog platform to deliver more value to customers; Datadog has a large volume of rich, clean, and detailed data; Datadog’s access to data has enabled management to build Toto, Datadog’s foundational model for time series forecasting which shows state-of-the-art performance on all benchmarks; management believes that the growth of Datadog’s AI-native customers is an indication of future opportunity when AI is adopted more broadly; management thinks time series forecasting, the domain of Toto, has very wide applicability, which is a great sign of things to come for Datadog’s efforts in AI

There is no change to our overall view that digital transformation and cloud migration are long-term secular growth drivers of our business. As we think about AI, we are incredibly excited about our opportunities.

First, AI is a tailwind for Datadog as increased cloud consumption drives more usage of our platform. Today, we see this primarily in our AI native group of customers who are monitoring their cloud-native applications with us. There are hundreds of customers in this group. They include more than a dozen that are spending over $1 million a year with us and more than 80 who are spending more than $100,000, and they include 8 of the top 10 leading AI companies…

…We continue to see rising customer interest for next-gen AI observability and analysis. Today, over 4,500 customers use one or more Datadog AI integrations.

Second, next-gen AI introduces new complexity and new observability challenges. Our AI observability products help our customers gain visibility and deploy with confidence across their entire AI stack, including GPU Monitoring, LLM Observability, AI Agent Observability and Data Observability…

…Third, we are incorporating AI into the Datadog platform to deliver more value to our customers. As I discussed earlier, we launched Bits AI SRE Agent, Dev Agent and Security Agent. We are seeing very good results with those with more improvements and new capabilities to come.

Finally, as a SaaS platform focused on our customers’ critical workflows, we have a large volume of rich, clean and detailed data, which allows us to conduct groundbreaking research. A great example of that is Toto, our foundational model for time series forecasting, which shows state-of-the-art performance on all benchmarks, even going well beyond specialized observability use cases…

…We believe that the growth of this AI native customer group is an indication of the opportunity to come as AI is adopted more broadly and customers outside the AI native group begin to operate AI workloads in production…

…We got fantastic results in our first release. The research output is really a state-of-the-art model that beats every single other model in a category that has seen quite a bit of action over the years. Time series forecasting has very wide applicability in a lot of different domains. So I think it shows that we can perform at the highest level there, and I think it’s a great sign of things to come in terms of AI automation and AI agents.

AI-native customers accounted for 11% of Datadog’s revenue in 2025 Q2 (was 8.5% in 2025 Q1); AI-native customers contributed 10 percentage points to Datadog’s year-on-year growth in 2025 Q2, compared to 2 percentage points in 2024 Q2; Datadog has revenue concentration in its cohort of AI-native customers, but even excluding its largest AI-native customer (which should be OpenAI), year-on-year revenue growth in 2025 Q2 was stable relative to 2025 Q1; management thinks AI-native customers will continue to optimise cloud and observability usage in the future; the margins on Datadog’s contracts with AI-native customers are the same as with non-AI native customers that operate with the same volume, as the margins are determined by volume; management is unable to tell when the optimisations by AI-native customers will happen, if they even do

We saw a continued rise in contribution from AI native customers in the quarter who represented about 11% of Q2 revenues, up from 8% of revenues in the last quarter and about 4% of revenues in the year ago quarter. The AI native customers contributed about 10 points of year-over-year revenue growth in Q2 versus about 6 points last quarter and about 2 points in the year ago quarter…

…We do see revenue concentration in this cohort in recent quarters. But if we look at our revenue without the largest customer in the AI native cohort, our year-over-year revenue growth in Q2 was stable relative to Q1. We remain mindful that we may see volatility in our revenue growth on the backdrop of long-term volume growth from this cohort as customers renew with us on different terms and as they may choose to optimize cloud and observability usage over time…

…This isn’t about the AI and margins, the AI cohort versus non-AI cohorts. We price based on volume and on term. So to the extent you would have an AI customer who’s doing much the same things as our other customers in the use of the product, has similar volumes and similar terms to the non-AI, it would be similar margins…

…[Question] There’s obviously been a lot of talk about AI natives around the business. I know you’ve talked about the potential for optimization for several quarters, but we continue to see really strong growth in that segment. So if you were to see optimization, when would you expect that to happen?

[Answer] If I knew when it was going to happen, I would tell you. The nature of our customers is they grow, they have their own businesses to run. They have their own constraints. We’re here to help them deliver their services, and that’s what we work on every single day. Now every now and then, there’s a renegotiation, a renewal on occasions for customers to figure out what they need to optimize and what they need to do for the future. But we never know whether it’s going to happen this quarter, next quarter, in three quarters next year, never.
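The cohort arithmetic in the numbers above is worth making explicit. A minimal sketch, using an assumed ~28% total revenue growth rate alongside the quoted shares (about 4% of revenue a year ago, about 11% now), shows how a cohort’s revenue share translates into its contribution to year-over-year growth in percentage points:

```python
# Hypothetical illustration (the 28% growth rate and the indexing to 100
# are assumptions for the sketch, not Datadog's actual dollar figures) of
# how a cohort's revenue share maps to its growth contribution in points.

def growth_contribution(cohort_share_prior, cohort_share_now,
                        total_growth_rate):
    """Points of year-over-year growth contributed by a cohort.

    cohort_share_prior: cohort's share of total revenue a year ago (0-1)
    cohort_share_now:   cohort's share of total revenue now (0-1)
    total_growth_rate:  total revenue growth over the year (0-1)
    """
    total_prior = 100.0                        # index prior-year revenue to 100
    total_now = total_prior * (1 + total_growth_rate)
    cohort_prior = cohort_share_prior * total_prior
    cohort_now = cohort_share_now * total_now
    # Growth contribution = cohort's revenue change over prior total revenue
    return (cohort_now - cohort_prior) / total_prior * 100

# AI-native cohort: ~4% of revenue a year ago, ~11% now, with total
# revenue growing ~28% year-over-year.
points = growth_contribution(0.04, 0.11, 0.28)
print(round(points, 1))  # → 10.1, i.e. about 10 points of growth
```

With those inputs the cohort contributes roughly 10 points of growth, consistent with management’s figure; the exact contribution depends on the precise dollar amounts, which are not disclosed.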

Datadog’s management sees two layers to the AI opportunity, where the first layer is composed largely of AI inference and applications that are built on largely traditional compute, and where the second layer relates to a new opportunity for observability in understanding how non-deterministic code and AI-written code is working in production; management thinks the second layer largely consists of the AI-native companies today, but the rest of the market will be going there in the future

On the AI opportunity, so there’s really multiple layers to it. The first layer is largely what we see today, which is, companies that are running their inference stack and the application around it, in cloud environments. So that’s the case of the model makers or if you think of the companies that are doing coding agents, things like that. That is what we see today, and it looks a lot like normal compute. So you have normal machine CPUs, some GPUs, quite a few other components, databases, web servers, things like that. So that’s the bulk of what we see today. And there’s going to be more of it as the AI applications come into production. There are more specialized inference workloads and even training workloads in some situations that rely on instrumenting GPUs. And for that, we have a new product out there that does GPU monitoring that we announced at DASH. But all that I would call the infrastructure layer of AI.

Then on top of that, there are new problems in terms of understanding what the applications themselves are doing, because the applications largely aren’t deterministic anymore. They either are run by a model that is nondeterministic by nature or they run on code that was not as carefully written as it used to be. It’s not completely written by humans, but largely written by AI agents, and as a result, you also need to spend a lot more time understanding how that code is working, and that largely happens in production. So that’s a brand-new area of observability, which is how you deal with applications that have not been completely defined in development and that have to be evaluated in production. And what we think is the whole market is going there, not just the AI natives. The AI natives are definitely doing that today, with applications running on models and code that has been largely written by agents, but the rest of the market is going there, and the best proof point you see of that is the very, very broad adoption today, both of the API-gated AI models and of the coding agents, which you see in every single large enterprise today.

Datadog’s management is seeing lesser need to grow headcount in engineering because of the use of AI tools, but there’s still need to grow headcount in sales

[Question] Many CEOs are either holding headcount flat or down. We’ve seen Meta headcount down from 2 years ago, Microsoft headcount flat, others — Palantir saying they’re going to shrink headcount and 10x revenue. Do you believe you can become more efficient with fewer? Or do you think that, that model doesn’t apply that you’re seeing with other software companies?

[Answer] The spend is shifting a little bit on the engineering side. As I said, we consume more AI training and inference compute, and so that’s definitely changing a bit of the balance between what you have humans do and what you offload to GPUs. That being said, we’re still completely constrained by the amount of product we can put out there. There’s a ton of opportunity in every single direction we look, whether that’s in AI automation, on the security side, or in new areas like better observability or experimentation that we’re going after, and so for us, there’s very strong ROI in the adds that we’re making at the moment.

Mastercard (NYSE: MA)

Mastercard’s management is seeing fraudsters use artificial intelligence to attempt mischief, while Mastercard is also strengthening cybersecurity for its clients with artificial intelligence; Mastercard’s AI-powered Decision Intelligence Pro solution leverages data from across the internet to predict fraud; customers are happy to pay for Decision Intelligence Pro

On the cybersecurity side, the stakes are getting higher and higher. The fraudsters are using the latest technology, artificial intelligence and generative AI, to power their solutions to break through on the fraud side and the cybersecurity side, and we’re doing exactly the same. So I mentioned this in previous calls: Decision Intelligence Pro is leveraging data out of all sources of the Internet, putting it through a generative AI engine to predict fraud. Instead of preventing fraud, we’re going to predicting fraud, which is the latest stage of this kind of game, and this is clear, identifiable value for our customers, who are very happy to pay for it.

Mastercard closed the Recorded Future acquisition in 2024 Q4 (Recorded Future provides AI-powered solutions for real-time visibility into potential threats related to fraud); Recorded Future is the world’s largest threat intelligence company; Recorded Future has 1,900 customers in 75 countries, and its customers include Fortune 100 companies and governments; it’s still early days, but Mastercard is already putting out more products with Recorded Future; the combination of Recorded Future and Mastercard’s huge troves of data is the magic sauce; Recorded Future is identifying where the threat vectors are so that customers can be more targeted in their response, and this is a winning proposition for customers

On Recorded Future, if I can just remind everybody, thank you for the question, Tien-Tsin. So world’s largest threat intelligence company, 1,900 customers, 75 countries, so very significant. You see a lot of Fortune 100 companies in there as well as G20 governments…

…We’ve hit the ground running. It is still very early days, obviously, but we’re already putting out more products with them. Malware Intelligence is one that I called out in the last quarter around this. The beauty here is, they have a lot of data, which they get from all sources of the internet, as I mentioned earlier. At the same time, we have a lot of data. The combination of that is the magic sauce here…

…What Recorded Future, what Mastercard is now helping our customer with is identifying where the threat vectors actually are. So you can be much more targeted in your response. That is, first of all, more effective from reducing cybersecurity risk. At the same time, it’s more effective from a cost perspective. So that’s a really winning proposition.

MercadoLibre (NASDAQ: MELI)

MercadoLibre’s management has introduced an AI-powered search experience in e-commerce that includes infinite scroll, and it has increased navigation time in key categories

At the same time, our new AI-powered search experience – with infinite scroll – is increasing navigation time in key categories where we expect these shipping enhancements to have the greatest impact.  

MercadoLibre’s product ads is performing well across the board for MercadoLibre on the back of improved UX and tools for sellers, including an AI-powered budget recommendation tool

Product ads is performing well across countries and sites and not only in Argentina, and this is on the back of improved UX and tools for sellers, such as a new question flow focused on benefits of advertising, smarter item selection, improved budget recommendation using AI and some of the things that I was mentioning before.

MercadoLibre’s management has integrated MercadoLibre’s AI platform, Verdi, into the CRM tool of the Acquiring business, leading to faster activation and higher TPV per new merchant; Verdi has also been used to support online payments merchants with their technical integrations with Mercado Pago and to assist instore merchants facing device issues

Operating efficiently remains a top priority. We have integrated our AI platform, Verdi, into our CRM tool to enhance the productivity of our commercial teams, resulting in faster activation and higher TPV per new merchant. We have also deployed Verdi to support online payments merchants with their technical integrations with Mercado Pago and to assist instore merchants facing device issues. This has enabled more autonomous problem resolution and significantly reduced the number of device replacements. 

 MercadoLibre’s management thinks there is a lot of opportunity for AI to help MercadoLibre improve its marketing execution and advertising spend; management sees AI giving MercadoLibre the opportunity to produce multiple creatives for any given campaign; management is using AI to better onboard its advertising customers onto its technology stack

We definitely think there is huge room for AI to help us improve both our marketing execution and our ad spend as well. So on the marketing side, I think there are many, many dimensions in which we are testing and learning about AI. Just to bring one example out there: when we think about branding and creatives, AI brings us the opportunity to produce multiple creatives for any given campaign and start testing and learning with those creatives across the board, and with that, deciding who we want to show what in the online world. That’s something we are already proving out, producing content online…

…We are using AI today to help our sellers better understand our ad stack, get onboarded onto our ad technology, optimize their bidding and so on.

Meta Platforms (NASDAQ: META)

Meta’s management has seen glimpses of AI systems improving themselves; management thinks artificial super intelligence (ASI) is now in sight; management is optimistic about ASI advancing economies and science, but management’s vision is to bring ASI to everyone to enable creativity and culture to flourish; Meta’s new Meta Superintelligence Labs consists of some of its existing AI teams and a new lab building the next generation of models; Alexandr Wang, Nat Friedman, and Shengjia Zhao will be the important leaders of Meta Superintelligence Labs; management thinks people are excited to join Meta Superintelligence Labs because the company has the ingredients required to build leading models and deliver them to billions of people; management believes that ASI will improve every aspect of Meta’s business; management has seen that the most aggressive predictions for AI timelines have been the most accurate ones; some teams in Meta have used Llama 4 to build autonomous AI agents to improve Facebook’s algorithm in a small way; management is telling the entire company to take ASI seriously; management thinks Meta is the best company in the world at building world class technology and distributing it to billions of people; it appears that Meta will be training its ASI models differently from current frontier AI models; the team-dynamics in Meta Superintelligence Labs will be different from Meta’s core AI team; management expects to continue open-sourcing Meta’s AI models, although not everything will be open-sourced, and ASI may have safety concerns related to open-sourcing

Over the last few months, we’ve begun to see glimpses of our AI systems improving themselves. The improvement is slow for now, but undeniable, and developing superintelligence, which we define as AI that surpasses human intelligence in every way, is, we think, now in sight. Meta’s vision is to bring personal superintelligence to everyone, so that people can direct it towards what they value in their own lives. And we believe that this has the potential to begin an exciting new era of individual empowerment. A lot has been written about all the economic and scientific advances that superintelligence can bring, and I’m extremely optimistic about this. But I think that if history is a guide, then an even more important role will be how superintelligence empowers people to be more creative, develop culture and communities, connect with each other and lead more fulfilling lives…

…We’ve established Meta Superintelligence Labs, which includes our foundations, product and FAIR teams as well as a new lab that is focused on developing the next generation of our models…

…We are building an elite, talent-dense team. Alexandr Wang is leading the overall team, Nat Friedman is leading our AI products and applied research, and Shengjia Zhao is Chief Scientist for the new effort. They are all incredibly talented leaders, and I’m excited to work closely with them and the world-class group of AI researchers and infrastructure and data engineers that we’re assembling…

…The reason that so many people are excited to join is because Meta has all of the ingredients that are required to build leading models and deliver them to billions of people. The people who are joining us are going to have access to unparalleled compute as we build out several multi-gigawatt clusters…

…We are making all these investments because we have conviction that super intelligence is going to improve every aspect of what we do…

…There are all these questions that people have about what are going to be the time lines to get to really strong AI or Superintelligence or whatever you want to call it. And I guess that each step along the way so far, we’ve observed the more kind of aggressive assumptions or the fastest assumptions have been the ones that have most accurately predicted what would happen…

…Some of the work that we’re seeing with teams internally being able to adapt Llama 4 to build autonomous AI agents that can help improve the Facebook algorithm to increase quality and engagement, or the like. I mean, that’s a fairly profound thing if you think about it. It’s happening in low volume right now, so I’m not sure that result by itself was a major contributor to this quarter’s earnings or anything like that…

…We have this principle that we believe in across the company, which we tell people take Superintelligence seriously. And the basic principle is this idea that we think that this is going to really shape all of our systems sooner rather than later, not necessarily on the trajectory of a quarter or 2, but on the trajectory of a few years…

…When we take a technology, we’re good at driving that through all of our apps and our ad systems and all that stuff, it’s not just going to kind of sit on the line. I think that there’s no other company, I think that is as good as us at kind of taking something and kind of getting it in front of billions of people…

…There’s obviously different scaling paradigms, and I don’t want to get too much into the detail of research that we’re doing on this. But I think that for developing superintelligence at some level, you’re not just going to be learning from people because you’re trying to build something that is fundamentally smarter than people. So it’s going to need to learn how to — or you’re going to need to develop a way for it to be able to improve itself…

…I’ve just gotten a little bit more convinced around the ability for small, talent-dense teams to be the optimal configuration for driving frontier research. And it’s a bit of a different setup than we have on our other world-class machine learning systems. So if you look at what we do in Instagram or Facebook or our ad system, we can very productively have many hundreds or thousands of people basically working on improving those systems, and we have very well-developed systems for individuals to run tests and be able to test a bunch of different things. You don’t need every researcher there to have the whole system in their head. But I think for the leading research on superintelligence, you really want the smallest group that can hold the whole thing in their head, which drives, I think, some of the physics around the team size and the dynamics around how that works…

…As you approach real superintelligence, I think there is a whole different set of safety concerns that I think we need to take very seriously, that I wrote about in my note this morning. But I think the bottom line is, I would expect that we will continue open sourcing work. I expect us to continue to be a leader there. And I also expect us to continue to not open source everything that we do, which is a continuation of kind of what we’ve been kind of working on.

Meta is making good progress towards Llama 4.1 and 4.2 while also working on new models in parallel; management thinks the new models will be frontier-level when released in 2026; management has used Llama to lower top line bug reports in US and Canada in Facebook Feed and Notifications by 30% over the past 10 months; Llama is primarily used today to power Meta AI

We’re making good progress towards Llama 4.1 and 4.2, and in parallel, we are also working on our next generation of models that will push the frontier in the next year or so…

…We’re now exploring how to extend the use of LLMs in recommendation systems to our other apps. We’re leveraging Llama in several other back-end processes as well, including actioning bug reports so we can identify and resolve recurring issues more quickly and efficiently. This has resulted in top line bug reports in the U.S. and Canada in Facebook Feed and Notifications dropping by roughly 30% over the past 10 months…

…The primary way we’re using Llama in our apps today is to power Meta AI which is now available in over 200 countries and territories.

Meta’s Prometheus cluster, the first gigawatt-plus AI compute cluster in the world, will come online in 2026; Meta is building its Hyperion AI compute cluster, which can scale up to 5 gigawatts over several years; Meta has a number of Titan AI compute clusters in development; management expects sufficient compute capacity to be central to Meta’s growth in the coming years; management continues to see very compelling returns in its core ads and organic engagement initiatives from its AI investments; management expects to significantly grow its AI investments in 2026

Our Prometheus cluster is coming online next year, and we think it’s going to be the world’s first gigawatt-plus cluster. We’re also building out Hyperion, which we’ll be able to scale up to 5 gigawatts over several years, and we have multiple more Titan clusters in development as well…

… We expect having sufficient compute capacity will be central to realizing many of the largest opportunities in front of us over the coming years. We continue to see very compelling returns from our AI capacity investments in our core ads and organic engagement initiatives and expect to continue investing significantly there in 2026. We also expect that developing leading AI infrastructure will be a core advantage in developing the best AI models and product experiences. So we expect to ramp our investments significantly in 2026 to support that work.

Meta’s AI investments have unlocked greater efficiency and gains in its advertising systems; management has introduced Meta’s new AI-powered recommendation model for ads to new surfaces and it has led to 5% more ad conversions on Instagram and 3% on Facebook; a meaningful percentage of Meta’s advertising revenue now comes from campaigns using one of Meta’s generative AI features, and management thinks this is especially helpful for small advertisers; management improved the Andromeda ads retrieval system in 2025 Q2, leading to 4% higher conversions on Facebook Mobile Feed and Reels; management improved the GEM (Generative Ads Recommendation System) ads ranking system in 2025 Q2, which partially drove the 5% more ad conversions on Instagram and 3% on Facebook; the introduction of new advanced sequence modeling techniques that doubled the length of event sequences also contributed to those conversion gains; Meta expanded coverage of its Lattice model architecture in 2025 Q2 to earlier-stage ads ranking models, which led to a 4% increase in ad conversions in Facebook Feed and Reels; Meta completed the rollout of its streamlined campaign creation flow for Advantage+ sales and app campaigns in 2025 Q2, which led to lifts in advertiser adoption; Meta will complete the rollout of the streamlined campaign creation flow for Advantage+ leads campaigns in the coming months; nearly 2 million advertisers are now using Meta’s video generation, image animation, and video expansion generative AI features within Advantage+; Meta began testing AI-powered translation of ads in 2025 Q2 and prelaunch tests have delivered promising performance lifts; Meta completed the global rollout of its incremental attribution feature – the only product on the market that optimizes for and reports on incremental conversions – in 2025 Q2

The strong performance this quarter is largely thanks to AI unlocking greater efficiency and gains across our ad system. This quarter, we expanded our new AI-powered recommendation model for ads to new surfaces and improved its performance by using more signals and longer context. It’s driven roughly 5% more ad conversions on Instagram and 3% on Facebook. We’re also seeing good progress with AI for ad creative with a meaningful percent of our ad revenue now coming from campaigns using one of our generative AI features. This is going to be especially valuable for smaller advertisers with limited budgets…

…The Andromeda model architecture we began introducing in the second half of 2024 powers the ads retrieval stage of our ad system, where we select the few thousand most relevant ads from tens of millions of potential candidates. In Q2, we made enhancements to Andromeda that enabled it to select more relevant and more personalized ads candidates while also expanding coverage to Facebook Reels. These improvements have driven nearly 4% higher conversions on Facebook Mobile Feed and Reels.

Our new Generative Ads Recommendation System, or GEM, powers the ranking stage of our ad system, which is the part of the process after ads retrieval where we determine which ads to show someone from candidates suggested by our retrieval engine. In Q2, we improved the performance of GEM by further scaling our training capacity and adding organic and ads engagement data on Instagram. We also incorporated new advanced sequence modeling techniques that helped us double the length of event sequences we use, enabling our systems to consider a longer history of the content or ads that a person has engaged with in order to provide better ad selections. The combination of these improvements increased ad conversions by approximately 5% on Instagram and 3% on Facebook Feed and Reels in Q2…

…We expanded coverage of our Lattice model architecture in Q2. We first began deploying Lattice in 2023 with our later-stage ads ranking efforts, allowing us to run significantly larger models that generalize learnings across objectives and surfaces in place of numerous smaller ads models that have historically been optimized for individual objectives and surfaces. In April, we began deploying Lattice to earlier-stage ads ranking models as well. This is leading not only to greater capacity and engineering efficiency but also improved performance, with the recent Lattice deployments driving a nearly 4% increase in ad conversions across Facebook Feed and Reels in Q2…

…We’re seeing strong momentum with our Advantage+ suite of AI-powered solutions. In Q2, we completed the rollout of our streamlined campaign creation flow for Advantage+ sales and app campaigns, which makes it easier for advertisers to realize the performance benefits from Advantage+ by having it turned on at the beginning. We’ve seen lifts in advertiser adoption of sales and app campaigns since we’ve expanded availability, and are working to complete the rollout for leads campaigns in the coming months. Within our Advantage+ Creative Suite, adoption of GenAI and creative tools continues to broaden. Nearly 2 million advertisers are now using our video generation features, image animation and video expansion, and we’re seeing strong results with our text generation tools as we continue to add new features.

In Q2, we started testing AI-powered translation so that advertisers can automatically translate the caption of their ads to 10 different languages. While it’s early, we have seen promising performance lifts in our prelaunch tests. We’re also continuing to see strong adoption of image expansion among small- and medium-sized advertisers, which speaks to how these tools help businesses who have fewer resources to develop creative. With larger advertisers, we expect agencies will continue to be valuable partners in helping apply these new tools to drive performance…

…In Q2, we completed the global rollout of our incremental attribution feature, which is the only product on the market that optimizes for and reports on incremental conversions, which are conversions that would not have happened without a person seeing the ad.
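Meta does not disclose how its incremental attribution feature is implemented; the textbook idea behind incremental conversions, sketched here with entirely made-up numbers, is to compare an exposed group against a randomized holdout that was not shown the ad and attribute only the lift to the ad:

```python
# Textbook illustration of incremental conversions (all numbers are
# made up; this is not Meta's actual methodology). The baseline rate
# from a randomized holdout estimates how many exposed users would
# have converted anyway, without seeing the ad.

def incremental_conversions(exposed_users, exposed_conversions,
                            holdout_users, holdout_conversions):
    """Conversions among exposed users beyond the baseline rate
    observed in the unexposed holdout group."""
    expected_without_ad = holdout_conversions * exposed_users / holdout_users
    return exposed_conversions - expected_without_ad

# 100,000 users saw the ad and 3,000 converted; in a 10,000-user
# holdout, 200 converted (a 2% baseline rate).
lift = incremental_conversions(100_000, 3_000, 10_000, 200)
print(lift)  # 1000.0 incremental conversions; the other 2,000
             # would likely have happened anyway
```

Optimising for this lift, rather than for all attributed conversions, is what distinguishes incrementality-based optimisation from conventional attribution.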

Meta’s AI investments have significantly improved its ability to show users content they would be interested in, and this led to a 6% increase in time spent on Instagram and 5% on Facebook in 2025 Q2 alone; management thinks content on Meta’s platforms can get a lot better, with early progress seen with the launch of AI-powered editing tools; ongoing improvements to Meta’s ranking systems have led to video time growing 20% year-on-year globally for Instagram in 2025 Q2, and video time growing 20% year-on-year in the US for Facebook; management expects to continue delivering additional improvements in content ranking systems throughout 2025; 2/3 of recommended content on Instagram in the US now comes from original posts; management is focused on increasing freshness of original posts on Instagram in 2025 H2; Meta is making good progress on its longer-term content ranking innovations; Meta has seen LLMs (large language models) driving a meaningful amount of ranking-related gains in time spent on Threads; management has a roadmap for Meta’s content recommendation systems for both the near-term and long-term; the near-term roadmap includes (1) making recommendations even more adaptive to a user at any point in time, (2) helping good content from small creators break out, and (3) better understanding user interests; the long-term roadmap includes (1) foundational recommendation models, and (2) deeper integration of LLMs in recommendation systems

AI is significantly improving our ability to show people content that they’re going to find interesting and useful. Advancements in our recommendation systems have improved quality so much that it has led to a 5% increase in time spent on Facebook and 6% on Instagram, just this quarter. There is a lot of potential for content itself to get better too; we’re seeing early progress with the launch of our AI video editing tools across Meta AI and our new Edits app…

…We continue to see momentum with video engagement, in particular. In Q2, Instagram video time was up more than 20% year-over-year globally. We’re seeing strong traction on Facebook as well, particularly in the U.S., where video time spent similarly expanded more than 20% year-over-year. These gains have been enabled by ongoing optimizations to our ranking systems to better identify the most relevant content to show. We expect to deliver additional improvements throughout the year as we further scale up our models and make recommendations more adaptive to a person’s interest within their session…

…On Instagram, over 2/3 of recommended content in the U.S. now comes from original posts. In the second half, we’ll be focused on further increasing the freshness of original posts, so the right audiences can discover original content from creators soon after it is posted.

We are also making good progress on our longer-term ranking innovations that we expect will provide the next leg of improvements over the coming years. Our research efforts to develop cross-surface foundation recommendation models continue to progress.

We are also seeing promising results from using LLMs in Threads’ recommendation systems. The incorporation of LLMs is now driving a meaningful share of the ranking-related time spent gains on Threads…

…There are a handful of shorter-term things that we’re focused on in the near term. One is we’re focused on making recommendations even more adaptive to what a person is engaging with during their session so that the recommendations we surface are the most relevant to what they’re interested in at that moment. And we’re making optimizations to help the best content from smaller creators break out by matching it to the right audiences sooner after it gets posted. And we’re also working on improving the ability for our systems to discover more diversified and niche interests for each person through interest exploration and learning explicit user preferences. We’re also planning to scale up our models further and incorporate more advanced techniques that should improve the overall quality of recommendations.

But we also have a lot of long-term bets in the hopper around areas like developing foundational models that will support recommendations across multiple services, and incorporating LLMs more deeply into our recommendation systems. A big focus of this work is going to be on optimizing the systems to make them more efficient, so that we can continue to scale up the capacity that we use for our recommendation systems without eroding the ROI that we deliver.

Meta’s management is starting to see product market fit for business AI agents in countries where they are tested; management is integrating business AI agents into advertising shown on Facebook, Instagram, and e-commerce websites; Meta’s click-to-message revenue grew more than 40% year-on-year in the US in 2025 Q2

I’ve talked before about how I believe every business will soon have a business AI, just like they have an e-mail address, social media account and website. We are starting to see some product-market fit in a number of countries where we’re testing these agents, and we’re integrating these business AIs into ads on Facebook and Instagram as well as directly into e-commerce websites…

…We’re seeing good momentum in Business messaging, particularly in the U.S., where click to message revenue grew more than 40% year-over-year in Q2. The strong U.S. growth is benefiting from a ramp in adoption of our website to message ads, which drive people to a business’s website for more information before choosing to launch a chat with the business in one of our messaging apps.

Meta AI has more than 1 billion monthly actives now; management continues to focus on making Meta AI the leading personal AI; management is seeing engagement on Meta AI grow as the underlying AI models improve; Llama is primarily used today to power Meta AI; Meta AI is now available in over 200 countries; Meta AI’s usage primarily comes through WhatsApp and the primary use cases are for information gathering, homework assistance and generating images; management is noticing Meta AI being complementary to the company’s content discovery engines; people are using Meta AI on Facebook to ask about and find content; management expects Meta AI to help with content discovery by automatically translating and dubbing foreign languages

Meta AI. Its reach is already quite impressive with more than 1 billion monthly actives. Our focus now is on deepening the experience and making Meta AI the leading personal AI. As we continue improving our models, we see engagement grow…

…The primary way we’re using Llama in our apps today is to power Meta AI which is now available in over 200 countries and territories. WhatsApp continues to be the largest driver of queries as people message Meta AI directly for tasks such as information gathering, homework assistance and generating images. Outside of WhatsApp, we’re seeing Meta AI become an increasingly valuable complement to our content discovery engines. Meta AI usage on Facebook is expanding as people use it to ask about posts they see in feed, and find content across our platform in search. Another way we expect Meta AI will help with content discovery is through the automatic translation and dubbing of foreign language content into the audience’s local language.

Sales of the Ray-Ban Meta smart glasses are accelerating; management will launch new performance AI glasses with the Oakley Meta HSTN; the percent of people using Meta AI with the smart glasses is growing, and retention of new AI users is increasing; management continues to believe that smart glasses will be the primary form factor for people to interact with AI, especially artificial super intelligence; the demand for Ray-Ban Meta smart glasses is still higher than supply and management will ramp supply in 2025 H2; management is exploring smart glasses with different kinds of displays compared to the current iteration; management wants to continue investing heavily in smart glasses because they think it’s going to be an important part of the future

We continue to see strong momentum with our Ray-Ban Meta glasses, with sales accelerating. We are also launching new performance AI glasses with the Oakley Meta HSTN; they have longer battery life, a higher-resolution camera, and are designed for sports. The percent of people using Meta AI is growing, and we are seeing retention of new AI users increase too, which is a good sign for continued use. I think that AI glasses are going to be the main way that we integrate superintelligence into our day-to-day lives. So it’s important to have all of these different styles and products that appeal to different people in different settings…

…The growth of Ray-Ban Meta sales accelerated in Q2, with demand still outstripping supply for the most popular SKUs despite increases to our production earlier this year. We’re working to ramp supply to better meet consumer demand later this year…

…Right now, we’re building ones that I think are stylish, but aren’t focused on the display. I think there’s a whole set of different things to explore with displays…

…Because we’ve been investing in this, I think we’re just several years ahead on building out glasses. And I think that, that’s something that we’re excited to keep on investing in heavily because I think it’s going to be a really important part of the future.

Meta’s management’s guidance for capex in 2025 has been narrowed from a prior range of $64 billion to $72 billion to $66 billion to $72 billion (capex was $37 billion in 2024); management expects 2026 capex dollar growth to be similar to 2025’s capex dollar growth; management expects a greater mix of capex in 2025-2026 to be in shorter-lived assets than in prior years; most of the increased capex in 2025-2026 will be for generative AI compute capacity, with significant capex in 2026 also going to core AI; management expects to finance most of the 2026 capex internally while exploring partnerships with financiers

We currently expect 2025 capital expenditures, including principal payments on finance leases, to be in the range of $66 billion to $72 billion, narrowed from our prior outlook of $64 billion to $72 billion and up approximately $30 billion year-over-year at the midpoint. While the infrastructure planning process remains highly dynamic, we currently expect another year of similarly significant CapEx dollar growth in 2026 as we continue aggressively pursuing opportunities to bring additional capacity online to meet the needs of our AI efforts and business operations…

…We also expect a greater mix of our CapEx to be in shorter-lived assets in 2025 and ’26 than it has been in prior years…

…On the CapEx side, the big driver of our increased CapEx in ’26 will be scaling GenAI capacity as we build out training capacity that’s going to drive higher spend across servers, networking, data centers next year. We also expect that we’re going to continue investing significantly in core AI in 2026…

…About how we expect to finance the growing CapEx next year. We certainly expect that we will finance some large share of that ourselves, but we’re also exploring ways to work with financial partners to codevelop data centers. We don’t have any finalized transactions to announce, but we generally believe that there will be models here that will attract significant external financing to support large-scale data center projects that are developed using our ability to build world-class infrastructure while providing us with flexibility should our infrastructure requirements change over time.

Meta’s AI capex for 2025-2026 is purely for internal uses; management has strong ability to measure return on investment (ROI) for Meta’s core AI capex and the ROI remains strong; it’s much harder for management to measure ROI for Meta’s generative AI capex, but they are optimistic about the monetisation opportunities; management continues to have fungibility in mind when building its AI compute capacity

[Question] Your spend is now approaching some of the biggest hyperscalers out there. Do you think of all this capacity as mostly for internal uses? Or do you think there’s a way to share or even [indiscernible] with a business model, leveraging that capacity for external uses?

[Answer] Right now, we are focused on ensuring that we have enough capacity for our internal use cases, which includes both all of the core AI work that we do to support the recommendation engine work on the organic content side to support all the ads ranking and recommendation work. And then, of course, to make sure that we are building the training capacity that we think we need in order to build frontier AI models. And to make sure that we’re preparing ourselves for the types of inference use cases that we think might — that we might have ahead of us as we eventually focus not only on developing frontier models, but also how we can expand into the kinds of consumer use cases that we think will be hopefully live — hopefully, widely useful and engaging for our users. So at present, we’re not really thinking about external use cases on the infrastructure…

…Around the sort of ROI on CapEx, there are a couple of things. So again, on the core AI side, we continue to see strong ROI. Our ability to measure that is quite good, and we feel sort of very good about the rigorous measurement and returns that we see there. On the GenAI side, we are clearly much, much earlier on the return curve and we don’t expect that the GenAI work is going to be a meaningful driver of revenue this year or next year. But we remain generally very optimistic about the monetization opportunities that will open up, and Mark spoke to them in his script, the sort of 5 pillars, so I won’t repeat them here…

…We are building the infrastructure with fungibility in mind. Obviously, there are a lot of things that you have to build up front in terms of the data center shells, the networking infrastructure, et cetera. But we will be ordering servers, which ultimately will be the biggest bulk of CapEx spend as we need them and when we need them and making sort of the best decisions at those times in terms of figuring out where the capacity will go to use.

Microsoft (NASDAQ: MSFT)

Azure has surpassed $75 billion in annual revenue, up 34%, in FY2025; Azure took share every quarter in FY2025; Azure has more data centers than any other cloud provider; Azure stood up more than 2 gigawatts of compute capacity in the last 12 months; Azure is scaling compute capacity faster than any other competitor; all of Azure’s regions can now support liquid cooling, making them suitable for AI compute; Azure can now deliver 90% more tokens with the GPT4o family of models for the same GPU compared to a year ago through software optimisation alone; Azure grew revenue by 39% in 2025 Q2 (FY2025 Q4) (was 33% in 2025 Q1); management expects Azure to be capacity-constrained through FY2026 H1 despite more capacity being brought online

Azure surpassed $75 billion in annual revenue, up 34%, driven by growth across all workloads. We continue to lead the AI infrastructure wave and took share every quarter this year. We opened new DCs across 6 continents and now have over 400 data centers across 70 regions, more than any other cloud provider…

…We stood up more than 2 gigawatts of new capacity over the past 12 months alone. And we continue to scale our own data center capacity faster than any other competitor. Every Azure region is now AI-first. All of our regions can now support liquid cooling, increasing the fungibility and the flexibility of our fleet…

…We are driving and riding a set of compounding S curves across silicon, systems and models to continuously improve efficiency and performance for our customers. Take, for example, GPT4o family of models, which have the highest volume of inference tokens. Through software optimizations alone, we are delivering 90% more tokens for the same GPU compared to a year ago…
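The claim above — 90% more tokens per GPU from software optimization alone — translates directly into lower serving cost per token. A minimal back-of-envelope sketch, where the GPU-hour cost and baseline throughput numbers are invented assumptions for illustration (not Microsoft figures):

```python
# Illustrative sketch: what a 90% tokens-per-GPU throughput gain
# implies for effective serving cost per million tokens.
# All dollar figures and throughput numbers are hypothetical assumptions.

def cost_per_million_tokens(gpu_hour_cost: float, tokens_per_hour: float) -> float:
    """Effective serving cost per 1M tokens for a single GPU."""
    return gpu_hour_cost / tokens_per_hour * 1_000_000

baseline_tokens_per_hour = 2_000_000                        # assumed baseline
optimized_tokens_per_hour = baseline_tokens_per_hour * 1.9  # +90% via software

gpu_hour_cost = 2.50  # assumed fully loaded $/GPU-hour

before = cost_per_million_tokens(gpu_hour_cost, baseline_tokens_per_hour)
after = cost_per_million_tokens(gpu_hour_cost, optimized_tokens_per_hour)

print(f"before: ${before:.3f}/M tokens, after: ${after:.3f}/M tokens")
```

Whatever the absolute numbers, a 1.9x throughput gain cuts per-token cost to 1/1.9, roughly 53% of baseline, which is why software-level optimization compounds with silicon and model improvements.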

…In Azure and other cloud services, revenue grew 39%, significantly ahead of expectations, driven by accelerated growth in our core infrastructure business, primarily from our largest customers. As a reminder, new cloud and AI workloads are built and scaled using the breadth of our services…

…Even as we continue bringing more data center capacity online, we currently expect to remain capacity-constrained through the first half of our fiscal year…

…I talked about, my gosh, in January and said I thought we’d be in better supply demand shape by June. And now I’m saying I hope I’m in better shape by December. And that’s not because we slowed CapEx. Even with accelerating the spend and trying to pull leases in and get CPUs and GPUs in the system as quickly as we can, we are still seeing demand improve.

The GPT4o family of models from OpenAI has the highest volume of inference tokens

Take, for example, GPT4o family of models, which have the highest volume of inference tokens.

Microsoft’s management thinks Foundry has best-in-class tooling, management, observability and built-in controls for developing AI applications; management sees customers increasingly wanting to use multiple AI models when building applications, and Foundry provides access to more AI models than any other hyperscaler, including models from OpenAI, DeepSeek, Meta, xAI, and more; the Foundry Agent Service is experiencing accelerated adoption and now has 14,000 customers; Nasdaq is using Foundry Agent Service to cut prep time for board meetings by 25%; 80% of the Fortune 500 are using Foundry; Foundry processed more than 500 trillion tokens in FY2025 (was 100 trillion tokens in 2025 Q1), up 7x from a year ago

This year, we launched Azure AI Foundry to help customers design, customize and manage AI applications and agents at scale. Foundry features best-in-class tooling, management, observability and built-in controls for trustworthy AI. Customers increasingly want to use multiple AI models to meet their specific performance, cost and use case requirements. And with Foundry, they can provision inferencing throughput once and apply it across more models than any other hyperscaler, including models from OpenAI, DeepSeek, Meta, xAI’s Grok and, very soon, Black Forest Labs and Mistral AI. We sim-shipped 15 models from OpenAI alone on Foundry this year, providing same-day access to state-of-the-art models deeply integrated with our infrastructure and tools.

And we are seeing accelerated adoption of our new Foundry Agent Service, which is now being used by 14,000 customers to build agents that automate complex tasks. For example, Nasdaq is using Foundry to build agents that help customers prepare for Board meetings, cutting prep time by up to 25%. All up, 80% of Fortune 500 already use Foundry. And when we look narrowly at just the number of tokens served by Foundry APIs, we processed over 500 trillion this year, up over 7x. This is a good indicator of true platform diffusion beyond a few head apps and services.

Microsoft’s family of Copilot apps has surpassed 100 million MAUs (monthly active users)

Our family of Copilot apps has surpassed 100 million monthly active users across commercial and consumer.

Across the entire Microsoft product suite, there are 800 million monthly active users of AI features

When you take a broader look at the engagement of AI features across our products, we have over 800 million monthly active users.

Customers are adopting Microsoft 365 Copilot at a faster rate than any other new Microsoft 365 suite, with strong usage intensity; in 2025 Q2 (FY2025 Q4), Microsoft saw the largest quarter of seat adds since launch for Microsoft 365 Copilot; Barclays, UBS, Adobe, KPMG, Pfizer, and Wells Fargo are recent examples of large organisations that have expanded or bought new Microsoft 365 Copilot seats; the Researcher and Analyst deep reasoning agents have been used by tens of thousands of organisations in their first weeks of availability; hundreds of partners have built 3rd-party AI agents that integrate with Copilot; management is seeing more customers build their own AI agents with Copilot Studio; 3 million agents were created by Microsoft’s customers in FY2025; customers can use Copilot Tuning to create agents fine-tuned on their company’s data, workflow and style

Customers continue to adopt Copilot at a faster rate than any other new Microsoft 365 suite, with strong usage intensity as shown by our week-over-week retention. And we saw the largest quarter of seat adds since launch with a record number of customers returning to buy more seats. Barclays, for example, will roll out Microsoft 365 Copilot to 100,000 employees globally following a successful initial deployment of 15,000. UBS is expanding its deployment to all of its employees after initially rolling it out to 55,000 of them. And Adobe, KPMG, Pfizer, Wells Fargo all purchased over 25,000 seats this quarter.

Tens of thousands of organizations have already used our Researcher and Analyst deep reasoning agents in the first weeks of availability. And we have introduced group-level agents in Teams like Facilitator and Interpreter, which generate real-time translation and notes in meetings.

Hundreds of partners like Adobe, SAP, ServiceNow and Workday have built their own third-party agents that integrate with Copilot and Teams. We are also seeing more customers use Copilot Studio to extend Microsoft 365 Copilot and build their own agents. This year, customers created 3 million agents using SharePoint and Copilot Studio. And with Copilot Tuning, they can easily create agents fine-tuned on their company’s data, workflow and style that reflect their unique tone, language and expertise.

GitHub Copilot’s Agent Mode and Coding Agent have great momentum in IDEs (integrated development environments); GitHub Copilot has 20 million users; GitHub Copilot enterprise customers increased 75% sequentially in 2025 Q2; 90% of the Fortune 100 use GitHub Copilot; AI has led to explosive growth in GitHub usage, with AI projects on GitHub more than doubling from a year ago; vibe coding projects are generating more pull requests and repos on GitHub; the Code Review Agent is performing millions of code reviews monthly in GitHub

GitHub Copilot continues to have great momentum in IDE with Agent Mode and new form factors like Coding Agent which is capable of asynchronously executing developer tasks. We have 20 million GitHub Copilot users. GitHub Copilot enterprise customers increased 75% quarter-over-quarter as companies tailor Copilot to their own codebases, and 90% of the Fortune 100 now use GitHub Copilot. More broadly, GitHub usage and repos are seeing explosive growth because of AI. AI projects on GitHub more than doubled over the last year. The surge in vibe coding projects and AI coding agents, whether it is Claude Code, Codex, Cursor or GitHub Copilot, are generating more pull requests and more repos on GitHub. And our Code Review Agent is being used heavily across the platform, performing millions of code reviews each month.

More than half of Microsoft’s cloud and AI-related capex in 2025 Q2 (FY2025 Q4) are for long-lived assets that will support monetisation over the next 15 years and more, while the other half are for CPUs and GPUs, driven by strong demand signals; management feels good about the ROI (return on investment) on Microsoft’s capital expenditure; Microsoft’s capital expenditure is correlated to the company’s contracted backlog; management does not want to focus too much on when capex growth will be slower than revenue growth because doing so will cause Microsoft to be too conservative in winning market share

Capital expenditures were $24.2 billion, including $6.5 billion of finance leases where we recognize the full value at the time of lease commencement. Cash paid for PP&E was $17.1 billion. The difference is primarily due to finance leases. More than half our spend was on long-lived assets that will support monetization over the next 15 years and beyond. The remaining spend was primarily for servers, both CPUs and GPUs, and driven by strong demand signals…

…When you think about the full year comments I’ve made on CapEx as well as the Q1 guidance of over $30 billion, you first have to ground yourself in the fact that we have $368 billion of contracted backlog we need to deliver, not just across Azure but across the breadth of the Microsoft Cloud. So in terms of feeling good about the ROI and the growth rates and the correlation, I feel very good that the spend that we’re making is correlated to basically contracted on the books business that we need to deliver and we need the teams to execute at their very best to get the capacity in place as quickly and effectively as they can…

…At its core, our investments, particularly in short-lived assets like servers, GPUs, CPUs, networking storage, is just really correlated to the backlog we see and the curve of demand…

…I am not as focused, Kash, on trying to pick a date at which revenue growth and CapEx growth will meet and cross. I’m focused on building backlog, building business and delivering capacity, which we are seeing has a good ROI today in terms of our ability to get that done. So I don’t want people to get overly focused on a pivot point. Because when you’re in sort of these expansive moments, picking a date usually means you’re going to pick to be too conservative in terms of market share gain and in terms of winning.

Microsoft’s management expects to deliver double-digit revenue and operating income growth in FY2026; management expects to continue investing in cloud and AI initiatives; management expects capital expenditure growth in FY2026 to moderate from FY2025’s level; management expects the capital expenditure growth rate in FY2026 H1 to be higher than in FY2026 H2; management expects operating margin to be unchanged in FY2026

Building on the strong momentum we saw this past year, we expect to deliver another year of double-digit revenue and operating income growth in FY ’26. We will continue to invest against the expansive opportunity ahead across both capital expenditures and operating expenses given our leadership position in commercial cloud, strong demand signals for our cloud and AI offerings, and significant contracted backlog. Capital expenditure growth, as we shared last quarter, will moderate compared to FY ’25 with a greater mix of short-lived assets. Due to the timing of delivery of additional capacity in H1, including large finance lease sites, we expect growth rates in H1 will be higher than in H2. We remain focused on delivering revenue growth and increasing our operational agility. And as a result, we expect operating margins to be relatively unchanged year-over-year.

Microsoft’s management is not worried about some of its largest customers – mostly AI companies – becoming competitors as long as there’s broad diffusion happening behind what the lead companies are building

[Question] You guys have always had software start-ups as customers and potentially emerging competitors. But the AI labs now feel different…. It seems like there’s a lot of potential opportunity in supporting those businesses, but also it’s not certain that they’re going to stay your customers as they scale. They could in-source some of that infrastructure. And they very likely emerge as potential competitors.

[Answer] There’s always been, I’ll call it, head apps or head — new companies that emerge, that in fact are very needed in order to birth a new platform… Then broadly, they — or rather over time, there will be broad diffusion. In fact, one of the things that Amy and I track is not just the head app usage, but also what’s the sort of all the Tier 2 applications that are being built. So that sort of — that speaks a little bit, Keith, to I think your question, is as long as we have head apps shaping the platform and then, after that, we have the broad diffusion happen, which in some sense both of those is what we are seeing. So I feel very good about our being in decent standing going forward.

Microsoft’s management sees that every GPU requires storage and compute, and that ratio points to exponential infrastructure growth

One of the other things we track is every GPU requires storage and compute. That ratio is another thing that is really exponential for infrastructure growth.

Microsoft’s management thinks AI software will be monetised via a combination of per-seat fees and usage fees

[Question] What do you think is the best way that software companies are going to be able to monetize AI for SaaS?

[Answer] We’re seeing very similar monetization tools exist in this transition too, right? There’s a per user logic, there’s tiers of per user. Sometimes those tiers relate to consumption, sometimes there’s pure consumption models. I think you’ll continue to see a blending of these. Especially as the AI model capability grows, you’ll end up with ways that teams are going to want to throttle that usage, use the best models for the best job. And I think the blending of these models will continue to be something we see on a go-forward basis.
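The blended per-seat plus consumption model described in the answer above can be sketched roughly as follows. All tier names, prices, and token allowances here are invented for illustration, not any vendor's actual pricing:

```python
# Illustrative sketch of a blended per-seat + usage pricing model:
# each seat carries a flat monthly fee plus a pooled token allowance,
# with metered overage beyond the pool. All numbers are hypothetical.

TIERS = {
    # tier: (monthly price per seat, included tokens per seat, $ per extra 1K tokens)
    "basic": (10.0, 50_000, 0.05),
    "pro": (30.0, 500_000, 0.03),
}

def monthly_bill(tier: str, seats: int, tokens_used: int) -> float:
    """Seat fees plus metered overage beyond the pooled token allowance."""
    seat_price, included_per_seat, overage_per_1k = TIERS[tier]
    pooled_allowance = included_per_seat * seats
    overage_tokens = max(0, tokens_used - pooled_allowance)
    return seats * seat_price + (overage_tokens / 1_000) * overage_per_1k

# 20 "pro" seats using 12M tokens: 10M pooled allowance, 2M overage.
print(monthly_bill("pro", 20, 12_000_000))  # 600 in seat fees + 60 in overage
```

The tiering gives teams the throttling lever the answer mentions: heavier model usage can be absorbed by moving up a tier (more included tokens, cheaper overage) rather than paying pure metered rates.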

Microsoft’s management is noticing that the development of AI applications is becoming more sophisticated than just calling APIs from an AI model; management analogises the current increase of sophistication in the development of AI applications to the historical example of the time it took for ERP (enterprise resource planning) systems to emerge after relational databases were created, but notes that the increase of sophistication in AI application development is much faster

I think what we are noticing in our own build-out of these AI applications and in general is the platform is becoming more than, “Here is the model and here is an API. Make some calls,” right? I mean that, in some sense, was a bit of the state-of-the-art maybe even a year ago. Whereas now you have essentially these very stateful app patterns that are emerging that require quite a bit of rethinking of even the app stack. I mean take even the storage tier stuff, right, the degree of sophistication you have, and hey, how much of an index do you really want to build by preprocessing so that your prompt engineering, or context engineering as I call it, can be better and higher quality? So I think all of that is emerging…

…I always go back and say, hey, when, I don’t know, relational database came out, it took a while for people to build an ERP system, let’s say. And this thing, we’re kind of building pretty sophisticated applications at a very, very fast clip based on, I think, the degree of maturity that’s emerging.

Netflix (NASDAQ: NFLX)

Netflix’s management continues to think that AI will help creators make better content and save costs; Netflix’s creators are already seeing the benefits of AI in production, especially in visual effects; the Netflix series El Eternaut used generative AI to help create a sequence (a) 10x faster compared to using traditional methods, and (b) that was not possible previously from a budget perspective; the AI-produced sequence in El Eternaut is the very first generative AI footage to appear in Netflix’s content

We remain convinced that AI represents an incredible opportunity to help creators make films and series better, not just cheaper…

…Our creators are already seeing the benefits in production through pre-visualization and shot planning work and certainly, visual effects. It used to be that only big budget projects would have access to advanced visual effects like de-aging…

…This year, we had El Eternaut. It’s a very big hit show for us from Argentina. And in that production, we leveraged virtual production and AI-powered VFX. And there was a shot in the show where the creators wanted to show a building collapsing in Buenos Aires. So our Eyeline team partnered with their creative team. Using AI-powered tools, they were able to achieve an amazing result with remarkable speed, and in fact, that VFX sequence was completed 10x faster than it could have been completed with traditional VFX tools and workflows. And also, the cost of it just wouldn’t have been feasible for a show on that budget. So that sequence actually is the very first GenAI final footage to appear on screen in a Netflix original series or film. So the creators were thrilled with the result. We were thrilled with the result. And more importantly, the audience was thrilled with the result…

…I probably should clarify, that Eyeline is our production innovation group inside of our VFX house at Scanline, and they’re doing a lot of this work with our creators.

Netflix’s management thinks AI can be used to improve the member experience; Netflix is testing an AI-powered user interface where members can have a conversational experience to find content 

The member experience is a place where we feel like there’s tons of opportunity to leverage these new generative technologies to improve the experience. We’ve been in the personalization and recommendation business for 2 decades, but yet we see tremendous room and opportunity to make it even better by leveraging some of the newer generative techniques. We’re also rolling out — have piloted right now — a conversational experience that allows our members to basically have a sort of natural language discussion with our user interface: “I want to watch a film from the ’80s that’s a dark psychological thriller,” get some results back, maybe iterate through those in a way that you just couldn’t have done in our previous experiences. So that’s super exciting, and we see that all of the work that we do there essentially is a force multiplier to that large content investment that we’re making.

Netflix’s management thinks generative AI can be beneficial for Netflix’s advertising business by lowering the hurdle to create brand-appropriate advertising content that’s relevant in the particular title the advertisement is being shown in

Advertising is another really great area. We’ve seen — it’s a high hurdle to create a brand-forward spot in the creative universe of one of the titles that we’re currently carrying. But it’s very compelling for both watchers and for those brands, and we think these generative techniques can decrease that hurdle iteratively over time and enable us to do that in more and more spots.

Paycom Software (NYSE: PAYC)

Paycom’s management has introduced a new command-driven AI product called IWant; management thinks IWant is Paycom’s most significant product release to date; IWant allows employees, managers, administrators, and executives to use natural language to ask for any information about their company; IWant’s command-driven feature means nobody needs to be trained on how to use Paycom software; IWant pulls data from Paycom’s single database, so there are no problems associated with inconsistent or duplicative data sets; early customer feedback on IWant has been phenomenal; management expects IWant to increase usage of Paycom’s software among non-daily users and to increase customer satisfaction and ROI (return on investment); management expects to activate IWant for all customers over the remainder of 2025 Q3

I’ll focus my comments on our second quarter achievements and highlight our latest AI command-driven product, IWant…

…We recently released IWant, the most significant product in our company’s history. We already have the most automated solution in the industry, and IWant delivers even more value to our clients through AI and automation…

…Hopefully, everyone has seen the demo we linked in today’s earnings press release issued at the close of the market. If you did, you saw numerous use cases for it on the employee, manager, administrator and executive side of the software. You also saw how IWant eliminates the need for a Paycom user to be trained on our software. With IWant’s command-driven AI, users either type in or leverage voice-activated functionality to command the system, and IWant is designed to immediately provide the answer with accurate results. This means that navigation and asking others for system information is rendered obsolete.

A critical component of AI is the data it pulls from. And because IWant pulls from Paycom’s single database, it eliminates problems created by inconsistent or duplicative data sets.

On the manager side, IWant supports HR teams and organization leaders with instant employee information. For example, a manager can use IWant to pull data on when an employee returns from vacation, see who’s clocked in for the day or analyze an employee’s pay history…

…Today, in IWant’s executive mode, executives using Paycom now have the information they need at their fingertips, enabling them to be daily users of our solution without ever having to be trained on the system. Just tell it what you want and IWant delivers, making executives even smarter and more effective. Now I can quickly find any information about my staff available in our single database because we track the entire employee life cycle and have data from applicant tracking, onboarding, Paycom Learning, expenses, benefits, time and attendance, payroll, schedules, surveys and more, all accessible through IWant.

Early feedback has been phenomenal with clients calling this a total game changer.

IWant’s command-driven AI engine will increase usage among non-daily users in our system. And I fully expect IWant to increase satisfaction and client ROI…

…We’ve turned on 10% of our clients so far this week. I would say by the end of this week, we’re at 15% to 20% activated… we do expect to be able to activate all of our clients throughout the remainder of this quarter…

…The more you add, the more functionality you have in these types of systems, enterprise-type systems, it does require a level of training for someone to really be able to deploy it. Even some employees require some level of training. This removes all of it. And so it’s the biggest innovation that we’ve ever done at our company since its founding, just because of the impact that it has.

Paycom’s management thinks that voice-activated, command-driven software is the way of the future

Voice-activated command-driven functionality is the future for all software and Paycom’s future started last week…

…This is a different way to utilize software. I’m unfamiliar with any other SaaS company that has a command-driven navigation throughout their system. And so I do think this is going to be a thing for not only our industry, but any type of software where users are currently navigating.

Paycom’s management expects IWant to drive more full-solution deployment of Paycom’s products across the company’s client base; management thinks IWant will increase Paycom’s customer retention rate; there’s no requirement for a customer to get BETI in order to use IWant; management does not want to directly monetise IWant; management thinks that Paycom’s competitive environment has gotten a lot better with the release of IWant; management thinks IWant will have a noticeable positive impact on Paycom’s new logos, retention, and new product adoption

If one of our clients is asking IWant for résumé information, or if they ask it for prior work history information, and they’re not on our applicant tracking system, they’re not going to have success pulling that information. And so one way it will help us is I do think there’ll be more full solution deployments across our client base so that you get access…

…I do think it’s going to, over time, impact our retention as these clients become more engaged in the software and get the full value available to them. IWant removes all the impediments to value. So now you just get it; you don’t have to work for it as much…

…As far as implications for BETI adoption, it’s not required that you’ve implemented BETI to get value out of IWant. I do think that the more Paycom products that you use, which would include it, the greater the value you’re going to get from it. And the more questions that we’ll answer for you, the more insight it will give you. And so I do think IWant makes it easier to use all that additional functionality, but there’s not a requirement that someone would have BETI…

…[Question] Given how useful IWant looks and how intuitive it is, why not more directly monetize it on a [indiscernible] basis or a usage basis, versus indirectly monetizing it through better sales and driving attach across the modules?

[Answer] I believe that every client should access their data this way, and we’ve had clients that have been with us a long time, and there’s no reason to make them pay to get the value that’s available for them, where I really think that this is just going to take off for us. So I really just don’t think we need to do that. Plus, I don’t want to spend a lot of time having to go out and sell clients and charge them on things when I can really get them to use the full utilization of the system…

…From a change in the competitive market, I think they all got a lot less competitive a couple of weeks ago, to be honest with you. And this is going to be a thing. I mean you guys kind of see this will be a thing moving forward. I mean our client feedback has been really good. I think that I know competitors will say they have the most automated, the most this, the most that. But if you can’t talk to it, it’s not the most automated, it’s not the most modern…

…[Question] When you talk about IWant taking off for you, where do you think it shows up most? Is that new logos? Is it retention? Is it new product adoption?

[Answer] I think it’s going to start showing up in all those areas. I mean I’m very bullish on it showing up in all those areas. Obviously, new sales, new logo adds, have always been the largest opportunity we have to increase and drive revenue growth. So I would definitely expect that to be probably the largest bucket of that. But I will also tell you, I expect it to have a huge impact on our retention over time as people are using it and becoming more acclimated to it. And I also think it’s going to have an impact on our CRRs being able to go out there and be able to talk to someone about, if you want to be able to pull data from the complete employee life cycle, and if you want your employees to actually be able to leverage all this, it’s really important that you have these other modules that we have. And so I also think it’s going to make an impact there.

Paycom has to spend more on capital expenditure as it builds AI-powered products, but management believes the capex is front-loaded

We’ve always developed and hosted our own platforms. And as we move into AI, it does require a certain level of spend. So as we look at that, I do believe it to be more transitory in nature. But as we look at that, that’s going to be front-end loaded for us right now, and that’s really what we’re looking at. And a lot of that’s going to be through CapEx.

PayPal (NASDAQ: PYPL)

PayPal’s management sees agentic AI rapidly changing the landscape for commerce; management gave a reminder that in 2025 Q1, PayPal launched the payments industry’s first remote MCP (Model Context Protocol) server to enable AI agent frameworks to integrate with PayPal APIs; major players in AI have been working with PayPal in the last few months to create agentic commerce experiences; management will continue to build PayPal’s capabilities in agentic commerce

Agentic AI is rapidly changing the commerce landscape and PayPal is at the forefront. We were an early mover, launching the first remote MCP servers for commerce earlier this year. Now we’re helping merchants and developers meet the moment as consumers begin to purchase goods and services through AI agents. As you’ve seen through our announcements over the last few months, the major players in AI, including Perplexity, Anthropic and Salesforce are working with PayPal to create powerful new agentic commerce experiences. These new experiences will enable customers to find the right products, check out directly within the AI client, track purchases and much more. We have differentiated KYC and KYB expertise, access to the largest ecosystem of payment-ready wallets with PayPal World, and we’ll continue to build our capabilities in this nascent space, so that we strengthen our position as the go-to partner for agentic commerce.
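For context on what an MCP server actually exposes: MCP messages are framed as JSON-RPC 2.0, and an agent invokes a server’s capabilities through methods like “tools/call”. A minimal sketch of what such a request looks like on the wire (the tool name and arguments here are hypothetical illustrations, not PayPal’s actual API; a real server advertises its tools via “tools/list”):

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP 'tools/call' request. MCP messages follow JSON-RPC 2.0,
    so every request carries a jsonrpc version, an id, a method, and params."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool name and arguments, purely for illustration.
msg = mcp_tool_call(1, "create_invoice", {"amount": "25.00", "currency": "USD"})
```

The point of the protocol is exactly what the quote describes: any agent framework that speaks MCP can discover and invoke a provider’s payment operations without a bespoke integration.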

Shopify (NASDAQ: SHOP)

Shopify’s management has often been ahead of the curve in providing solutions for Shopify merchants to meet important changes in the commerce landscape; the latest important change is agentic commerce and management has been building a suite of Shopify products for merchants, ranging from discovery to checkout, to thrive in agentic commerce; management is seeing AI platforms become the new way consumers discover products by having conversations with agents; management launched Catalog in 2025 Q2, which provides real-time access to millions of products from the company’s merchants through a single API (application programming interface) or MCP (model context protocol) server; management recently launched Universal Cart in early-access, and it holds items from multiple stores in one cart within an agentic chat; management launched a new version of Checkout Kit that is being used by Microsoft Copilot and it lets partners embed a merchant’s checkout in an AI agent; Shopify is powering conversation-driven product recommendations for consumers; management has observed that with agentic commerce, it’s not the largest product-company that wins, but the product that best serves the consumer; management has no viewpoint on whether agentic commerce is taking share away from search-based commerce, they just want to get Shopify’s merchants ready to handle any shift; management wants Shopify to be the best partner for AI companies to work with

We were ahead of the curve with social commerce, building early integrations for Instagram and YouTube. We saw the opportunity for commerce to meet culture, so we built a Spotify integration. And more recently, we predicted the rise of shopping in the metaverse with a Roblox integration that’s already growing quickly…

…Shopify has been building infrastructure to power agentic commerce. As AI platforms become the new way people discover products, consumers are not just searching, they’re having conversations with agents to find what they need, but powering seamless shopping across millions of brands is a massive technical challenge. And that’s where Shopify comes in. We’ve built a suite of products that make it easy for AI platforms to bring shopping to their agents from discovery to checkout, and our merchants are front and center…

…We launched Catalog in Q2 to give AI partners and shopping apps real-time access to millions of products from across our global merchant network, all through a single connection available as an API or an MCP server. Shopify catalog simplifies the process for apps and AI agents to search and pull product data so the results are clear, accurate and up to date…

…Let’s also talk about Universal Cart, which literally launched yesterday in early access. Universal Cart holds items from multiple stores all in one spot so that shoppers can easily track all the items they want to buy within the chat. And when it comes time to purchase, we’ve built a new and improved version of Checkout Kit, and it’s already being used by Microsoft Copilot, a huge player in the AI space. Checkout Kit lets partners embed the merchant’s checkout right in their agent. Now we’re also giving partners the power to theme the Checkout Kit, so it matches their application’s look and feel, creating this seamless experience, and they don’t have to worry about payments, taxes or regulations…

…For shoppers, we’re powering conversation-driven product recommendations from all of their favorite brands…

…Catalog, which was launched in Q2, that’s already out there. That really helps agents to search, but also to surface exactly what customers want in seconds. And so it uses these very specialized large language models to categorize, to enrich, but also to standardize product data at these massive volumes…

…So this is another surface area where there is a very serious potential where commerce could be taking place. Whether it takes some of the market share away from search-based commerce or not, we want to be prepared for that…

…One thing that we do think though is really interesting about agentic commerce, in particular, is it’s not necessarily based on who is the largest company, it’s based on what consumers are looking for…

…The reason that you’re hearing about all these new innovative things we’re doing, whether it’s catalog or Universal Cart or Checkout Kit is because we want to make sure that we become the best partner for these AI companies to work with and these agents to work with.

When a consumer asks an AI agent for the best travel bag, Catalog kicks in and the consumer adds a bag into Universal Cart; the consumer can carry on shopping within the AI agent and complete the checkout later without leaving the chat

When a shopper asks an agent for the best travel bag, it instantly searches Shopify’s catalog and shows the top products, live prices, descriptions and inventory. The shopper adds their choice to the cart. They don’t have to check out right away. They can keep shopping. Everything they want is pulled into a single cart. And when they’re ready, the shopper completes their checkout without ever having to leave the chat. Now this unlocks a whole new kind of commerce.
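The flow above implies a cart data structure that spans many merchants while keeping each merchant’s own checkout intact: items accumulate in one place during the conversation, then get grouped per store at purchase time. A minimal sketch of that idea (field and store names are hypothetical, not Shopify’s actual schema):

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class CartItem:
    store: str        # the merchant whose checkout will handle this item
    product: str
    price_cents: int
    qty: int = 1

def group_by_store(items):
    """A universal cart spans multiple stores, but each merchant still runs
    its own checkout, so items are bucketed per store before purchase."""
    grouped = defaultdict(list)
    for item in items:
        grouped[item.store].append(item)
    return dict(grouped)

# One conversation, items from two different merchants, one cart.
cart = [
    CartItem("luggage-co", "Travel bag", 12900),
    CartItem("luggage-co", "Packing cubes", 2400),
    CartItem("apparel-co", "Rain jacket", 8900),
]
checkouts = group_by_store(cart)
```

The design choice this sketch highlights is the same one the transcript describes: the agent owns the shopping session, while the merchant keeps ownership of the checkout for its own items.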

Shopify’s management sees Sidekick as Shopify’s most exciting AI product for merchants; Sidekick has unique data analysis capabilities that delivers insights rapidly; a kids clothing merchant used Sidekick for actionable insights that they used to spend hours searching for; a skin care merchant used Sidekick to know exactly where they were experiencing customer-churn; Sidekick has many other capabilities besides unique data analysis 

Let’s talk about our most exciting AI product offering for our merchants, Sidekick. Sidekick’s unique ability for data analysis continues to shine through, helping merchants address their toughest business challenges. For example, a merchant in the kids clothing category recently shared with me that Sidekick delivers the kind of actionable insights they used to spend hours searching for. Questions like how can I optimize my inventory to avoid sellouts and boost cash flow? Or why am I seeing more customer churn from subscriptions in the last 3 months? Or even help me compare results from our last 3 BFCM campaigns and suggest improvements for the next one. They are all answered, explained and visualized in seconds…

…A skin care merchant recently told us that in real time, Sidekick helped them pinpoint exactly where they were experiencing customer churn down to the cohort, city and even purchase behavior in seconds…

…As I’ve talked about on previous calls, that’s on top of all the other ways Sidekick helps merchants like writing product descriptions, generating logos and images, streamlining workflows and customizing their storefronts and so much more.

Shopify’s management launched an AI store builder in 2025 Q2 that can create a custom online store in seconds

This quarter, we also launched an AI store builder that can create a custom online store in seconds, literally in seconds from a single phrase. Now all you need is an idea and a description of the product you want to sell like stylish athleisure apparel for women, and Shopify will do the rest.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s management thinks demand for semiconductors will continue to be robust; management thinks that AI’s long-term demand outlook is very positive, given the explosive growth in token volume; management expects CoWoS (chip on wafer on substrate) demand to remain strong, driven by AI; management is trying to narrow the gap between supply and demand for CoWoS; export restrictions for NVIDIA’s H20 chip were recently lifted by the US government and TSMC’s management thinks this is good news, although they have yet to hear from NVIDIA, so TSMC is not ready to increase its forecast for CoWoS growth; the rapid development of AI data centers is driving high demand for TSMC’s leading edge nodes and management has not seen such strong demand for a long time; management is working hard to support the demand

We believe the demand for semiconductors is very fundamental and will continue to be robust. Recent developments are also positive to AI’s long-term demand outlook. The explosive growth in token volume demonstrates increasing AI model usage and adoption, which means more and more computation is needed, leading to more leading-edge silicon demand. We also see AI demand continuing to be strong, including the rising demand from sovereign AI…

…Demand from AI is getting stronger and stronger, if you pay attention to what the four-trillion-dollar company’s CEO said. And so the megatrend for AI continues to be strong, and so is CoWoS. And so now we are, again, in a mode of trying to narrow the gap. I don’t want to use the word “balance”; the last time, you guys misunderstood what I said. Sorry, it was badly worded. So I will say we try to narrow the gap…

…[Question] H20 chip shipping to China. I remember 3 months ago, there was another question on this matter, right, meaning that back then, I believe that chip was suspended, but you were still very confident about your mid-40% CAGR for CoWoS growth in the coming 5 years. Right now China becomes your addressable market again; do you think that mid-40% CAGR target can be revised up?

[Answer] On the H20 now, again, according to that company’s CEO, we did not receive the signal yet. So it’s too early to give you an estimate. But certainly, this is good news, right? I mean, China is a big market and my customer can still continue to supply the chip to the big market. And it’s very positive news for them. And in return, it’s very positive news for TSMC. Whether we are ready to increase our forecast, not yet. Another quarter probably will be more appropriate to answer your question…

…We saw a lot of announcements of AI data centers all over the world, and the demand on 3-nanometer, actually on 5-nanometer, 3-nanometer and the future 2-nanometer, is very high. And we did not see this kind of strong demand for a long time. But whether it will be enough to support them, I still want to use my words and say that we try very hard to narrow the gap between the demand and the supply. We’re working very hard.

TSMC’s 3rd fab in Arizona will utilise N2 and A16 process technologies and construction has already begun, and management is looking into speeding up the production schedule based on strong AI-related demand from customers; after all of TSMC’s Arizona facilities, including the advanced packaging fabs and R&D center, are completed, 30% of TSMC’s 2nm and more advanced capacity will be located in Arizona, creating an independent leading-edge semiconductor manufacturing cluster in the USA

With strong collaboration and support from our leading U.S. customers and the U.S. federal, state and city governments, we announced our intention to invest a total of USD 165 billion in advanced semiconductor manufacturing in the United States. This expansion includes plans for 6 advanced wafer manufacturing fabs in Arizona, 2 advanced packaging fabs and a major R&D center to support the strong multiyear demand from our customers.

Our first fab in Arizona has already successfully entered into high-volume production in 4Q 2024, utilizing N4 process technology with a yield comparable to our fab in Taiwan. The construction of our second fab, which will utilize 3-nanometer process technology, is already complete. We are seeing strong interest from our leading U.S. customers and are working on speeding up the volume production schedule by several quarters to support their need. Construction of our third fab, which will utilize 2-nanometer and A16 process technologies, has already begun, and we will look into speeding up the production schedule as well based on the strong AI-related demand from our customers. Our fourth fab will utilize N2 and A16 process technology and our fifth and sixth fabs will use even more advanced technology. The construction and ramp schedule for those fabs will be based on our customers’ needs. Our expansion plan will enable TSMC to scale up to a giga fab cluster in Arizona to support the needs of our leading-edge customers in smartphone, AI and HPC applications.

We also plan to build 2 new advanced packaging facilities and establish an R&D center to complete the AI supply chain. After completion, around 30% of our 2-nanometer and more advanced capacity will be located in Arizona, creating an independent leading-edge semiconductor manufacturing cluster in the U.S. Thus, TSMC will continue to play a critical and integral role in enabling our customers’ success.

TSMC’s A16 process technology has performance and power benefits over N2P; A16 is best suited for specific HPC (high-performance computing) products, which means it is best suited for AI-related workloads; the A16 node will be the first node where TSMC’s AI customers will utilise TSMC’s leading edge node when historically it was just smartphone customers that will do so, because AI customers require chips with the best power efficiency

We also introduced A16 featuring our best-in-class Super Power Rail or SPR. Compared with N2P, A16 provides a further 8% to 10% speed improvement at the same power or 15% to 20% power improvement at the same speed and additional 7% to 10% chip density gain. A16 is best suited for specific HPC products with complex signal routes and dense power delivery network. Volume production is on track for second half 2026…
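To make the quoted figures concrete, here is illustrative arithmetic only: applying the stated 15% to 20% power improvement at the same speed to a hypothetical 10 kW rack of N2P-based accelerators (the rack figure is an assumption for illustration, not a TSMC number):

```python
# Quoted A16-vs-N2P range: 15% to 20% power improvement at the same speed.
n2p_rack_watts = 10_000           # hypothetical N2P-based rack power draw
improvement = (0.15, 0.20)        # quoted improvement range

# Same performance, lower power, for each end of the quoted range.
a16_watts = [n2p_rack_watts * (1 - p) for p in improvement]
savings = [n2p_rack_watts - w for w in a16_watts]
# At the same performance, the hypothetical rack would draw 8,000-8,500 W,
# i.e. 1,500-2,000 W saved per rack.
```

Per-rack numbers like these are why the answer below stresses that data center operators talk first about electricity supply: at fleet scale, a high-teens power efficiency gain translates directly into fewer megawatts of generation needed.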

…[Question] You highlighted A16, which will be very applicable for high-performance compute. Is that the node where AI and HPC would actually be at par with smartphone as an end market that would drive demand for the most leading-edge nodes?… So far, AI has been N+1, N+2. Is A16 the first node where AI would move to the leading edge?

[Answer] Usually, the HPC customers are always one step behind, using N+1 or N+2 technologies. Now, the AI demand is so strong, that’s one thing. But the most important thing is we need some kind of performance, but the power consumption is very, very important. And when we talk about A16, we have another power efficiency improvement close to 20%. That’s a big value for all the AI data center applications. So that helps my customers move faster because, every time when we talk about the AI data center, if you notice, the first thing they talk about is power supply, electricity, right? They did not tell you that the power efficiency is very important, but they tell you that they have to build a very big electricity power plant to support the AI data centers. So that tells you how important it is. And TSMC has the technology, by the way. A16 is a further improvement of the N2 node. So it’s not a surprise for TSMC to expect that those people in the AI data center industry want to use A16.

TSMC’s management sees the momentum is still going for on-device AI and edge AI; the increase in the number of product-units is mild, but the die size is growing faster; management thinks another 6 months or a year is needed for an explosion in demand

[Question] You talked about on-device AI as a potential future driver. Are you seeing more development on the on-device AI part? Is it better compared to maybe 3, 6 months back?

[Answer] I say that it takes 1 to 2 years for my customers to complete their new design on the product. The momentum is still going; they still continue. As time goes by, as I said, the increase on the edge devices, the number of units, is actually mild. But then the die size increases. We continue to see that. And the die size increased by about 5% to 10%. And that kind of trend continues. Okay? So you have to wait another probably 6 months or 1 year to see an explosion.

TSMC’s management thinks it’s too early to look at the market opportunity for humanoid robots, but TSMC’s customer (probably alluding to Tesla) thinks humanoid robots will be a massive economic opportunity

[Question] We have learned that humanoid robot started to contribute to TSMC and it is gaining momentum as the next frontier of the AI hardware. How does TSMC evaluate the market size of humanoid robot in the semiconductor and in terms of the potential market TAM, compute and also sensor requirements?

[Answer] It’s too early to say the humanoid robot will play a role in this year. Next year, probably still too early because it’s so complicated. You know, I think the first ones will be used in the medical industry, taking care of people getting older, like me. And probably someday I will need some humanoid robot to help me. But it’s very complicated because we are not talking about the brain only. Actually, you are talking about a lot of sensor technology: the image sensor, the pressure sensor, the temperature sensor, and all the feedback to the CPU. And so it’s very complicated. And since it’s dealing with human beings directly, it has to be very, very careful. But then once it starts to fly, it will be a big, big plus. I talked to one of my customers and he said that the EV car is nothing; the robot will be 10x of that. I’m waiting for that.

Tesla (NASDAQ: TSLA)

Tesla has successfully launched robotaxi in Austin; management has already expanded robotaxi’s service area in Austin since launch, and is looking to expand it further, by up to 10x; management is getting regulatory permission to launch robotaxi in other parts of the US; management thinks it’s likely half of the US population can access robotaxi by the end of 2025; management is being very cautious with the rollout of robotaxi; Tesla has more than 7,000 miles operating in Austin for the robotaxi right now, with only a handful of vehicles; there has been no notable safety critical incidents for the robotaxi so far; management thinks robotaxi has potential to bring the cost of transport down to less than $0.30 per mile partly because the robotaxi cars (Cybercab) have build-plans that are optimised for autonomy

We were able to successfully launch robotaxi, so providing our first drives with no one in the driver seat with paying customers in Austin. And as some may have noted, we’ve already expanded our service area in Austin. It’s bigger and longer. And it’s going to get even bigger and longer. We were expecting to really greatly increase the Austin service area to well in excess of what competitors are doing. And that’s hopefully in a week or so, 2 weeks…

…We’re getting the regulatory permission to launch in the Bay Area, Nevada, Arizona, Florida, and a number of other places. So as we get the approvals and we prove out safety, then we will be launching autonomous ride-hailing in most of the country. And I think we’ll probably have autonomous ride-hailing in probably half the population of the U.S. by the end of the year. That’s at least our goal, subject to regulatory approvals…

…We are being very cautious. We don’t want to take any chances…

…We’ll continue to expand in Austin to probably more than 10x our current operating region…

…We have more than 7,000 miles operating in Austin area. It’s just because service is new, we have a handful of vehicles right now, but then we are trying to expand the service in terms of both the area and also the number of vehicles, both in Austin and other locations. So far, there’s no notable safety critical incidents…

…The Cybercab, which is really optimized for autonomy, that, I think, has got probably sub-$0.30 per mile potential over time, maybe $0.25. If you design a car from scratch to be a cost-optimized robotic taxi like Cybercab, then, for example, we’re not trying to make it corner incredibly well like a Model 3 or Model S or even a Model Y would. All of our cars that are driven by people are super fun to drive. They’ve got incredible acceleration, incredible cornering capability. But we’re confident that very few people in a Cybercab want to be hurtling around. So we’ve reduced the top-end speed, which means we can use more efficient tires. We don’t need as much acceleration. We don’t need as much braking; we want stopping distance, but we’re not expecting the brakes to be heavily used. It’s a gentle ride. Essentially, if you design it for a gentle ride, then you have a much more optimized design point. So that’s why it seems probable we could achieve that. Especially with Optimus servicing, cleaning up the car and doing maintenance and stuff. And doing automatic charging…

There will be a step-change improvement coming soon for the FSD software for US users; management will soon be increasing the parameter count for FSD by nearly 10x; a Tesla car was recently delivered autonomously directly from the factory to the customer’s home; all of Tesla’s vehicles in its current lineup are capable of autonomy and this is a big differentiator for Tesla from the competition; Tesla cars on FSD are 10x safer than cars that are not on FSD; management is seeing a recent uptick in FSD adoption in the USA; since FSD transitioned to version 12, adoption rates have increased; management thinks Tesla vehicles can be delivered autonomously, by default, in the Greater Austin and Bay Area, by end-2025; there has been a 25% increase in penetration rate of FSD subscriptions since the introduction of version 12, and also the reduction in pricing; more than half of Tesla vehicle owners are not aware of FSD’s existence

Within the U.S., as we get confident about safety in different geographic areas, we’ll loosen up on how much somebody has to be laser-focused — to have their eyes laser-focused on the road. That’s been a common complaint. In fact, it does create an odd safety issue where people will sometimes disengage autopilot, then do something, change the radio or maybe look at the phone, drive with their knee and then reengage autopilot, which obviously is less safe than simply keeping autopilot on. So anyway, that experience will improve in the next several weeks. Because of our focus on Austin with no one in the driver seat, the production release of autopilot is actually several months behind what people experience on a robotaxi in Austin. So now we have the robotaxi launched, we’ll be adding back those elements so that there will be a step-change improvement in the autopilot experience for people outside of Austin…

…We’re continuing to make significant improvements just with the software. So we’re expecting to increase the parameter count. Actually, at this point, we think we can probably 10x the — almost 10x the parameter count…

…We rolled out our robotaxi service in Austin and delivered a car completely autonomously directly from the factory to the customer’s home…

…All our vehicles in the lineup are capable of autonomy. This is by far the biggest differentiator between us and the competition…

…We published our vehicle safety report earlier today. And you can see a car on FSD is 10x safer than a car not on FSD. We’ve started seeing an uptick in FSD adoption in North America in recent months, which is a very promising trend. And just to give you a perspective, since the launch of — since we moved to version 12 of FSD, we’ve seen the adoption rates really increase…

…I think we’ll end up delivering cars in the Greater Austin area and the Bay Area by default from the factory by the end of this year. A car will deliver itself to where you are, unless you say you don’t want them…

…Since we have launched version 12 of FSD in North America, we’ve definitely seen a marked improvement in the FSD adoption. And the other thing which we had also done last year is we did bring down the pricing and we’ve made subscription much more affordable. So we have seen a 25% increase since that time, which is an encouraging trend…

…The vast majority of people don’t know it exists. And it’s still like half of Tesla owners who could use it haven’t tried it even once…

…The 25% comment was 25% increase in the penetration rate since we’ve seen the release of V12 and V13 in North America.

Optimus is in version 2.5 at the moment, and Tesla is working on version 3, which management thinks has a great design; management still thinks Optimus will be Tesla’s biggest product; every component of Optimus had to be designed in-house by Tesla; management will train Optimus’s limbs with an AI neural net, using the same techniques for FSD for Tesla’s vehicles; management thinks there will be Optimus 3 prototypes by the end of 2025, and production of the robot will start scaling in 2026; management wants to scale the production of Optimus rapidly, and thinks 1 million units a year in 5 years from now is achievable; it’s difficult to predict the production ramp of Optimus because there are many parts of its supply chain that are new

We’re in Optimus version 2 right now, sort of 2.5. Optimus 3 is an exquisite design, in my opinion, and will be — as I’ve said many times before, I predict it will be the biggest product ever. It’s a very hard problem to solve. You have to design every part of it from physics first principles. There’s nothing that’s off the shelf that actually works. So you got to design every motor, gearbox, power electronics, control electronics, sensors, the mechanical elements. We also got to train Optimus to use its limb sensors with a neural net. But we’ll be applying the same techniques that we applied for our car, which is essentially a 4-wheel robot. And Optimus is a robot with arms and legs. So put the same principles that apply to optimizing AI inference on the car, apply it to Optimus because they’re both really robots in different forms…

…There’s no significant flaws with the Optimus 3 design. But we are going to retool a bunch of things. So there will probably be prototypes of Optimus 3 end of this year and then scale production next year. We’re going to try to scale Optimus production as fast as it’s humanly possible to do, so we’ll try to get to 1 million units a year as quickly as possible. We think we can get there in less than 5 years, it’s my sort of — I guess. That’s a reasonable aspiration, 1 million units a year, 5 years, it seems like an achievable target…

…The production ramp — it’s always difficult to predict the S curve of your production ramp when everything is new, because the rate of production will move as fast as the least lucky and least competent element of the entire supply chain as well as internal processes. So the more new stuff that is in a product, the slower the ramp could be because of unexpected supply chain interruptions or mistakes made internally.

Tesla’s management thinks Tesla is the best company in the world at real-world AI; management thinks Tesla has the best inference efficiency, measured by intelligence per gigabyte

It is important to note that Tesla is by far the best in the world at real-world AI. Like a clear proof point for that would be — if you compare, say, Tesla to Waymo, Waymo has got — the car is festooned with God knows how many sensors. And yet, isn’t Google good at AI? Yes, but they’re not good at real-world AI. Thus far, Tesla is actually much better than Google by far, and much better than anyone, at real-world AI…

…Tesla has the best inference efficiency. Like I think a key figure of merit for AI is what is the intelligence per gigabyte. And people talk about parameters, blah, blah, blah, but I think we’ll — stop talking about parameters and talk about per gigabytes because with the parameters, you can have 4-bit parameters, 8-bit parameters, 16-bit parameters. But the actual constraints in the hardware are how many gigabytes of RAM and how many gigabytes per second can you transfer from RAM. Therefore, it is not a parameter constraint. It is a byte constraint. And Tesla has the highest intelligence density of any AI by far. And I have a lot of insight into this with xAI. xAI is — Grok is the smartest AI overall, but it’s — Grok 4 is a giant beast sort of at the terabyte level. And so kind of important to note, Tesla has the best intelligence density.
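Musk’s framing — that the binding hardware constraint is bytes, not parameter count — can be made concrete with a little arithmetic. The sketch below uses a purely hypothetical 8-billion-parameter model (none of these are actual Tesla or xAI figures) to show how quantization width changes the RAM footprint of the same parameter count:

```python
# Sketch of the "byte constraint" point: the same parameter count
# occupies very different amounts of RAM depending on quantization width.
# The model size here is illustrative, not an actual Tesla/xAI figure.

def model_size_gb(num_params: float, bits_per_param: int) -> float:
    """Return the weight footprint in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bits_per_param / 8 / 1e9

params = 8e9  # a hypothetical 8-billion-parameter model
for bits in (16, 8, 4):
    print(f"{bits:>2}-bit: {model_size_gb(params, bits):.1f} GB")
# 16-bit: 16.0 GB, 8-bit: 8.0 GB, 4-bit: 4.0 GB -- a 4x spread in RAM
# and memory bandwidth for the identical "parameter count".
```

This is why, on the same vehicle hardware, a 4-bit model can be four times larger in parameters than a 16-bit one — the constraint is gigabytes of RAM and gigabytes per second of bandwidth.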

Tesla’s management is targeting Dojo 2, Tesla’s AI-training supercomputer, to be operating at scale sometime in 2026; Tesla’s AI5 chip for inference could be in volume production around end-2026; management is thinking of converging Dojo 3 and AI6 into the same chip; there’s no other AI chip that Tesla’s management is willing to place in Tesla vehicles; management thinks the AI5 chip will be a game changer and it’s so powerful that it has to be nerfed for Tesla’s markets outside of the US because of chip-export restrictions; the AI models that xAI (Elon Musk’s AI startup) is building are very different – much larger – than what Tesla is building

We expect to have Dojo 2 operating at scale sometime next year, with scale being somewhere around 100,000 H100 equivalents. And then AI5, which is really spectacular — and I don’t use that word lightly. The AI5 chip will hopefully be in volume production around the end of next year…

…Thinking about Dojo 3 and the AI6 inference chip, it seems like intuitively, we want to try to find convergence there where it’s basically the same chip, but it’s used where, say, 2 of them in a car or an Optimus and maybe a larger number on a board, kind of 5, 12 on a board or something like that, if you want high-bandwidth communication between the chips, for serving — doing inference serving. That sort of seems like intuitively the sensible way to go…

…There’s still not a chip that exists that we would prefer to put in our car, that is, an AI chip that we would prefer to put in the car over our own, even though it’s been now out for several years. And we’re confident that the AI5 chip will be a profound game changer. In fact, it’s so powerful that we’ll have to nerf it, to some degree, for markets outside of the U.S. because it blows way past the export restrictions. So unless the export restrictions change, we actually will have to nerf our AI5 chip, which is kind of weird. Hopefully, we keep raising the bar on export restrictions…

…xAI is doing like terabyte-scale models and multi-terabyte-scale models. Tesla is 100x smaller models. So one is real-world AI and one is kind of, I guess, artificial superintelligence type of thing.

The Trade Desk (NASDAQ: TTD)

Kokai is powered by Koa, which management thinks is the digital advertising industry’s most advanced AI; AI has been infused throughout Kokai and driven huge performance improvements; Samsung used Kokai to achieve a 43% improvement in reaching its target audience in Europe; Cashrewards used Kokai to achieve a 73% improvement in cost per acquisition in Asia; campaigns that run on Kokai have an average 20% improvement in key KPIs (see Point 28 for more on how AI unlocks the 20% improvement); clients who transitioned the majority of their spend to Kokai are growing their spending on Trade Desk at least 20% faster than those who have not; around 3/4 of all client-spend is now run through Kokai (was 2/3 in 2025 Q1); management expects to transition all clients to Kokai by end-2025; Kokai is able to decide for clients which supply path gives the best ad impression out of the same impression from hundreds of supply paths; Kokai helps deliver one of the promises of live sports in a biddable CTV environment, which is the ability for advertisers to target key moments in a game when the audience is most leaned in; Kokai has the industry’s most advanced retail media marketplace (see Point 8 for more on retail media); Koa is able to answer many important questions about digital advertising, such as the value of an impression to a brand, and the price of an inventory auction; management sees many tasks where AI agents can improve the performance of Kokai because they are always on

Kokai gives advertisers unprecedented power to drive precision and relevance in everything they do, all powered by the industry’s most advanced AI technology, Koa. We have injected AI into so many parts of the system that clients that have adopted Kokai have seen tremendous performance improvements. 

Samsung was able to drive a 43% improvement in reaching its target audience for an omnichannel campaign in Europe. Cashrewards saw a 73% improvement in cost per acquisition for campaigns in Asia using Kokai. In the aggregate, we are seeing more than a 20-point improvement across key KPIs for campaigns running in Kokai. What’s even more encouraging is that the clients who have transitioned the majority of their spend to Kokai are increasing their overall spend on The Trade Desk more than 20% faster than those who have not. This is precisely what we believed was possible when we launched Kokai. Advertisers are getting meaningfully better returns on their ad dollars, and they are doubling down on the open Internet and on us as a result. Around 3/4 of all client spend is now running through Kokai, and we expect all of our clients to be using Kokai by the end of this year…

…We might see the same ad impression from hundreds of supply paths. We don’t want to burden our clients with figuring out which one is best, and it is not efficient to manage that challenge by defaulting to deals. Instead, Kokai does that work for our clients, leveraging AI and data from sources like Sincera, so advertisers can obsess about buying the right impression rather than the delivery mechanism…

…One of the promises of live sports in a biddable CTV environment is that advertisers can target key moments like overtime in an NBA game or the PKs at the end of a soccer game when the audience has most leaned in. Well, now we will be offering this capability with new tooling in Kokai and partnerships with companies such as Disney, Sky TV and Omnicom, which we announced at Cannes a few weeks ago…

…Kokai already has the industry’s most advanced retail media marketplace…

…There are so many specific tasks where AI can massively level up the status quo. What is an impression worth to a specific brand? What is the price that this auction is likely to clear at? What is the best supply chain to maximize transparency and minimize unnecessary costs? These applications of AI are already in our product. Koa is what powers Kokai’s forecasting, which is predicting the reach and performance of a campaign before a single dollar is spent. Distributed AI is foundational in Kokai, and this is only the beginning. There are many tasks where agents can improve performance in part because they’re always on.

Deal Desk is one of the major final pieces of Kokai; Deal Desk uses AI forecasting to help advertisers and publishers understand how deals are performing, how they are pacing, whether the right impressions are being delivered and more; Deal Desk helps underperforming deals get back on track; management is seeing very strong appetite for Deal Desk from both advertisers and publishers; Disney is one of the first publishers to use Deal Desk 

One additional innovation that will help accelerate our supply chain work is Deal Desk. It is one of the major final pieces of Kokai, and it is in beta now. Deal Desk leverages AI, especially AI forecasting, to reshape how we think about deals between advertisers and publishers and intermediaries such as SSPs. It helps advertisers and publishers understand how deals are performing, how they are pacing, whether the right impressions are being delivered and so on. But perhaps just as important, when deals are underperforming, Deal Desk will help those deals get back on track, and it will showcase open market and premium Internet alternatives. We are seeing very strong appetite for Deal Desk across both advertisers and publishers…

…Disney is one of the first publishers to lean into Deal Desk.

Trade Desk’s management sees AI having a profound impact on digital advertising; management thinks the quality of AI will depend on the quality of data; management thinks AI-driven buying requires objectivity; Trade Desk does as many transactions in 30 seconds as Visa and Mastercard do in a year, and this gives Trade Desk a massive data advantage when it comes to AI

AI is changing everything and creating new opportunities. Quality AI requires quality data, and to trust AI-driven buying long term requires objectivity. A black box that just sells owned and operated media will struggle far beyond what ad networks have struggled with for decades…

…We sit on top of one of the most underappreciated data assets on the Internet and frankly, in the world. And given that we do in 30 seconds as many transactions as Visa and Mastercard do in a year, if you add them together, and that quality data is now feeding an AI engine that helps the biggest buyers in the world sort out the most complex supply chain they’ve ever faced in advertising, that means our data plus AI creates an amazing opportunity for us, for the open Internet and for the biggest brands in the world.
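The scale claim above can be sanity-checked without knowing any absolute transaction counts: if the same number of transactions occurs in 30 seconds versus in one year, the implied throughput ratio is simply seconds-per-year divided by 30. A minimal sketch:

```python
# Back-of-envelope check on the scale claim: handling in 30 seconds what
# two card networks clear in a year implies a rate ratio of
# (seconds in a year) / 30, regardless of the absolute transaction count.

SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 (ignoring leap years)

rate_ratio = SECONDS_PER_YEAR / 30
print(f"Implied throughput ratio: ~{rate_ratio:,.0f}x")  # ~1,051,200x
```

In other words, the claim amounts to Trade Desk evaluating ad impressions at roughly a million times the combined rate of the two card networks — which is why management describes the resulting dataset as a major AI advantage.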

Trade Desk’s management thinks Kokai’s ability to drive an average 20% improvement in key KPIs for campaigns is merely scratching the surface of what is possible over time; the 20% improvement is driven by AI; the 20% improvement sometimes can be found immediately, and in other cases, it takes time to show up

As it relates to the 20% improvement, let me answer the last part of your question first, which is that I believe that, that is merely scratching the surface of what is possible over time. So the unlock that AI can bring to campaign optimization is really just beginning. Whether that is slow or fast largely depends on how campaigns are constrained today. So while there is more supply than there is demand, there is often a bunch of settings on any individual campaign that make it so it really can’t select from all the options that are the very best to help that perform. So essentially, what we’re creating is a dialogue between man and machine to make it easier for people to see what is constraining their campaign and what would be the unlock…

…Sometimes the 20%, if you will, can be found immediately. And sometimes, it just takes a little bit of a ramp.

Visa (NASDAQ: V)

Visa has a solution, Visa Intelligent Commerce, that enables consumers to shop and buy with AI agents; there are more than 30 partners currently testing Visa Intelligent Commerce in a sandbox; management thinks the first live transaction pilot for Visa Intelligent Commerce will soon happen, with general availability to come later this year

Another way that we are advancing a more digital future is with Visa Intelligent Commerce, which enables consumers to shop and buy with AI agents. It combines a suite of integrated APIs, including AI-ready cards with tokenization and authentication, together with a commercial partner program for AI platforms, enabling developers to deploy Visa’s AI commerce capabilities securely and at scale. We are excited to announce that we have more than 30 partners testing in our live sandbox, and we will soon enter the live transaction pilot phase, with general availability to follow later this year as we see agentic commerce becoming a reality.

Wix (NASDAQ: WIX)

Wix’s management is seeing AI make creation on the internet easier, driving demand for AI-powered creation

We’re seeing a fundamental shift in how people create, discover and interact online. AI-driven advancements are lowering the barriers to digital creation. This is allowing more people to turn their ideas into more sophisticated and higher quality projects with greater speed and ease. Demand for AI-powered online creation is growing faster than ever, as AI is undoubtedly bringing more people online in new ways and rapidly expanding the world of what is possible.

Wix’s management recently built algorithms to help Wix users’ content show up in AI-generated responses; Wix is the first CMS (content management system) to offer AI visibility tools; organic search traffic is declining for websites, so management sees the need for Wix’s customers to appear on AI-generated answers

Recently, we developed proprietary algorithms that help our users’ content surface prominently in AI-generated responses with our generative engine optimization offering. This empowers users to understand, monitor and actively improve how their brand appears in LLM-based search engines. Wix is the first CMS to offer this kind of AI visibility natively, setting a new benchmark for AI search optimization tools within website platforms and demonstrating our first-mover advantage, as we transform our core website building offering to align with the next era of the Internet…

…When it comes to organic search traffic, we do see a decline. It’s still very small, but we do see a decline. However, there is a new universe now that people have to think about and work very hard on, and this is how to appear and be visible on the LLMs themselves, right? And that is actually at least as complicated as being found on Google. As a result, again, I think we need to supply our customers with the best tools and the best technologies to be visible and to be found on LLMs.

Wix acquired BASE44 in June 2025 (Base44 is an AI-powered platform that allows users to build web applications using natural language prompts); management thinks the acquisition of Base44 will unlock a new vibe coding addressable market for Wix; management thinks vibe coding is going to be a major growth driver for Wix in the future; Base44’s business is growing very fast; management thinks there are synergies between Wix and Base44, such as Wix providing hosting capabilities, security frameworks, GDPR compliance, payment processing, marketing automation, and more for Base44 users; management thinks vibe coding will be a complement to Wix’s existing core offerings; management thinks vibe coding is complementary to the drag-and-drop way of building websites rather than replacing it; management believes vibe coding is good for building business applications, but it’s not good for building websites; management does not think vibe coding will replace Wix; management intends to keep Base44 separate from Wix Studio for now; Base44’s product is aimed at non-developers

We are also unlocking completely new markets such as vibe coding… 

…We are making big leaps with our June acquisition of BASE44. BASE44 gives us immediate access to a completely new audience. This includes developers, design and product teams, enterprises building internal tools and DIY users building applications, not just websites…

…Vibe coding, whether through BASE44 or native capabilities, yet to come, is going to be a major growth driver in 2026 and beyond. We’re already seeing the fruits of this investment today. With just a few million of ARR at the time of our acquisition, BASE44 is now on track to generate $40 million to $50 million of ARR by the end of this year. This is a supersonic level of growth in just a matter of weeks, and we don’t expect this momentum to slow as we accelerate towards the $100 million ARR milestone. More importantly, there is opportunity to generate long-term synergy between Wix and BASE44. Wix can provide the robust infrastructure that vibe coding platforms need to scale. This includes hosting capabilities, security framework, GDPR compliance, payments processing, marketing automation and more. BASE44 brings the application layer to empower rapid development of ideas while Wix can supply the business and online platform…

…Long term, I strongly believe vibe coding is a natural complement to our existing core offering…

…[Question] As vibe coding grows the way it’s growing, do you think this model of building replaces the drag-and-drop editor?

[Answer] Well, I think it’s complementary, right? I think that if you look at the history, we’ve done the first version of something that is very similar to vibe coding, where you type what you want and we actually build the website around it. We started in 2016. Of course, we continue to improve it, and we’ll continue to improve it. And I think for websites, it’s very hard just with a text interface to move things around and design them the way you want them. But we can actually see that the tools that do just vibe coding have already started to add weak, but existing, visual editing elements. So obviously, the solution in the future will be a combination.

I think the vibe coding has tremendous potential when it comes to building applications. So that way I think it’s very interesting because a lot of the business logic is extremely hard, and that’s where vibe coding shines. I want to point out again that if you build a website with the standard vibe coding tools today, you actually end up with a website that is very poor in terms of a lot of the quality that is needed or required by law.

For example, you don’t have support for GDPR. You don’t have support for accessibility. You don’t have support for cookie banners. You don’t have support for tons of other things that you want to have. So I think the combination should be that vibe coding allows you to start very quickly and then switch between designs very quickly for websites. And of course, for applications, it allows you to build the logic of the applications with the text interface. For websites, it’s a bit different. It’s very hard to write, as a prompt, the text of a full-blown e-commerce package…

…You can already see some of the signs of that on Wix itself, right? We accelerated, not decelerated. So in theory, if there was a huge amount of competition out there, it would have decelerated and not accelerated. However, I do think that if you look at the 3 different needs that you have, mostly for website builders and application builders, it will either be website building, application building or prototype building, okay? So I think for prototype building and application building, you see tremendous use of vibe coding now, and I expect that to continue to grow. And I’m sure that we can enjoy at Wix a lot of the new capabilities of AI in order to enhance our offering, which is something that we’ve always been doing…

…[Question] On Base44, just wondering, as you guys work it into the Wix platform, is this a business that you guys intend to kind of run separately? Or is it just going to be part of the core Wix ADI/Studio platform?

[Answer] We’re going to keep it separate, at least for the foreseeable future. I believe, again, that those are very different needs. People don’t do the same thing on Base44 that they do on Wix. And I think that vibe coding is a great way to build prototypes and applications and not necessarily the best solution for websites…

When you look at something like Windsurf or Cursor, they are aimed at developers, right? So the whole experience is very different than the experience that you have in Base44. Base44 is aimed mostly at people who are not developers, or at developers who do not want to code and want to do something very quick and then continue to innovate on top of it, again, without coding.

Wix’s management believes that the infusion of AI into websites will make it even harder for people to move off Wix as there are fewer platforms that offer all of the necessary capabilities

[Question] Do the barriers to change websites change as we think about more text website capabilities? How do we think about the kind of the component of churn within websites as new capabilities kind of lower the barrier to creation?

[Answer] Well, it’s always been easy to change a website, right? I think that the content has always been owned by the user, and you can always move between different platforms. I think that the reason that we see so many people staying with Wix is because we offer them a better platform for many of the things that they need. I do believe that the more advanced AI capabilities exist, it’s actually going to be harder to change a website, not easier. I think that there are going to be fewer platforms that offer all those capabilities. As a result, the number of platforms you can change between would actually go down, and we can already see that.

Wix’s management does not see the back-end of commercial transactions being done on LLMs themselves any time soon

I don’t see any time in the near future where the back end of the transactions will be done on the LLMs themselves. Let me explain. For example, let’s say that you have a yoga studio, and somebody wants to go to an LLM and actually order a class or join a seminar, right? For that, the LLM has to know the seminar exists, how many seats are there, what is the price of a ticket, what are the tax rules, what are the reimbursement rules, what are the refund rules, what kind of coupons go together, how does it all combine with the membership card that you have, do you need a membership card or don’t you need a membership card. All of those things require a very complicated back end, which is a very complicated database and a lot of rules on top of that. Currently, all the signs point in the other direction: LLM providers will not develop those back ends themselves, but will instead interface with the existing website.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, Apple, ASML, Coupang, Datadog, Mastercard, MercadoLibre, Meta Platforms, Microsoft, Netflix, Paycom Software, PayPal, Shopify, TSMC, Tesla, The Trade Desk, Visa, and Wix. Holdings are subject to change at any time.

What We’re Reading (Week Ending 10 August 2025)


Here are the articles for the week ending 10 August 2025:

1. How we’re making data centers more flexible to benefit power grids – Michael Terrell

That’s why we’ve been working to bring flexible demand capabilities into our data center fleet, which enables us to shift or reduce power demand during certain hours or times of the year. These capabilities, often referred to as demand response, have several advantages, especially as we continue to see electricity growth in the US and elsewhere. It allows large electricity loads like data centers to be interconnected more quickly, helps reduce the need to build new transmission and power plants, and helps grid operators more effectively and efficiently manage power grids.

We’re pleased to report on our progress in the implementation of these capabilities, including two new utility agreements with Indiana Michigan Power (I&M) and Tennessee Valley Authority (TVA). These agreements represent the first time we’re delivering data center demand response by targeting machine learning (ML) workloads. This builds on our successful demonstration with Omaha Public Power District (OPPD) where we reduced the power demand associated with ML workloads during three grid events last year — paving the way for us to pursue opportunities at other locations…

…Advancing Google’s 24/7 carbon-free energy ambition requires a holistic approach, to both procure clean energy and support the grid through demand-side solutions. Flexible demand is an important piece of this portfolio — it can be deployed quickly, helping bridge the gap between short-term load growth and long-term clean energy solutions, and delivers immediate benefits.

The first data center demand response capabilities we developed involve shifting non-urgent compute tasks — like processing a YouTube video — away from specific periods when the grid is strained. Through our ongoing partnerships with Centrica Energy and transmission system operator Elia in Belgium, and Taiwan Power Company in Taiwan, we’ve leveraged this capability to help grid operators maintain reliability during those periods of the year when demand is the highest.

As AI adoption accelerates, we see a significant opportunity to expand our demand response toolkit, develop capabilities specifically for ML workloads, and leverage them to manage large new energy loads. By including load flexibility in our overall energy plan, we can manage AI-driven growth even where power generation and transmission are constrained.

2. Why China is building the world’s largest hydropower station in Tibet – Amber Zhang

On July 19, 2025, Chinese Premier Li Qiang stood in the remote southeastern Tibetan city of Nyingchi and announced the official commencement of the Medog hydropower station—what he termed a “project of the century”…

…The mega project is a hydropower station on the lower reaches of the Yarlung Tsangpo River, a plan of such breathtaking scale that it redefines the very concept of a mega-project. With a projected investment of 1.2 trillion yuan (approximately $167-170 billion) and a planned annual electricity output of 300 billion kilowatt-hours (kWh), the facility is designed to generate nearly three times the power of the iconic Three Gorges Dam, China’s previous “project of the century”.

Its output alone would be enough to power the entire United Kingdom [*(2023 statistics)] and is equivalent to 20% of China’s total residential electricity consumption in 2024 [*], enough for 300 to 400 million people…

…The Yarlung Tsangpo project marks the boldest chapter yet. It is located in Medog County, a remote corner of southeastern Tibet, at a dramatic geographical feature known as the “Great Bend” of the Yarlung Tsangpo River. Here, after flowing eastward across the Tibetan Plateau, the river makes a hairpin turn around the sacred Mount Namcha Barwa and plunges south toward India. In just 50 kilometers (31 miles), the river drops between 2,000 and 2,350 meters (over 6,500 feet) [*], through the world’s deepest canyon—three times deeper than the Grand Canyon in the United States.

It is this staggering vertical drop that has long been viewed by engineers as the single most promising site for hydropower generation on Earth, with a water energy density estimated to be seven times that of the Three Gorges [*].

To exploit the potential of the “Great Bend,” China is employing the “run-of-the-river” design, or, figuratively speaking, the “cut-the-bend” approach. Instead of constructing a single massive dam with a vast reservoir—which would be impractical and even more hazardous in this terrain—the project will consist of a series of five smaller “cascade” hydropower stations. These dams will divert a portion of the river’s powerful flow into a network of four enormous tunnels, each stretching approximately 20 kilometers (12.5 miles) and bored directly through the Himalayan mountains [*].

This “run-of-the-river” approach, which utilizes advanced dam-less diversion technology, means water is not consumed but rather borrowed with a resource utilization rate of up to 85% (*). It plunges down these tunnels, gaining immense velocity, to spin turbines located at a much lower elevation at the bottom of the canyon. After generating power, the water is discharged back into the river just before it crosses the Line of Actual Control into India. This design allows China to harness the massive potential energy from the 2,000-meter elevation drop while minimizing the size of the required reservoirs…
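The head-driven design can be sanity-checked with the standard hydropower formula P = η·ρ·g·Q·h (efficiency, water density, gravity, flow rate, head). The ~2,000 m head comes from the article; the 90% efficiency and ~3,700 m³/s flow rate below are illustrative assumptions, not published project figures:

```python
# Rough physics of the "cut-the-bend" design. The ~2,000 m head is from
# the article; the efficiency and flow rate are illustrative assumptions.

RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def hydro_power_gw(flow_m3_s: float, head_m: float, efficiency: float) -> float:
    """Electrical power in gigawatts: eta * rho * g * Q * h."""
    return efficiency * RHO * G * flow_m3_s * head_m / 1e9

# With a ~2,000 m drop, a diverted flow of ~3,700 m^3/s at 90% efficiency
# already lands in the article's 60-70 GW installed-capacity range.
print(f"{hydro_power_gw(3700, 2000, 0.90):.1f} GW")
```

The takeaway is that the enormous head does most of the work: the same power from a conventional low-head dam would require an order of magnitude more water flow.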

…For instance, the engineering challenges require technical capabilities that China has only developed in recent decades.

Beyond basic infrastructure like building roads and bridges, two key technologies have made this project feasible:

The first is tunnel boring machines (TBM)—used to dig long tunnels through mountains, similar to how a pangolin burrows through soil. These machines were once monopolized by German manufacturers and were prohibitively expensive. But after China localized their production, they became widely available and cost-effective.

In the early planning stages of the Medog hydropower station, several construction proposals were considered. Now, only one remains viable: a “run-of-the-river” approach, which involves digging a tunnel over 30 kilometers long to connect both ends of the river’s U-shaped bend, using the more than 2,000-meter drop in elevation to generate electricity. Such an idea would have been unthinkable in the past—but with TBMs, it has become a realistic option.

In fact, there’s already a prototype for this kind of construction: the Jinping II Hydropower Station on the Yalong River. Its surrounding terrain is nearly identical to that of the Medog section. There, engineers cut through both ends of a similar U-shaped bend, building four water diversion tunnels—each 17 kilometers long. The station has been operational for six years and has proven stable.

With this prior experience as a foundation, taking on the challenge of the Himalayas no longer seems so daunting.

The second breakthrough is ultra-high-voltage (UHV) power transmission. Tibet is vast and sparsely populated, and local demand is far below the project’s potential output. Most of the electricity will need to be transmitted to major power-consuming provinces in the east—or exported to Southeast Asia. The only viable solution for such long-distance transmission is UHV, one of the few technologies in which China is globally recognized as a clear leader. Years of experience from the “West-to-East Power Transmission” program have proven that UHV is both mature and reliable…

…Also, the Yarlung Tsangpo Grand Canyon is one of the most inaccessible places on Earth. Until the completion of the Paizhen-Medog Highway in 2022, the area lacked reliable road access and a power supply, making the logistics of transporting millions of tonnes of materials like steel and an estimated 40 million tons of cement a monumental undertaking.

The resulting power output is staggering. The project is designed with an installed capacity of 60 to 70 gigawatts, producing 300 billion kWh of electricity annually. This is enough energy to meet the needs of nearly 300 million people, making it by far the most powerful hydroelectric facility on Earth.
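A quick sanity check of these figures (my own arithmetic, not from the article): 300 billion kWh per year against the midpoint of the 60–70 GW capacity range implies a capacity factor of roughly 53%, which is a plausible level for large hydro:

```python
# Sanity check: does 60-70 GW of installed capacity square with
# 300 billion kWh of annual output? (My arithmetic, not the article's.)
HOURS_PER_YEAR = 8760

def capacity_factor(annual_kwh: float, capacity_gw: float) -> float:
    """Actual annual output divided by the theoretical maximum."""
    max_kwh = capacity_gw * 1e6 * HOURS_PER_YEAR  # 1 GW = 1e6 kW
    return annual_kwh / max_kwh

cf = capacity_factor(300e9, 65)  # midpoint of the 60-70 GW range
print(f"{cf:.0%}")  # ≈ 53%
```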

3. The Imitation Game: Defending against AI’s Dark Side! – Aswath Damodaran

A few weeks ago, I started receiving a stream of messages about an Instagram post that I was allegedly starring in, where after offering my views on Palantir’s valuation, I was soliciting investors to invest with me (or with an investment entity that had ties to me). I was not surprised, since I have lived with imitations for years, but I was bemused, since I don’t have an Instagram account and have not posted on Facebook more than once or twice in a decade. In the last few days, those warnings have been joined by others, who have noted that there is now a video that looks and sounds like me, adding to the sales pitch with promises of super-normal returns if viewers reach out, and presumably send their money in. (Please don’t go looking for these scams online, since the very act of clicking on them can expose you to their reach.)…

…To get a measure of what the current AI scams get right and wrong, I did take the time to take a closer look at both the Instagram post and the fake video that are making the rounds…

…The good news is that this AI scam gets my language and look right, but it is sloppily done in terms of content and capturing who I am as a person. The bad news is that if this scammer were less lazy and more willing to put in some work, even with the current state of AI, it would have been easy to bring up the grades on content and message. I will wager that the Damodaran Bot that I mentioned earlier in this post, which is being developed at NYU Stern, would have created a post that would have been much more difficult for you to detect as fake, making it perhaps a Frankenstein monster in the making. The worse news is that AI technology is evolving, and it will get better on every one of these fronts at imitating others, and you should prepare yourself for a deluge of investment scams…

…It remains an uncomfortable truth that the people most exposed to these scams are the ones who have read little or none of what I have written, and I wish there were a way that I could pass on the following suggestions on how they can protect themselves against the other fakes and scams that will undoubtedly be directed at them.

1. “Looks & sounds like” not good enough: Having seen the flood of fake AI videos in the news and on social media, I hope that you have concluded that “looks and sounds like” is no longer good enough to meet the authenticity test. This remains AI’s strongest suit, especially in the hands of the garden-variety scammer, and you should prepare yourself for more fake videos, with political figures, investing luminaries and experts targeted.

2. Steer away from arrogance & hype: I have always been skeptical of the notion that there is “smart” money, composed of investors who know more than the rest of us and are able to beat the market consistently, and for long periods. For the most part, when you see a group of investors (hedge funds, private equity) beating the market, luck is more of a contributor than skill, and success is fleeting. In a talk on the topic, I argued that investors should steer away from arrogance and bombast, and towards humility, when it comes to who they trust with their money, and that applies in spades in the world of AI scams. Since most scammers don’t understand the subtlety of this idea, screening investment sales pitches for outlandish claims alone will eliminate most scams.

3. Do your homework: If you decide to invest with someone, based upon a virtual meet or sales pitch, you should do your homework and that goes well beyond asking for their track records in terms of performance. In my class on investment philosophies, I talk about how great investors through the ages have had very different views of markets and ways of making money, but each one has had an investment philosophy that is unique, consistent and well thought through. It is malpractice to invest with anyone, no matter what their reputation for earning high returns, without understanding that person’s investment philosophy, and this understanding will also give you a template for spotting fakes using that person’s name.

4. Avoid ROMO & FOMO: In my investing classes, I talk about the damage that ROMO (regret over missing out) and FOMO (fear of missing out) can do to investor psyches and portfolios.

  • With ROMO (regret over missing out), where you look back in time and regret not buying Facebook at its IPO price in 2012 or selling your bitcoin in November 2013, when it hit $1000, you expose yourself to two emotions. The first is jealousy, especially at those who did buy Facebook at its IPO or have held on to their bitcoin to see its price hit six digits. The second is that you start buying into conspiracy theories, where you convince yourself that these winners (at least in the rear view mirror) were able to win, because the game was fixed in their favor. Both make you susceptible to chasing after past winners, and easy prey for vendors of conspiracies.
  • With FOMO (fear of missing out), your overwhelming concern is that you will miss the next big multi-bagger, an investment that will increase five or ten fold over the next year or two. The emotion that is triggered is greed, leading you to overreach in your investing, cycling through your investments, as most of them fall short of your unrealistic expectations, and searching for the next “big thing”, making you susceptible to anyone offering a pathway to get there.

Much as we think of scammers as the criminals and the scammed as the victims, the truth is that scams are more akin to tangos, where each side needs the other. The scammer’s techniques work because they trigger the emotions (fear, greed) that lead the scammed to respond, and AI will only make this easier to do. Looking to regulators or the government for protection will do little more than offer false comfort, and the best defense is “caveat emptor”, or “buyer beware”.

4. How has macroeconomic research misjudged China? – Robert Wu and Dongfan Ma

From 2000 to now, China’s economic structure has undergone at least three major transitions:

  • 2000–2010: Export and processing-led growth was the dominant force.
  • 2010–2020: Real estate and household leverage became the drivers, with Total Social Financing (TSF) as the key indicator.
  • Post-2020: Traditional models started to fail, and new growth drivers began to emerge.

Yet, most macroeconomic analyses remain stuck in phase two—still using TSF and real estate sales as core references. What we observe now is that these indicators have lost predictive value. Their correlation with PMI and corporate earnings is quickly fading…

…Processing trade began tapering off after 2010. But from 2016 onward, general trade has grown steadily and rapidly.

This distinction matters: processing trade mostly reflects contract manufacturing for others, while general trade signals the rise of China’s own manufacturing capabilities, brands, and integrated industrial chains. In 2024, China’s total export volume was already three times the size of real estate investment. Back in 2019, the two were roughly equal.

In other words, China’s new economic engine is no longer real estate, nor low-end contract exports—but rather the international expansion of Chinese brands…

…We’ve studied many outbound brands—MINISO, Pop Mart, Xiaomi, innovative pharmaceuticals, EV makers, short-form video platforms, games—and what we see is not a fleeting opportunity but a fundamental shift. It reflects the rise of talent, capabilities, and global competitiveness.

The era of debt-fueled growth is over. Today, growth comes from improvements in corporate strength, from truly competitive products, and from globalized operations…

…Still worried about China’s government debt? Concerned that the debt expansion of 2010–2020 is no longer sustainable? TSF growth slowing down? PPI still falling? Property prices not yet bottomed? Premium liquor sales still sluggish?

We have data and research to show that many former economic pillars and core assets are undergoing a transition. These variables are no longer fatal risks to the Chinese economy or its markets.

5. Wall Street’s Big, Bad Idea for Your 401(k) – Jason Zweig

Money managers are in a desperate race to stuff illiquid, so-called private-market assets into funds anyone can buy, including your 401(k). They say we all can earn high returns with low risk from nontraded “alternatives” like private equity, venture capital and private real estate…

…Bluerock Total Income+ Real Estate is an “interval fund.” This is a structure that generally allows investors to buy as many shares as they wish at any time—but only to sell limited amounts at predetermined intervals, typically 5% of shares per quarter…
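To see how binding that 5%-per-quarter cap can be, here is a stylized sketch (my own assumption, not a claim about any specific fund: every investor requests full redemption each quarter, so requests are pro-rated against the cap):

```python
# Stylized illustration of interval-fund liquidity: if the fund repurchases
# at most 5% of outstanding shares per quarter and every holder wants out,
# each holder gets roughly 5% of their remaining stake back per quarter.
cap = 0.05
remaining = 1.0   # fraction of your stake still in the fund
quarters = 0
while remaining > 0.10:      # count quarters until 90% of your money is out
    remaining *= (1 - cap)
    quarters += 1
print(quarters)  # 45 quarters, i.e. more than 11 years
```

Under these (deliberately extreme) assumptions, an investor trying to exit fully could wait over a decade, which is the liquidity risk the article is gesturing at.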

…Because private assets don’t trade, it’s the fund managers—not the market—that determine what they’re worth. That enables the managers to report far fewer and smaller fluctuations than public funds do. Then they get to declare that private funds are low risk.

That’s ridiculous. In the real world, risk is the chance of losing money, which has nothing to do with how often prices are reported…

…Owning an alternative fund is a lot simpler than selling it. When you own it, you might take the manager’s valuations for granted, even if that’s a bad idea. When you sell it, the valuation matters—a lot. That’s a risk.

Until now, investors have been able to sell their shares back to each of these two funds at “net asset value,” or what the manager claims they’re worth. Even if other investors might disagree with some of those valuations, the manager has stood behind them.

That works until the number of people looking to sell swells and the managers can’t raise money because they are holding illiquid or distressed assets…

…The answer for these two funds, and for the alternative-asset industry writ large, is to move assets to public markets. There, the price will be set by what other investors—not the managers—believe the assets are worth. If that’s less than NAV, that’s mainly a problem for investors in the fund, not the managers…

…Most of the Bluerock fund’s holdings are stakes in other private real-estate portfolios. If it lists and ends up trading at a discount to net asset value, that might signal that the public market doesn’t believe the private valuations on dozens of these funds.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (the parent company of Google). Holdings are subject to change at any time.

Potential Bargains In A Niche Corner Of The US Stock Market (Part 3)

Earlier this year, I published two articles on investing in thrift conversions in the US stock market titled Potential Bargains In A Niche Corner Of The US Stock Market and Potential Bargains In A Niche Corner Of The US Stock Market (Part 2). In them, I described what thrift conversions are and why both fully-converted thrifts and first-step thrift conversions could be huge potential bargains.

I focused on first-step conversions in Potential Bargains In A Niche Corner Of The US Stock Market (Part 2). In it, I referenced an article from the experienced US-community-bank investor Phil Timyan on Rhinebeck Bancorp and used the same bank to explain first-step thrift conversions, how such thrifts can be acquired, and their potential for generating good returns for shareholders. Timyan’s article briefly mentioned two examples of completed or ongoing acquisitions of first-step thrift conversions, and I delve into their details in this article.

Wake Forest Bancshares, which was the owner of the operating bank Wake Forest Federal Savings & Loan Association, is one of them. In January 2024, Wake Forest Bancshares (shortened to WAKE from here on) was acquired by Piedmont Financial Holding Company for US$34 per share in cash. Before the acquisition, Wake Forest Bancorp MHC owned 0.635 million of the 1.070 million WAKE shares that were outstanding in total. Wake Forest Bancorp MHC was a mutual holding company, so it had no shareholders. At the point of the acquisition by Piedmont Financial Holding Company, Wake Forest Bancorp MHC’s 0.635 million WAKE shares were cancelled, which resulted in 100% of the economics of Wake Forest Federal Savings & Loan Association belonging to WAKE’s remaining shareholders.

Based on the latest financials that I could find for WAKE* prior to the acquisition, it had stockholders’ equity of US$26.507 million, which translates to a book value per share of US$61 based on the 0.435 million shares of WAKE remaining after the cancellation of Wake Forest Bancorp MHC’s stake. At a stock price of US$34 for WAKE, Piedmont Financial Holding Company paid a P/B ratio of just 0.56. But public shareholders of WAKE still enjoyed substantial gains, as WAKE’s stock price was significantly lower than US$20 for months prior to the acquisition. If WAKE’s stock price was, say, US$17 before the acquisition, it would have an optically higher P/B ratio of 0.69 but a true P/B ratio of just 0.28.
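The “optical” versus “true” P/B distinction comes down to which share count you divide equity by: all outstanding shares, or only the shares left after the MHC’s stake is cancelled. A small sketch of the arithmetic using the WAKE figures above (the US$17 price is the article’s hypothetical, and none of this is investment advice):

```python
# Optical vs. true P/B for a first-step thrift conversion, using the
# WAKE figures from the article.
equity_musd = 26.507     # stockholders' equity, US$ millions
total_shares_m = 1.070   # all shares outstanding, millions
mhc_shares_m = 0.635     # MHC-held shares (cancelled on acquisition)
price = 17.0             # hypothetical pre-deal stock price

# Optical P/B divides by all outstanding shares; true P/B divides only
# by the minority shares that capture 100% of the economics post-deal.
optical_pb = price / (equity_musd / total_shares_m)
true_pb = price / (equity_musd / (total_shares_m - mhc_shares_m))

print(round(optical_pb, 2), round(true_pb, 2))  # 0.69 0.28
```

The same two-line calculation reproduces the CFSB figures later in the article (0.70 optical, 0.32 true at US$8.19).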

CFSB Bancorp, the owner of the operating bank Colonial Federal Savings Bank, is another instance. CFSB Bancorp (shortened to CFSB from here on) completed its first-step conversion process in January 2022. As of 31 March 2025, CFSB has: 

  • 6.549 million outstanding shares, of which 3.587 million belong to 15 Beach MHC, the mutual holding company – again, with no shareholders – that owns a portion of CFSB.
  • Stockholders’ equity of US$75.715 million, which gives CFSB a book value per share of US$26 if 15 Beach MHC’s shares are cancelled.

Hometown Financial Group announced on 20 May 2025 that it will be acquiring CFSB for US$14.25 per share, subject to regulatory approval. If the acquisition is successful, it will be a mutually beneficial situation for both Hometown Financial Group and public shareholders of CFSB. Hometown Financial Group will be buying CFSB at an effective P/B ratio of just 0.55, while CFSB’s public shareholders get to earn a healthy return, seeing that the thrift’s stock price was only US$8.19 just prior to the deal’s announcement. For perspective, a US$8.19 stock price for CFSB translates into an optical P/B ratio of 0.70 but a true P/B ratio of just 0.32.

In Potential Bargains In A Niche Corner Of The US Stock Market, I shared the traits I looked out for and they apply to first-step thrift conversions too. In fact, CFSB ticks most of the boxes against my criteria for investing in thrifts:

  • The equity-to-assets ratio: As of 31 March 2025, CFSB has total assets of US$366.2 million and total stockholders’ equity of US$75.715 million, giving it a high equity-to-assets ratio of 20.7%
  • The P/B ratio: Earlier, I mentioned that CFSB’s true P/B ratio was just 0.32 before Hometown Financial Group jumped into the scene
  • Share buybacks: CFSB announced a plan on 5 April 2024 to repurchase up to 0.152 million shares (around 5% of its outstanding shares then); as of the first quarter of 2025, CFSB has bought back more than half of the number of shares under the plan
  • Non-performing assets as a percentage of total assets: CFSB had no non-performing assets in its fiscal years ended 30 June 2024 and 30 June 2023
  • Net income: CFSB was profitable in each of its fiscal years ended 30 June 2022, 30 June 2023, and 30 June 2024, but made a small loss of US$0.16 million in the nine months ended 31 March 2025 (the loss is immaterial against the bank’s total stockholders’ equity)
  • Change in control provisions: CFSB’s CEO, Michael McFarland, can receive up to three times the average of his effective annual compensation in the five years prior to a change in control 
  • Management’s compensation: McFarland controlled 61,549 CFSB shares as of 4 October 2024; the shares were worth slightly more than US$0.5 million at the stock price of US$8.19 before Hometown Financial Group’s involvement, and the value of the shares was also higher than McFarland’s annual compensation of US$0.35 million for the fiscal year ended 30 June 2024; it’s worth noting too that McFarland is already 71 this year, so there is even more incentive for him to cash out from CFSB

I also cautioned in Potential Bargains In A Niche Corner Of The US Stock Market that “not every thrift conversion [referring to standard conversions or thrifts that have completed the second-step of the two-step conversion process] leads to a happy ending.” I think this absolutely stands with first-step thrift conversions too. 

If any of you reading this letter is interested in having deeper conversations about investing in thrifts, please reach out; I would love to engage.

*Publicly-available historical financials for WAKE are currently scarce and the latest we could find was for the fiscal year ended September 2021 (fiscal 2021). Despite the time gap between WAKE’s acquisition in January 2024 and the financials we could find, we think the numbers are still relevant. This is because WAKE’s total assets just prior to its acquisition and at the end of fiscal 2021 were US$121 million and US$110.5 million, respectively.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 03 August 2025)


Here are the articles for the week ending 03 August 2025:

1. The Jamie Dimon Interview – Ben Gilbert, David Rosenthal, and Jamie Dimon

Ben: It seems like your philosophy is that the worst thing will happen. So just plan for it. Don’t say, oh, we’re good as long as this crazy, insane four-sigma event doesn’t happen. You’re like, no. That will happen, and it happens often.

Jamie: Yeah. When I look at it, when I do stress tests on risk for high yield, I remember getting to J.P. Morgan and going through the risk books. Their stress test was that high yield would move 40%, the credit spread. At that time it was at 400 or whatever it was. That means 560.

I said, no. Our stress test is going to be worst ever. Worst ever was 17%. They said, that’ll never happen again. The market’s more sophisticated. Well, in 2008, it hit 20% and you couldn’t have sold a bond. There was no market. So those things do happen.

The point isn’t that you’re trying to guess them. The point is you can handle them, so you continue to build your business. I always look what I call the fat tails and manage that we can handle all the fat tails. Not the stress test the Fed gives us, but all the fat tails.

Markets down 50%, interest rates up to 8%, credit spreads back to worst ever. Of course, your results will be worse, but you’re there. The thing about financial services, leverage kills you. Aggressive accounting can kill you, which a lot of companies do. Also, confidence. If you lose money as a financial company—I always knew this too—it makes headlines, and people read that. If they’re on the fence about putting their money with you, they look at that difference.

Ben: They lose trust.

Jamie: They lose trust, and that’s what has caused the runs on banks you’ve seen. You saw some recently, where people took their money out.

Ben: One, there’s a thing that you just said, which is that you might do worse, but you’re there. There’s this trade-off that you make where you’re less profitable in the short-term, but at least you stick around.

If you look back at the companies that you’ve run—Bank One, J.P. Morgan Chase—is that true in the good years that you’ve actually been less profitable than those who are risk on?

Jamie: A little bit. You’re saying that if you look at the history of banks up until 2007, a lot of banks were earning 30% on equity. Most of them went bankrupt. We never did that much. But in 2008 and 2009, we were fine and they weren’t.

But you want to build a real strong company with real margins, real clients, conservative accounting, where you’re not relying on leverage. It’s very easy to use leverage to jack up returns in any business, but in banking it could be particularly dangerous…

…David: And 2006 on Wall Street is like, go, go, go baby. It’s like the 1980s all over again.

Ben: I think you had the same incentives as everyone else, but you behaved very differently. Am I missing something? Did you have the same incentives or did you—

David: You pulled J.P. Morgan back hard on the risk side in 2006.

Jamie: I did. There were cracks out there in 2006. You may remember the quants. There started to be a quant problem late in 2006. We definitely saw subprime getting bad. I pulled back on subprime. I wish I had done more, because if you look at what I did, you say, okay, well you saved half the money, but you would’ve saved more.

David: You still had some losses.

Jamie: Yeah, but we also had, I’m going to say less, maybe a third of the leverage of the big investment banks and a lot more liquidity. So in 2006, I started to stockpile liquidity, and looking at the situation, I was quite worried. You may not remember this, but the leverage, because of accounting rules and Basel III, Basel I, investment banks, particularly the big investment banks, went from 12 times leverage to 35 times leverage. And it was go, go. The CMOs, the bridge loans, the whole thing.

In 2007, the bridge book of Wall Street was $450 billion. Today it’s $40 billion. J.P. Morgan could handle the whole $40 billion today, though we don’t hold the whole $40 billion, and those were much more leveraged deals back then. A lot of them fell apart, collapsed. Of course, that was before you had the collapse in the mortgage market, which really took down a lot of these banks.

Ben: But you did have the same incentives and you had the same access to information that a lot of these other folks did, but you didn’t blow up. What explains this? Because usually, behavior follows incentives.

Jamie: Well, first of all, if you work for me, I would tell you I don’t care what the incentive is. Don’t do the wrong thing. Don’t do the wrong thing to the client. If you’re the client, how would you want to be treated? I had gotten rid of, I mentioned that one risk thing. There were multiple risk things like that. They were being paid to take the risk.

David: You were telling us about the auto loan business.

Jamie: Yeah, but they were being paid for it. But the second I put in all these new risk controls, all of a sudden you weren’t making money by taking that leverage, because I was looking at how much capital could actually be deployed if things got bad. So I was looking at earnings through the cycle. But very importantly, all of these investment banks were doing side deals, private deals, three-year deals, five-year deals. I got rid of almost all of them.

David: This is for comp with senior bankers.

Jamie: Almost all of them. Today at J.P. Morgan Chase, we do do things—and I know some of my partners in the room here—but we all know about it. There are no winks. There are no nods. There are no side deals. There’s almost no one paid on a particular thing, because if you’re paid on a particular thing, you can do the wrong thing, meanwhile not helping the company manage its risk or something like that. So we change the incentive programs.

I’m quite conscious about incentive programs, that they don’t create misbehavior. But it’s also very important, if you’re in a company and you see that the incentive programs do that, you should tell the company: this incentive plan is not incentivizing the right behavior versus the customer. And a lot of it was leverage.

If you look at the leverage in some of these securitization and mortgage books, if you have 30 times leverage and you’re getting 20% of the profits, you’ll go to 40 times leverage. It literally will add 25% to your bonus. So I got rid of the 20% profit pool and the leverage. I lost some people too in the meantime…

…[Jamie:] If you look at the financial services, very often it’s the new products that blow up. It takes a while. They haven’t been through a cycle. You had that with equities way back in 1929, you had it with options, you had it with equity derivatives, you had it with mortgages. Even Ginnie Maes at one point blew up, even though they’re government guaranteed.

David: Arguably, you had it with quant and with LTCM.

Jamie: It happened with quant. It happened with leveraged lending. People then become more rational about how they run these balance sheets; now they think through the risk.

Ben: I have to ask you, is this private credit today?

Jamie: I don’t really think so. It’s $2 trillion. It’s grown rapidly. That’s an issue. The other thing about the market is there are some very good actors in it who know what they’re doing. Customers like the product. I always say, well, the customers like it.

But there are also people who don’t know what they’re doing, and it’s grown rapidly. There may be something in there that would become a problem one day. I don’t think it’s systemic. That’s $2 trillion; the mortgage market, at the time it blew up, was (I’m going to say) $9 trillion, and a trillion dollars was lost.

David: A trillion dollars was more than a trillion dollars back then.

Jamie: Yeah, a lot of these private credit are not leveraged like that. But that doesn’t mean there won’t be problems. It’s slightly different. You look at the whole system. There are other things out there that are leveraged that can cause problems. Of course, people take secret leverage in the ways you don’t necessarily see it.

Ben: What are some of these in your mind that are potentially problematic today?

Jamie: When you look at asset prices, they’re rather high. Now, I’m not saying that’s bad, but if today PEs were 15 as opposed to 23, I’d say that’s a lot less risk. A lot less to fall, and you have some upside. I would say at 23, there’s not a lot of upside, and there’s a long way to fall. That’s true with credit spreads…

…Ben: Silicon Valley Bank and First Republic both fail. You’re there again. Did you see it coming? What lessons did you learn from how 2008 went that you could apply in 2023? Obviously you bought First Republic.

Jamie: Silicon Valley Bank did some very good stuff. They both had something unique that we didn’t know at the time. I’m going to call them concentrated deposits. Not uninsured, as people often missay, but concentrated, with a lot of venture capital.

What happened with Silicon Valley Bank and First Republic is that some of these large venture capital companies—hundreds of them, maybe a thousand—told the constituent clients they had invested in, who all banked at Silicon Valley Bank and First Republic, that the banks weren’t safe, get out, and they all removed their deposits.

Silicon Valley Bank (I think) had $200 billion of deposits and lost $100 billion in one day. That caused the problem. But they also had other problems. They didn’t have proper liquidity, they didn’t have their collateral posted at the Fed, and they had taken too much interest rate exposure.

The interest rate exposure was hidden by accounting. It was called held to maturity, where you don’t have to mark even treasuries to market. I always hated held to maturity, but it gives you better regulatory returns and stuff like that. But with held to maturity, if you asked what the tangible book value of one of these banks was, and you said it was 100, well, all of a sudden it was 50 if you just marked that one thing to market.

Now you’re into judgment land. At what point, if you saw a bank where just that one mark had the tangible book value drop to 40 or 30 cents on the dollar, would you panic? I would’ve said, that’s too much risk.

The regulators helped us because they said rates are going to stay low forever. So these banks bought a lot of 3% mortgages. When rates went up to 5%, those mortgages were worth 50 cents on the dollar, and that was it. They took too much interest rate exposure, known to management, known to the regulators, and fixable.

2. How Bread vs Rice Molded History – Tomas Pueyo

This means that rice nourishes families on half the land that wheat requires, which means population density in rice areas can be twice as high as in wheat areas, or four times with double cropping. A hectare of land can feed 1.5 families with wheat and 6 with rice.

Yet rice paddies also require a lot of work—twice as much as wheat. And that work is almost year-round: preparing paddies, raising seedlings in nurseries, transplanting every single seedling by hand into flooded fields, managing water, pumping it,3 weeding,4 harvesting, and threshing—often followed by a second rice crop or a winter crop. These tasks peak during transplanting and harvest, creating critical seasons where a huge amount of work must be done in a short window of time…

…Wheat farming historically had a more seasonal rhythm with periods of relative quiet. Wheat is typically sown in the fall or spring and then mainly just left to grow with the rain. Aside from episodic weeding or guarding the fields, there was less continuous labor until harvest time. Harvest itself was a crunch period requiring many hands with sickles—European villages would collaborate during harvest, and farmers might hire extra reapers.

These differences made these regions diverge across politics, culture, and economy…

…Wheat grows in drier, colder areas than rice and requires much less labor, but it also produces fewer calories per unit of land than rice. As a result, rice areas had:

  • More population density
  • Stronger centralized states
  • A psychology and culture that foster social harmony and collaboration

Meanwhile, wheat encouraged the colonization of the New World, allowed the New World to grow its wealth quickly through farming, and accelerated the development of the Industrial Revolution, which widened the economic divergence between wheat and rice areas.

In other words, climate determined crops, which then heavily influenced our societies. Even decades after most of us have stopped farming, these effects carry into our subconscious cultures.

3. Are Diamonds Even a Luxury Anymore? De Beers Reckons With Price Plunge – Jenny Strasburg and Suzanne Kapner

Now diamonds can be made in labs that mimic the earth’s extreme pressure and temperatures, but for a fraction of the price. A decade ago, such man-made gems were novel. Today they are mainstream, and increasingly challenging the perception of diamonds as a luxury accessory.

Walmart sold its first lab-grown diamonds in 2022, but now the stones make up half of its diamond jewelry assortment.

Signet Jewelers, which says it is the world’s largest retailer of diamond jewelry, with brands that include Kay Jewelers, Zales and Jared, is partnering with De Beers to extol the virtues of natural diamonds in a new marketing campaign. But last month, Signet said it, too, has been adding more lab-grown diamonds to its fashion jewelry, which was among the factors helping to pull the company out of a prolonged sales slump…

…More than half the engagement rings purchased last year in the U.S. had a lab-created diamond, a 40% increase compared with 2019, according to a survey of nearly 17,000 U.S. couples by wedding planning website The Knot…

…Manufactured diamonds are 100% carbon, with the same hardness and sparkle as the original. Nevertheless, De Beers’s future depends on consumers who believe that authenticity can’t be made in a lab…

…De Beers gets its name from two Dutch-Afrikaner brothers, Diederik Arnoldus de Beer and Johannes Nicolaas de Beer, who settled in South Africa and discovered diamonds on their farm in the late 1800s.

De Beers grew to control some 90% of the world’s diamond trade. When diamond demand collapsed during the Great Depression, De Beers hired the advertising agency N.W. Ayer, which convinced Hollywood actresses to wear diamond rings. One of its copywriters in 1947 came up with the now famous tagline “A Diamond is Forever.”

Over coming decades, De Beers broadly succeeded in dictating how much should be spent on a diamond engagement ring: “Isn’t two months’ salary a small price to pay for something that lasts forever?” asked a 1980s De Beers ad…

…Even gem experts need specialized machinery to tell the difference between quality lab-grown and mined diamonds. De Beers is now trying to draw more attention to the hard-to-see differences, by asking jewelers to shell out $9,500 for a new diamond-testing device called DiamondProof.

The device is about the size of an air fryer and designed to be displayed on jewelry-store counters. It takes just a few seconds to show color-coded results: If the stone’s image glows blue, it’s natural—a result De Beers says it can guarantee. If it glows yellow, it’s lab-grown or needs further testing…

…Sales of lab-grown diamonds at Walmart, the country’s second-largest fine jewelry seller behind Signet—according to National Jeweler magazine—soared 175% in 2024 compared with the prior year…

…Signet had been more reluctant to jump on the lab-grown bandwagon than other middle-market jewelers, which some analysts say contributed to a prolonged sales decline, plunging stock price and a large shareholder who had pushed for a sale of the company.

Signet Chief Executive J.K. Symancyk, who took the helm in November, laid out a new strategy in March that includes pushing more heavily into lab-grown diamonds for fashion jewelry like tennis bracelets, earrings and necklaces, while aiming to protect the allure of natural stones for milestone purchases like engagement rings.

Sales of fashion jewelry with lab-grown diamonds increased 60% in the most recent quarter, compared with a year ago, one factor that helped the company’s overall sales return to growth for the first time since April 2022.

He added that nearly two-thirds of Signet’s customers still prefer mined diamonds for special occasions like anniversaries and engagements. “We see natural diamonds as lasting and enduring,” Symancyk says. “Fashion trends change.”…

…The influx of lab-grown diamonds has pushed prices down for both types of stones.

The retail price of a 1-carat lab-grown diamond has plunged 86% since the beginning of 2016, to about $745, Zimnisky estimates. The price of the same-size natural diamond is down 40% over that period, to $3,925. Back in 2016, there was only about a $1,000 difference between a 1-carat lab-grown and a natural diamond. A natural diamond now costs about five times as much as a man-made stone.
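As a quick sanity check on those figures, working backwards from the quoted percentage declines recovers the implied 2016 prices and the gap the article describes (the 2016 prices below are my back-of-envelope computation, not numbers stated in the article):

```python
# Working backwards from the quoted declines to the implied 2016 prices.
lab_now, lab_decline = 745, 0.86    # 1-carat lab-grown today, down 86%
nat_now, nat_decline = 3925, 0.40   # 1-carat natural today, down 40%

lab_2016 = lab_now / (1 - lab_decline)   # ~$5,321
nat_2016 = nat_now / (1 - nat_decline)   # ~$6,542

print(round(nat_2016 - lab_2016))   # 1220: the rough "$1,000 difference" in 2016
print(round(nat_now / lab_now, 1))  # 5.3: natural now costs ~5x a lab-grown stone
```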

4. Trump’s Commerce Secretary Loves Tariffs. His Former Investment Bank Is Taking Bets Against Them – Louise Matsakis and Zoë Schiffer

Cantor Fitzgerald, a financial services company led by the sons of US commerce secretary Howard Lutnick, is creating a way for investors to bet that President Donald Trump’s signature tariffs will be struck down in court…

…Lutnick ran Cantor Fitzgerald for nearly 30 years until he was confirmed by the Senate in February, when he turned over control of the firm to his sons, Kyle and Brandon, who are both in their twenties…

…But the investment bank that made Lutnick a billionaire is now letting certain clients wager that Trump’s tariffs will eventually be ruled unlawful, at which point companies that have paid the import duties can apply to get their money back.

In a letter seen by WIRED, a representative from Cantor said the firm was willing to trade tariff refund rights for 20 to 30 percent of what companies have paid in duties. “So for a company that paid $10 million, they could expect to receive $2-$3 million in a trade,” the representative wrote. “We have the capacity to trade up to several hundred million of these presently and can likely upsize that in the future to meet potential demand.”…

…“Secretary Lutnick knows nothing about this decision because he has no insight or strategic control over Cantor Fitzgerald,” wrote Kristen Eichamer, press secretary for the Department of Commerce, in an email to WIRED. “He has fully complied with the terms of his ethics agreement with respect to divesture and recusals and will continue to do so.”

Trump announced in February that the US would put steep tariffs on goods from Mexico and Canada under the International Emergency Economic Powers Act (IEEPA). He widened the trade war in April to include nearly every nation that sells goods to the US, which Trump said would now be subject to “reciprocal” tariffs ranging from 10 to 50 percent.

In response, there was a flurry of lawsuits, including one from a group of small businesses that sued the Trump administration in the US Court of International Trade, arguing that the president exceeded his authority and the tariffs should be ruled illegal. The trade court sided with the plaintiffs, but the Trump administration appealed the decision, and the appeals court allowed the duties to remain in place while the case is pending.

5. Yet Another Munger Masterclass: The 2003 Wesco Financial AGM – Kingswell and Charlie Munger

(7) “The central idea of a margin of safety when you’re making investments will never be obsolete. And the idea of making the market your servant and not your instructor will never be obsolete, either. Those two basic ideas of Ben Graham are basically reality cubed. The idea of being objective and dispassionate, which was also in Graham, that will never be obsolete. So Graham had a lot of ideas that were wonderful.”…

…(8) “I’ve picked up Ben Graham’s main ideas and discarded the practices he used that don’t suit me. I don’t want to go around now buying stocks at a big discount from liquidating value, of businesses that are mediocre or worse, run by people I don’t like, and sit there saying no matter how horrible it is to watch, it will bounce by 25%. I don’t think that approach would work very well given our size of capital. So it’s natural to follow my temperamental attraction toward the better businesses.”…

…(12) “A lot of people rise to power in big corporate bureaucracies who are very nice people and good at doing things in a fairly limited way, but whose general powers of capital allocation are inadequate. And, of course, those who are advising them — the investment bankers, the consultants, and so forth — will mislead you 95% of the time.”…

…(14) “If you could actually sit down and talk to a key manager one-on-one for an hour or so — and if you’re a very smart person — that could be a significant plus. On the other hand, I’m enough of a cynic to believe an intelligent person might be helped 60% of the time and the other 40% of the time he might be misled. So, on balance, whether it’s worth the time, I can’t tell you.”…

…”Years ago”, he said, “we were interested in a particular stock and Warren went and talked to the CEO for two or three hours at lunch — and he thought he was the biggest horse’s ass he’d ever seen. So we sold every share. Well, the thing compounded at 15% per annum for about 20 years thereafter. It finally got a big denouement [and dropped in price], but the idea that meeting the management will always help you… Well, that always amused me — to watch that stock galloping upward.”


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

The Federal Reserve Is Not All-Powerful

I’ve noticed that many financial market participants tend to think of the Federal Reserve, the USA’s central bank, as an all-powerful entity that controls all aspects of the US financial markets. For example:

  • A Reuters journalist wrote in November 2023: “If investors failed to heed the ‘don’t fight the Fed’ mantra this year, they should be doubly cautious about ignoring it again next year… betting against the Fed is risky, no matter where the economic or policy cycles are”
  • Two journalists from Reuters commented in September 2024: “The Federal Reserve cut U.S. short-term borrowing costs on Wednesday… a lower policy rate should translate to cheaper borrowing costs for most kinds of loans”
  • Howard Marks, who is the co-founder of the distressed debt investment firm Oaktree Capital and an investor I respect deeply, shared the following in a June 2025 podcast with The Motley Fool: “You go through these periods of time, like 2017, 18. I would go travel the country… and speak to audiences or clients even. I would get one question. What month will the Fed raise interest rates? That’s all they ask”

The quote from Marks is one of the best at showing just how important the Federal Reserve looks in the eyes of most financial market participants.

But when it comes to interest rates, the truth of the matter is that the Federal Reserve controls only one interest rate, which is the federal funds rate. The federal funds rate is the interest rate that banks charge each other for overnight loans. 

Most types of loans that consumers and businesses interact with are not pegged to the federal funds rate. In addition, many types of corporate bonds and government bonds have interest rates that are set by market forces, not the Federal Reserve.

Figure 1; Source: St Louis Federal Reserve

Figure 1 above shows the monthly percentage change for a few different interest rates over the past two years. There’s the federal funds rate, which is the blue line; there’s the interest rate for 1-year US Treasuries, which is the orange line; there’s the interest rate for 10-year US Treasuries, which is the brown line; and lastly, there’s the interest rate for 20-year US Treasuries, which is the red line. The monthly percentage changes for these four interest rates do not move in lock-step. In fact, in the green circle, all three Treasuries saw a monthly increase in their interest rates around October 2024 even as the federal funds rate declined. This would not have happened if the Federal Reserve was all-powerful.
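The comparison in Figure 1 is easy to reproduce: compute the month-over-month change for each series and set them side by side. The sample values below are illustrative only, not the actual data behind Figure 1; real series are available from the St. Louis Fed's FRED database.

```python
# Month-over-month percentage change for several interest rate series.
# The sample values below are illustrative only; actual series are
# available from the St. Louis Fed's FRED database.
rates = {
    "fed_funds": [5.33, 5.13, 4.83, 4.64],
    "treasury_10y": [3.87, 3.72, 4.10, 4.28],
}

def monthly_changes(series):
    """Percentage change from each month to the next, rounded to 0.1."""
    return [round(100 * (b - a) / a, 1) for a, b in zip(series, series[1:])]

for name, series in rates.items():
    print(name, monthly_changes(series))
# The two series need not move in lock-step: here the overnight rate
# falls while the 10-year yield rises over the same months.
```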

As for the stock market, the Federal Reserve’s impact on stocks is unclear, outside of severe crises where the central bank can play a role in stabilising asset prices – as it did during the 2008 financial crisis.

Table 1 shows a few time periods in the past where the interest rate on the 3-month Treasury bill had increased significantly. It’s important to note that the 3-month Treasury bill is a close proxy for the federal funds rate, so the time periods when the interest rate on the 3-month Treasury bill increased would also be times when the Federal Reserve had raised interest rates.

Time period | Change in yield of 3-month Treasury bill | S&P 500 annualised return
1954 – 1964 | 1.2% → 4.4% | 21%
1960s | 4% → 8% | 7.7%
1970s | 8% → 12% | 6%
Table 1; Source: Ben Carlson

It turns out that the three time periods of rising interest rates actually saw the S&P 500 produce annualised returns ranging from a decent 6% to an outstanding 21%. So, there have been past episodes where US stocks have done well over the long run even when the Federal Reserve was raising the federal funds rate.

Table 2 shows a few dates in the past where the Federal Reserve had cut the federal funds rate and how US stocks performed over the next 12 months. It turns out that US stocks have done very well, as well as done very poorly, in the 12 months after the Federal Reserve had lowered interest rates.

Date of Federal Reserve rate cut | Return of US stocks in the 12 months after the rate cut
October 1957 | 17%
October 1973 | -36%
February 1982 | 32%
September 2007 | -24%
Table 2; Source: Joshua Brown

So the reality of the situation, when it comes to the Federal Reserve, is that it is far from being all-powerful. There are many aspects of the US financial markets where the central bank has little to no control. 

This is also why I spend very little time thinking about or keeping track of what the Federal Reserve is doing when making investment decisions. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 27 July 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 27 July 2025:

1. Introducing pay per crawl: Enabling content owners to charge AI crawlers for access – Will Allen and Simon Newton

Many publishers, content creators and website owners currently feel like they have a binary choice — either leave the front door wide open for AI to consume everything they create, or create their own walled garden. But what if there was another way?…

…We believe your choice need not be binary — there should be a third, more nuanced option: You can charge for access. Instead of a blanket block or uncompensated open access, we want to empower content owners to monetize their content at Internet scale…

…Pay per crawl, in private beta, is our first experiment in this area. 

Pay per crawl integrates with existing web infrastructure, leveraging HTTP status codes and established authentication mechanisms to create a framework for paid content access…

…At its core, pay per crawl begins a technical shift in how content is controlled online. By providing creators with a robust, programmatic mechanism for valuing and controlling their digital assets, we empower them to continue creating the rich, diverse content that makes the Internet invaluable. 

We expect pay per crawl to evolve significantly. It’s very early: we believe many different types of interactions and marketplaces can and should develop simultaneously. We are excited to support these various efforts and open standards.

For example, a publisher or news organization might want to charge different rates for different paths or content types. How do you introduce dynamic pricing based not only on demand, but also on how many users your AI application has? How do you introduce granular licenses at internet scale, whether for training, inference, search, or something entirely new?

The true potential of pay per crawl may emerge in an agentic world. What if an agentic paywall could operate entirely programmatically? Imagine asking your favorite deep research program to help you synthesize the latest cancer research or a legal brief, or just help you find the best restaurant in Soho — and then giving that agent a budget to spend to acquire the best and most relevant content. By anchoring our first solution on HTTP response code 402, we enable a future where intelligent agents can programmatically negotiate access to digital resources. 
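The 402-based handshake described above can be sketched as a simple decision function: a crawler either commits to the quoted price or is refused with HTTP 402 Payment Required and a price quote. The header names and price below are illustrative assumptions on my part, not Cloudflare's published protocol.

```python
# A sketch of the pay-per-crawl handshake around HTTP 402. The header
# names ("crawler-price", "crawler-exact-price") and the price are
# illustrative assumptions, not Cloudflare's published protocol.
PRICE_PER_CRAWL_USD = "0.01"

def handle_crawl(headers: dict) -> tuple[int, dict]:
    """Decide the response to a crawler request: 200 if it commits to
    the quoted price, else 402 Payment Required with the price quoted."""
    if headers.get("crawler-exact-price") == PRICE_PER_CRAWL_USD:
        return 200, {}  # payment commitment accepted; serve the content
    return 402, {"crawler-price": PRICE_PER_CRAWL_USD}

print(handle_crawl({}))                               # quote the price via 402
print(handle_crawl({"crawler-exact-price": "0.01"}))  # serve with 200
```

An agentic client with a budget would retry with the payment header whenever the quoted price is within its budget.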

2. How It’s Done – Doomberg

Among the critical minerals China has successfully cornered are the rare earth metals, and the primary means by which it achieved near-total dominance was by capturing the step at which the mined material—a concentrated mix of many valuable metals—is purified into individual components suitable for use in various military and industrial applications. Copious amounts of waste are produced along that processing journey, and treating such waste to Western standards became economically unfeasible at the market prices that prevailed after China entered the field. Last week, The New York Times caught on to how the game is played:

“Chinese mines and refineries produce most of the world’s rare earth metals and practically all of a few crucial kinds of rare earths. This has given China’s government near complete control over a critical choke point in global trade. But for decades in northern China, toxic sludge from rare earth processing has been dumped into a four-square-mile artificial lake. In south-central China, rare earth mines have poisoned dozens of once-green valleys and left hillsides stripped to barren red clay.”…

…With free markets clearly failing to price environmental and national security concerns—let alone the convergence of both—a completely new approach was needed to address the rare earth vulnerability. Last week brought the announcement of just such a move:

“The Defense Department will become the largest shareholder in rare-earth mining company MP Materials by buying $400 million of its stock and helping it build a new processing facility to sidestep the Chinese market, the company said Thursday. The deal underscores how far the Trump administration is willing to go to subsidize production of high-powered magnets, a field dominated by Chinese firms although the materials are critical for U.S. weapons systems.

Las Vegas-based MP Materials owns the only rare-earth mine in the United States, at Mountain Pass, California, near the Nevada border. MP Materials CEO Jim Litinsky said the company aims to restore the full rare-earth supply chain in the U.S. and eliminate a ‘single point of failure’ in the country’s military-industrial base.”

Perusing the company’s press release and other corporate filings, the details of the creative deal become clear. The Pentagon is taking a holistic approach to the objective, investing the capital needed for MP Materials to construct domestic processing and magnetic facilities while also putting a floor price under the company’s products that accounts for the cost of proper environmental stewardship:

“DoD has entered into a 10-year agreement establishing a price floor commitment of $110 per kilogram for MP Materials’ NdPr products stockpiled or sold, reducing vulnerability to non-market forces and ensuring stable and predictable cash flow with shared upside.

For a period of 10 years following the construction of the 10X Facility, DoD has agreed to ensure that 100% of the magnets produced at the 10X Facility will be purchased by defense and commercial customers with shared upside.”

3. Could AI slow science? – Sayash Kapoor and Arvind Narayanan

It’s a common-sense view, at least among technologists, that AI will speed science greatly as it gets adopted in every part of the scientific pipeline — summarizing existing literature, generating new ideas, performing data analyses and experiments to test them, writing up findings, and performing “peer” review…

…The impact of AI on science could be counterintuitive. Even if individual scientists benefit from adopting AI, it doesn’t mean science as a whole will benefit…

… So far, on balance, AI has been an unhealthy shock to science, stretching many of its processes to the breaking point.

Any serious attempt to forecast the impact of AI on science must confront the production-progress paradox. The rate of publication of scientific papers has been growing exponentially, increasing 500-fold between 1900 and 2015. But actual progress, by any available measure, has been constant or even slowing. So we must ask how AI is impacting, and will impact, the factors that have led to this disconnect.
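For a sense of scale, a 500-fold increase over those 115 years corresponds to a compound growth rate of roughly 5.6% per year:

```python
# A 500-fold increase in annual paper output between 1900 and 2015
# implies a compound growth rate r with (1 + r) ** 115 == 500.
years = 2015 - 1900
r = 500 ** (1 / years) - 1
print(round(100 * r, 1))  # 5.6 (percent per year)
```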

Our analysis in this essay suggests that AI is likely to worsen the gap. This may not be true in all scientific fields, and it is certainly not a foregone conclusion…

…There’s something suboptimal about the way we’ve structured the practice of science, and so the efficiency of converting scientific inputs into progress is dropping. In particular, one subset of hypotheses flags the increase in the rate of production itself as the causal culprit — science is slowing down because it is trying to go too fast.

How could this be? The key is that any one scientist’s attention is finite, so they can only pay attention to a limited number of papers every year. That makes it too risky for authors to depart from the canon: any would-be breakthrough papers would be lost in the noise and wouldn’t get the attention of a critical mass of scholars. The greater the rate of production, the more the noise, the less attention truly novel papers will receive, and the less likely they will be to break through into the canon…

…Another causal mechanism relates to scientists’ publish-or-perish incentives. Production is easy to measure, and progress is hard to measure. So universities and other scientific institutions judge researchers based on measurable criteria such as how many papers they publish and the amount of grant funding they receive. It is not uncommon for scientists to have to publish a certain number of peer-reviewed papers to be hired or to get tenure (either due to implicit norms or explicit requirements)…

…This completes the feedback loop: career incentives lead to researchers publishing more papers, and disincentivize novel research that results in true breakthroughs (but might only result in a single paper after years of work).

If slower progress is indeed being caused by faster production, how will AI impact it? Most obviously, automating parts of the scientific process will make it even easier for scientists to chase meaningless productivity metrics. AI could make individual researchers more creative but decrease the creativity of the collective because of a homogenizing effect. AI could also exacerbate the inequality of attention and make it even harder for new ideas to break through…

…The AI community often advertises AI as a silver bullet without realizing how difficult it is to detect subtle errors. Unfortunately, it takes much less competence to use AI tools than to understand them deeply and learn to identify errors. Like other software-based research, errors in AI-based science can take a long time to uncover. If the widespread adoption of AI leads to researchers spending more time and effort conducting or building on erroneous research, it could slow progress, since researcher time and effort are wasted in unproductive research directions.

Unfortunately, we’ve found that AI has already led to widespread errors. Even before generative AI, traditional machine learning led to errors in over 600 papers across 30 scientific fields. In many cases, the affected papers constituted the majority of the surveyed papers, raising the possibility that in many fields, the majority of AI-enabled research is flawed…

…Older modeling techniques required coming up with a hypothesis for how the world works, then using statistical models to make inferences about this hypothesis.

In contrast, AI-based modeling treats this process as a black box. Instead of making a hypothesis about the world and improving our understanding based on the model’s results, it simply tries to improve our ability to predict what outcomes would occur based on past data…

…AI-based modeling is no doubt helpful in improving predictive accuracy. But it doesn’t lend itself to an improved understanding of these phenomena. AI might be fantastic at producing the equivalents of epicycles across fields, leading to the prediction-explanation fallacy.

In other words, if AI allows us to make better predictions from incorrect theories, it might slow down scientific progress if this results in researchers using flawed theories for longer. In the extreme case, fields would be stuck in an intellectual rut even as they excel at improving predictive accuracy within existing paradigms…

…Researchers across fields are incentivized to find solutions to scientific problems. But this incentive only leads to progress because the process of proving theorems or finding solutions to problems also leads to building human understanding. As the desertion of work on foliations shows, when there is a mismatch between finding solutions to problems and building human understanding, it can result in slower progress.

This is precisely the effect AI might have: by solving open research problems without leading to the accompanying understanding, AI could erode these useful byproducts by reducing incentives to build understanding. If we use AI to short circuit this process of understanding, that is like using a forklift at the gym. You can lift heavier weights with it, sure, but that’s not why you go to the gym…

…If we use AI to bypass human understanding, or worse, retain only illusions of understanding, we might lose the ability to train new scientists, develop new theories and paradigms, synthesize and correct results, apply knowledge beyond science, or even generate new and interesting problems.

Empirical evidence across scientific fields has found evidence for some of these effects. For example, Hao et al. collect data from six fields and find that papers that adopt AI are more likely to focus on providing solutions to known problems and working within existing paradigms rather than generating new problems.

4. AI Comes Up with Bizarre Physics Experiments. But They Work – Anil Ananthaswamy

In the classical physics that describes our everyday world, objects have well-defined properties that are independent of attempts to measure those properties: A billiard ball, for example, has a particular position and momentum at any given moment in time.

In the quantum world, this isn’t the case. A quantum object is described by a mathematical entity called the quantum state. The best one can do is to use the state to calculate the probability that the object will be, say, at a certain location when you look for it there.

What is more, two (or more) quantum objects can share a single quantum state. Take light, which is made of photons. These photons can be generated in pairs that are “entangled,” meaning that the two photons share a single, joint quantum state even if they fly apart. Once one of the two photons is measured, the outcome seems to instantaneously determine the properties of the other — now distant — photon.

For decades, physicists assumed that entanglement required quantum objects to start out in the same place. But in the early 1990s, Anton Zeilinger, who would later receive the Nobel Prize in Physics for his studies of entanglement, showed that this wasn’t always true. He and his colleagues proposed an experiment that began with two unrelated pairs of entangled photons. Photons A and B were entangled with each other, as were photons C and D. The researchers then devised a clever experimental design made of crystals, beam splitters and detectors that would operate on photons B and C — one photon from each of the two entangled pairs. Through a sequence of operations, the photons B and C get detected and destroyed, but as a product, the partner particles A and D, which had not previously interacted, become entangled. This is called entanglement swapping, which is now an important building block of quantum technology.

That was the state of affairs in 2021, when Krenn’s team started designing new experiments with the aid of software they dubbed PyTheus…

…The team represented optical experiments using mathematical structures called graphs, which are composed of nodes connected by lines called edges. The nodes and edges represented different aspects of an experiment, such as beam splitters, the paths of photons, or whether or not two photons had interacted.

Krenn’s team started by first building a very general graph, one that modeled the space of all possible experiments of some size. The graph had output features that represented some desired quantum state…

…The question, then, was how to modify all the other parts of the graph to produce this state. To figure this out, the researchers formulated a mathematical function. It took in the state of the graph and calculated the difference between the output of the graph and the desired quantum state. They then iteratively modified the graph’s parameters, which represented the experimental configuration, to reduce this discrepancy to zero.
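The optimization loop described above can be illustrated with a toy sketch. Everything here is a stand-in: in PyTheus the output state is computed from the graph via perfect matchings, whereas this sketch simply normalizes a weight vector. Only the shape of the procedure — a discrepancy function between the graph’s output and the desired state, driven to zero by iteratively adjusting the parameters — matches the text.

```python
import numpy as np

def output_state(weights):
    """Toy mapping from edge weights to a normalized 'output state'.
    (PyTheus's real mapping sums over perfect matchings of the graph.)"""
    return weights / np.linalg.norm(weights)

def loss(weights, target):
    """Discrepancy between the graph's output and the desired state:
    1 - fidelity, which is zero exactly when they match up to phase."""
    fid = abs(np.vdot(target, output_state(weights))) ** 2
    return 1.0 - fid

def optimize(target, n_steps=2000, lr=0.1, eps=1e-6, seed=0):
    """Gradient descent on the loss, using finite-difference gradients."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=target.shape)
    n = len(w)
    for _ in range(n_steps):
        base = loss(w, target)
        grad = np.array([
            (loss(w + eps * np.eye(n)[i], target) - base) / eps
            for i in range(n)
        ])
        w -= lr * grad
    return w

# Desired state: a Bell-like superposition over four basis states
target = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
w = optimize(target)
print(loss(w, target))  # driven very close to zero
```

The discrepancy shrinks toward zero as the weights rotate into alignment with the target; in the real system, the surviving edge weights then read off directly as an experimental configuration.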

When Krenn’s student Soren Arlt tried to use this approach to find the best way to do entanglement swapping, he noticed that the experimental configuration was unrecognizable — nothing at all like Zeilinger’s design from 1993. “When he showed it to me, we were confused,” Krenn said. “I was convinced that it must be wrong.”

The optimization algorithm had borrowed ideas from a separate area of study called multiphoton interference. By doing so, it created a simpler configuration than Zeilinger’s. Krenn’s team then did a separate mathematical analysis of the final design. It confirmed that the new experimental design would in fact create entanglement among particles with no shared past.

In December 2024, a team in China led by Xiao-Song Ma of Nanjing University confirmed it. They built the actual experiment, and it worked as intended.

5. Get Smart: How to Profit in a Fast-Moving Stock Market – Chin Hui Leong

Here’s the good news: when it comes to investing, the winner is not always the one with the fastest fingers.

While news may reach your eyes faster, the actual change in businesses takes time to materialise.

Thus, even if you react faster, it doesn’t necessarily mean you will be right.

Need an example?

In my Business Time article last Wednesday, I highlighted how the initial hype over DeepSeek in late January 2025 has largely died down.

In the process, those who sold Nvidia (NASDAQ: NVDA) right after the DeepSeek news broke will be rueing the fact that the GPU provider has delivered year-on-year revenue gains of 78% and 69% over the past two quarters.

In turn, shares have risen by nearly 45% from their January low…

…In other words, slowing down, taking your time to assess the situation, and listening to the contrasting arguments will lead to better outcomes…

…But what if a threat turns out to be real and you were right to sell?

It’s possible, of course.

Here’s a common narrative: BlackBerry’s (NYSE: BB) reign as the go-to device in the corporate world was cut short by the rapid rise in popularity of Apple’s (NASDAQ: AAPL) iPhone and Alphabet’s (NASDAQ: GOOGL) Android…

…It’s easy to assume that the decline was immediate, but the opposite is true.

Between fiscal 2007 and fiscal 2011, the Canadian company’s sales actually soared by over sixfold from US$3 billion to almost US$20 billion.

In other words, BlackBerry experienced a period of tremendous growth for over four years before its business began to falter.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet and Apple. Holdings are subject to change at any time.

What The USA’s Largest Bank Thinks About The State Of The Country’s Economy In Q2 2025

Insights from JPMorgan Chase’s management on the health of American consumers and businesses in the second quarter of 2025.

JPMorgan Chase (NYSE: JPM) is currently the largest bank in the USA by total assets. Because of this status, JPMorgan is naturally able to feel the pulse of the country’s economy. The bank’s latest earnings conference call – for the second quarter of 2025 – was held last week and contained useful insights on the state of American consumers and businesses. The bottom line is this: the US economy remains resilient, but significant risks persist.

What’s shown between the two horizontal lines below are quotes from JPMorgan’s management team that I picked up from the call.


1. The US economy remained resilient in 2025 Q2 but significant risks persist

The U.S. economy remained resilient in the quarter. The finalization of tax reform and potential deregulation are positive for the economic outlook, however, significant risks persist – including from tariffs and trade uncertainty, worsening geopolitical conditions, high fiscal deficits and elevated asset prices.

2. Net charge-offs for the whole bank (effectively bad loans that JPMorgan can’t recover) rose to US$2.4 billion from US$2.2 billion a year ago; Consumer & Community Banking’s net charge-offs were relatively flat compared to a year ago 

Credit costs were $2.8 billion, with net charge-offs of $2.4 billion, and a net reserve build of $439 million…

…Now let’s go to our businesses, starting with CCB…

…Credit costs were $2.1 billion, reflecting net charge-offs of $2.1 billion, relatively flat year-on-year, in line with expectations.

3. JPMorgan’s credit card outstanding loans were up 9% year-on-year in 2025 Q2 

Card outstandings were up 9% due to strong new card acquisition.

4. Auto originations were up year-on-year

In Auto, originations were up 5%, driven by higher lease volumes.

6. JPMorgan’s investment banking fees had good growth in 2025 Q2, with growth in debt underwriting fees but a decline in equity underwriting fees; management sees a robust pipeline for capital markets activities among companies and the outlook is upbeat, but they’re also aware that sentiment can change in a heartbeat

IB fees were up 7% year-on-year. We continue to rank #1 with wallet share of 8.9%. Advisory fees were up 8%, benefiting from increased sponsor activity. Debt underwriting fees were up 12%, primarily driven by a few large deals. Equity underwriting fees were down 6% year-on-year. Our pipeline remains robust, and the outlook along with the market tone and sentiment is notably more upbeat…

…You’ve seen how rapidly pipelines can grow and shrink. And so that lesson we’ve learned over and over, it may stay wide open for 1.5 years. Something may happen geopolitically that all of a sudden that pipeline slows a little bit. And so I’m always a little cautious to guess what that’s going to be.

7. Management continues to expect credit card net charge-offs for 2025 to be around 3.6% 

On credit, we continue to expect the Card net charge-off rate to be approximately 3.6%.

8. The consumer looks fine to management given the low unemployment rate, although there is a little bit more stress among lower-income consumers compared to higher-income consumers

[Question] If you can expand that into the consumer, any areas of stress from a credit quality perspective that you’re beginning to get more concerned about today versus 3 or 6 months ago?

[Answer] We look at it very closely. It obviously matters a lot for us as a company. But we continue to struggle to see signs of weakness. We just — the consumer basically seems to be fine. Now a few things are true. Like if you look at indicators of stress, not surprisingly, you see a little bit more stress in the lower income bands than you see in the higher income bands. But that’s always true. That’s pretty much definitionally true. And nothing there is out of line with our expectations. Our delinquency rates are also in line with expectations. You saw that we kept our net charge-off guidance unchanged. So all that looks kind of fine. And to be honest, as we’ve said before, fundamentally, while there are nuances around the edges, consumer credit is primarily about labor markets. And in a world with 4.1% unemployment rate, it’s just going to be hard, especially in our portfolio to see a lot of weakness.

9. JPMorgan experienced a jump in non-accrual loans within consumer lending, but that is because of forbearance related to wildfires in the Los Angeles area, and the actual loss expectation is de minimis

[Question] In terms of the NPAs, the nonaccruals in consumers seem to have a bit of a jump. Is there something technical there?

[Answer] There is something technical, which has to do with customers in the — Home Lending customers in the L.A. area, using our forbearance availability as a result of the wildfires. So that is resulting in an uptick in the nonperforming. But when you think about land value, and the insurance there, the actual loss expectation is de minimis, I would say.

10. Management thinks tariff-related risks have reduced a little; management has not seen any pressure on loans because of tariffs 

When it comes to tariffs, I think the initial Liberation Day, now there’s more talk as more things are getting done, a couple have been announced, a couple have been delayed, that reduces that risk a little bit. And hopefully, they’ll get done. So there’s still risk out there, but I am hopeful that some of these frameworks are completed soon, at least before August 1…

…Is there tariff pressure on loans or debt? The answer is no.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.