All articles

What We’re Reading (Week Ending 28 September 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 28 September 2025:

1. Is this 1996 or 1999? – Ben Carlson

From 1980 through Greenspan’s speech at the tail end of 1996, the S&P 500 was up more than 1,200% in total or a blistering 16.5% return on an annual basis. Valuations were up, up and away. The Netscape IPO occurred a year earlier. Things felt very toppy.

That didn’t matter…

…From the time of Greenspan’s speech through the rest of the decade the S&P would more than double, good enough for an annualized return of nearly 26% through the end of 1999. The market was up 33% in 1997, 28% in 1998 and another 21% in 1999.

The dot-com bubble finally burst in the spring of 2000, cutting the S&P 500 in half along with a drawdown of more than 80% in the Nasdaq…

…The AI capex spending binge is eerily similar to the telecom buildout that occurred in the 1990s.

Speculative activity is all over the place too — SPACs, meme stocks, IPOs, leverage, story stocks, high valuations, deregulation, etc…

…Many people are trying to figure out whether this is the early stages of a bubble or the end of the road.

Investing would be a lot easier if there were a simple way to predict these types of markets. Unfortunately, there’s not. No one can predict when human nature will take things too far or when it will stop on a dime. The pendulum always swings; we just don’t know how far in either direction…

…If you had invested in the S&P 500 following Greenspan’s speech in December of 1996 and held on until today, you would be up just shy of 10% per year. You would have had to live through two 50% crashes in the next dozen years or so, 9/11, multiple wars, oil going to $150/barrel then negative, the pandemic, 40-year high inflation, the 2022 bear market and about a dozen other run-of-the-mill corrections…

…If you had invested at the peak of the market just before the dot-com bubble burst at the end of 1999, you would be up a little more than 8% per year. That’s not a terrible outcome considering all of the bad stuff you would have had to live through, plus those were the most expensive valuations the U.S. stock market had ever seen.

2. Ethical investing, avoiding blow ups, and salacious indictments $RICK – Andrew Walker

But honestly, saying that “I’ll invest in anything, ethics be damned!” is kind of a trite point. Why do I bring it up?

Because I’m actually interested in the potential wisdom of having ethical limits. I wonder if having “ethical passes” on stocks is actually a way of identifying and passing on stocks with tail risks…

…For example, in the mid-2010s, Valeant was an unstoppable acquisition machine. The business model was truly incredible: Valeant acquired underpriced drugs and brought their pricing in line with what the market would bear. Often that pricing was 10x what the old company was charging. Valeant had kind of discovered the holy grail in pharma: they did no risky R&D, every acquisition was insanely and instantly accretive, returns on investment were astronomical, the company gushed cash, etc.

Of course, what Valeant was really doing was price gouging. In 2015, Charlie Munger called Valeant “deeply immoral”. Valeant was a hedge fund darling at the time, and Munger’s comments raised a lot of eyebrows. I remember a ton of investors who said Munger had lost it, and some hedge funders came out swinging pretty hard against Munger.

Within months of Munger’s comments, Valeant was in deep distress (which continues to this day!). Much like raisins mixed with turds are still turds, when a business is a turd no amount of accretive acquisitions or clever financial engineering can save it. It’s still a turd and, true to form, Munger called a turd a turd…

…Last night, RICK got hit with a pretty salacious indictment from the NY AG (the company denies all wrongdoing). And it has me questioning my “no ethics in investing” rule…

…It’s the type of stock I very easily could see myself owning: an asset-heavy business (RICK tends to own the real estate under their clubs) operating a sin business with a founder CEO who owned a ton of stock and was openly talking about running an “Outsiders” playbook / was planning to buyback tons of stock when it was cheap while also pursuing extremely accretive (and low multiple) acquisitions?…

…There were/are a lot of issues at Rick’s that you had to get comfortable with to be long the stock; in general, the way you could get comfortable with the issues was something like “it’s a strip club business; the whole industry is shady so you kind of just need to accept that and realize ultimately the cash flow of the business + stock ownership of the CEO pushes this higher.” Given the upside here, I think there was a reasonable chance you could talk yourself into that if you were ignoring all ethics…. but, if you used an ethics based screen, then you wouldn’t have even been tempted by the cash flows / alignment issue. You would have seen the shadiness and instantly passed.

3. How to avoid value traps in Asia – Michael Fritzell

  • Value traps are stocks that look cheap but end up delivering poor returns.
  • The main reasons why stocks end up being value traps include hoarding cash, having obsolescent products, selling commodity products in a market with excess supply, related party transactions, aggressive accounting, industry cyclicality, high debt and government interference…

…How do you avoid the value traps that simply do not return cash to shareholders? Check the company’s cash flow statement.

In IMAX China’s case, you can see that they pulled the dividend in 2023 and spent almost nothing on share buybacks in 2024. So US$17 million of cash built up on the balance sheet, unfortunately out of reach for us minority shareholders…

…So how can you know whether the underlying demand for a product is rising or not? First, check the like-for-like volume numbers reported by the company. Second, observe consumer behavior through customer engagement metrics. Third, check alternative data sources such as Google Trends or Similarweb to see whether interest in the product is rising or falling…

…So, how do you know if a company is selling a commodity product or not?

  • You can check the company’s market share: if it’s greater than 50%, then it probably has some type of competitive advantage.
  • You can ask customers why they buy the product: is price the determining factor, or are they focusing more on other attributes when buying?
  • Finally, is there a market price for the product that fluctuates with supply & demand? If so, then you’re most likely looking at a commodity…

…So, how do you check whether a company has a complex corporate structure? Search on TIKR using the company name and then click on the Ownership tab. If the parent is a holding company, ask ChatGPT what business the parent is involved in. Finally, open up the annual report and search for any related party transactions…

…If the accounting is aggressive, that means that profits are partly illusory. Once the market realizes what the sustainable earnings power of the business truly is, the shares will probably trade down.

How do companies play these games? They might adjust their depreciation schedules, push products to customers on looser payment terms, capitalize expenses, under-estimate credit costs, etc…

…In reality, I sometimes struggle to judge whether an industry has hit a bottom. But I like to look at a company’s operating margins over time, to see whether they’re mean-reverting or not. You can also look at the operating margins of companies in the same industry. Property developers, auto companies and chemical companies are famously cyclical. So to avoid value traps in these industries, consider whether margins may one day head lower…

…At one level, I think it’s helpful to invest in countries with a reliable rule of law, just to avoid negative surprises in the future. But if you have to invest in countries with a poor rule of law, it’s helpful to invest in entities that are aligned with the top leadership. Because if any government interference occurs, it will most likely be on the positive side.

4. Arc’teryx Is Cooked in China – Amber Zhang

On September 19, Chinese firework artist Cai Guoqiang and outdoor apparel brand Arc’teryx jointly staged a fireworks display called “Ascending Dragon” (“升龙”) in Relong Township, Gyantse County, in the Tibet Autonomous Region. The display — set at roughly 5,500 meters altitude — consisted of three sequences of fireworks along the Himalayan mountainous ridge, with imagery meant to evoke a dragon…

…Soon after videos of the event circulated online, the display triggered intense backlash over environmental and cultural concerns. Netizens began calling for a boycott of Arc’teryx, arguing that setting off fireworks in such a fragile alpine ecosystem risked disturbing wildlife, damaging slow-growing vegetation, and polluting the high-altitude environment. Many also criticized the spectacle as disrespectful to local traditions, which hold mountains as sacred and discourage loud disturbances. The sponsored firework show is the complete opposite of environmental protection and respect for nature—values that strongly resonate with China’s affluent urban middle class and outdoor enthusiasts, who form Arc’teryx’s core customer base.

Some netizens have even extended the boycott to Anta Sports (2020.HK), the Chinese sportswear conglomerate that acquired Arc’teryx’s parent company, Amer Sports (AS:NYSE), in 2019 and now effectively owns the brand…

…For a long time, corporate references to “environmental friendliness” or “social responsibility” were treated as nice-to-have branding or merely compliance with basic regulations, rather than as priorities with real financial impact. For one thing, investment decisions in China were rarely bound by ESG mandates, and it was common for consumers to choose price and convenience over whether a brand truly embodied ESG values. (Realistically speaking, many consumers simply lacked the awareness, tools, or access to evaluate how a company performed on ESG benchmarks.)

But that is changing. In recent years, China’s urban middle class has begun voting with their wallets, willing to spend real money to support brands that align with their values…

…Arc’teryx, which first won over hardcore outdoor enthusiasts in the 1990s with its technical hardshell jackets, has in recent years faced criticism in China for drifting away from its image as a serious outdoor brand…

…Many outdoor enthusiasts argue that Arc’teryx’s management has lost touch with the outdoor spirit that once defined the brand. They point out that a true outdoor enthusiast would never have approved a fireworks show that risks damaging the very landscapes where Arc’teryx gear is meant to be worn. To them, the backlash over the event felt less like a one-off mistake and more like the inevitable result of a brand now led by people who no longer live and breathe the outdoors.

Apart from environmental issues, China’s urban middle class — especially those born in the 1980s and 1990s — is paying more attention to how socially responsible companies are. For example, this year more and more netizens are boycotting products from companies that follow the “996 schedule,” the notorious work culture requiring employees to work 9 a.m. to 9 p.m., six days a week, often without clear overtime pay. On Xiaohongshu (Red Note), people are sharing lists of companies that mistreat employees and avoiding their products…

…Generational divides are particularly stark. Those in power today—both in government institutions and corporations—were born in the 1960s and 70s, coming of age during China’s fastest industrialization. Meanwhile, the biggest consumers, born in the 1980s, 90s and 00s, have entirely different mindsets and values. The clash between these perspectives shapes much of the friction we see today.

For instance, apart from Arc’teryx’s terrible marketing decision, another major topic discussed among netizens is how this firework show was even approved in the first place. A firework show of this scale in such a fragile alpine ecosystem would not have been possible without prior authorization from local officials. (In fact, Cai had previously applied to hold the firework displays in Japan and France, only to be rejected by both countries.) While China does have environmental protection laws, the lack of awareness among those in power demonstrates how actual social responsibility, such as law enforcement, has lagged behind the rapidly growing economy…

…For some, over time, it became more about the “I”—the ego—and less about the community that upholds those values. For this reason, I believe the Arc’teryx and Cai firework incident is not merely a case of bad PR or environmental infringement, but an important and valuable reminder—especially for those who take consumers for granted and fail to adapt to these social and cultural changes in China today.

The change in aesthetics also reflects the shifting social climate in China. It offers a glimpse into the many differences in values and the conflicts between generations: how the older generation prizes hard work, while the younger generation rejects the 996 culture; how fierce competition and extreme efficiency have morphed into involution; or how the younger generation increasingly regards “grandeur” as “grandiose” and unnatural rather than aesthetically satisfying.

5. The resilience of consumer spending in the US – Abdullah Al-Rezwan

This graph basically helped me understand how the broader economy is chugging along just fine even though “vibecession” has increasingly become part of the conversation. The vibes are not great because a lot of people are indeed feeling the pinch, whereas the high-income group remains remarkably resilient. It is because of this high-income group that macro data may continue to be strong for a while:

the fact that credit card debt levels for the highest-income consumers are currently well below the pre-pandemic trend implies that these consumers have room to spend out of unused credit even if their cash on hand has been depleted.

The US economy has increasingly been driven by the high-income group for a while, as half of consumer spending (versus ~36% three decades ago) comes from just the top decile of earners.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google). Holdings are subject to change at any time.

Company Notes Series (#9): CompoSecure

Editor’s note: This is the latest edition in the “Company Notes Series”, where we periodically share our notes on companies we’ve studied in the recent past but currently have no vested interest in (we may invest in or sell shares in the companies mentioned at any time). The notes are raw and not updated, and the “as of” date for the data is given at the start of the notes. The first eight editions in the series can be found here, here, here, here, here, here, here, and here. Please share your thoughts on the series through the “Contact Us” page; your feedback will determine if we continue with it. Thanks in advance!


Start of notes for CompoSecure

Data as of 25 September 2024

Details of CompoSecure

  • Ticker: CMPO
  • Exchange: NASDAQ
  • HQ: New Jersey
  • Founding year: 2000
  • Date of IPO: December 2021, via a SPAC-merger with Roman DBDR Tech Acquisition Corp

Business of CompoSecure

  • CompoSecure led the creation and growth of the metal card form factor through its expertise in material science and has been at the forefront of emerging embedded payment card technology.
  • CompoSecure is a category leader in the design and manufacture of premium metal payment cards. Its metal payment cards are currently issued primarily on the Visa, Mastercard, American Express, and China Union Pay payment networks.
  • In 2003, for the American Express Centurion program, CompoSecure created the world’s first metal payment card. In 2009, CompoSecure developed the first commercialized metal payment cards with embedded EMV chips (EMV is an acronym derived from the names Europay, Mastercard, and Visa, and is a high-security payment protocol for payment cards which utilizes an embedded microprocessor that, when paired with an EMV enabled payment terminal, authenticates cardholder transactions; EMV cards are often called “chip cards”). In 2010, for the JP Morgan Chase Sapphire Preferred program, CompoSecure created the first metal payment card targeting the mass affluent segment. In 2017, CompoSecure introduced the first large-scale NFC-integrated dual-interface metal payment cards for the American Express Platinum program; NFC refers to the near-field communications protocol which enables RFID (radio-frequency identification) communications between payment cards and payment terminals.
  • Dual-interface payment cards comprise the majority of CompoSecure’s sales volume today.
  • CompoSecure has many form factors for metal payment cards and the primary ones are shown in Figure 1.
  • In 2022, CompoSecure also began offering its customers the opportunity to include innovative features in their payment cards:
    • Biometrics – Fingerprint sensors for added security
    • Dynamic CVV – Converts the CVV code from a static number printed on the back of a card to one on a tiny e-ink screen that refreshes periodically.
    • LED – LEDs on the face of a CompoSecure Metal Veneer card that light up when the card is used for transactions; the LEDs can form the issuing bank’s logo or other elements
  • CompoSecure’s customers are global issuers of payment cards. Its largest customers are American Express and JP Morgan Chase. Together these customers represented 70.5% of CompoSecure’s revenue of US$390.6 million in 2023, with American Express representing 28.8% and JP Morgan 41.7%. See Figure 2 for the proprietary and co-branded card programs of JP Morgan Chase and American Express that CompoSecure supports.
  • CompoSecure’s contract with American Express was extended in 2023 and will be up for renewal on 31 July 2026. Under the contract, American Express reserved annual capacity of products and is required to order a certain percentage of that capacity from CompoSecure, and the company may charge American Express for a portion of that capacity even if American Express orders below capacity for any given year. American Express can terminate the contract with written notice. CompoSecure has been working with American Express for nearly 20 years. 
  • CompoSecure’s contract with JP Morgan was extended in 2023 and will be up for renewal on 31 December 2028. Under the contract, JP Morgan Chase agreed to purchase its metal payment cards only from CompoSecure during the contract-term, and reserved annual capacity of products. JP Morgan can terminate the contract with written notice. CompoSecure has been working with JP Morgan for nearly 16 years.
  • CompoSecure’s revenue comes primarily from the sale of its metal cards. In 2023, CompoSecure produced 31 million metal cards, and it served more than 150 card programs. There are recurring elements in CompoSecure’s revenue because the company’s metal cards support its customers’ new customer acquisition and replacement card activity for lost and stolen cards, account fraud, and natural card reissuance cycles. 
  • 82.3% of CompoSecure’s revenue in 2023 came from the USA; the rest was grouped under International.
  • CompoSecure competes with other card manufacturers. But most of the company’s competitors in card manufacturing are large, diversified businesses with areas of strategic focus outside of the payment cards market, and their card operations focus primarily on lower margin plastic cards. CompoSecure’s management also believe that most competitive metal card manufacturers have substantially less production capacity, less technical expertise in the metal form factor, a limited selection of metal card designs and constructions, and less extensive supplier relationships for the raw materials needed for metal cards. CompoSecure’s metal-card competitors include Idemia France S.A.S., Thales DIS France SA, CPI Card Group, Giesecke & Devrient GmbH, Federal Card Systems, Kona I, BioSmart Co., Ltd., and ICK International.
  • CompoSecure designs and manufactures its metal payment cards. It has 5 facilities that total 241,000 square feet, and all are in Somerset, New Jersey.
  • In the third quarter of 2021, CompoSecure entered the cryptocurrency market through the launch of the Arculus Platform, a three-factor security platform with broad industry applicability. The Platform makes it safe, simple and secure for an individual to buy, swap and store cryptocurrencies. CompoSecure started with offering the Arculus Cold Storage Wallet to businesses and consumers. The Arculus Cold Storage Wallet allows users to easily and securely buy and swap cryptocurrencies and store their private keys, providing the convenience of a hot storage wallet with the security of cold storage. Hot storage wallets generate and store private and public keys and digitally sign transactions within Internet-connected devices where storage of the keys is hosted by a third party. Cryptocurrency exchanges typically provide their customers hot storage wallets with the exchange having custody of the user’s private keys. Cold storage wallets store private keys and sign transactions in an offline device, with the private key in the custody of the user, thus protecting the wallet from network-based security vulnerabilities; cold storage wallets are thus less prone to risk of cyber-theft than hot storage wallets. Today, CompoSecure has expanded the Arculus platform into two areas, Arculus Business Solutions, and Arculus consumer products.
  • Arculus Business Solutions consist of:
    • Payments + Arculus Authenticate: The Arculus Authenticate solutions can be seamlessly integrated and paired with CompoSecure’s payment cards, allowing consumers to make secure transactions and gain secure access to personal accounts, all from the same metal card. This custom security solution enables card issuers and other businesses to build multi-factor authentication solutions for their customers, through the convenience of the Company’s premium metal cards
    • White-Labeled Cold Storage: CompoSecure provides white-labeled cold storage wallets in the form of a premium metal cards, to give consumers the ability to make transactions and store the private keys to their digital assets in the same metal cards
    • Payments + Arculus Cold Storage: CompoSecure provides the combination of Arculus Cold Storage combined in premium metal payment cards to give consumers the ability to make transactions and store the private keys to their digital assets in the same metal cards
    • Payments + Arculus Authenticate + Arculus Cold Storage 
  • Arculus consumer products consist of the Arculus Cold Storage Wallet
Figure 1
Figure 2

Market opportunity of CompoSecure

  • CompoSecure’s sales volume of payment cards in 2023 is less than 0.7% of the estimated addressable market for payment cards. Worth noting that CompoSecure’s market share was around 0.5% in 2021.
  • In 2023, CompoSecure produced metal payment cards for 8 of the top 10 U.S. card issuers. Management believes there are substantial opportunities to expand adoption of metal cards for existing customers’ proprietary and co-branded mass affluent card programs in the U.S. which do not currently offer metal payment cards. The number of issuers adopting metal programs continues to increase, and there has been an increase in card issuers expanding their metal card programs to additional proprietary and co-branded portfolios.
  • Management believes that issuers in international markets are still in the early stages of adoption of metal cards and largely untapped opportunities exist across major markets in Europe, Asia, India, the Middle East, and Latin America. In these regions, issuers are developing awareness of the relatively low cost and attractive economics of metal payment card programs.
  • Digital banks and other fintechs are increasingly seeking premium physical touch points to enhance their typically digital-only customer relationships, which means they are more likely to offer premium metal cards to their customers. 
  • CompoSecure’s metal cards use 65% post-consumer recycled stainless steel and this is a major sustainability advantage over plastic cards.

Management of CompoSecure

  • On 7 August 2024, David Cote announced that his family office will invest US$372 million to buy 60% of CompoSecure’s shares (49.3 million) from existing CompoSecure shareholders and thus become a majority shareholder. The investment equated to a price of US$7.55 per share and it was completed on 17 September 2024. As part of the investment, David Cote became executive chairman of CompoSecure’s board, while CompoSecure’s management team – including CEO Jon Wilk – continued in their current roles. Wilk has been CEO since May 2017.
  • Prior to Cote’s involvement, CompoSecure had Class A and Class B shares, where Class B shareholders could receive certain tax benefits; the entire set-up was very complicated. Cote’s investment cleaned up the capital structure as the sellers of CompoSecure’s shares converted all of their Class B shares into Class A shares, and sold the Class A shares to Cote. CompoSecure now has only one single class of common stock.
  • Cote has a legendary track record of improving companies’ efficiency and margins.
  • Cote first built his reputation with Honeywell, where he was CEO from 2002-2017. 2003 was the first full-year Cote was CEO of Honeywell. Table 1 below shows Honeywell’s revenue, operating profit, and net profit from 2003 to 2017. Notice the strong growth in operating profit and net profit (2017’s net profit was hurt by very high taxes because of the US tax reform). Cote became executive chairman of Vertiv Holdings in February 2020 and is still executive chairman today; Vertiv’s operating margins have increased from 7.7% in 2020 to 15.1% in the last 12 months.
  • In talking about his investment in CompoSecure, Cote said:

“We are excited to begin working with Jon Wilk and the team at CompoSecure to continue driving long-term value for shareholders. We plan to focus our efforts on enhancing the Company’s organic growth and operational efficiency while evaluating ways to further diversify its customer base and business mix through M&A. The Company’s permanent capital base eliminates the duration and transactional constraints of traditional alternative asset structures and can allow it to become the acquiror of choice for companies in need of operational improvement and M&A expertise.”

  • The prior major shareholders of CompoSecure were Mitchell Hollin and Michele Logan. Mitchell Hollin is a leader of LLR Partners, a private equity firm, while Michele Logan is a co-founder of CompoSecure. They were the ones who sold their shares to David Cote.
Table 1

Financials and valuation (numbers as of 2024-09-25) of CompoSecure

  • For 2019-2023, CompoSecure’s revenue CAGR is 12.6%, helped by a big jump of 41.3% in 2022; 2023’s revenue growth is 3.2%
  • For 2019-2023, CompoSecure generated consistent profit and free cash flow.
  • Note that CompoSecure’s net profit in Table 2 includes the portions that accrue to the Class B shareholders; after David Cote’s investment, there is only one single share class as mentioned earlier.
  • As of 30 June 2024, CompoSecure had:
    • 81.7 million Class A and Class B shares, so after David Cote’s investment, we can take the total number of Class A shares to be 81.7 million.
    • A total of 8.4 million restricted stock units, performance stock units, and earnout shares that have yet to be vested.
    • 22.415 million public warrants outstanding; the warrants expire on 27 December 2026, and each public warrant entitles the registered holder to purchase one share of the company’s Class A common stock at a price of $11.50 per share.
    • US$130 million in exchangeable notes that can be exchangeable into Class A common stock at a conversion price of US$10.98 per share, which works out to 11.8 million shares. But CompoSecure has the intention to redeem the exchangeable notes and it’s at the discretion of the company to make the redemption, instead of letting the notes convert. 
    • A total diluted share count of 112.52 million, taking into account: the 81.7 million Class A shares; the 8.4 million RSUs, PSUs, and earnout shares; and the 22.415 million public warrants outstanding
  • CompoSecure’s trailing EPS and FCF per share are US$1.06 and US$0.96 respectively, using the 112.52 million total diluted share count. CompoSecure’s stock price of US$13.75 gives it a PE and PFCF ratio of 12.9 and 14.3 respectively. Worth noting that David Cote’s investment (as a result of the simplification of the tax structure) is expected to deliver an additional US$20 million in annual free cash flow. 
  • Is a PE and PFCF ratio of 12.9 and 14.3 too low for a company with an effective monopoly in metal payment cards, and with a new major shareholder on board who has a long history of excellent execution at industrial companies?
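As a quick sanity check, the share-count and multiple arithmetic in the notes above can be sketched as follows. All inputs are figures from the notes; the exchangeable notes (11.8 million potential shares) are excluded from the diluted count because, as noted, the company intends to redeem them rather than let them convert. Small gaps versus the cited 12.9x and 14.3x are presumably due to rounding in the per-share inputs.

```python
# Sketch of the diluted share count and valuation multiples,
# using figures from the notes (data as of 25 September 2024).

class_a_shares = 81.7     # millions of Class A shares after Cote's investment
unvested_units = 8.4      # millions of unvested RSUs, PSUs, and earnout shares
public_warrants = 22.415  # millions of warrants (US$11.50 strike, expire 27 Dec 2026)

# Exchangeable notes (US$130m at US$10.98/share, ~11.8m shares) are excluded,
# since the company intends to redeem them instead of letting them convert.
diluted_shares = class_a_shares + unvested_units + public_warrants
# 112.515 million, which the notes round to 112.52 million

price = 13.75             # US$ stock price
eps = 1.06                # trailing EPS, US$
fcf_per_share = 0.96      # trailing FCF per share, US$

pe = price / eps                # roughly 13.0x on these rounded inputs
pfcf = price / fcf_per_share    # roughly 14.3x

print(f"Diluted shares: {diluted_shares:.3f}m, PE: {pe:.1f}x, P/FCF: {pfcf:.1f}x")
```

The slight difference between the ~13.0x computed here and the 12.9x cited in the notes suggests the notes used unrounded per-share figures.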
Table 2
Table 3

Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Mastercard and Visa. Holdings are subject to change at any time.

What We’re Reading (Week Ending 21 September 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 21 September 2025:

1. AI Will Not Make You Rich – Jerry Neumann

Fortunes are made by entrepreneurs and investors when revolutionary technologies enable waves of innovative, investable companies. Think of the railroad, the Bessemer process, electric power, the internal combustion engine, or the microprocessor—each of which, like a stray spark in a fireworks factory, set off decades of follow-on innovations, permeated every part of society, and catapulted a new set of inventors and investors into power, influence, and wealth.

Yet some technological innovations, though societally transformative, generate little in the way of new wealth; instead, they reinforce the status quo. Fifteen years before the microprocessor, another revolutionary idea, shipping containerization, arrived at a less propitious time, when technological advancement was a Red Queen’s race, and inventors and investors were left no better off for non-stop running…

1. Containerization History: The benefits of the tech are obvious, leading many companies to enter. AI Rhyme: The idea that AI is the next big thing is widespread, and entrepreneurs and tech companies quickly enter.

2. Containerization History: There is immediate government and social attention, leading to pushback. AI Rhyme: The debate over AI that immediately surfaces in society, the media, and government limits experimentation.

3. Containerization History:  Shipbuilders and other infrastructure companies get a quick boost, but not a long-lasting one. AI Rhyme: Chip makers, data center builders, and data providers get a quick boost, but not a long-lasting one.

4. Containerization History:  Competitive intensity makes it difficult to keep prices high or lower costs, and forces high spending on capex, R&D, and talent. AI Rhyme: Prices start to drop even as companies spend heavily on capex, R&D, and talent. Companies will not be especially profitable.

5. Containerization History:  The industry searches for ways to limit competition through cartels and regulatory bodies. AI Rhyme: Investors become alarmed and push for rationalization, resulting in consolidation and convergence on a few business models.

6. Containerization History:  The value created by the innovation is zero-sum: who captures it (provider vs customer) determines the structure of the resulting industry. AI Rhyme: Companies vertically integrate into their customers’ businesses. Companies built on another company’s model have their margins or business model subsumed. Model companies become generalized AI providers.

7. Containerization History:  The longer-term beneficiaries of increased productivity are existing companies that dramatically reduce prices or open new markets to their products. Most incumbents don’t do this. AI Rhyme: The beneficiaries of increased productivity in “thinking” are existing knowledge-industry service providers. Those that won’t adapt will die.

In the “AI rhymes” column, the first four items are already underway. How you should invest depends on whether you believe Nos. 5–7 are next…

…The high capex of AI companies will primarily be spent with the infrastructure companies. These companies are already valued with this expectation, so there won’t be an upside surprise. But consider that shipbuilding benefited from containerization from 1965 until demand collapsed after about 1973.[19] If AI companies consolidate or otherwise act in concert, even a slight downturn that forces them to conserve cash could turn into a serious, sudden, and long-lasting decline in infrastructure spending. This would leave companies like Nvidia and its emerging competitors—who must all make long-term commitments to suppliers and for capacity expansion—unable to lower costs to match the new, smaller market size. Companies priced for an s-curve are overpriced if there’s a peak and decline.

All of which means that investors shouldn’t swim upstream, but fish downstream: companies whose products rely on achieving high-quality results from somewhat ambiguous information will see increased productivity and higher profits. These sectors include professional services, healthcare, education, financial services, and creative services, which together account for between a third and a half of global GDP and have not seen much increased productivity from automation. AI can help lower costs, but as with containerization, how individual businesses incorporate lower costs into their strategies—and what they decide to do with the savings—will determine success. To put it bluntly, using cost savings to increase profits rather than grow revenue is a loser’s game.

The companies that will benefit most rapidly are those whose strategies are already conditional on lowering costs. IKEA’s longtime strategy was to sell quality furniture for low prices and make it up on volume. After containerization made it possible for them to go worldwide, IKEA became the world’s largest retailer and Ingvar Kamprad (the IK of IKEA) became a billionaire. Similarly, Walmart, whose strategy was high volume and low prices in underserved markets, benefited from both cost savings and just-in-time supply chains, allowing increased product variety and lower inventory costs.

2. Getting Rich on Rocks – Joe Raymond

35% per year for 19 years results in a 300x return.

This is a Hall of Fame result. It’s an incredible feat in only two decades for a single stock…
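
The compounding arithmetic behind that claim is worth seeing once. A quick sketch (ours, not the author's; plain compound-interest math):

```python
# Compound growth: a 35% annual return sustained for 19 years.
def growth_multiple(annual_return: float, years: int) -> float:
    """Total multiple on capital from compounding a constant annual return."""
    return (1 + annual_return) ** years

def implied_cagr(total_multiple: float, years: int) -> float:
    """Annual return implied by a total multiple over a holding period."""
    return total_multiple ** (1 / years) - 1

print(round(growth_multiple(0.35, 19)))       # ≈ 299, i.e. roughly 300x
print(round(implied_cagr(300, 19) * 100, 1))  # ≈ 35.0 (percent per year)
```

The same formula run in reverse (a total multiple back to an annual rate) is how the CAGR figures quoted later in the piece are derived.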

…But what if I told you there was an obscure OTC stock that returned more than 35% annually from 1993 to its acquisition in 2012?

You almost certainly haven’t heard of this company. Its executives aren’t on the covers of any magazines and haven’t written any bestselling books. And its shareholders quietly made their fortune without anybody noticing.

To make matters even more interesting, this was an aggregates business. That’s right, the company sold rocks…

…Let me tell you about Western Lime…

…Western Lime was an unremarkable business in the ’90s.

Growth was around 5-6% per year, and ROE hovered around 10%. Decent, but not particularly noteworthy.

What was noteworthy was the price.

For much of the ’90s, WLC traded between $150 and $160 per share. Trades were very infrequent. The stock only changed hands a few times a year.

At $155 per share in 1993, Western Lime had a market capitalization of only $2 million. The company earned $1.3 million after-tax that year, good for a P/E ratio of 1.7x.

Shareholders’ equity was $12.7 million, so the P/B was 0.16x.

The company had no debt and paid a small quarterly dividend…

…In addition to being incredibly cheap, the company itself was repurchasing shares in private transactions at $550 (more than triple the OTC price). I don’t know if anyone was arbing this, but I bet somebody was…

…By 2010, Tweedy owned 27% of Western Lime’s outstanding shares…

…By this point, word had started to get out on WLC. It was no longer a completely undiscovered stock selling for less than 2x earnings. It was then trading for $5,600 per share…

…Performance had been solid from 1993 to 2009.

Net income grew at 13% per year, the share count was cut in half, and the P/E multiple more than doubled from 1.7x to 3.7x.

The result was a 25% CAGR before dividends from 1993 to 2009…

…In late 2010, we received a string of correspondence between the company and Tweedy, Browne. It was sent to all shareholders. And it made for compelling reading.

The company had offered Tweedy $7,600 per share to acquire their 27% interest (36% above the prevailing $5,600 share price).

This equated to about 5x trailing earnings and 86% of tangible book value…

…Tweedy pegged the intrinsic value of WLC at somewhere between $24,000 and $33,600 per share. This equated to 8.5x EBITDA (15.9x earnings) on the low end and 11.9x EBITDA (22.0x earnings) on the high end…

…Tweedy ultimately rejected the bid, saying that they would much rather buy shares at $7,600 than sell them…

…What’s interesting is that the company upped their bid to $10,300 per share based on “an independent valuation of WLC’s stock” which includes “a discount for lack of marketability of minority blocks of stock.”…

…WLC traded for less than $200 per share 15 years prior. The current market was around $5,600. The company was offering $10,300. And they showed no signs of getting serious about selling the entire company or uplisting the stock.

In other words, there was no other clear “catalyst” on the horizon, other than this seemingly juicy offer from the company.

But Tweedy stuck to their core principles and refused to sell below intrinsic value.

They declined the bid and continued to hold their shares…

…Western Lime ended up selling to Graymont a little over a year later in March 2012…

…Shareholders received $52,000 per share.

That’s more than 5x the price offered to Tweedy less than two years prior, and a 36% CAGR from the 1993 price of $155 (before dividends).
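
Plugging the article's own numbers into the compounding formula confirms the quoted figure (a quick check of ours, treating the March 2012 sale as a 19-year holding period):

```python
# Western Lime: $155/share in 1993 to $52,000/share at the March 2012 sale.
start_price, end_price = 155, 52_000
years = 2012 - 1993                       # 19-year holding period
total_multiple = end_price / start_price  # ≈ 335x
cagr = total_multiple ** (1 / years) - 1  # ≈ 0.358
print(f"{total_multiple:.0f}x over {years} years = {cagr:.1%}/year")
```

That works out to roughly 35.8% per year before dividends, matching the ~36% CAGR cited above.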

3. From flops to fortune: How tech’s biggest failures create tomorrow’s winners – Chin Hui Leong

Ever since OpenAI launched ChatGPT in November 2022, Alphabet has found itself in an unfamiliar situation – playing second fiddle to OpenAI’s popular artificial intelligence (AI) assistant.

But with the recent launch of Gemini 2.5 Flash Image, Google is starting to look innovative again. The new image feature (code-named Nano Banana) attracted more than 10 million new users in a week, with over 200 million images edited.

Here’s what most people don’t realise: Nano Banana’s success was about 15 years in the making. The story begins with Google+, the company’s catastrophic attempt to challenge Facebook. Launched in 2011, Google+ burned through hundreds of millions before being shuttered in 2019.

But buried within that failed social network was a gem – Google Photos. When Google Photos became a standalone product in 2015, it brought along the image editing and organisation capabilities developed for Google+. Those capabilities – honed during its failed social network experiment – would give Google’s image AI the headstart it needed.

Fast forward to today, the technology that couldn’t save a social network now powers Google’s comeback in the AI race. Nano Banana’s overnight success took about 15 years of patient failure…

…For investors, the lessons are:

  1. High-profile failures may signal opportunity, not disaster.
  2. Watch how executives handle failure. Do they admit mistakes openly like Nadella?
  3. Look for companies with “failure labs” – autonomous teams and experiment budgets that embrace Bezos’ brutal math of taking bets with a 10 per cent chance of a 100-times pay-off.

4. The bloom is off: the start of the DAT crash? – Andrew Walker

When I was writing my series ~a month ago, MSTR was trading for ~2x mNAV, and every company that announced a DAT [Digital Asset Treasury] deal with any crypto was seeing their stock price skyrocket.

Today, things have changed dramatically. Yes, you’ll still get an occasional squeeze on a buzzy deal in a company with a tiny float (see: OCTO jumping ~2500% on a worldcoin DAT strategy), but for the most part things have cooled down. Just take a look at the king of DATs: MSTR[1] has traded down to ~1.5x mNAV…

… and the market seems to be looking at their strategy with increasing skepticism; most of their preferreds are trading below par (in the case of STRD, well below par), and, despite the drop in MSTR’s mNAV, MSTR has been forced to shift most of their capital raise to their ATM program in order to continue to buy bitcoin…

…And we’re already seeing formerly hot DATs need to pivot their strategy as their stocks trade below mNAV. For example, SBET has announced a share repurchase program as their stock slipped below mNAV, and they’re not alone. My favorite is Empery Digital, which announced a share repurchase program and had their CEO make an impassioned plea to shareholders about buying their stock to get discounted access to BTC…

…Despite the shareholder friendliness of the buybacks, I suspect they are a band-aid on a bullet wound for most DATs.

Why?

Most of these DATs have fully deployed all of the proceeds they raised into their underlying assets. SBET, for example, has purchased over $3.5B of ETH and had just ~$72m in cash on their balance sheet at their last update; that’s a pittance versus their >$3B market cap…

…I think what’s really interesting about the bloom coming off DATs (the premiums fading away) is that it’s happened while crypto is still generally in favor. ETH is up ~70% over the past three months, while Bitcoin is up ~5%.

If DATs are starting to go out of favor while the underlying crypto is still doing reasonably well, what would happen if we hit another crypto winter and crypto prices traded down meaningfully?

And, if I might speculate a bit, if a lot of the recent rise in crypto has been caused by the huge rush of capital into DATs (which then gets deployed into the crypto, thus supporting the price), what would happen if that unwound for some reason? What if a bunch of DATs said “we’re trading at a discount to NAV; let’s practice good corporate governance, sell crypto, and buy our stock back (option 3 above)”? Or what if a bunch of DATs practice option 2 (leveraging crypto to buy back stock) and get margin called?

I suspect the underlying crypto could go a lot lower real fast as the same flywheel effect that’s sent crypto up recently unwinds.

5. What the Pentagon’s Rare Earths Deal Gets Right and Wrong –  Tracy Alloway, Joe Weisenthal, Arnab Datta, and Peter Harrell

Rare earth elements and magnets manufactured from them are used across defense and industrial applications: An F-35 fighter jet, for example, requires more than 900 pounds of rare earths, and in cars they are used for everything from batteries to power seats. Apple uses a rare earth magnet in the iPhone’s “haptic” engine that makes a user feel buzzes and other vibrations.

China’s dominance of rare earths (it processes nearly 90% of rare earths globally) is relatively recent. For much of the 20th century the U.S. produced both rare earths and rare earths magnets domestically. Indeed, MP’s mine in Mountain Pass, California, located near Las Vegas, started production in 1952.

In the early 2000s, however, low-cost Chinese producers came to dominate global markets, driving most non-Chinese companies out of business: the Mountain Pass mine, for example, stopped operations in 2002. By the time it reopened in 2012, China had built a market infrastructure to dominate all aspects of the trade. The mine closed again in 2015. GM sold America’s leading rare earth magnet manufacturer to Chinese companies in the 1990s. By 2004 it, too, had shuttered U.S. manufacturing. Even after MP acquired the Mountain Pass mine and restarted operations in 2017 it exported most of its product to China to be processed and turned into magnets.

The Defense Department’s deal with MP Materials is designed to end America’s dependency on China with respect to two specific rare earths, neodymium (Nd) and praseodymium (Pr). In addition to expanding mining and processing of the raw metals, the deal is intended to build up America’s capacity to manufacture the metals into magnets, specifically neodymium iron boron (NdFeB) permanent magnets, one of the most important types of rare earths defense and industrial magnets…

…First, MP committed to expand U.S. mining, processing, and magnet manufacturing facilities. The company will increase mining and processing operations, including possibly in heavy rare earths; expand its existing magnet manufacturing facility in California to be able to produce 3,000 tons of NdFeB permanent magnets annually (up from 1,000 tons annually currently); and construct a new “10X” facility in Texas that will enable MP to produce a total of 10,000 tons of magnets annually after 2028. Combined, the facilities should be able to meet a substantial portion of U.S. demand for NdFeB magnets, including all of our defense needs.

Second, DoD set a guaranteed price floor of $110 per kilo of MP’s NdPr products, running for 10 years. If the market price, currently below $60/kilo, remains below $110, DoD will pay MP the difference between the market and $110/kilo. If market prices exceed $110/kilo, DoD is entitled to 30% of MP’s extra profits. This ensures that MP can make money on its mining and processing operations even if it has to sell minerals below cost to compete with Chinese producers.

Third, DoD has guaranteed that either it or commercial buyers will purchase all of the 10X facility’s NdFeB magnets, estimated at 7,000 tons a year for the next decade. DoD will pay MP its realized cost of production of the magnets, plus $140 million per year to guarantee MP a profit, with a 2% annual inflation increase in the guaranteed profit figure. With DoD’s consent, MP can sell some of its magnets to commercial buyers, in which case, DoD will take the first $30 million in MP magnet profits exceeding $140 million. Additional profits beyond that will be split 50/50 between MP and DoD. Similar to the price floor, the magnet offtake agreement ensures that MP can profitably make magnets even if low global prices would undercut MP’s manufacturing…

…Beneath the deal’s ambition, its structure raises significant policy design questions. The first is a fundamental question about the extent to which the government (versus the private sector) should bear the costs associated with addressing critical U.S. supply chain risks. The MP deal essentially puts the U.S. taxpayer on the hook for developing a reliable U.S. supplier of rare earths and NdFeB magnets. And while the U.S. government can share in the upside if global prices for rare earths and the magnets exceed expected levels, if the price trajectory looks similar to the last decade, the U.S. government could be on the hook for billions. Potential costs include $1.4 billion in guaranteed profits for MP ($140 million per year, adjusted up at 2% per year). The price floor alone could cost billions over ten years if MP hits their announced capacity of 6,075 metric tons and prevailing market prices stay constant…
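
The deal terms described above reduce to simple payout formulas. A rough sketch of the economics as summarized in the article (our simplification, not the actual contract; the 2% escalator on the $140 million guarantee is ignored):

```python
def price_floor_payment(market_price_kg: float, volume_kg: float,
                        floor: float = 110.0) -> float:
    """DoD's annual payment to MP under the NdPr price floor.
    A negative result means DoD is owed its 30% share of the excess
    when market prices sit above the floor."""
    if market_price_kg < floor:
        return (floor - market_price_kg) * volume_kg
    return -0.30 * (market_price_kg - floor) * volume_kg

def mp_magnet_profit(total_profit_m: float) -> float:
    """MP's share (in $M) of annual magnet profits: $140M guaranteed,
    DoD takes the next $30M, and anything beyond that splits 50/50."""
    excess = max(0.0, total_profit_m - 140.0)
    dod_first = min(excess, 30.0)
    return 140.0 + 0.5 * (excess - dod_first)

# At today's ~$60/kg on the announced 6,075 t/yr capacity, the top-up is
# about $304M a year — roughly $3B over the 10-year term if prices stay put.
print(price_floor_payment(60.0, 6_075 * 1_000) / 1e6)  # 303.75
```

That annual top-up is what makes the "billions over ten years" estimate for the price floor plausible under current market prices.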

…The deal elevates MP Materials as America’s de facto magnet champion, despite having no track record of commercial success in magnet production. By contrast, China’s national champions typically emerge through fierce domestic competition. Firms like CATL did not rise to global leadership through political selection alone; they fought their way to the top by outperforming rivals on innovation and scale: CATL remains one of the top patent recipients globally while leveraging partnerships with major automakers like Tesla and BMW. Government support was structured to spur this competition. Subsidies and pilot programs were spread across multiple firms before consolidating behind the winners.

The U.S. decision to back MP sidesteps this competitive process, effectively granting a monopoly franchise in magnet production. This risks locking the U.S. into a suboptimal path if MP fails to deliver on cost or performance, while crowding out rivals that could prove more innovative…

…The deal also hardwires U.S. fiscal exposure to the same market infrastructure that China uses to determine prices and stifle investment in competitors. Under the price‑protection term, DoD pays the difference between $110/kg and a reference price — specifically the Asian Metal Market price. A substitute ex‑China Index is only allowed at DoD’s election and with the company’s consent. That means core cash flows for a decade depend on a benchmark whose prints reflect Chinese production costs, market structure, trade flows, policy choices, and tax treatment.

This creates three compounding problems: (1) basis risk; (2) manipulability; and (3) path dependence. NdPr is sold as concentrate, oxide, and metal with varying specs, impurities, tenors, and delivery terms; Asian Metal quotations often embed VAT regimes, logistics premia, and buyer restrictions that diverge from U.S. realizations. Even with upside sharing, those mismatches can cap clawbacks in booms and invite arbitrage in busts. Relatedly, when public payments hinge on a single, quarterly external print, an actor with market power can manipulate spreads, restrict eligible buyers, or flood spot supply to push the index below U.S. breakevens and eat DoD appropriations. And locking federal contracts to Asian Metal deepens liquidity and legitimacy in that price‑discovery ecosystem. The U.S. ends up validating the very benchmark that concentrates market power abroad, raising the fiscal cost of preserving domestic capacity and making future decoupling harder.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Apple, Meta Platforms (parent of Facebook), Microsoft (its CEO is Satya Nadella), and Amazon (its founder is Jeff Bezos). Holdings are subject to change at any time.

More Of The Latest Thoughts From American Technology Companies On AI (2025 Q2)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q2 earnings season.

Last month, I published The Latest Thoughts From American Technology Companies On AI (2025 Q2). In it, I shared commentary in earnings conference calls for the second quarter of 2025, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2025’s second quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

Here they are, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management is infusing AI across Adobe’s flagship Creative Cloud applications; there is strong adoption of the Creative Cloud Pro offering, which includes Firefly; recent new AI features in Creative Cloud applications include (1) Harmonize in Photoshop that blends composited objects with the image, and (2) Project Turntable, which rotates 2D artwork to accurately visualize different angles; Creative Cloud had strong new user acquisition, particularly in emerging markets; management will soon be unveiling new AI innovations within Adobe’s Creative Cloud applications

We’re infusing AI across our flagship Creative Cloud applications including Photoshop, Illustrator, Premiere Pro and After Effects and delivering new offerings for next generation creators with Adobe Firefly across web and mobile…

…We’re seeing strong adoption of the Creative Cloud Pro offering which includes Firefly, reflecting the value professionals see in having AI integrated with power and precision creative tools…

… Recent examples include the addition of a new Harmonize feature in Photoshop that blends composited objects with the image by automatically adjusting lighting, colors and shadows. Harmonize has quickly become one of the most used features in Photoshop. We released Project Turntable, a popular sneak from MAX last year, into Illustrator, which helps users rotate their 2D artwork to accurately visualize different angles, eliminating a frequent and time-consuming task. Innovations like these directly translate into measurable value for customers by cutting production times, enabling more content output, and raising the overall quality of creative work and have driven strong migration to our new Creative Cloud Pro offer…

…Continued new user acquisition of Creative Cloud with particular strength in emerging markets like India which grew ending units 50 percent year over year…

…We’re excited to welcome our community at Adobe MAX next month. We’ll showcase incredible innovations that highlight amazing productivity features in our flagship Creative Cloud applications, breakthrough AI capabilities leveraging Firefly and third-party models, new agentic experiences for conversational editing, and significant strides in content production automation for enterprises.

Adobe’s management is making the new Firefly application the single destination for creators’ workflows; the Firefly application includes Adobe’s own AI models as well as 3rd-party models; there is strong adoption of the standalone Firefly subscription; the 3rd-party models in Firefly include Google’s Gemini, Veo, and Imagen models, along with models from OpenAI and more; new capabilities for Firefly that were added in 2025 Q2 (FY2025 Q3) include avatar generation and sound effects generation; Firefly Services are agentic services that use custom models to automate and personalize image, video, and 3D content for many types of use cases; management recently delivered a no-code interface for Firefly Services; usage of Firefly Services and Custom Models grew 32% and 68% sequentially in 2025 Q2 (FY2025 Q3); the Firefly App for mobile has been downloaded millions of times since launch; the Firefly App’s MAU (monthly active users) was up 30% sequentially in 2025 Q2 (FY2025 Q3); first-time subscribers to Adobe from the Firefly app were up 20% sequentially in 2025 Q2 (FY2025 Q3); Firefly has powered 29 billion generations (24 billion in 2025 Q1) since its launch in March 2023, with video generations up 40% sequentially in 2025 Q2 (FY2025 Q3); Nano Banana from Google was integrated with Firefly on the day it was released and the integration of Nano Banana led to a better product than the standalone Nano Banana; management sees the real strength of Adobe in the company’s ability to deeply integrate 3rd-party generative AI models into the workflows of the company’s existing applications; the integration of 3rd-party models into Adobe’s applications is not a trivial project; the majority of AI credit usage in Adobe is being used on the company’s Firefly models, but 3rd-party models are seeing a nice uptick in usage, and management is happy with the current mix

We’re delivering an end-to-end, ideation-to-creation solution in the new Firefly application to make it the single destination for creators’ workflows. It includes our own first-party, commercially safe models and leading third-party models. We are seeing strong adoption of the standalone Firefly subscription offering. We recently added Google Gemini Flash 2.5 alongside Google’s Veo and Imagen models to the roster of partner models from OpenAI, Black Forest Labs, Runway, Pika, Ideogram and others. In the rapidly evolving AI landscape, where each generative AI model has its own aesthetic style, we’re offering customers choice and flexibility to use the right model within Adobe applications, without the friction of switching between workflows and platforms…

…The Firefly app is a powerful, yet accessible AI production studio that helps creators deliver original content faster than ever before. In Q3, we added a slew of new capabilities, including avatar generation, sound effects generation and updates to the growing list of integrated generative models…

…We are delivering incredibly powerful automated content production capabilities through Firefly Services to enterprises of all sizes and across all verticals. These agentic services leverage Custom Models to automate and personalize image, video and 3D content for marketing campaigns, ad creation and postproduction video work, all while maintaining brand consistency. Additionally, we delivered a no-code interface that extends the power of Firefly Services to studio and design teams. Firefly Services are available through GenStudio as well as to individuals through Firefly app subscriptions. Consumption of Firefly Services and Custom Models grew 32 percent and 68 percent quarter over quarter, respectively…

… Millions of downloads of the Firefly App for mobile since launch; Firefly app MAU grew 30 percent quarter over quarter; Firefly app continues to attract next gen creators, with first time Adobe subscribers through the app growing 20 percent quarter over quarter; Generative AI consumption accelerated, with 29 billion generations, and video generations growing nearly 40 percent quarter over quarter…

…[Question] on sort of the demo that you guys gave on that video at the beginning. Really, again, highlighting the Adobe magic with kind of what you’re doing with Nano Banana, and — being able to manipulate images like that.

…Regarding choice, we want to make sure that all third-party models are available, you saw our announcement with Google and Nano Banana, OpenAI, Flux, Runway, Luma, Ideogram, the list continues to grow. And you call out the example of Nano Banana. We actually launched Nano Banana in the first — on the day that it was released as part of the Firefly application, and now we’re integrating it into Integrated Cloud Pro. So the core of the choice of whatever model has the most interesting thing for the thing you want to do, you know you can turn to Adobe, and it will be there.

The second part is the integration as you talked about, right? We have a lot of workflows that we have — that we pulled into the model. You noticed that in the demo you saw, and all the demos that are out there people are using Nano Banana with Photoshop. They’re doing it in a way that they’re blending the precision and the control you get with Photoshop and combining it with the generative capabilities of Nano Banana…

… The magic is clearly in our applications because we can take all of the models that exist and integrate that within our interface. And that’s a completely nontrivial task of what we have done to build. That was actually the rationale for building Firefly because we understand whether they’re diffusion or transformer models better than I think anybody can in the Creative Application. So I wouldn’t underestimate the amount of magic that we have to make it look as seamless as it has…

…[Question] On the mix of AI credit usage between your own Firefly-based solutions and third party, whether you’re seeing any pickup from the third-party models and how users are responding?

[Answer] The majority of generation continues to be Firefly given the commercial safety and the underpinnings of what that is. But we are seeing a nice uptick in usage of the other models. Especially for things like ideation and sort of edit capabilities that are integrated into Firefly. So that mix feels right to us, and we’re going to continue to optimize and drive that discovery in our applications going forward.

Adobe’s management sees Adobe GenStudio as the most comprehensive solution for AI-driven marketing automation; Adobe GenStudio now exceeds $1 billion in ARR, growing 25% year-on-year; there is accelerating adoption and usage of Adobe GenStudio; new capabilities in Adobe GenStudio for Performance Marketing are accelerating video and display ad campaign creation; marketers can produce engaging short-form video ads in Adobe GenStudio with commercially-safe Firefly models; management recently released new capabilities for display ad campaigns for Adobe GenStudio, including on-brand image generation with Firefly

Adobe GenStudio is the most comprehensive solution that brings together workflow and planning, creation and production, asset management, activation and delivery and reporting and insights to enable marketing automation with AI in the enterprise. Our Workfront, Frame, AEM Assets, Firefly Services, and GenStudio for Performance Marketing products – which are key components of the integrated GenStudio solution – now exceed $1 billion in ARR growing over 25 percent year over year…

…We’re seeing accelerating adoption and usage of Adobe GenStudio, the most comprehensive content supply chain solution, as enterprises drive content velocity with AI. New capabilities in Adobe GenStudio for Performance Marketing are accelerating video and display ad campaign creation. Marketers will be able to produce engaging short-form video ads using the commercially safe Firefly Video Model. We released new capabilities for display ad campaigns, including on-brand image generation with Firefly, as well as offerings with Amazon Ads, Google Campaign Manager 360, LinkedIn and Meta to power seamless campaign workflows.

70% of eligible AEP (Adobe Experience Platform) customers are using AEP AI Assistant; management sees AI becoming the new UI (user interface) for brand discovery by consumers; management thinks brands must deliver hyperpersonalized, immersive experiences on owned channels to drive engagement and loyalty, and this is where Adobe shines; management sees new marketing needs such as LLM (large language model) optimization and LLM advertising as massive opportunities for Adobe; management is infusing agentic capabilities into Adobe Experience Manager; management saw LLM traffic grow 4,700% year-on-year in July 2025; management thinks Adobe has AI-first and AI-infused solutions that can orchestrate the customer experience in the era of agentic AI; AEP has agentic capabilities and management launched the 1st phase of the AEP Agent Orchestrator in 2025 Q2 (FY2025 Q3), so that users can build, manage and orchestrate AI agents from Adobe and 3rd parties; Adobe LLM Optimizer is currently available in early access and will be generally available later in 2025 Q3 (FY2025 Q4); Adobe LLM Optimizer helps brands shape how they show up in LLM results; subscription revenue for AEP and native apps was up 40% year-on-year in 2025 Q2 (FY2025 Q3)

Customers are leveraging the rich data and customer knowledge in Adobe Experience Platform to enable agentic workflows to scale the capabilities of Adobe’s category-leading customer experience orchestration applications. We’re seeing continued adoption and momentum for Adobe Experience Platform (AEP) AI Assistant with 70 percent of eligible AEP customers leveraging this functionality. 

As AI transforms consumer behavior, it’s reinventing marketing and customer experience. Brand discovery is shifting from primarily search to include generative engine optimization. AI becomes the new UI, guided by conversations rather than menu clicks. Brands must deliver hyperpersonalized, immersive experiences on owned channels to drive engagement and loyalty. In this new reality, Adobe uniquely offers an integrated customer experience platform that delivers automation, agility and scale.

The explosion of content creation and automation in the enterprise and the beginning of new marketing needs such as LLM optimization and LLM advertising are a massive opportunity for Adobe. We’re infusing AI into Adobe Experience Manager with our upcoming LLM Optimizer release, a powerful agentic app to improve brand visibility, drive acquisition and maintain engagement with customers across LLM platforms…

…Our most recent Adobe Digital Index data, which is based on online transactions across over 1 trillion visits to U.S. retail sites, shows that LLM traffic grew 4,700 percent year over year in July 2025…

…Our AI-first and AI-infused solutions spanning GenStudio for content supply chain; AEP and Apps for customer engagement and loyalty; and Adobe Experience Manager and LLM Optimizer for brand visibility and discovery, enable us to power customer experience orchestration in the era of agentic AI… 

…We are innovating on our leading AEP marketing and customer experience platform with built-in agentic functionality, empowering marketers to deliver digital experiences with greater agility and efficiency. Our intelligent agents understand intent, reason and recommend actions to drive outcomes across content, data, and journeys. Purpose-built agents are embedded in our core apps and new AI-first applications, helping brands unlock greater efficiency and precision, automate workflows and personalize experiences at scale. We launched the first phase of AEP Agent Orchestrator in Q3, empowering businesses to build, manage and orchestrate AI agents from Adobe and third parties. These capabilities power the Data Insights Agent and Product Support Agent, which are generally available now and add to our growing portfolio of agents.

Our newest innovation is Adobe LLM Optimizer, available in early access. As customers and prospects increasingly turn to generative AI search and assistants for brand discovery, LLM Optimizer helps shape how brands show up in results which is driving influence, visibility and qualified traffic…

…Strong demand for AEP and native apps with Q3 subscription revenue growing over 40 percent year over year…

…We are excited that the product will be generally available later this quarter.

Adobe’s management recently launched Acrobat Studio, which combines Acrobat and Express; Acrobat Studio has PDF Spaces, which uses AI Assistant to derive insights for users from a collection of PDFs and other content; combined monthly active users of Acrobat and Express are up 25% year-on-year; Acrobat AI Assistant brings a new conversational interface to PDF consumption; management is seeing accelerated use of AI Assistant across desktop, web and mobile; users can easily create AI agents in PDF Spaces to perform document tasks on their behalf; Acrobat Studio has encouraging early adoption and usage trends; there is rapid adoption of Adobe Express; dentsu is using Adobe Express for its global marketing strategy across its 68,000 employees worldwide, and is seeing measurable impact; there was 40% sequential growth in units for Acrobat AI Assistant in 2025 Q2 (FY2025 Q3), and 50% sequential growth in conversations and summarisations; 14,000 organisations added Adobe Express in 2025 Q2 (FY2025 Q3), up 4x from a year ago; Express usage in Acrobat doubled sequentially

We’re integrating creativity with productivity for billions of users with the recent launch of Acrobat Studio, which brings together Acrobat and Express…

… The new Acrobat Studio includes PDF Spaces, which transforms collections of PDFs, web pages and other files into dynamic knowledge hubs that help people work smarter and faster using AI Assistant to derive insights. We’re seeing steady growth across our family of Acrobat and Express products with combined monthly active users growing approximately 25 percent year over year…

…The introduction of Acrobat AI Assistant brought a new conversational interface that enhances the experience of customers consuming PDFs. This unlocks increased comprehension across the trillions of PDFs in the world. We continue to see accelerated use of AI Assistant across desktop, web and mobile…

…Users can leverage PDF Spaces to organize documents and links, discover insights faster through conversational experiences and enable editing and remixing of PDF content into new formats like emails and presentations. I’m particularly excited that anyone can easily create agents to perform document tasks on their behalf. Customers can use PDF Spaces with team members for more impactful knowledge sharing and collaboration. The combination of PDF Spaces, AI Assistant and an integrated Express experience is available through Acrobat Studio, a new, premium offer in our Acrobat line-up. Early reception of Acrobat Studio has been strong, with encouraging adoption and usage trends that highlight the significant customer demand and opportunity ahead…

…We’re seeing rapid adoption of Adobe Express. In the enterprise, Express is helping organizations scale content creation while maintaining brand consistency and quality. A great example is dentsu, which has made Express a core part of its global marketing strategy. Adobe’s platform is being rolled out to all 68,000 employees worldwide and scaled across brands including Carat, iProspect, dentsu X, Dentsu Creative, Tag and Merkle. By enabling creative teams to build content in Creative Cloud and share that content through Express within an overall GenStudio solution, dentsu ensures brand alignment across global teams while empowering marketers to create and remix their own content. This is driving measurable impact at dentsu…

…Ending units for Acrobat AI Assistant grew more than 40 percent quarter over quarter and AI Assistant engagement, with conversations and summarizations grew nearly 50 percent quarter over quarter…

…Over 14,000 organizations added Express in Q3 alone, a 4x increase in the quarter versus a year ago; Express usage within Acrobat nearly doubled quarter over quarter.

Adobe’s AI-influenced ARR is now more than $5 billion (was in the “billions” in 2025 Q1); management expects AI-influenced ARR to continue to rise as a percent of Adobe’s business; Adobe’s AI-first products have already achieved management’s target of $250 million in ending ARR by end-FY2025

Our AI-influenced ARR has now surpassed $5 billion, up from over $3.5 billion exiting fiscal year 2024 and we have already surpassed our full year AI-first ending ARR target…

…Adobe AI influenced ARR surpassed $5 billion and we expect it to continue to rise as a percent of our business. Notably, ARR from our new AI-first products, including Firefly, Acrobat AI Assistant, and GenStudio for Performance Marketing, has already achieved our end-of-year target of over $250 million.

Adobe’s management thinks that larger advertisers will still prefer to retain control over their advertising campaigns, and not hand nearly all or total control over to digital advertising platforms such as Google and Meta Platforms that are providing near- or fully-automated AI-powered solutions; management sees the large digital advertising platforms as being excited to be supported by Adobe’s performance-marketing solutions

As it relates to how people are going to create and run campaigns and ad placements in all of these different platforms. I think you’re going to see some smaller medium businesses use it. I think all of the larger companies, what we continue to hear in the enterprises, they want the ability to create campaigns, run it across multiple channels, see the attribution, as well as see — what we can do in terms of the analysis.

But in addition to that, I mean, all those advertising channels that you talked about are really excited about Adobe making it seamless which is why you’ve seen in the GenStudio for performance marketing the support of third-party channels, whether that’s TikTok, Meta, Google, Amazon, all of that, we’re just going to continue to do.

Adobe’s management created LLM Optimizer after realising that Adobe has a lot of content that matches the questions users were asking AI chatbots regarding PDFs; management thinks LLM Optimizer is a great opportunity for Adobe to drive traffic to itself from AI chatbots, and for other companies to drive traffic to their properties

I was actually working internally with our team, our adobe.com team, which obviously runs a big digital business. That’s how we got going on the LLM Optimizer. We noticed that in terms of some of the traffic, it’s not only the search traffic, but a lot of our customers, our prospects were starting to ask questions within ChatGPT and Perplexity and so on. How do I edit this PDF? I have a large PDF? How do I compress it? Those kinds of questions. And we realized that we had a lot of content available that if we made it available through the right channels that will get picked up by the LLMs and that would give us — our Acrobat brand a lot more visibility through the LLMs. So that’s how the idea for the product came about…

…I noticed in a lot of the preview reports folks look at web traffic, and it’s coming from different sources. That’s a really new movement. And so as people about just search traffic and what was happening in search, you really have to start to factor and we’re, I think, one of the leaders in that space, how to really take advantage of what’s happening, not just across search but also what happens across social and now increasingly what happens across LLM. So as Anil mentioned, this is not just an opportunity for us to use ourselves but I think a massive opportunity for us to help every single company deal with this new reality.

Adobe’s management is seeing a new movement of web traffic coming from AI chatbots; management thinks consumers will adopt LLMs for the entire process involving e-commerce transactions

I noticed in a lot of the preview reports folks look at web traffic, and it’s coming from different sources. That’s a really new movement…

…With the LLM, the new LLMs, the discovery to actual consideration, to purchase, maybe even the post purchase, that entire funnel is starting to consolidate and you’re going to be seeing consumers actually adopt LLMs for the entire process.

Even in the AI era, management thinks Creative Cloud has growth opportunities with seat expansion

[Question] There’s a thesis out there for software in general. That AI is the headwind to seats and the seats will need to shift to consumption, the issue is then can capture more consumption revenue than seat. How do you think about the relationship between seats and consumption in the Creative Cloud?

[Answer] On Creative Cloud specifically, we definitely view this as both as seat expansion as well as a marketing automation. And that’s part of the reason, as you know, why we — this customer grouping that we talk about, which is Creative Professional and Marketing Professionals. And in the enterprise that’s playing out exactly the way it is. It is actually still continuing to play out with seat expansion in the enterprise.

Adobe has continued to post healthy margins despite investing in AI capabilities because management has put a lot of effort into controlling training and inferencing costs, and using AI to drive internal productivity

[Question] You’ve been speaking to mid-40s margin profile, you’re still operating a bit above that this quarter. It looks like gross margins are actually up a touch versus last year. Why aren’t you seeing degradation from AI adoption, given some of the metrics you’re providing?

[Answer] I think there’s 2 vectors of productivity that the company is driving to underpin margin delivery. First one, how we drive GPU training fleets to support training, the utilization, the algorithms we use to efficiently get at model construction as well as continually loading that GPU fleet to make sure there’s high utilization over time. The second piece is inferencing. Constantly tuning the algorithms and cost per inference. We watch this maniacally, how we fill these fleets of GPUs to make sure that the reserve instances, which come in at very different price points than on demand that we constantly balance and optimize the cost structure that underpins the usage of that compute power. And then obviously from an internal working standpoint, adoption of these technologies how we drive productivity gains in the company, how we augment individual employees from a productivity standpoint as well as ways of working inside of the company, to continue to drive more and more productivity out of the world’s best employees.

Adobe’s management is seeing users of Adobe’s AI solutions having better retention

The thing that we have seen is a direct correlation between increased use of AI and retention, and we feel very good about that.

Adyen (OTC: ADYEY)

Adyen had been applying AI on payments well before AI became a hot topic; Adyen’s AI-powered Adyen Uplift technology, launched in 2024, improves conversion, strengthens fraud prevention, and reduces payment costs; Adyen Uplift has a full-funnel approach that is superior to legacy systems’ approach; Adyen Uplift uses Adyen’s access to trillions of dollars in global transaction data from 1 billion shoppers to provide the necessary recommendations; Adyen Uplift is modular in design and has 4 components, Optimize, Protect, Tokenize, and Authenticate; Optimize uses Adyen’s IPR (Intelligent Payment Routing) to maximise payment authorisations and reduce transaction costs; each component of Adyen Uplift can be used separately, but work best when used together; merchants have control to test and adjust performance settings within Adyen Uplift; nearly all users of Adyen Uplift are using Optimize, while 68% of Adyen Uplift users are using Protect; in markets such as Australia and the USA where debit cards can route payments through global or domestic networks, IPR uses machine learning to analyze real-time signals to determine the optimal route for each transaction because domestic networks can offer lower fees, but at lower performance; IPR can reduce cost while maintaining or even improving approval rates; adoption of IPR was up 8x in 2025 H1 compared to 2024 H2; US customers of IPR saw average cost reduction of 20% on debit transactions and 89 basis point improvement in authorisations; Australian customers of IPR generated average cost savings of 47%; Adyen Uplift is fully embraced across the Digital pillar; Adyen is only partly charging for Adyen Uplift’s services currently

We’ve been applying machine learning to optimize payment flows well before AI rose to the top of the industry agenda…

…Adyen Uplift was developed around three recurring needs: improving conversion, strengthening fraud prevention, and reducing payment costs…

…While legacy systems often address these issues in isolation, Adyen Uplift takes a full-funnel approach. It uses risk-based intelligence and automation to optimize decisions across the entire payment flow. With access to trillions of dollars in global transaction data from over a billion shoppers across online and in-store channels, we can detect high-risk behavior and reliably recognize trusted shoppers. This combination provides the depth of insight needed to deliver tailored recommendations that customers can test and validate in real-time…

…Adyen Uplift is modular by design so enterprise customers can adopt the capabilities most relevant to their business. Optimize is the decision engine that maximizes payment authorizations and reduces transaction costs. It uses IPR to find the optimal balance between conversion and cost for any transaction with multiple route possibilities. Protect delivers advanced fraud detection, while Tokenize ensures payment credentials remain valid and secure. Authenticate helps businesses meet local compliance requirements without adding unnecessary friction to the shopper experience. Each module can stand alone, but the product suite delivers the most value when its components work together. What seems optimal at one step of the payments flow often isn’t when viewed in full context…

…Merchants now have more control to test and adjust performance settings dynamically. Each recommendation includes clear activation instructions, the ability to test before adoption, and a projected outcome, helping them to assess potential impact and move with confidence. Examples include enabling a local payment method, fine-tuning authentication logic, or activating IPR for US debit payments…

…Optimize is available to all customers, with nearly all utilizing the module. Additionally, 68% of enterprise merchants in our 2025 cohort have adopted the Protect module from day one…

…Intelligent Payment Routing (IPR) within Adyen Uplift is a prime example. This product dynamically selects the optimal route for each transaction based on conversion and cost. We invested in direct connections with local debit networks early on. This enabled us to build a solution that not only ensures compliance but consistently enhances performance. In markets like the U.S. and Australia, dual-branded debit cards can be routed through either global or domestic networks. While local rails often offer lower fees, their performance can vary. IPR uses machine learning to analyze realtime signals, such as scheme performance, issuer behavior, and cost structures, to determine the optimal route for each transaction. The result is a product that reduces cost while maintaining or even improving approval rates.

Adoption grew 8x in H1 2025 compared to the pilot group announced in H2 2024, with major U.S. brands such as Adobe, Microsoft, 24 Hour Fitness, and Indeed using the solution. In the U.S., customers saw an average cost reduction of 20% on debit transactions and a +89 basis point improvement in authorization rates. In Australia, the launch of local routing over Eftpos supported 55 merchants, with average cost savings of 47%…

…Adyen Uplift is now fully embraced across Digital, becoming a core part of how customers optimize for performance, reduce cost, and navigate growing complexity…

…Uplift is a product that we launched in the second half of last year. We are partly charging for it. So it depends a bit on the module that you’re exactly using and some of the parts are free. Of course, ultimately, what we’re building for is that we charge for the products that we offer to our customers. So it’s currently a mix.  
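The routing decision IPR makes can be illustrated with a toy model. This is only a sketch of the general idea, not Adyen's actual algorithm: the `choose_route` function, network names, and all fee and approval-rate figures below are invented for illustration.

```python
# Toy sketch of intelligent payment routing: for a dual-branded debit card,
# pick the network with the best expected net value for the merchant.
# All numbers here are invented for illustration.

def choose_route(amount, routes):
    """Return the route name maximizing expected captured value after fees.

    routes: dict of network name -> (approval_rate, fee_rate)
    """
    def net_value(stats):
        approval_rate, fee_rate = stats
        # Expected value the merchant keeps if routed via this network:
        # probability of approval times the amount net of scheme fees.
        return approval_rate * amount * (1.0 - fee_rate)

    return max(routes, key=lambda name: net_value(routes[name]))

# A real system would estimate approval rates from live signals
# (issuer behavior, scheme performance) rather than hardcode them.
routes = {
    "global_network":   (0.92, 0.025),  # higher approval, higher fee
    "domestic_network": (0.91, 0.010),  # slightly lower approval, lower fee
}

print(choose_route(100.0, routes))  # domestic wins: 90.09 > 89.70
```

The point of the full-funnel framing in the letter is visible even in this toy: the cheapest route is not automatically best, because a lower approval rate can cost more in lost conversions than it saves in fees.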

Adyen’s management sees significant potential in agentic commerce and thinks Adyen is well-positioned for the shift; management thinks agentic commerce brings new demands, in particular, a new lens for looking at fraud prevention, because traditional signals used in fraud prevention are absent in agentic transactions; management sees Adyen’s tokenization capabilities as being an important enabler of agentic commerce in being able to improve authorization, reduce fraud, and enable intelligent, context-specific execution; management sees Adyen as being at the leading edge of tokenization in the context of enabling agentic commerce; Adyen’s global risk system, built on nearly €1.3 trillion in annual volume, enables consistent fraud detection in agent-initiated flows; Adyen’s MCP (model context protocol) server enables structured agent-to-business communication; management thinks Adyen’s platform will allow whatever emerges from agentic commerce to work seamlessly with existing global payment methods

One area where we see early momentum and significant long-term potential is agentic commerce: the shift from enhanced search to autonomous, agent-led purchasing. While still emerging, the rapid adoption of large language models signals rising interest and underlying demand. We’re well positioned to support this shift, helping merchants and consumers navigate the next chapter of ecommerce.

Agentic commerce brings new demands: secure information exchange, sandboxed payment permissions, dynamic authorization, and real-time context-awareness. Crucially, it requires rethinking fraud prevention. Traditional signals are often absent when agents transact on behalf of users, making it essential to rely on scalable infrastructure and intelligent risk models that operate without direct human input. Our platform is built for this. Our tokenization suite enables secure, seamless credential sharing between agents, merchants, and shoppers. Agents can initiate payments using standardized tokens that improve authorization, reduce fraud, and enable intelligent, context-specific execution. We’re at the forefront of this space, pushing the boundaries of what tokenization can do. Our recent announcement with JCB highlights how we’re advancing global credential security — Adyen is the first to offer their advanced tokenization to reduce fraud and improve authorization.

Our authentication engine supports adaptive trust models, applying the right protocol based on transaction risk, regulation, and issuer logic. Our global risk system, trained on nearly €1.3 trillion in annual volume, adds consistent fraud detection, even in agent-initiated flows, flagging misuse, and maintaining trust at scale. And with our Model Context Protocol (MCP) server, we’re enabling structured agent-to-business communication, equipping AI agents to securely interpret and act on commerce data…

… Our infrastructure ensures that whatever emerges in this space can work seamlessly with the global payment methods, regions, and consumer journeys our customers rely on today, and in the future.

MongoDB (NASDAQ: MDB)

MongoDB is adding thousands of AI-native customers

We’re adding thousands of AI native customers.

MongoDB Atlas consumption growth in 2025 Q2 (FY2026 Q2) benefitted from a strong start to consumption in May 2025, as well as broad-based strength; Atlas consumption growth in 2025 Q2 (FY2026 Q2) was consistent with 2024 Q2 (FY2025 Q2); Atlas’s growth has been driven partly by an uptick of capabilities such as Search and Vector Search

We had an impressive Atlas growth quarter, which benefited in part from the strong start to consumption in May that we referenced on our last call as well as broad-based strength, especially in larger customers in the U.S…

…In Q2, Atlas consumption growth was strong and relatively consistent with last year’s growth rates. This drove the acceleration in revenue as well as the growth in absolute revenue dollars year-to-date for the first half of fiscal ’26…

…What we’re also seeing is that there’s a great uptick of some of the other capabilities we offer like search and vector search that are also adding to that growth of those workloads.

Many of MongoDB’s recently-added customers are building AI applications and this bolsters management’s confidence that MongoDB is an important part of the AI infrastructure stack; management sees MongoDB emerging as a standard for AI applications

Many of our recently added customers are building AI applications, underscoring how our value proposition is resonating for AI and why MongoDB is emerging as a key component of the AI infrastructure stack…

…MongoDB is emerging as a standard for AI applications.

MongoDB has integrated capabilities such as search, vector search, embeddings and stream processing into its database product; the integrations give MongoDB far more capability than competing databases such as Postgres; management thinks AI startups tend to go with Postgres first because the founders are familiar with Postgres and they do not think carefully about their database choices; what the AI startups often realise after choosing Postgres is they run into scaling challenges and then turn to MongoDB; management wants to do more developer education regarding Postgres versus MongoDB

MongoDB has redefined what’s core for the database by natively including capabilities like search, vector search, embeddings and stream processing. Comparing MongoDB to another database like Postgres is not an apples-to-apples comparison. Take a global e-commerce application that manages inventory and order data while enabling product discovery through sophisticated search across millions of SKUs. The choice for this application is not between MongoDB or Postgres, it is between MongoDB or Postgres plus other offerings like Pinecone, Elastic and Cohere for embeddings…

…[Question] Why do we hear so much about Postgres adoption for AI start-ups? You talked about the success you guys are having. But if Postgres has the disadvantages that you’ve talked about multiple times, scalability, JSON support, how come we hear so much about that kind of at least in the early stages of AI?

[Answer] What’s become clear is a lot of these startup founders don’t think that hard about their database choice, they kind of go with what they know. And what we are seeing is that as some of these startups are scaling, they’re running into real scaling challenges with Postgres. And what — and we’ve talked about this in the past, like when you add a JSON — when you use JSONB on Postgres, a 2 kilobyte document or bigger starts really creating performance problems because Postgres has to do something called off-row storage, which creates enormous performance overheads. And so the developers need a platform that can handle structured, semi-structured and unstructured data, they need obviously a platform that performs well, and they need a platform that can scale as they grow. And what we’re hearing clearly from the startup community that Postgres, in many cases, is not scaling for them, and they’re now coming to us…

…We realize we need to do more developer education and do more work. And so we’re investing a lot in the startup community. We’re running a big event in October in San Francisco with a big hackathon, and we’re inviting lots of customers to participate. But that’s just the start of a meaningful investment we’re making in the Bay Area and the AI startup community to rethink their decisions around just going with what they know.
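The "off-row storage" the answer refers to is Postgres's TOAST mechanism: rows live in fixed 8 KB pages, and when a row grows past roughly 2 KB, large attributes are compressed and/or moved out of line, adding indirection on reads. The sketch below is a simplified toy model of that threshold, not Postgres internals; the `stored_inline` function and the exact byte cutoff are illustrative assumptions.

```python
import json

# Toy illustration of the ~2 KB threshold mentioned in the quote:
# Postgres rows live in 8 KB pages, and attributes that push a row past
# roughly 2 KB get "TOASTed" (compressed and/or stored out of line),
# which adds an extra fetch on every read. Simplified for illustration.

TOAST_THRESHOLD_BYTES = 2000  # approximate default trigger, not exact

def stored_inline(document: dict) -> bool:
    """Return True if a JSONB-like document would plausibly fit inline."""
    size = len(json.dumps(document).encode("utf-8"))
    return size <= TOAST_THRESHOLD_BYTES

small_doc = {"sku": "A-100", "qty": 3}
big_doc = {"sku": "A-100", "reviews": ["great product! " * 20] * 20}

print(stored_inline(small_doc))  # True: fits within the row
print(stored_inline(big_doc))    # False: pushed to out-of-line storage
```

The argument in the quote is that document workloads routinely cross this size boundary, so every read of a large JSONB value pays the out-of-line penalty, whereas a document database stores the whole document as one unit.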

MongoDB’s management is seeing enterprises adopt AI, but the process is still early; most AI use cases for enterprises are related to employee productivity tools and packaged solutions from ISVs (independent software vendors); enterprises are still very early in building custom AI applications; enterprises often fail when attempting to scale vibe-coded software built on relational databases; management is seeing enterprises start deploying AI agents, but it’s still very early; management is hearing from customers that AI is currently providing productivity gains, but it’s not transforming their businesses; management thinks the real value of AI will come when enterprises are able to build custom AI solutions; enterprises are sometimes hesitant to deploy AI for customer-facing applications because it’s not possible currently to guarantee the quality of output of AI models; management thinks there will not be an inflection point in enterprises suddenly adopting AI at a big scale, instead adoption will simply take time to grow

In the enterprise segment, adoption is real but early. Much of the activity today centers on employee productivity tools and packaged ISV solutions. Enterprises are still in the very early stages of building their own custom AI applications that will transform their business. We consistently hear from customers that when teams try to scale from vibe-coded prototypes built on relational back ends to enterprise-grade deployments, these platforms quickly hit limits in flexibility, scalability and performance…

Where it is being deployed is really on end user productivity, whether it’s developers with codegen tools or business users using tools to summarize documents, extract data or things like deflecting tickets from people to systems with like conversational AI. I think you are starting to see the first steps in people deploying agent-based systems, and I can talk a little bit about that, but that’s still very, very early. We’re seeing small ISVs, some of them are taking off, who are really driving most of the impact.

But the real enduring value will come. When you talk to a customer today, most of them when you ask them is AI really transforming a business, they will say no. Yes, we’re seeing some productivity gains here and there, but it’s not really transforming my business. I think the real enduring value will come when they build custom AI solutions that can truly transform the business, whether it’s to drive new revenue opportunities or dramatically reduce their existing cost structure…

…I had 2 meetings today with 2 different leaders of 2 different financial institutions here in New York, and they both talked about what they’re doing in AI. They both admitted that they’ve kind of started with low stakes use cases, but their appetite to start doing more is increasing as they get more and more comfortable with the technology, and they’re quite excited to leverage MongoDB as part of that journey. But again, I think that’s kind of a microcosm into the enterprise market where I think they’re still quite early in their AI journey…

…AI systems are probabilistic in nature, not deterministic in nature. So you can’t always guarantee the output. You can hope that you’ve trained the models well. You’ve hoped that you’ve given it the right information, but you can’t always guarantee the output. So as I mentioned, I had meetings with 2 financial services customers earlier today, and both of them are still hesitant to roll out an end user-facing AI applications for those specific reasons…

…[Question] Some of the comments you were talking about the AI slowdown, and you heard about recent MIT report about 95% AI implementation not getting any kind of return. How do you see — what’s kind of do you think the inflection point?

[Answer] It’s going to take time to be comfortable with the technology. It’s going to take time where people start with low stakes use cases and start gravitating to higher stakes use cases. So I don’t think there’s going to be some seminal inflection point. I think it’s just going to take time. But I think that time is coming.

A leading electric vehicle company chose MongoDB Atlas and Vector Search for its autonomous driving platform; MongoDB Atlas Vector Search had superior performance over Postgres; the electric vehicle company is using MongoDB Atlas to handle over 1 billion vectors and expects 10x growth in data usage in the next 12 months

A leading electric vehicle company chose Atlas and Vector Search to power its autonomous driving platform. After testing Atlas Vector Search against Postgres’s pgvector for their in-vehicle voice assistant, they selected MongoDB for superior performance at scale and stronger ROI. They now rely on Atlas to handle over 1 billion vectors and expect 10x growth in data usage by next year.

AI-native startup DevRev used MongoDB Atlas to build its AgentOS product; AgentOS handles billions of requests per month; MongoDB Atlas helped DevRev speed up product development at lower cost and helped DevRev scale globally; DevRev is using MongoDB Atlas Vector Search

DevRev, a well-funded AI-native platform with proven founders disrupting the help desk market, built AgentOS, a complete agentic platform that autonomously handles billions of monthly requests on Atlas. By using Atlas, DevRev accelerated development velocity, lowered costs, and scaled globally with low latency. AgentOS also leverages Atlas Vector Search for semantic search, enriching its knowledge graph and LLMs with domain-specific content.

MongoDB’s management is very excited about Relational Migrator; Relational Migrator has a new product leader with strong skills around using AI to drive automation in the product; Relational Migrator also has a new go-to-market leader; management does not expect Relational Migrator to contribute much to MongoDB’s business in 2025 (FY2026)

[Question] I know you’ve been investing in Relational Migrator. You’re working with companies like Cognition to accelerate the code migration opportunity. And you’ve seen professional services ramp up a little bit. But where have you started to see sort of the time to migration or replatform improve a bit?

[Answer] We’re super excited about what we call app modernization or legacy app modernization. You’ll hear a lot more about this at Investor Day in September, Tyler. But what I will say is that the value proposition is very clear. Customers are very, very motivated to try and modernize these legacy systems for a wide variety of reasons. We are seeing a lot of progress. We’ve brought in a new product leader, who brings a lot of depth and scale, especially around AI, to help us build the tooling to leverage AI to really drive more automation in terms of how we analyze and refactor the code. We brought in a new leader last quarter to really help drive the delivery and the go-to-market efforts around app modernization. So we’re definitely beefing up resources…

…It won’t be as pronounced in terms of this year, but we’re very, very excited about the opportunity.

MongoDB’s management is seeing OLTP (online transaction processing) be the strategic high ground for AI especially in inferencing; many database companies are struggling to develop OLTP platforms and so had to make acquisitions; management thinks MongoDB is positioned really well for the AI opportunity given its strengths as an OLTP platform

What we are seeing is that the strategic high ground for AI, especially when it comes to inference, is OLTP. We talked about this on the last call, where some companies acquired early-stage OLTP start-ups. Those companies had spoken about their organic efforts to build an OLTP platform, and I think what the acquisitions really spoke to was the fact that building an OLTP platform that’s mission-critical-ready and can serve the most demanding requirements of enterprises is not trivial. I think they basically threw in the towel and decided to do these acquisitions…

…Now, if customers are going to be choosing which OLTP platform they want for AI, we’re well positioned given our architecture: we have a durable architectural advantage in terms of JSON support, which addresses messy, complicated, highly interdependent and constantly changing data structures, and we have integrated search and vector search. I think that really helps position us going after AI.

MongoDB’s management thinks real JSON is becoming more important now with AI; management is seeing the hyperscalers hold off on investing in JSON-related capabilities; management thinks JSON is the best way to handle messy and evolving data structures in the real world, and this positions MongoDB well for AI because it is a JSON database

[Question] I’m thinking about Lakebase from Databricks and then DocumentDB in the Linux Foundation. Can you just comment on both those things?

[Answer] Around the Linux Foundation, I think what this really shows is that real JSON is much more important now with AI than ever before, and the clones and bolt-ons that have traded off features, performance and developer experience have just not met customer expectations. And candidly, what I see is that the hyperscalers are investing less and handing off to the open source community to take on the bulk of the work in terms of product development. Our hyperscaler partnerships remain strong…

…We’re a JSON database. JSON is the best way to express and model the complicated, messy, highly interdependent and constantly evolving data structures that you have to deal with in the real world. So that’s point number one. It’s much easier to do that in MongoDB than on some kind of kludgy setup on top of a relational database.
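The document-model argument above can be made concrete with a small sketch. The customer record and its fields below are hypothetical, invented for illustration; the point is simply how nested, one-to-many, and late-added fields live in a single JSON document, whereas a relational schema would typically spread them across several joined tables.

```python
import json

# A hypothetical customer record as one JSON document. Nested, optional,
# and evolving fields live together in a single object; a relational
# schema would typically split this into customers, addresses, orders,
# and order_items tables joined by foreign keys.
customer = {
    "_id": "cust-001",
    "name": "Acme Corp",
    "addresses": [  # one-to-many data, embedded rather than joined
        {"type": "billing", "city": "Singapore"},
        {"type": "shipping", "city": "Jakarta"},
    ],
    "orders": [
        {"order_id": "o-17", "items": [{"sku": "ABC", "qty": 2}]},
    ],
    # A field added later; older documents simply lack it, so no schema
    # migration is needed before writing it.
    "preferences": {"newsletter": True},
}

doc = json.dumps(customer)   # serialize for storage or transport
restored = json.loads(doc)   # deserialize back into a dict

print(restored["addresses"][0]["city"])      # -> Singapore
print(len(restored["orders"][0]["items"]))   # -> 1
```

The whole record round-trips as one unit, which is the "model the real world directly" property the quote describes.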

MongoDB’s management thinks a unique differentiator of MongoDB for AI startups is MongoDB’s database allows sophisticated retrieval of information to be done quickly; another unique differentiator of MongoDB is the presence of Voyage’s embedding models; embeddings act as a bridge between a company’s private data and the AI model, and reduces hallucinations

I would say the AI cohort was not a material driver of the growth. That being said, what we are seeing is a lot of customers very, very interested in our architecture…

…Second is that we integrate search and vector search. With what people call hybrid search and retrieval, you can do very sophisticated things in finding information quickly, which is a very unique differentiator for us. What this means is that rather than stitching together multiple systems, you can do this all on MongoDB, so it becomes less complexity and lower cost.

The third thing is that we’ve now embedded Voyage models on our platform. If you control the embedding layer, you sit at the gateway of AI. What the embedding models do is really act as a bridge between a company’s private data and the LLM. That becomes really important because the better the quality of the embedding model, the better the quality of the signal from your own data. So that reduces things like hallucinations or just bad outputs. And as people start caring more and more about higher stakes use cases, they really want to ensure those outputs are high quality. The fact that it’s part of our platform and we can enable you to do auto embeddings becomes an incredibly compelling feature.
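The "embeddings as a bridge" idea can be sketched with toy numbers. The three-dimensional vectors and document names below are made up for illustration; a real system would use a high-dimensional embedding model (such as a Voyage model) and a vector index rather than a brute-force scan, and would then pass the retrieved text to an LLM to ground its answer.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy 3-dimensional "embeddings" of private documents. In practice an
# embedding model maps each text to a vector of hundreds of dimensions.
corpus = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "api rate limits": [0.0, 0.2, 0.9],
}

# Pretend embedding of the user query "how do refunds work?".
query_vec = [0.85, 0.15, 0.05]

# Retrieve the semantically closest document; this is what gets handed
# to the LLM so its answer is grounded in the company's own data.
best = max(corpus, key=lambda name: cosine(query_vec, corpus[name]))
print(best)  # -> refund policy
```

The better the embedding model, the more reliably the nearest vector is actually the most relevant document, which is exactly the signal-quality point made in the quote.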

MongoDB’s management thinks AI agents will be using a company’s systems much more intensely than humans, so it’s important that a company’s systems can massively scale up and down; the need for massive scaling up and down of systems positions MongoDB well; management thinks MongoDB is positioned to win in a world where AI agents dominate because of (1) the strengths of the JSON database, (2) support of vector search, (3) support of memory

If you’re using agents, agents will use your systems much more intensely than humans will, because they can do things much more quickly. So you need platforms that can massively scale up and down, which is, again, a good sign and a supportive indicator for MongoDB…

…[Question] If we were to fast forward 5, 10 years and we start to see a real paradigm shift where instead of agents built on kind of the traditional GUI mobile interface that we’ve been in for the past 30 years, we actually entered kind of a multi-agentic world where maybe the interaction vector may move away from what we’ve been used to into more natural language. Can you talk about why MongoDB still has a strong role and some of the investments that you might be making to position yourself well for the world, understanding that’s at the very least several years away?

[Answer] We believe that agents essentially do 3 things. One, they perceive or understand the state of things. So you need essentially a way to understand the state of what’s happening in your business, then you need to decide what to do or plan. So basically, you have to come up with the plan saying, “I want to take this action or these sets of actions.” And then you have to act. You actually have to go execute those actions, right?

So why is MongoDB good for agents? One, as I said before, the JSON document database is the best at modeling the real world, with all its messiness and complicated nature. The real world does not fit easily in rows and columns, and that’s why our document database, I think, is the best way to do that. Two, we obviously support search and vector search, so you can do very sophisticated hybrid search. So that becomes super important. And then there’s memory: if agents didn’t have memory, they would act like goldfish. They could only react to the last piece of information that they saw.

So memory lets agents connect the dots across time and situations. You have different kinds of memory (short-term context, past experiences, knowledge, skills, et cetera) that need to be shared quickly. You need to be able to orchestrate those agents, because you may have multiple agents doing a certain task, and you need to register and have governance policies around those agents. We think the underlying platform needs to be able to support those things, and while there’s a lot more work that needs to be done, the underlying architecture that we have in MongoDB is well suited to address those needs.
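The perceive / plan / act loop with memory described in the answer can be sketched in a few lines. Everything here is hypothetical scaffolding: the rule-based "planner" stands in for an LLM call, and the in-process list stands in for a persistent memory store such as a document database.

```python
# Minimal perceive -> plan -> act loop with short-term memory, as a
# conceptual sketch only (not any vendor's actual agent framework).

class Agent:
    def __init__(self):
        # Short-term memory: a running log of observations, plans, and
        # actions so the agent can connect the dots across steps.
        self.memory = []

    def perceive(self, observation):
        self.memory.append(("saw", observation))
        return observation

    def plan(self, observation):
        # Trivial rule-based planner; a real agent would call an LLM
        # here, conditioning on both the observation and the memory.
        action = "escalate" if "error" in observation else "resolve"
        self.memory.append(("planned", action))
        return action

    def act(self, action):
        self.memory.append(("did", action))
        return f"action={action}"

agent = Agent()
obs = agent.perceive("ticket: payment error on checkout")
result = agent.act(agent.plan(obs))
print(result)              # -> action=escalate
print(len(agent.memory))   # -> 3
```

Without the memory list, each call would see only its own input, the "goldfish" behavior the quote warns about; with it, the loop has the shared context that a real platform would need to store, scale, and govern.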

Nu Holdings (NYSE: NU)

Nu Holdings’ management has seen significant improvement in its ability to do credit underwriting for credit cards, driven by (1) AI-powered improvements to Nu Holdings’ credit models, and (2) new data acquired by the company; Nu Holdings is now the leader in open finance consent, which helps in Nu Holdings collecting data; the AI-related improvements in credit models comes from Nu Holdings’ 2024 acquisition of Hyperplane; the credit models that were improved by Hyperplane are largely focused on the mass market at the moment, but management expects the AI-enabled architecture from Hyperplane to be applied to more models in the future; management expects to see meaningful changes to Nu Holdings’ models across many different use cases in the future by applying Hyperplane’s technologies 

We have been seeing fairly material improvements in our ability to do credit underwriting and to continue to expand the credit card portfolio. It has to do with the adoption of new models and technologies in how we do credit underwriting, ranging from better traditional machine learning models to neural networks and predictive AI technologies, but more and more it is driven by the adoption of new data that we have acquired…

…So the more customers stay with us, the more data we accumulate, we are now the leaders in open finance consent. The combination of better modeling technique with more data has allowed us to consistently increase kind of credit underwriting, credit limits and utilizations…

…[Question] The Hyperplane expansion in the credit limit that you talked about, is there any particular segment of customer base where it is more targeted towards higher income or mass market or your super core segments?

[Answer] So far it has been mostly focused on the mass market, but we expect that a lot of this new AI-enabled architecture will now be applied to a number of different models…

…We expect a number of new models coming in for a number of different segments for the different countries and for different applications, such as collections, fraud, cross-sell. So we’re very excited about this, and it’s early days of applying this new technology to a lot of the decisioning that we have across Nubank. But we expect to see meaningful changes across the board.

NVIDIA (NASDAQ: NVDA)

NVIDIA’s Data Center revenue again had very strong growth in 2025 Q2 (FY2026 Q2), driven by the Blackwell family of chips

Data center revenue grew 56% year-over-year. Data center revenue also grew sequentially despite the $4 billion decline in H20 revenue. NVIDIA’s Blackwell platform reached record levels, growing sequentially by 17%…

…The new Blackwell Ultra platform has also had a strong quarter, generating tens of billions in revenue.

NVIDIA’s management sees $3 trillion to $4 trillion of AI infrastructure by 2030; management sees $600 billion in data center capital expenditures in 2025; management expects AI infrastructure investments to continue growing, driven by (1) agentic AI’s requirement for orders of magnitude more training and inference compute, (2) sovereign AI, (3) enterprise AI adoption, and (4) robotics; NVIDIA’s management sees the market for AI inference expanding rapidly; the capital expenditures from the CSPs (cloud services providers) has doubled over the last few years to $600 billion; management expects enterprises beyond the cloud hyperscalers to contribute to the expected $3 trillion to $4 trillion of AI infrastructure spend by 2030; management sees NVIDIA’s chips accounting for the majority of spend in AI data centers

We see $3 trillion to $4 trillion in AI infrastructure spend in the — by the end of the decade…

…Capital expenditures from the cloud providers to enterprises are on track to reach $600 billion in data center infrastructure and compute this calendar year alone, nearly doubling in 2 years. We expect annual AI infrastructure investments to continue growing, driven by several factors: reasoning agentic AI requiring orders of magnitude more training and inference compute, global build-outs for sovereign AI, enterprise AI adoption, and the arrival of physical AI and robotics…

…The market for AI inference is expanding rapidly with reasoning and agentic AI gaining traction across industries…

…Over the last couple of years, you have seen that CapEx in just the top 4 CSPs has doubled and grown to about $600 billion…

…The CapEx of just the top 4 hyperscalers has doubled in 2 years. As the AI revolution went into full steam, as the AI race is now on, the CapEx spend has doubled to $600 billion per year. There’s 5 years between now and the end of the decade, and $600 billion only represents the top 4 hyperscalers. We still have the rest of the enterprise companies building on-prem. You have cloud service providers building around the world…

…Out of a gigawatt AI factory, which can cost anywhere from $50 billion to $60 billion, plus or minus 10%, we represent about $35 billion of that, plus or minus: $35 billion out of roughly $50 billion per gigawatt data center.

The Blackwell family of chips is seeing widespread adoption and its users include high-profile model builders; the transition from the GB200 to the GB300 has been seamless, with the current run rate for the GB300 rack at 1,000 racks per week, with acceleration in output expected throughout 2025 Q3 (FY2026 Q3); the GB300 has a 10x higher inference performance on reasoning models compared to H100; GB300 has a 10x improvement in token per watt energy efficiency compared to the previous Hopper family of chips; management thinks Blackwell is the new standard for AI inference performance; the GB300 platform has a 50x increase in energy efficiency per token compared to Hopper; management believes a company investing in GB200 can earn 10x the amount in revenue; the performance of the Blackwell family of chips has already improved by 2x since its launch because of NVIDIA’s software innovations, including a groundbreaking numerical approach to LLM (large language model) pretraining; the new numerical approach means the GB300 can achieve 7x faster training than the H100; the AI industry’s major companies have adopted the new numerical approach

The GB200 NVL system is seeing widespread adoption with deployments at CSPs and consumer Internet companies. Lighthouse model builders, including OpenAI, Meta and Mistral, are using the GB200 NVL72 at data center scale for both training next-generation models and serving inference models in production…

…The transition to the GB300 has been seamless for major cloud service providers due to its shared architecture, software and physical footprint with the GB200, enabling them to build and deploy GB300 racks with ease. The transition to the new GB300 rack-based architecture has been seamless. Factory builds in late July and early August were successfully converted to support the GB300 ramp, and today, full production is underway. The current run rate is back at full speed, producing approximately 1,000 racks per week. This output is expected to accelerate even further throughout the third quarter as additional capacity comes online.

We expect widespread market availability in the second half of the year as CoreWeave prepares to bring their GB300 instance to market; they are already seeing 10x more inference performance on reasoning models compared to H100. Compared to the previous Hopper generation, GB300 NVL72 AI factories promise a 10x improvement in token-per-watt energy efficiency, which translates to revenues as data centers are power limited…

…Blackwell has set the benchmark as it is the new standard for AI inference performance…

…New NVFP4 4-bit precision and NVLink 72 on the GB300 platform delivers a 50x increase in energy efficiency per token compared to Hopper, enabling companies to monetize their compute at unprecedented scale. For instance, a $3 million investment in GB200 infrastructure can generate $30 million in token revenue, a 10x return…

…NVIDIA software innovation, combined with the strength of our developer ecosystem, has already improved Blackwell’s performance by more than 2x since its launch. Advances in CUDA, TensorRT-LLM and Dynamo are unlocking maximum efficiency. CUDA library contributions from the open source community, along with NVIDIA’s open libraries and frameworks are now integrated into millions of workflows. This powerful flywheel of collaborative innovation between NVIDIA and global community contribution strengthens NVIDIA’s performance leadership. NVIDIA is a top contributor to OpenAI models, data and software.

Blackwell has introduced a groundbreaking numerical approach to large language model pretraining. Using NVFP4, the GB300 can now achieve 7x faster training than the H100, which uses FP8. This innovation delivers the accuracy of 16-bit precision with the speed and efficiency of 4-bit, setting a new standard for AI factory efficiency and scalability. The AI industry is quickly adopting this revolutionary technology, with major players such as AWS, Google Cloud, Microsoft Azure and OpenAI as well as Cohere, Mistral, Kimi AI, Perplexity, Reflection and Runway already embracing it. NVIDIA’s performance leadership was further validated in the latest MLPerf Training benchmarks, where the GB200 delivered a clean sweep. Be on the lookout for the upcoming MLPerf Inference results in September, which will include benchmarks based on the Blackwell Ultra.
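As a rough illustration of why 4-bit formats can approach higher-precision accuracy, here is a generic block-scaled 4-bit integer quantization sketch. This is not NVIDIA's actual NVFP4 format, whose internals are not described in the text; it only shows the general "store 4-bit codes plus a per-block scale" idea and its bounded reconstruction error.

```python
# Generic block-scaled 4-bit quantization sketch (illustrative only,
# NOT NVIDIA's NVFP4 specification).

def quantize_block(values, bits=4):
    # Signed codes fit in [-(2^(bits-1)-1), 2^(bits-1)-1], i.e. [-7, 7]
    # for 4 bits. One shared scale is stored per block of values.
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(v) for v in values) / qmax
    codes = [round(v / scale) for v in values]
    return codes, scale

def dequantize_block(codes, scale):
    return [c * scale for c in codes]

block = [0.12, -0.5, 0.33, 0.07]        # a toy block of weights
codes, scale = quantize_block(block)
restored = dequantize_block(codes, scale)

# Every code fits in 4 bits, and rounding bounds the per-value
# reconstruction error by half a scale step.
assert all(-7 <= c <= 7 for c in codes)
max_err = max(abs(a - b) for a, b in zip(block, restored))
print(max_err <= scale / 2 + 1e-9)  # -> True
```

Storing 4-bit codes plus a small per-block scale cuts memory and bandwidth by roughly 4x versus 16-bit values while keeping the error per value below half a quantization step, which is the trade-off behind low-precision training formats in general.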

NVIDIA’s next generation of chips, the Rubin family, are in fab now and remains on schedule for volume production in 2026; 6 different chips go into a Rubin AI supercomputer

The chips of the Rubin platform are in fab, the Vera CPU, Rubin GPU, CX9 SuperNIC, NVLink 144 scale up switch, Spectrum-X scale out and scale across switch, and the silicon photonics processor. Rubin remains on schedule for volume production next year. Rubin will be our third-generation NVLink rack scale AI supercomputer with a mature and full-scale supply chain…

…It takes 6 different types of chips just to build a Rubin AI supercomputer.

The US government recently started reviewing licenses for sales of NVIDIA’s H20 chips to China customers; some of NVIDIA’s China customers have received licenses for H20 chips, but NVIDIA has yet to make any shipments; management sees the US government as expecting a 15% revenue-share from the sales of H20 chips to China customers, but the US government has yet to publish regulations on this; management has not included H20 sales in its 2025 Q3 (FY2026 Q3) guidance; management expects revenue of $2 billion to $5 billion in 2025 Q3 from H20 chips if they can be shipped once geopolitical uncertainty subsides; NVIDIA has capacity to fulfill more orders for H20 beyond the $5 billion expectation; management continues to advocate for the sale of Blackwell chips to China as they believe the sales will benefit the US economy; management sees the sales of Blackwell chips to China as being for commercial uses only; China revenue declined sequentially; management thinks China represents a $50 billion revenue opportunity for NVIDIA in 2025, with growth of 50% annually, if the company is able to sell chips there; management sees China as the home of AI researchers with about 50% of AI researchers being in the country; management sees China as the home of the leading open-sourced AI models, and that it’s important for American AI companies to be able to serve China because of the country’s lead in open source

In late July, the U.S. government began reviewing licenses for sales of H20 to China customers. While a select number of our China-based customers have received licenses over the past few weeks, we have not shipped any H20 based on those licenses. USG officials have expressed an expectation that the USG will receive 15% of the revenue generated from licensed H20 sales, but to date, the USG has not published a regulation codifying such requirement.

We have not included H20 in our Q3 outlook as we continue to work through geopolitical issues. If geopolitical issues recede, we should ship $2 billion to $5 billion in H20 revenue in Q3. And if we had more orders, we could build more.

We continue to advocate for the U.S. government to approve Blackwell for China. Our products are designed and sold for beneficial commercial use, and every licensed sale we make will benefit the U.S. economy and U.S. leadership. In highly competitive markets, we want to win the support of every developer. America’s AI technology stack can be the world’s standard if we race and compete globally…

…China declined on a sequential basis to low single-digit percentage of data center revenue…

…The China market, I’ve estimated to be about $50 billion of opportunity for us this year if we were able to address it with competitive products. And if it’s $50 billion this year, you would expect it to grow, say, 50% per year…

…It is the second largest computing market in the world, and it is also the home of AI researchers. About 50% of the world’s AI researchers are in China.

The vast majority of the leading open source models are created in China. And so it’s fairly important, I think, for the American technology companies to be able to address that market. And open source, as you know, is created in one country, but it’s used all over the world. The open source models that have come out of China are really excellent. DeepSeek, of course, gained global notoriety. Qwen is excellent. Kimi’s excellent. There’s a whole bunch of new models that are coming out. They’re multimodal. They’re great language models. And they’ve really fueled the adoption of AI in enterprises around the world, because enterprises want to build their own custom proprietary software stacks. And so open source models are really important for enterprise. They’re really important for SaaS companies that also would like to build proprietary systems. They have been really incredible for robotics around the world. And so open source is really important, and it’s important that American companies are able to address it. It’s going to be a very large market. We’re talking to the administration about the importance of American companies being able to address the Chinese market.

NVIDIA saw an increase in shipments of Hopper 100 and H200 chips in 2025 Q2 (FY2026 Q2), which indicates the breadth of AI workloads that run on NVIDIA’s hardware

In the quarter, there was an increase in Hopper 100 and H200 shipments. We also sold approximately $650 million of H20 in Q2 to an unrestricted customer outside of China. The sequential increase in Hopper demand indicates the breadth of data center workloads that run on accelerated computing and the power of CUDA libraries and full stack optimizations, which continuously enhance the performance and economic value of our platform.

NVIDIA’s RTX Pro servers, for world models, are now in full production; nearly 90 companies are already adopting the RTX Pro servers, including Hitachi for digital twins, Eli Lilly for drug discovery, Hyundai for factory design, and Disney for immersive story telling; management believes RTX Pro can become a multi-billion business 

NVIDIA RTX PRO servers are in full production for the world system makers. These are air-cooled PCIe-based systems integrated seamlessly into standard IT environments and run traditional enterprise IT applications as well as the most advanced agentic and physical AI applications. Nearly 90 companies including many global leaders are already adopting RTX PRO servers. Hitachi uses them for real-time simulation and digital twins, Lilly for drug discovery, Hyundai for factory design and AV validation, and Disney for immersive storytelling. As enterprises modernize data centers, RTX PRO servers are poised to become a multibillion-dollar product line.

NVIDIA’s management sees sovereign AI continuing to grow; NVIDIA is involved with Europe’s landmark AI initiatives; the European Union has plans to invest €20 billion to build 20 AI data centers; management sees NVIDIA being on track to earn $20 billion in sovereign AI revenue in 2025 (FY2026), up more than 100% from a year ago

Sovereign AI is on the rise, as a nation’s ability to develop its own AI using domestic infrastructure, data and talent presents a significant opportunity for NVIDIA. NVIDIA is at the forefront of landmark initiatives across the U.K. and Europe. The European Union plans to invest EUR 20 billion to establish 20 AI factories across France, Germany, Italy and Spain, including 5 gigafactories, to increase its AI compute infrastructure tenfold. In the U.K., the Isambard-AI supercomputer powered by NVIDIA was unveiled as the country’s most powerful AI system, delivering 21 exaflops of AI performance to accelerate breakthroughs in fields like drug discovery and climate modeling. We are on track to achieve over $20 billion in Sovereign AI revenue this year, more than double that of last year.

NVIDIA’s networking revenue had very strong sequential as well as year-on-year growth in 2025 Q2 (FY2026 Q2), driven by strong demand across Spectrum-X Ethernet, InfiniBand and NVLink; management thinks Spectrum-X Ethernet has the highest throughput and lowest latency network for Ethernet AI workloads; Spectrum-X grew double-digits sequentially and year-on-year in 2025 Q2 and has more than $10 billion in annualised revenue; management recently introduced Spectrum-XGS Ethernet technology that can double GPU-to-GPU communication speed; CoreWeave will be an initial adopter of Spectrum-XGS Ethernet technology; Infiniband’s revenue was up nearly 100% sequentially, driven by XDR technology; XDR technology has nearly 100% higher bandwidth improvement over the previous generation; management sees NVLink as the world’s fastest data switch; NVLink Fusion, which allows semi-custom AI infrastructure, has received widespread positive reception; NVLink Fusion will be used by Japan’s quantum computing research center, FugakuNEXT; the difference between NVLink 8 and NVLink 72 is that NVLink 8 makes each node a computer, whereas NVLink 72 makes each rack a computer; NVIDIA has 3 networking technologies that addresses scale up (NVLink), scale out (Inifiband), and scale across (Spectrum Ethernet); management sees NVLink 72 as being excellent at amplifying memory bandwidth

Networking delivered record revenue of $7.3 billion, and escalating demands of AI compute clusters necessitate high efficiency and low latency networking. This represents a 46% sequential and 98% year-on-year increase with strong demand across Spectrum-X Ethernet, InfiniBand and NVLink.

Our Spectrum-X enhanced Ethernet solutions provide the highest throughput and lowest latency network for Ethernet AI workloads. Spectrum-X Ethernet delivered double-digit sequential and year-over-year growth with annualized revenue exceeding $10 billion. At Hot Chips, we introduced Spectrum-XGS Ethernet, a technology designed to unify disparate data centers into giga-scale AI super factories. CoreWeave is an initial adopter of the solution, which is projected to double GPU-to-GPU communication speed.

InfiniBand revenue nearly doubled sequentially, fueled by the adoption of XDR technology, which provides double the bandwidth improvement over its predecessor, especially valuable for the model builders.

The world’s fastest switch, NVLink, with 14x the bandwidth of PCIe Gen 5 delivered strong growth as customers deployed Grace Blackwell NVLink rack scale systems. The positive reception to NVLink Fusion, which allows semi-custom AI infrastructure, has been widespread. Japan’s upcoming FugakuNEXT will integrate Fujitsu’s CPUs with our architecture via NVLink Fusion. It will run a range of workloads, including AI, supercomputing and quantum computing. FugakuNEXT joins a rapidly expanding list of leading quantum supercomputing and research centers running on NVIDIA’s CUDA-Q quantum platform, including [ ULIC ], AIST, [ NNF ] and NERSC, supported by over 300 ecosystem partners, including AWS, Google Quantum AI, Quantinuum, QuEra and PsiQuantum…

…This last year, we transitioned from NVLink 8, which is node-scale computing, where each node is a computer, to NVLink 72, where each rack is a computer…

…We now offer 3 networking technologies. One is for scale up. One is for scale out and one for scale across. Scale up is so that we could build the largest possible virtual GPU, the virtual compute node. NVLink is revolutionary. NVLink 72 is what made it possible for Blackwell to deliver such an extraordinary generational jump over Hopper’s NVLink 8. At a time when we have long thinking models, agentic AI reasoning systems, the NVLink basically amplifies the memory bandwidth, which is really critical for reasoning systems. And so NVLink 72 is fantastic.

We then scale out with networking, which we have 2. We have InfiniBand, which is unquestionably the lowest latency, the lowest jitter, the best scale-out network. It does require more expertise in managing those networks…

…For those who would like to use Ethernet because their whole data center is built with Ethernet, we have a new type of Ethernet called Spectrum Ethernet. Spectrum Ethernet is not off the shelf. It has a whole bunch of new technologies designed for low latency and low jitter and congestion control. And it has the ability to come closer, much, much closer to InfiniBand than anything that’s out there. And that is — we call that Spectrum-X Ethernet.

NVIDIA’s new robotics computing platform, Jetson Thor is now available, and it delivers an order of magnitude higher AI performance and energy efficiency than its predecessor; NVIDIA’s full stack robotics platform is growing rapidly with more than 2 million developers and 1,000-plus hardware-software applications; leading enterprises involved with robotics, including Amazon Robotics and Boston Dynamics, have adopted Jetson Thor

Jetson Thor, our new robotics computing platform, is now available. Thor delivers an order of magnitude greater AI performance and energy efficiency than NVIDIA AGX Orin. It runs the latest generative and reasoning AI models at the edge in real time, enabling state-of-the-art robotics.

Adoption of NVIDIA’s robotics full stack platform is growing at a rapid rate, with over 2 million developers and 1,000-plus hardware-software applications and sensor partners taking our platform to market. Leading enterprises across industries have adopted Thor, including Agility Robotics, Amazon Robotics, Boston Dynamics, Caterpillar, Figure, Hexagon, Medtronic and Meta.

Robotic applications require exponentially more compute on the device and in infrastructure, representing a significant long-term demand driver for our data center platform. NVIDIA Omniverse with Cosmos is our data center physical AI digital twin platform built for the development of robots and robotic systems. This quarter, we announced a major expansion of our partnership with Siemens to enable AI-automated factories. Leading European robotics companies, including Agile Robots, NEURA Robotics and Universal Robots, are building their latest innovations with the Omniverse platform.

Singapore was 22% of NVIDIA’s 2025 Q2 (FY2026 Q2) revenue; Singapore is an important geography for NVIDIA because its US customers centralise their invoicing in Singapore

Singapore revenue represented 22% of second quarter’s billed revenue as customers have centralized their invoicing in Singapore. Over 99% of data center compute revenue billed to Singapore was for U.S.-based customers.

Management shipped GeForce RTX 5060 desktop GPUs in 2025 Q2 (FY2026 Q2); the RTX 5060 desktop GPU has double the performance of the previous generation; management will soon bring Blackwell to GeForce NOW; management thinks RTX GPUs bring the best on-device AI performance; NVIDIA has partnered with OpenAI to optimise their GPT models for inference on RTX-powered Windows devices

This quarter, we shipped GeForce RTX 5060 desktop GPU. It brings double the performance along with advanced ray tracing, neural rendering and AI-powered DLSS 4 gameplay to millions of gamers worldwide. Blackwell is coming to GeForce NOW in September… 

…For AI enthusiasts, on-device AI performs best on RTX GPUs. We partnered with OpenAI to optimize their open source GPT models for high-quality, fast and efficient inference on millions of RTX-enabled Windows devices. With the RTX platform stack, Windows developers can create AI applications designed to run on the world’s largest AI PC user base.

AI workloads on NVIDIA’s chips have now transitioned strongly to inference; NVIDIA’s management is seeing a huge jump in inference demand; major NVIDIA customers, such as OpenAI, Microsoft, and Google, are seeing huge leaps in AI token generation; Microsoft processed 100 trillion tokens in 2025 Q1, up 5x year-on-year; inference-serving startups have tripled their token generation rate and revenues

AI workloads have transitioned strongly to inference…

…We are witnessing a sharp jump in inference demand. OpenAI, Microsoft and Google are seeing a step-function leap in token generation. Microsoft processed over 100 trillion tokens in Q1, a fivefold increase on a year-over-year basis…

…Inference serving startups are now serving models using B200, tripling their token generation rate and corresponding revenues for high-value reasoning models such as DeepSeek-R1, as reported by Artificial Analysis.
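The inference figures quoted above imply some striking rates. A quick sanity check, where the only assumption of ours is a roughly 90-day quarter:

```python
# Rough arithmetic on the quoted inference-demand figures.
tokens_q1 = 100e12   # Microsoft: over 100 trillion tokens processed in Q1
yoy_multiple = 5     # up 5x year-on-year

tokens_year_ago = tokens_q1 / yoy_multiple
print(f"Implied year-ago quarter: {tokens_year_ago / 1e12:.0f} trillion tokens")

seconds_per_quarter = 90 * 24 * 3600   # assumption: ~90-day quarter
avg_tokens_per_sec = tokens_q1 / seconds_per_quarter
print(f"Average rate: ~{avg_tokens_per_sec / 1e6:.0f} million tokens/second")
```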

NVIDIA’s automotive revenue had strong growth in 2025 Q2, driven by self-driving technologies; NVIDIA has started shipping Thor SoC (system on a chip); management sees the self-driving automotive market shifting towards a vision language model architecture, generative AI, and higher levels of autonomy; NVIDIA’s full stack Drive AV software platform is now in production and management thinks it can produce billions in new revenue opportunities for NVIDIA

Automotive revenue, which includes only in-car compute revenue, was $586 million, up 69% year-on-year, primarily driven by self-driving solutions. We have begun shipments of NVIDIA Thor SoC, the successor to Orin. Thor’s arrival coincides with the industry’s accelerating shift to vision language model architecture, generative AI and higher levels of autonomy. Thor is the most successful robotics and AV computer we’ve ever created. Thor will power our full stack Drive AV software platform, which is now in production, opening up billions in new revenue opportunities for NVIDIA while improving vehicle safety and autonomy.

NVIDIA’s management sees agentic AI requiring 100-1,000x the amount of computation compared to 1-shot AI models; agentic AI is driving tremendous growth in the amount of computation; management thinks agentic AI has reduced hallucination significantly; management thinks agentic AI has helped deliver breakthroughs in robotics 

Where chatbots used to be one shot, you give it a prompt and it would generate the answer, now the AI does research. It thinks and does a plan, and it might use tools. And so it’s called long thinking; and the longer it thinks, oftentimes, it produces better answers. And the amount of computation necessary for 1 shot versus reasoning agentic AI models could be 100x, 1,000x and potentially even more, depending on the amount of research and, basically, reading and comprehension that it goes off to do. And so the amount of computation required by agentic AI has grown tremendously…
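The 100x-to-1,000x claim is easy to see with a toy token budget. All numbers below are illustrative assumptions of ours, not figures from the call:

```python
# Illustrative token accounting: one-shot answering vs an agentic loop.
# Every number here is an assumption chosen for illustration.
one_shot = 1_000                 # prompt + a single generated answer

plan_tokens = 2_000              # "long thinking" plan
tool_calls = 20                  # searches, document reads, tool invocations
tokens_per_tool_round = 5_000    # tool output ingested + reasoning per call
final_answer = 2_000

agentic = plan_tokens + tool_calls * tokens_per_tool_round + final_answer
print(f"Agentic/one-shot compute multiple: ~{agentic / one_shot:.0f}x")
```

With just 20 tool rounds the multiple already lands in the ~100x range the call cites; longer research loops push it toward 1,000x.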

…Because of agentic AI, the amount of hallucination has dropped significantly. You can now use tools and perform tasks. Enterprises have been opened up. As a result of agentic AI and vision language models, we now are seeing a breakthrough in physical AI, in robotics, autonomous systems.

NVIDIA’s management sees NVIDIA’s chips as having plenty of advantages over ASICs (application-specific integrated circuits); management thinks very few ASICs go into production because the problem set of delivering an accelerated computing platform, which is a full-stack design, is really complicated; management thinks building a data center with NVIDIA brings the best utility as compared to ASICs; management sees NVIDIA’s platform as the most energy efficient, with the best performance per watt; management thinks a world where data centers are limited by power is one where performance per watt is incredibly important

NVIDIA builds very different things than ASICs. So let’s talk about ASICs first. A lot of projects are started. Many start-up companies are created. Very few products go into production. And the reason for that is it’s really hard. Accelerated computing is unlike general-purpose computing. You don’t write software and just compile it into a processor. Accelerated computing is a full-stack co-design problem. And AI factories in the last several years have become so much more complex because the scale of the problems has grown so significantly…

…The models are changing incredibly fast, from generative models based on autoregression, to generative models based on diffusion, to mixed models, to multi-modality. The number of different models that are coming out that are either derivatives of transformers or evolutions of transformers is just daunting…

…The diversity of our platform, both in the ability to evolve into any architecture, the fact that we’re everywhere, and also, we accelerate the entire pipeline, everything from data processing to pretraining to post training with reinforcement learning, all the way out to inference. And so when you build a data center with NVIDIA platform in it, the utility of it is best. The lifetime usefulness is much, much longer…

…People talk about the chip itself. There’s one ASIC, the GPU that many people talk about. But in order to build Blackwell the platform and Rubin the platform, we had to build CPUs that connect fast memory, low — extremely energy-efficient memory for large KV caching necessary for agentic AI to the GPU to a SuperNIC to a scale up switch, we call NVLink, completely revolutionary, we’re in our fifth generation now, to a scale out switch, whether it’s Quantum or Spectrum-X Ethernet, to now scale across switches so that we could prepare for these AI super factories with multiple gigawatts of computing all connected together…

…We’re in every cloud for a good reason. Not only are we the most energy efficient, our perf per watt is the best of any computing platform. And in a world of power-limited data centers, perf per watt drives directly to revenues.
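The "perf per watt drives directly to revenues" point follows from simple arithmetic: when power is the binding constraint, tokens served per second equals watts times tokens per joule, so revenue scales linearly with perf per watt. A sketch with illustrative numbers of our own (none are NVIDIA figures):

```python
# Why perf/watt maps directly to revenue when power is the constraint.
# All numbers are illustrative assumptions, not NVIDIA or market figures.
def annual_revenue(power_budget_mw, tokens_per_joule, usd_per_million_tokens):
    """Revenue of a power-limited AI factory: tokens/s = watts * tokens/joule."""
    watts = power_budget_mw * 1e6
    tokens_per_year = watts * tokens_per_joule * 365 * 24 * 3600
    return tokens_per_year / 1e6 * usd_per_million_tokens

base = annual_revenue(100, tokens_per_joule=1.0, usd_per_million_tokens=0.5)
better = annual_revenue(100, tokens_per_joule=2.0, usd_per_million_tokens=0.5)
print(f"2x perf/watt -> {better / base:.0f}x revenue at the same power budget")
```

Doubling efficiency doubles output, and hence revenue, without adding a single watt of data-center capacity.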

The US currently represents 60% of the world’s compute

United States represents about 60% of the world’s compute.

NVIDIA’s management thinks AI will accelerate global GDP growth

You would think that artificial intelligence would reflect GDP scale and growth, and would, of course, be accelerating GDP growth.

NVIDIA’s management is seeing year-to-date AI startup funding at already $180 billion and this compares with $100 billion for the whole of 2024; AI startups’ revenues are expected to increase by 10x to $20 billion in 2025; management thinks it’s reasonable that AI startups’ revenues could 10x again in 2026

Funding for AI-native start-ups was $100 billion last year. This year, the year is not even over yet, and it’s $180 billion funded. The top AI-native start-ups generated $2 billion of revenue last year. This year, it’s $20 billion. Next year being 10x higher than this year is not inconceivable.
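The growth multiples implied by these figures, stated explicitly:

```python
# Growth multiples implied by the funding and revenue figures quoted above.
funding_2024_bn, funding_2025ytd_bn = 100, 180
revenue_2024_bn, revenue_2025_bn = 2, 20

print(f"Funding growth (year-to-date): {funding_2025ytd_bn / funding_2024_bn:.1f}x")
print(f"Revenue growth: {revenue_2025_bn / revenue_2024_bn:.0f}x")

# Another 10x in 2026, which management calls "not inconceivable":
print(f"Implied 2026 revenue: ${revenue_2025_bn * 10} billion")
```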

NVIDIA’s AI products are sold out

The buzz is everything is sold out. H100 is sold out. H200s are sold out. Large CSPs are coming out to rent capacity from other CSPs.

Okta (NASDAQ: OKTA)

Okta’s management’s approach to securing nonhuman identities (NHIs), which are effectively AI agents, is to give them the same level of visibility, access control, governance and remediation as human identities; management believes no other company can deliver the level of sophistication Okta can to secure AI agents; the Auth0 for AI Agents product from Okta’s Auth0 platform enables developers to build AI agents that are secure by design; management thinks AI agents will significantly amplify the identity-security problems related to machine identities that are currently faced by enterprises; management is hearing from the leaders of the largest companies in the world that they will not be able to get projects involving AI agents to work if their identity-security problems are not addressed; management is building a new product that will model the identity of an AI agent so users can have even more control in managing the security of the AI agent; the new product is still in its very early days because management is seeing very few companies putting AI agents into production despite many companies testing out these agents; management wants to eventually have Okta be the system of record for AI agents so the AI agents can choose what technologies they want to work with

Take our approach to securing nonhuman identities, or NHIs. Okta’s unified platform helps ensure they receive the same level of visibility, access control, governance and remediation as human identities. This includes the ability to detect and discover NHIs wherever they exist, provision and register them properly, authorize and protect them with appropriate policies and govern and monitor their behavior continuously. That’s the power of an identity security fabric enabled with Okta’s unparalleled breadth of modern identity security products. No other company can deliver that level of sophistication.

With our Auth0 platform, we’re enabling developers to build agents that are secure by design and identity security fabric-ready from day 1. Auth0 for AI Agents, formerly known as Auth for GenAI, delivers user authentication that works seamlessly with AI workflows, token vaults that securely manage credentials, async authorization that lets agents work autonomously while maintaining user control and fine grained authorization that permits AI agents to only access authorized data…
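The token-vault and fine-grained-authorization ideas described above can be sketched in a few lines. This is a toy illustration of the pattern; the class and method names are hypothetical and are not the Auth0 for AI Agents API:

```python
# Toy sketch of a "token vault" for AI agents: the agent never holds a
# long-lived credential; it asks the vault for a short-lived, narrowly
# scoped token, and the vault enforces fine-grained authorization.
# All names here are hypothetical, not the Auth0 product API.
import time
import secrets

class TokenVault:
    def __init__(self):
        self._grants = {}  # (agent_id, scope) -> approved?

    def authorize(self, agent_id, scope):
        """Async-authorization stand-in: a human approves a scope once."""
        self._grants[(agent_id, scope)] = True

    def issue(self, agent_id, scope, ttl_seconds=300):
        """Fine-grained check, then mint a short-lived token."""
        if not self._grants.get((agent_id, scope)):
            raise PermissionError(f"{agent_id} not authorized for {scope}")
        return {"token": secrets.token_hex(16),
                "scope": scope,
                "expires_at": time.time() + ttl_seconds}

vault = TokenVault()
vault.authorize("invoice-agent", "crm:read")
t = vault.issue("invoice-agent", "crm:read")
print(t["scope"], "token expires in <= 5 minutes")
```

The design point is that the agent works autonomously while the human-approved grant, not the agent, is the unit of trust, which is what "secure by design" means in this context.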

…Our perspective is quite simple. It’s that you have many problems today in your enterprise that are clear and present and you can get a lot of security benefit by addressing these problems. These are the problems that we talk about a lot. These are service accounts. These are machine identities. These are putting the right vaulting and governance workflows around all of these things. These are like the bread and butter of our identity platform across Governance, Privileged Access and Identity Threat Protection with Okta AI. These are clear and present things today. In addition to that, every company is going to make a huge investment in AI agents. And what that’s going to do, first and foremost, is it’s going to make that problem I just described 5x worse because every agent wants to connect to 10 service accounts and is going to have its own tokens…

…The last week, I’ve had conversations with CIOs of massive companies that everyone’s heard of that say, there’s no way we’re going to be able to do this AI stuff if we don’t get our identity foundation in order…

…There are investments we are making and innovation we’re building that will take this even a step further, which is actually modeling the identity of an agent and giving more power to the customer to manage and secure these things, because it’s a native thing inside of Okta, which is also very exciting.

But that’s very early, because while the amount of companies that are actually playing with AI agents is 100%, the ones that are actually putting them in production at scale are very few. So the timing is right here to solve the problems they all have today: the service accounts and token vaulting, et cetera.

And then over time, be the system of record for the AI agents themselves, and give them choice and flexibility on whether they want to use Salesforce agents or ServiceNow agents or build their own agents, and give them the fundamentals across all of that, which are security, control and governance.

Okta’s management recently introduced a new open standard, Cross App Access, that helps with securing AI; Cross App Access enables AI agents to safely connect with other technologies; management is seeing strong interest from Okta’s partners and ISVs (independent software vendors) for Cross App Access; management has been working on Cross App Access for 3 years; Cross App Access is an industry-wide effort that started with other SaaS companies wanting to have the ability to connect their products with their customers’ other products; management sees the emergence of AI as aggravating the problem of product-to-product connections

Securing AI is the next frontier, and our introduction of a new open standard called Cross App Access is a key part of the solution. This is an important innovation that helps control what AI agents can access, allowing us to help make our customers and ISVs more secure and providing better end-user experience. In short, Cross App Access allows for support of AI agents within the identity security fabric and the flexibility to safely connect to other technologies. Already, there is strong interest in Cross App Access from partners and ISVs, including AWS, Boomi, Box, Ryder and Zoom, and we had over 1,100 attendees at our Identity Summit on the topic earlier this month…

…Cross App Access is an industry-wide effort. It’s actually 3 years old. We’ve been working on this for 3 years. And it came out of Mike from Atlassian and Eric from Zoom and many other SaaS leaders wanting a way to standardize how, when they sold their products into companies, those products were then hooked up to everything else in the company. So Zoom wants to connect to your calendar, wants to connect to a note taking. Atlassian wants to connect to all of your other software development tools. So we invented this protocol and this concept and have published this open standard to solve a very important problem: how do you give your IT teams and your security teams visibility into all these application connections that happen between apps? Now guess what? That’s a problem that’s existed for a long time. And guess what’s happening with AI. AI is supercharging this problem. Now every agent, guess what it wants to do: it wants to connect to 15 applications. And guess what you need: you need an open protocol for all of those applications that are letting those agents connect, to publish and share that information with the security team so they can have visibility and control and audit it. So that’s why Cross App Access is so important.
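The core idea of Cross App Access, brokering and auditing app-to-app (and agent-to-app) connections through the enterprise identity provider rather than through invisible point-to-point consents, can be modeled in a toy sketch. All class names and token fields below are hypothetical assumptions of ours, not the published standard:

```python
# Toy model of the Cross App Access idea: every cross-app grant is
# policy-checked and logged by the enterprise identity provider, giving
# IT and security teams the visibility the transcript describes.
# Names and fields are hypothetical, not the actual specification.
import uuid

class IdentityProvider:
    def __init__(self):
        self.audit_log = []   # security-team visibility into every connection
        self.policies = set() # (requesting_app, target_app) pairs IT allows

    def allow(self, requesting_app, target_app):
        self.policies.add((requesting_app, target_app))

    def exchange(self, user, requesting_app, target_app):
        """Requesting app trades the user's session for a target-app token."""
        allowed = (requesting_app, target_app) in self.policies
        self.audit_log.append((user, requesting_app, target_app, allowed))
        if not allowed:
            raise PermissionError(f"{requesting_app} -> {target_app} blocked")
        return {"sub": user, "aud": target_app, "jti": str(uuid.uuid4())}

idp = IdentityProvider()
idp.allow("zoom", "calendar")
token = idp.exchange("alice", "zoom", "calendar")
print("audited grants:", len(idp.audit_log))
```

Whether the requester is a SaaS app or an AI agent wanting to reach 15 applications, every grant flows through one audited choke point instead of scattered pairwise consents.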

Okta’s management is not seeing any difference in terms of the Okta products that AI native companies are choosing compared to other types of customers; AI native companies are aware that they are very attractive targets for hackers, so they are really investing in identity security; management thinks Okta can help with AI native companies’ identity security needs

[Question] When we look at the AI native cohort, are there any interesting adoption trends that you’re seeing there in terms of what products they’re taking, how they’re using the platform?

[Answer] It doesn’t seem dramatically different than other cohorts in terms of the adopting workforce solutions or Auth0. It looks pretty much the same, except they’re growing very fast. I guess that’s a difference, especially, actually, the revenue metrics. It’s growing very fast, and we think we’re well positioned in that cohort. And I think similar to every company, they’re trying to figure out how they can be secure internally as they’re growing very fast. They know from a workforce identity and identity security perspective for their internal operations, they’re sitting on a lot of very valuable data and definitely hackers want to attack them like they want to attack every important company. So they’re really investing in identity security, and Okta helps them with that.

Okta’s management thinks that Okta’s 2 open standards, IPSIE and Cross App Access, will help the entire identity market become valuable; management thinks about the monetisation of the open standards from the perspective of the open standards making machine and AI agent identities more widely accepted and thus making Okta’s products more important for customers

These are 2 open standards we’re pushing out there with the ecosystem. And the effect of both of these things for Okta is going to be basically identity providers are going to be more valuable tools to the customers. So they’re going to have better control, fine-grained control, into resources, better policies, more value. So the whole identity market gets more valuable and bigger…

…The clear and present issue today, which is service accounts, nonhuman identities. We monetize that through Okta Privileged Access and Identity Security Posture Management. So Identity Security Posture Management detects the nonhuman identities and the risks in a proactive way that’s comprehensive across all platforms. And Okta Privileged Access and Okta Identity Governance can vault the credentials and rotate the credentials and have the right governance workflows…

…In a world of AI agents, our belief is strong that you are going to manage AI agents with your identity system. And so that’s how we’re going to monetize that. When you put a bunch of AI agents inside Okta, that’s going to be more valuable from an identity security perspective, and we’re going to be able to charge for that with our customers…

…But it all is kind of predicated on a vibrant, healthy, growing AI agent ecosystem, which I think there’s a lot of different thoughts on how that exactly plays out: who’s the vendor going to be, who’s the platform, SaaS vendors versus custom development, whatever. I think whatever happens, you’re going to need to manage this stuff.

Salesforce (NYSE: CRM)

Salesforce has won 12,500 AgentForce deals since it was launched 3 quarters ago, of which 6,000 are paid; 40% of new AgentForce bookings in 2025 Q2 (FY2026 Q2) came from existing Salesforce customers; AgentForce had a 60% sequential increase in customers going from pilot to production in 2025 Q2; AgentForce now can support the public sector and has FedRAMP High certification, so Salesforce can now sell more to the US government than before; management thinks Salesforce’s consumption model is showing strong early success; management recently announced new flexible payment options for AgentForce, with Flex Credits accounting for 80% of AgentForce new bookings in 2025 Q2 (FY2026 Q2); DIRECTV is one of Salesforce’s biggest Flex Credits customers; Falabella refilled the Flex Credits tank 3 times in a quarter

In the 3 quarters since we launched Agent Force, we have now won more than 6,000 paid deals and more than 12,500 overall…

…40% of our Agent Force new bookings this quarter came from existing customers extending their investment with Salesforce. And it’s demonstrating the value that they’re getting and how the flywheel is really working. We’ve seen a 60% increase quarter-over-quarter in customers who’ve gone from pilot to production and they’re expanding use cases and scaling consumption…

…Now with Agent Force for public sector and FedRAMP High certification, we’re able to sell more to the government than ever before because we’re bringing the power of the agentic enterprise directly to the government…

…Our consumption model is showing strong early success…

…Last month, we announced new flexible payment options for Agent Force, including pay-as-you-go, to lower the barrier to adoption and encourage experimentation. And following their launch last quarter, Flex Credits now account for 80% of Agent Force Q2 new bookings…

…Marc alluded to DIRECTV. Incredible business value. This is one of the biggest flex credit customers that we have globally…

…There is a customer that in just 3 or 4 months, they refilled the tank 3 times. I gave you the example of Falabella.

Salesforce’s management sees all of Salesforce’s customers becoming agentic enterprises; management sees AI agents represent a complete transformation for Salesforce and its customers; management sees the end goal of agentic AI as humans and AI agents working together with trusted data; management is adding native agentic capabilities into all of Salesforce’s products; Salesforce is pairing every salesperson with an AI agent and is using AI agents in Sales Cloud to call every single person back; in customer service, agents are handling millions of conversations, with AI agents handling 1.5 million conversations in 9 months within Salesforce’s help site; in field service, AI agents are helping technicians orchestrate scheduling and logistics, and helping technicians solve problems; the new version of Salesforce’s Tableau has AI agents that surface insights and recommendations instantly; Salesforce’s marketing product will soon have AI agents that can turn every one-way email to customers into 2-way conversations; Salesforce employees are using Slack as the interface for communicating with AI agents built with AgentForce; management thinks Salesforce will lead the way in this agentic enterprise wave because it has (1) the software infrastructure, and (2) the metadata platform; management will soon unveil all of Salesforce’s agentic products at Dreamforce; management is seeing very healthy growth in the pipeline for agentic transformation among enterprises

One thing is extremely clear to me, every single one of our customers is becoming an agentic enterprise…

…This isn’t simply just automating some existing business processes for these agentic enterprises. Well, for Salesforce, it’s certainly true: it’s a complete transformation. And for our customers, the agentic enterprise is a complete reinvention in many cases of who they are and what their potential is. It’s a shift from traditional hierarchies to reshaping the entire company, from busy work to orchestrating workflows, from siloed teams to seamless collaboration, from clicking and routing to natural conversations…

…But ultimately, it’s about this. It’s about humans and agents working together with every decision grounded in trusted data…

…Across our portfolio, we are adding these native agentic capabilities into every single one of our products…

…Our Sales Cloud for years has been an app that thousands or millions of salespeople use to manage their sales every single day. But now riding alongside every salesperson is an agentic salesperson. And that agentic salesperson is calling every single person back. And how that relates to Salesforce, well, let me tell you that, well, maybe somewhere between 20 million and 100 million people who have contacted Salesforce in the last 26 years, they haven’t been called back. It’s just because we didn’t have enough people. But now with our new agentic sales, everybody is getting called back…

…In service, we’ve been talking about that now for months, you can see our agents are handling millions of conversations while humans are delivering the empathy and expertise. Well, it’s a bigger story than that, where you know that we have delivered in the last 9 months about 1.5 million conversations just for our own company on help.salesforce.com

…In field service, agents orchestrate scheduling and logistics so technicians can focus on solutions. I saw it myself at my home. I have this incredible device from Eaton, one of our large customers using our field service product. And it actually connects my Airstream trailer to my house. And when the technician comes out to work on it, well, they’re able to use the agentic capability to learn as much as possible about the product that I’m using and how to fix it and how to repair it, while also managing the traditional system of record that’s on the field service capability, managing all the field service and service operations through the field service capability…

…We’ve been showing now for a few months, starting at our Tableau conference, the new version of Tableau, where agents surface insights and make recommendations instantly and where agents and humans are working together to make smarter, faster decisions…

…We’re demonstrating to our customers and about to release our new e-mail platform that turns every one-way conversation into a 2-way conversation. And agents are going to turn these one-way e-mails into 2-way conversations…

…If you’ve seen anyone from Salesforce recently, have them show you how we’re using Slack as our interface to our own agentic enterprise, where we have dozens of agents with people and apps and LLMs, all in one conversational agentic workspace. It’s pretty cool. And these agents are operating across apps, departments, silos, all running off of our data cloud, all running off of AgentForce…

…Salesforce is going to lead the way. There’s no question about that. We’ve built the software infrastructure for the agentic enterprise. We have our metadata platform unifying our apps, our data and agents into one powerful agentic operating system. We are rebuilding every single one of our products to be agentic. We’re delivering almost every single one of those products at Dreamforce. And at Dreamforce, you’re going to see all of these products…

…I see the pipeline into H2. Pipeline is growing in the high teens. And for big deals, it’s actually approaching 20% growth. That’s a really good sign. We haven’t seen that kind of pipeline in a long time. The agentic enterprise is really the next incredible investment cycle.

Data Cloud is a critical foundation for Salesforce’s agentic ambition because it provides the data and metadata for accurate output by AI agents; management believes Data Cloud enables Salesforce to have the most accurate AI agents in the industry, with about 90%-ish accuracy; management thinks Data Cloud will be the most strategic and important business for Salesforce; Data Cloud is now a $7 billion business; Data Cloud had 140% year-on-year growth in customers, and usage numbers are growing rapidly; more than half of Fortune 500 companies are on Data Cloud; FedEx is using Data Cloud to save a lot of costs and grow the percentage of customers who signed a contract and proceed to start shipping by double-digits; Salesforce’s Data Cloud and AI ARR (annual recurring revenue) reached $1.2 billion in 2025 Q2 (FY2026 Q2), up 120% year-on-year (it was $1 billion in 2025 Q1); Salesforce closed 60 deals in 2025 Q2 (FY2026 Q2) exceeding $1 million that included Data Cloud and AI; management sees Informatica, together with Data Cloud and Mulesoft, as the 3 components for every company’s AI foundation

Data Cloud is the heart and soul of the success of these agents because it is providing the data and the metadata that you need and the context to get the accuracy. We probably have the most accurate agents in the industry, and the way that we’re achieving that is through our Data Cloud. It’s this Data Cloud as well as Tableau and MuleSoft and soon Informatica, all working together to really help our customers clean and harmonize their data and provide it in a way that can be consumed by our Agent Force platform to provide this level of accuracy.

I think the data business is probably the most strategic and most important business for Salesforce going forward. And already, it’s a $7 billion business. And Data Cloud is having a great year. It had 140% year-over-year growth in customers and 326% growth in rows accessed via zero-copy integration. The usage numbers are really just off the charts. Over half of the Fortune 500 are already on Data Cloud, but it’s really just the very, very beginning…

…FedEx, and you’re going to see them at Dreamforce, their Chief Operating Officer, Richard Smith, is coming to be part of my keynote. Well, let me tell you that they’ve got unified data across all their platforms now with Data Cloud, and the numbers that they’re telling us that they’re saving, well, I’m not going to — I’m not going to take away Richard’s punchline from the Dreamforce keynote, it’s like numbers I’ve never heard in terms of what the amount that can be saved by technology. And now if a business customer [ isn’t ] actively shipping, our own marketing cloud campaign is automatically triggered and sales reps are alerted and it’s all happening through our Data Cloud. And this idea that FedEx has seen a double-digit increase in the percentage of customers who signed the contract and proceeded to start shipping, it’s dramatically surprised them what has been possible in such a short period of time…

…Data Cloud and AI ARR continues to scale, reaching $1.2 billion in Q2, growing 120% year-on-year…

…Data and AI products were in 60 deals greater than $1 million…

…Because AI, as we all know, these large language models only have a certain level of accuracy and it’s not 100%. It’s probably about in the 90s when it really gets well-architected with our data cloud and with all the different kind of capabilities and kind of really advanced techniques that we’ve come up with to make our AI as accurate as it can…

…We think that every customer is going to need an Informatica, every customer is going to need a MuleSoft and every customer is going to need a Data Cloud. And together, we think that’s called the AI foundation. And that AI foundation is the Data Cloud plus MuleSoft plus Informatica. And if you’re going to roll out Agent Force, you’re going to need an AI foundation made up of those 3 things.

DIRECTV used AgentForce to (1) save billing reps 300 hours of inquiry-handling and (2) execute 50,000 actions in a week with Employee AI Agent; enGen expects to save millions of dollars annually by cutting call times with AgentForce; PenFed expects to save millions of dollars annually by using AgentForce for loan underwriting; Under Armour used AgentForce to double its case deflection rate and increase its customer satisfaction rate by double digits, all in less than 60 days; Reddit used AgentForce to reduce average resolution times from 8.9 minutes to 1.4 minutes; Telepass used AgentForce to power 275,000 agentic conversations over 5 months, and has become one of the fastest-growing AgentForce customers; Pandora has scaled from 1 agent to 3 agents with AgentForce in a single quarter; Indeed has doubled the number of actions taken by its customer-facing agents and has added another agent for internal productivity; Williams Sonoma has deployed AgentForce for only a few weeks, but has expanded from the initial use case of customer support for 1 brand, to customer support for 8 brands and other use cases; the US Army is planning to use AgentForce to support its Human Resource Command; Salesforce has expanded 24/7 instant support to 6 new languages and agents now cover 94% of its global case volume; Salesforce recently launched many new agents for internal use cases; management is aware of the recent MIT study showing that 94% of AI projects in enterprises have failed, but Salesforce’s customers are getting great results; Falabella is using Salesforce’s AI agents to track its order locations and has seen its NPS (net promoter score) increase, its call volume drop by 25%, and 70% of its conversations shift to WhatsApp

DIRECTV saved billing reps nearly 300 hours of inquiry handling with AgentForce. And its Employee AI Agent executed 50,000 actions in a week…

…enGen, an incredible company, projecting millions in annual savings by cutting call times.

PenFed, which we talked about in many scripts we’ve had, is already projecting millions in annual savings by using AgentForce in its loan underwriting…

…Under Armour and Kevin Plank, well, he more than doubled his case deflection rate and boosted customer satisfaction by double digits. And they did it in under 60 days…

…A lot of our employees are excited about Reddit because they’ve reduced average resolution times from 8.9 minutes to 1.4 minutes…

…Telepass, well, they’ve powered more than 275,000 agentic conversations over 5 months. As they put it, “We can’t believe the speed and growth of these conversations just in the last few weeks,” and in a conversation at the management level they told us they’ve become one of our fastest-growing AgentForce customers…

…Pandora, the amazing jewelry retailer, Alex’s entire team scaled from 1 agent to 3 in a single quarter…

…Indeed have more than doubled the number of actions taken by their customer-facing agents and added another agent in Slack to drive internal productivity…

…Williams Sonoma, and we’ve only been live for a few weeks, started with AgentForce powering customer support for just one of their brands. I think you know they have quite a few amazing brands, like Pottery Barn and West Elm and others. Well, now it’s rolled out across 8 of their brands, as well as agents for other use cases, including a sous chef agent that is helping customers choose cookware and guiding them step-by-step through recipes. They are finding incredible new ways to use the AgentForce platform. And they’re doing it side by side across their entire Salesforce deployment…

…The Army is already planning to launch a digital front door for its Human Resource Command, providing 24/7 powered service and support to all soldiers and personnel and millions of veterans…

…In Q2, we expanded 24/7 instant support to 6 new languages, which combined with English now cover over 94% of our global case volume. Earlier this year, we launched our IT and HR agents in Slack to support our employees. And in July, we launched dozens more specialized agents in Slack…

…Over the weekend, I read that MIT study that’s becoming very popular, which really goes to show that a lot of companies have thought they were on the right path with generative AI, building their own models, doing it themselves, hooking it all up. And now they’re claiming about 94% of those projects have failed. But we’ve been saying that was going to happen for the last several years, as you know. But that’s not what our customers are saying. Our customers are saying that they’re getting phenomenal results and that they have humans and agents working together to create a new level of customer success, or we say it at Salesforce as an agentic enterprise…

…Falabella is the largest retailer in Latin America. They have several use cases, but their main one is: Where is my order? And they solved that question for customers across the web, in-app and WhatsApp. The pilot took 2 months from idea to production. They access their OMS system, leverage the CRM data in Salesforce and knowledge articles in Data Cloud, and connect Data Cloud to GCP. And the value is extraordinary. The NPS has increased by 10 points, 70% of the digital interactions have shifted to WhatsApp, and the call volume has dropped by 25%.

Salesforce will soon launch its agentic IT service platform; many Salesforce customers have been asking for IT services from Salesforce; the agentic IT service platform will be integrated with Slack; the agentic IT service platform will see every IT request become a conversation; management thinks the agentic IT service platform will be a huge growth driver for Salesforce; management thinks traditional ITSM (IT service management) products have served only the very high-end market, but Salesforce’s agentic IT service platform can serve a much wider demographic of customers; Salesforce itself is the first customer of its agentic IT service platform 

The world of ITSM and IT service is an application area that we just haven’t gone to before. But I’m very excited that next month, and you’re going to see this at Dreamforce as well, we’re launching our own agentic IT service platform. A lot of our existing customers have been asking for this. We’re bringing a whole new level of capability. It’s agent-first and it’s Slack-first: right inside Slack, you’re going to be using our agentic IT service capability. It’s natively embedded where employees already work, with zero learning curve…

…With agentic IT service, well, every request is becoming a conversation where agents work hand-in-hand with IT teams proactively fixing their problems. It’s going to be an incredible growth driver for the company…

…It’s a very democratic platform. A lot of the ITSM products have only served the very highest end of the market, with maybe 1,000 customers here or 1,000 customers there. But the thing about Slack is that it’s used by about 1 million customers worldwide. And I think all of them are going to be able to benefit from this IT service platform. No one else is delivering this level of agentic capability and digital labor at scale. Now we know how to do this because our own first customer for this, well, it’s us. We are Customer 0.

Salesforce’s management thinks being agent-first will expand Salesforce’s margins in the long run; Salesforce has cut its customer support workforce by 40% because of the efficiency of AI agents

We believe that being agent-first is a key driver of our own long-term margin expansion…

…[Question] We’ve heard software companies say that they have held their head count flat in their support organizations. We haven’t heard anyone saying that they reduced head count by close to 40% there like you have.

Salesforce’s management thinks AI is an extension of SaaS, and not an eliminator of SaaS, because there are still problems that AI cannot solve

There’s a lot that we can resolve automatically through these agents with the customers, but there’s also a lot that cannot be resolved. And that has to be escalated to the humans. And so it’s humans and agents working together to satisfy customer success. And this is what has been extremely important…

…So it’s not about the fundamental, I would say, elimination of SaaS. What I would say, it’s the fundamental extension of SaaS…

…Nothing lasts forever, okay? But I just look at how I’m running my own business and the business of our customers, I don’t understand what the replacement is. So I just look at this incredible next-generation transformational capability, and I’m going to lay it all out at Dreamforce. And by the way, my keynote, I kind of threw away all my slides and I said, let’s just have 12 CEOs of the largest companies on the planet just show you exactly what they’re doing with this technology, because it’s crystal clear what the value proposition is. But to hear some of this nonsense that’s out there in social media or in other places, people say the craziest things, but it’s not grounded in any customer truth.

Salesforce’s management sees Salesforce as being the only company that can bring together deterministic workflows and agentic reasoning

We are the only platform, the only software infrastructure that can bring the deterministic workflows, the data and the agentic reasoning and actioning on the same platform.

Salesforce’s management thinks AGI (artificial general intelligence) will not be coming any time soon

The idea that there is, I’ll just say, again, an AGI, that seems like a fantastical term. I know it’s coming in the next week or 2 evidently. But this idea that there’s some kind of AGI that’s about to take over the whole world. Well, let me just help everybody understand that’s not exactly what’s about to happen.

Salesforce’s management thinks Salesforce is going to see incredible growth in the next 2 years because of AI

We think we’re going to see some incredible growth over the next 6 to 8 quarters…

…My focus is accelerating bookings. I’m very happy with the execution of my team. I’m very positive about what is coming ahead, not just in H2, but also what is coming in the next fiscal year. We’re already thinking about the next fiscal year. We wouldn’t be investing at the rate that we are investing with very — a lot of intentionality in the areas that are growing, in the areas that have higher margin if we didn’t see a great opportunity.

Sea Ltd (NYSE: SE)

Sea’s management is using AI to improve Shopee’s advertising business; sellers who used Shopee’s advertising products rose 20% in 2025 Q2, and sellers who used Shopee’s advertising products grew their ad spend by more than 40% from a year ago

Since early last year, our dedicated ad-tech team has worked hard to improve algorithms, enhance traffic allocation efficiency, and deploy AI technologies to better serve our ad-paying sellers. And we have seen very encouraging results. During the second quarter, the number of sellers using our ad products rose by around 20%, and ad-paying sellers’ average quarterly ad-spend grew by more than 40% year-on-year. Our tech enhancements have allowed us to more effectively optimize Shopee’s GMV and advertising revenue at the same time. We saw an 8% uplift in Shopee purchase conversion rates and improved our ad take rate by almost 70 basis points this quarter, year-on-year.
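For readers less familiar with the units, a basis point is one hundredth of a percentage point, so a 70-basis-point take-rate improvement means ad revenue as a share of GMV rose by 0.7 percentage points. The figures below are hypothetical, chosen only to show the arithmetic, and are not Shopee’s actual numbers.

```python
# Illustrative take-rate arithmetic with made-up figures (not Shopee's actuals).
# Take rate = advertising revenue / GMV, expressed here in basis points (1 bp = 0.01%).

def take_rate_bps(ad_revenue: float, gmv: float) -> float:
    """Return the advertising take rate in basis points."""
    return ad_revenue / gmv * 10_000

# Assume GMV of $20 billion in both periods (hypothetical).
last_year = take_rate_bps(ad_revenue=400_000_000, gmv=20_000_000_000)  # 200 bps
this_year = take_rate_bps(ad_revenue=540_000_000, gmv=20_000_000_000)  # 270 bps

print(round(this_year - last_year))  # 70
```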

Sea’s management has provided AI tools for Shopee sellers to produce high-quality video content; livestreaming and short-form video orders in Southeast Asia accounted for more than 20% of Shopee’s physical goods order volume from the region; there are now 7 million Youtube videos with Shopee product links embedded, up 60% sequentially (was 4 million in 2025 Q1)

Our AI tools empower Shopee sellers to produce high-quality video content, helping them improve user conversion and make more money without having to invest in their own studio set-up. In Southeast Asia, orders from livestreaming and short-form videos accounted for more than 20% of our total physical goods order volume in the second quarter. Our collaboration with YouTube has also continued its strong momentum. As of June, more than seven million YouTube videos featured Shopee product links across our Southeast Asian markets, an increase of more than 60% quarter-on-quarter. 

Sea’s management sees Monee as having 3 unique advantages, namely, (1) integration with Shopee, (2) a large user base who are growing their credit records with Monee, and (3) use of AI to improve credit models

…Three unique advantages that Monee has. First, deep and seamless integration with our Shopee ecosystem. Second, a very large base of users who are growing their credit track records with us over the years. Third, our increasing use of AI to improve our credit models. Together, these advantages uniquely enhance our underwriting capabilities in each market, enabling us to very effectively push for growth across our three credit product lines: on-Shopee SPayLater, offShopee SPayLater, and cash loan products.  

Sea’s management has been using AI a lot in general recommendations, leading to improvement in conversion rates as the system can better understand user intention

We also use AI a lot in our general recommendations, and this improved our conversion rate quite a lot by understanding user intention and the buyer’s query better.

Sea’s management is using AI to generate images for product descriptions

We also spent a lot of effort on AIGC initiatives so that we can generate much more attractive pictures for the product descriptions.

Shopee’s customer service chatbot is 80% managed by an AI agent; the use of AI in Shopee’s chatbot helps sellers both reduce cost and increase the potential for upselling when interacting with consumers

On the customer interaction side, our customer service chatbot is 80% managed by an AI agent. We’re also helping sellers interact with buyers through the CS chat agent as well, not only reducing costs for the sellers, but also improving their upselling potential while talking to the buyers.

Sea’s management is actively using AI to improve Sea’s internal operations

The second type is to improve our internal operations: for example, product development, but also many of our daily operations. If you look at the way we run our marketing campaigns, a lot of our campaigns are very automated right now through AI tools, and many of our payment processes are AI-enabled through agents, et cetera.

Sea’s management is very excited about the use of AI in the gaming industry; management thinks the gaming industry will be among the first batch of industries to benefit from advancements in AI; management has seen AI improve productivity in game development by generating art work; management thinks AI agents can improve the gaming experience for players who prefer to play solo games; management wants to explore the use of AI to generate content and have personalised gaming experiences instead of the current format where the gaming experience is preset

We are very, very excited about AI’s prospects in the game industry. And personally, I believe the game industry will be among the first batch of industries to benefit greatly from AI advancements and technologies.

And so far, we have seen a lot of upside on the development and production side. For example, to develop any new content or new map, we need to generate a lot of original art, and now a lot of the very basic art can be generated by AI. The quality is very decent, and in terms of efficiency, the volume and variety generated are much better than what humans can do. So this has largely improved our productivity, and it’s really exciting.

And as you mentioned, from the gamers’ engagement perspective, there is a very clear opportunity in certain use cases. Free Fire, for example, is a very social game designed for team play. It’s much more fun if you play with other players, and there are many more combinations of strategy and technique than when you play solo. But we observe that Free Fire still has a very sizable group of gamers who only play solo. They enjoy it, but we think they haven’t fully experienced the most amazing part of the game, perhaps because they’re shy and don’t know how to reach out to other players. So we think AI-enabled bots, AI game agents that act as their teammates and peers and play the role of a brother, sister or coach in the game, can give them a flavor of how that interaction feels in gameplay and encourage them to reach out and play as a team rather than individually. We think that largely helps retention.

Furthermore, we are very actively experimenting and trying to figure out how to leverage generative AI to let gamers generate the content themselves. Today, all game experiences are preset. With AI tools, the experience can be much more immersive, much more interactive and much more individualized.

Tencent (OTC: TCEHY)

Tencent’s management added AI-powered citation to content on Weixin; management is using LLMs (large language models) to help merchants with customer inquiries and personalized product recommendations; Yuanbao, Tencent’s AI chatbot, can now be added as a Weixin contact for users to interact with; management is enhancing the Yuanbao app and is pushing for growth in DAUs (daily active users)

On the AI front, we added AI-powered citation to content so that users reading official accounts articles or video accounts comments can activate contextual AI commentary on related information. We upgraded Mini Shops customer service with large language model capabilities to provide merchants with more intelligent responses to customer inquiries and personalized product recommendations. We enabled Yuanbao as a Weixin contact to interpret and summarize video accounts content. Meanwhile, we are rapidly enhancing the functionalities of our AI native app Yuanbao, and we’ll share more details about how we are growing the DAU later this year.

Tencent’s management is seeing AI becoming an increasingly important driver of growth in Tencent’s Domestic Games and International Games businesses; management is applying more AI tools to increase the speed and scale of content production in Tencent’s games; AI allows Tencent to provide more human-like virtual teammates to solo-gamers and more realistic non-player characters in games; management is using AI in marketing activities for its games for more efficient targeting

Reviewing the progress of our game business domestically and internationally in recent months, AI has become an increasingly important driver of its growth in terms of game content, game engagement and game monetization. We’re increasingly applying AI tools to boost the speed and scale of content production across our major games. AI allows us to provide more human-like virtual teammates in our competitive PvP games and to power more realistic nonplayer characters in our story-driven PvE games. And we’re using AI in our game marketing activities to more efficiently target marketing spending towards the users most likely to activate and remain in each game.

The Marketing Services segment’s revenue was up 20% year-on-year in 2025 Q2 because of AI upgrades in its advertising platform, and more closed-loop advertising involving Weixin’s ecosystem; the AI upgrades included better AI capabilities in ad creation, placement, recommendation and performance analysis; the AI upgrades led to higher click-through rates, conversions and ROI for advertisers; Video Accounts’ Marketing Services revenue grew 50% year-on-year in 2025 Q2; Mini Programs’ Marketing Service revenue grew 50% year-on-year in 2025 Q2; Weixin Search revenue grew 60% year-on-year in 2025 Q2, driven by the use of Tencent’s LLM (large language model) to deepen understanding of merchandise and of user consumption intent; most of the advertising revenue growth in 2025 Q2 came from higher revenue per impression partly because of AI-driven increases in the click-through rate

For Marketing Services, revenue grew 20% year-on-year to RMB 36 billion in the quarter, benefiting from AI-powered adtech upgrades and from increased closed-loop advertising arising from Weixin’s transactional ecosystem. We expanded AI capabilities in areas including ad creation, placement, recommendation and performance analysis, which had the effect of boosting click-through rates, conversions and ROI for advertisers. Specifically, we upgraded our ad platform architecture by deploying a scaled-up foundation model, which analyzes advertisement click-through rates and transactions across multiple apps and services as well as user interactions across text, image and video to determine user interest and optimize ad performance in real time.

By property, Video Accounts marketing services revenue rose approximately 50% year-on-year due to more traffic and more transactional activity within Video Accounts. Mini Programs marketing services revenue also increased about 50% year-on-year. Activity within Mini Games and Mini Dramas created a flywheel effect, which drives more developers to use our closed-loop marketing solutions to promote their services. And Weixin Search revenue grew around 60% year-on-year due to more consumer and advertiser interest in Mini Program search results and to enhance ad relevance as we leverage our large language model to deepen understanding of merchandise and of user consumption intent…

…In the second quarter, the majority of the advertising revenue growth of 20% year-on-year arose from higher revenue per impression. And that, in turn, was primarily due to a higher click-through rate arising from deploying AI, although also to higher revenue per click arising from more closed-loop activity with mini shops and mini games.
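The decomposition management describes, revenue per impression being the product of click-through rate and revenue per click, can be checked with simple arithmetic. The numbers below are hypothetical, not Tencent’s disclosed figures.

```python
# Hypothetical decomposition of ad revenue per impression (illustrative only).
# revenue per impression = click-through rate * revenue per click

def revenue_per_impression(ctr: float, revenue_per_click: float) -> float:
    return ctr * revenue_per_click

before = revenue_per_impression(ctr=0.010, revenue_per_click=2.00)  # 0.02 per impression
after = revenue_per_impression(ctr=0.012, revenue_per_click=2.10)   # 0.0252 per impression

# A 20% CTR lift and a 5% revenue-per-click lift compound to 26% growth per impression.
print(f"{after / before - 1:.1%}")  # prints "26.0%"
```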

Within the Fintech and Business services segment, Business Services revenue grew in the teens year-on-year in 2025 Q2; Cloud Services revenue accelerated in 2025 Q2 from increased revenue from providing GPUs and API tokens for customers’ AI needs; management is focused on growing Business Services at an accelerated rate without being hampered by fluctuations in GPU supply

Business services revenue grew at a teens rate year-on-year. Cloud services revenue growth accelerated versus recent quarters, benefiting from increased revenue from providing GPUs and API tokens for customers’ AI needs. Fees collected on Mini Shops transactions continue to grow at a rapid rate and business services gross margin rose year-on-year due to improved efficiency and positive mix shifts…

…We’ve put our cloud business onto a more sustainable base as well as improve the cost competitiveness of the supply chain for our cloud business, we can — we are refocusing on growing revenue at an accelerated rate versus the prior rate without depending too much on the vagaries of the GPU supply situation. So if we do have sufficient GPUs that we can rent out more in the cloud, then we’ll do so. But our cloud strategy is not dependent on the GPUs. We’re also growing in CPU, in storage, in database, in CDN and so forth. So that’s on the cloud side.

Tencent’s management has enhanced the data quality and diversity of Hunyuan, Tencent’s proprietary foundation model; Hunyuan 3D model has become the No.1 3D generative model on Hugging Face; game developers, 3D-printing companies and designers are increasingly using Hunyuan 3D; management wants to continue improving Hunyuan, and sees many dimensions for doing so; when Hunyuan improves, all of Tencent’s AI services also improve

For HunYuan, we enhanced our data quality and diversity through data augmentation and synthesis and implemented more effective pretraining and post-training scaling. HunYuan 3D model has become the top ranked 3D generative model on Hugging Face due to its geometric precision, texture fidelity and prompt 3D alignment capabilities. Game developers, 3D printing enterprises and design professionals are increasingly using the HunYuan 3D model for their digital asset generation needs…

…In terms of the model, I would say there’s actually a lot to be done, right? And I would say sort of in the broad bucket, there is the large language model itself, and we want to keep improving the LLM itself. And that actually involves improvement along a number of different dimensions, including making sort of the data sort of higher quality and more comprehensive. That includes making the pretraining more efficient and more effective and improving the pretraining model that includes improving the post-training and reinforced learning processes in basically extracting the capability of the pretrained model and that includes improving our infrastructure so that we can actually train more efficiently as well as inference more efficiently, right?…

…When we have an improved LLM, it’s actually sort of the foundation for all our AI services. And in particular, it would improve our search and productivity-related services…

…We also want to improve the multimodal capability of our model so that we can actually provide more customized functions for the users in Yuanbao, right? Within Yuanbao, it’s not — people are not just using it for search and productivity-related activities. They are using it for all kinds of different multimodal activities. They may want to speak, they may want to turn text into pictures, turning pictures into text and there are a lot of multimodal conversions within Yuanbao, which we actually need to have very strong capability for…

…I think the third broad category is actually coding and agents. If we can keep improving there, we can provide a much better coding environment for both ourselves and our enterprise customers. And at the same time, that would enable better agentic and instruction-following capability for our agents. I think that’s particularly important for Weixin going forward, as we build an agent for Weixin that can be a personalized assistant to Weixin users.

Tencent’s management thinks Tencent’s advertising revenue growth can grow at a healthy rate for a long time; the drivers of future growth for the advertising revenue come from (1) higher click-through rate, where AI delivers better targeting and thus more clicks, (2) more traffic, including traffic within Tencent’s AI-native experiences, (3) higher revenue per click, as generative AI used for creating the ads results in more ad demand, (4) closed-loop e-commerce transactions driving higher advertising demand, and (5) higher advertising load; management does not expect any meaningful impact to Tencent’s advertising business from the new advertising law for gaming company sales and marketing because the advertising business has ample diversification, and the AI-related improvements management is making is a far more important variable; management could crank the lever for advertising-growth if the cost of deploying AI throughout Tencent suddenly spikes

On the advertising and the potential, we continue to believe that we enjoy a long and lengthening runway for continuing to grow our advertising revenue at a reasonably healthy rate. And that length of the runway reflects upside in a number of the key variables that determine our marketing services revenue, including the click-through rate where AI delivers better targeting and thus more clicks, including traffic where we see growth in video accounts traffic and search traffic over time, in traffic within our AI native experiences, including revenue per click as generative AI used for creating the ads results in more ad demand as well as e-commerce closed-loop transactions resulting in more ad demand. And then finally, in ad load, where, as you know, for short video, our ad load is currently in the low to mid-single digits versus our peers who are in the low to mid-teens…

…[Question] About the impact on the new advertising law for gaming company sales and marketing. Under the new ad regulation effective in July, sales and marketing spending in excess of 15% of revenue will need to pay an additional 25% tax. So how do you expect this to affect our advertising income, especially for mini games, which heavily rely on traffic acquisitions, i.e., the sales and marketing could easily surpass this 15% revenue threshold?

[Answer] We don’t expect a meaningful impact. Our advertising business has become quite broad-based over time. And if you look at the second quarter, there was an adverse impact from the food delivery companies and some of the e-commerce companies ramping up in food delivery, reducing their advertising spend as they invested more in subsidies. But despite that, our advertising revenue grew 20% year-on-year. So in our view, there’s always going to be individual blips up and down in terms of individual categories. But what we’re doing in terms of deploying AI within advertising is a much more important variable…

…Now of course, if the cost of deploying AI, including GPU depreciation was suddenly to step up and become very burdensome, we could accelerate the advertising monetization, but we don’t see the need to do that right now.

There are 4 broad categories of AI features across Tencent’s ecosystem, namely (1) the AI-native app Yuanbao, (2) AI-enabled search, (3) features within games, and (4) features within productivity tools; management thinks it’s still early in observing user behaviour 

In terms of the AI features, right, I think there is sort of broadly speaking, a number of these features. One is obviously our Yuanbao, which is an AI native app. And then I would say it’s related to search, AI-enabled search. So that lands on our browser that also lands on WeChat search. And then there’s a whole host of different features within even games, right, when we have AI-enabled players or in our productivity tool, for example, summary of meetings in our Tencent Meeting and assistance within our Tencent docs, right, to help people to write. I would say we’re still at an early stage in observing the user behavior.

Tencent’s management has so far not seen any major negative impact on Search from the use of AI to produce search results

The one sort of negative impact that you are pointing to is when there is AI-assisted search, whether it would just show the content rather than leading people to the pages. We have not seen a very big impact on that. I think overall, people tend to be more satisfied in getting the answer directly. And if they want to explore the topic more, they would click on the different links and articles. So I think overall, it’s actually not that much of an impact.

Tencent’s management is currently providing a lot of AI features for free and they are managing the AI-related costs of these features in a granular way such as using smaller models when applicable and improving the efficiency of inference with software; management wants to eventually monetise these AI features, but they think it is really hard for the user-paid model – popular in the US now for monetising AI models – to work in China; management currently prefers monetisation through advertising; management is seeing AI being monetised in Tencent by contributing to the growth of the overall business

[Question] You guys continue to offer increasingly more AI features to consumers free of charge. The delivery of these AI features is a lot more expensive than mobile Internet services, which will potentially hurt Tencent’s cost structure. Will management consider starting to directly monetize these consumer-facing AI features in the next 1 or 2 years?

[Answer] We are actually managing the cost in a relatively granular way, right? I think there are a lot of places in which if we can use smaller models, we’ll be using smaller models and the cost will be sort of much lower than using the flagship model. And so in a lot of these use cases, the cost is manageable if we can use smaller models. And at the same time, if we continue to improve the efficiency of inference through software upgrades.

And as it relates to whether we would be monetizing eventually — I think eventually, there should be some monetization. I think in China, in reality, it’s actually very hard to use the user paid model, which now populates the U.S. AI tools. And I think over time, we’ll try to figure out whether there will be some ad-supported way of monetizing. But at the same time, I want to point out that AI is already contributing to the growth and monetization of our existing businesses in different ways, right? So somehow we could also fund part of this “subsidy” for AI usage by the users through the growth in our other businesses.
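The cost lever described in the answer above, using smaller models where the task allows and reserving the flagship model for hard cases, can be sketched as a simple router. All model names, prices, and the complexity score below are made-up illustrations, not Tencent's actual setup:

```python
# Toy sketch of tiered model routing: cheap model for easy requests,
# flagship model only when needed. Prices are invented for illustration.
MODELS = {
    "small": {"cost_per_1k_tokens": 0.0005},
    "flagship": {"cost_per_1k_tokens": 0.01},
}

def route(task_complexity):
    """Pick a model tier from a crude complexity score in [0, 1]."""
    return "small" if task_complexity < 0.5 else "flagship"

def cost(model, tokens):
    return MODELS[model]["cost_per_1k_tokens"] * tokens / 1000

# A mixed workload: mostly simple queries, one hard one.
workload = [0.1, 0.2, 0.3, 0.9, 0.4]  # complexity scores
tokens_each = 2000
total = sum(cost(route(c), tokens_each) for c in workload)
flagship_only = cost("flagship", tokens_each) * len(workload)
print(total, flagship_only)  # routed cost is a fraction of flagship-only cost
```

Under these illustrative prices, routing cuts the bill to roughly a quarter of the flagship-only cost, which is the point management is making about granular cost management.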

Tencent’s management does not have a definitive answer on the import of US chips for AI but Tencent has sufficient chips for model training; management thinks Tencent has many options for chip-providers for AI inference; management is using software to drive inference efficiencies

With respect to the acquisition of chips, especially the U.S. chips, the answer is that we don’t really have a definitive answer on the import situation yet. I think there’s a lot of discussion between the 2 governments, and we’re waiting to see what exactly comes out of that.

But from our own perspective, we do have enough chips for training and continuous upgrade of our existing models. And we also have many options for inference chips. And we are also executing a lot of software improvement and upgrade in order to drive efficiency gain in inference so that we can actually put more workload on the same number of chips.

Tencent’s management sees higher depreciation expenses in the future because of AI-related investments but Tencent’s business is also growing because of the use of AI; the increase in expenses and revenue may not always match up, but both are definitely growing

I would say the depreciation cost related to AI will definitely continue to go up. But at the same time, we also see that we continue to reap the benefits of AI. And the issue is that these 2 may not match each other completely, but I think both of them will be moving in the same general direction.

Tencent’s management is tracking Tencent’s progress in AI in a number of ways, namely, (1) how AI is helping Tencent’s existing businesses, (2) performance and quality of Hunyuan, (3) usage of the Yuanbao app, (4) progress in AI products within the entire Tencent ecosystem 

We do track our AI development progress very closely. And I think there are a number of indicators that we use right in tracking the progress.

And the first one is that we focus on tracking how AI is actually helping our existing businesses such as ads, such as games, such as FinTech. And I think that’s one area. And when we see that AI is actually being applied in driving the efficiency gain as well as the growth of these businesses, then that’s good. 

Secondly, we focus on tracking the performance and quality of our large language model, HunYuan. And I think there’s a lot of metrics that we actually have to use in order to track the capability as well as the quality of the model.

The third one is we do track how our AI app is actually growing: how many users are using our AI app. And that would include users of our Yuanbao, users of our browser, and users of our AI-powered search.

And finally, I would say we do track what’s the progress in the design of other AI-related innovative products within our entire ecosystem. And that would include, for example, the AI agent for WeChat that would include agents within our productivity tools. And these are the metrics that I think we will use in terms of tracking the progress of our AI development.

Veeva Systems (NYSE: VEEV)

Veeva’s management has made great progress with Veeva AI, an initiative launched in April 2025 that will see the company build industry-specific AI agents within its applications; the first AI agents under Veeva AI, for Vault CRM and commercial content, are on track for a December 2025 launch; management plans to release new AI agents and improve existing AI agents 3 times a year; management plans to deliver a host of new AI agents in 2026 and will launch clinical data agents in 2027; management sees Veeva Business Consulting as an important part of Veeva AI because AI enables new ways of working for Veeva’s customers; Veeva is already working on its first AI-related Business Consulting project; management thinks Veeva AI will increase the value of integration between clinical data management and clinical operations for customers; management thinks Veeva will lead in industry-specific AI agents in Life Sciences because of the deep data that resides in Veeva’s software products; management will allow customers to create their own AI agents with Veeva AI; management thinks Veeva AI will create billions of dollars of value in the Life Sciences industry and that Veeva will be able to capture its fair share of that value creation; management does not expect any material revenue contribution from Veeva AI in 2026 or 2027; management thinks it’s still early for customers to go all-in on AI with Veeva because the company has not released any AI agents yet; management will enable Veeva’s AI agents to communicate with AI agents from other software platforms because that is of great benefit to customers

We are making great progress on Veeva AI which adds agentic AI to the Vault Platform and industry-specific AI agents in all Veeva applications. With agentic AI in the Vault Platform, we have an integrated platform that manages data, content, and agents together in a secure and maintainable way. Customers can use and extend our application agents and create custom agents of their own. This is a very fundamental change in the Vault Platform…

…Our first agents are on track for December release in CRM and commercial content. We will release new agents and improve existing agents with our releases three times a year. In 2026 we plan to deliver agents for clinical operations, regulatory, safety, quality, medical, and commercial. Clinical data agents are planned for 2027.

Veeva Business Consulting is a critical part of Veeva AI, helping customers with change management because AI enables new ways of working. We are already working on our first Business Consulting project for AI in the commercial content area…

…We continue to see customers looking for an integrated clinical platform across clinical data management and clinical operations. The value of integration is compelling and will only increase with Veeva AI…

… The Veeva Vault platform, we started that in 2010, actually, late 2010. It was unique in this: it had content and it had data, and it could do both. And that was very unique. Users work with content and data, and so we were able to make integrated suites in clinical and quality and regulatory and safety. And that’s what we’ve been doing for the last 15 years, working very hard at it and making these deep industry applications, the business rules around all the data and the content. Now this is the next phase, where we’re going to have agents. We still have our data, we have our content, and now we have our agents. The users are going to interact with all of them, and the agents also interact with the content and the data. So it’s a fundamentally new thing. We’ve led really, and are leading, in this industry cloud area, industry-specific cloud applications. I think we’re going to lead in industry-specific agents, certainly inside life sciences…

…Customers can create their own custom agents, but mainly they’ll get our industry-specific agents when they buy Veeva AI. With MCP, the Model Context Protocol, and agent-to-agent protocols, interoperability is really easy, as is vault-to-vault interoperability. In terms of monetizing that, we will create billions of dollars of value for the industry. No doubt about that. No doubt about that. Sometimes making humans much more efficient, sometimes reducing the need for certain people doing certain types of tasks. So there’s a tremendous amount of value to be captured by the industry, and we’ll get our fair share of that for sure…

…I don’t expect any material revenue contribution for ’26 or ’27, for example, but I expect it’s a significant increase in our market size. And that will play out over many years…

…I think it’s early for customers to be going all in on AI with Veeva because we haven’t even released any agents yet, so we’ve got to work with our first early adopters and work that out…

…We’re architecting it in such a way that if you have an agent inside of Veeva, it can talk to an agent that might be inside of SAP or Workday or a different Salesforce one, and vice versa. I think that’s going to be one of the unheralded benefits; people don’t realize how valuable it is when you have agents that can talk to agents across systems because they’re all following a common protocol, which is much less brittle than wiring things up with MuleSoft and transferring data back and forth. I’m really excited about that potential, and it enables not just system-to-system communication but also new things for a user. I might be in my Microsoft Office, and I might say, “File this document in TMF.” Well, the Microsoft Office copilot may have that agent, the TMF filing agent from Veeva, registered with it. So it asks, “Do any of the agents know how to do this?” The TMF agent says, “I sure do.” Okay, I’ll hand the document over to you, and away it goes.
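The copilot-to-agent handoff described in the quote above can be sketched as a simple capability registry: agents advertise the tasks they can handle, and a host routes a user request to whichever agent claims it. This is purely illustrative, with invented names; it is not the MCP specification or Veeva's API:

```python
# Toy sketch of capability-based agent dispatch: agents register the tasks
# they handle, and a host (e.g. an office copilot) routes requests to them.

class Agent:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)

    def handle(self, task, payload):
        return f"{self.name} handled '{task}' for {payload}"

class Host:
    def __init__(self):
        self.agents = []

    def register(self, agent):
        self.agents.append(agent)

    def dispatch(self, task, payload):
        # Ask registered agents, in order, whether they know this task.
        for agent in self.agents:
            if task in agent.capabilities:
                return agent.handle(task, payload)
        raise LookupError(f"no agent can handle '{task}'")

host = Host()
host.register(Agent("TMF-filing-agent", {"file-in-tmf"}))
print(host.dispatch("file-in-tmf", "protocol_amendment.docx"))
```

Real agent-to-agent protocols add discovery, authentication, and structured results on top of this, but the routing idea is the same.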

Veeva’s management thinks Veeva has a structural advantage in AI in the Life Sciences industry because the company’s products are a system of record for customers, and the company has deep applications

[Question] Going back to the idea around the opportunity with AI, how you’re kind of thinking about Veeva’s platform approach, the network you’ve built, the scale you’ve built, giving you kind of that right to win as you embed more NII functionality across the platform?

[Answer] We refer to that as a structural advantage. When you have an application that’s a system of record, be it the e-mail system or the supply chain system, or any of the 50 or so applications Veeva has that are deep in life sciences, from the CRM system to the drug safety system to the clinical trial management system, when you have that system of record with the users in there, you have the right to win the deep industry-specific agents because it’s in the user’s workflow. Think about it: if you use Google for your e-mail and your calendar, you would love an agent from Google that works seamlessly with that, if you could get it. So we have a right to win there. You called it right to win; I call it a structural advantage. We can knit that technology together so that it’s a seamless platform that handles the agents, the content, the data. Another thing Veeva has is a platform that’s broad. We make about 50 applications with our platform, so we can touch a lot of things with it. We put it in the Vault platform once, and it can extend everywhere. So we have a structural advantage…

…[Question] Around AI and agents. Could you just sort of articulate what you view as the unique differentiator from an architecture perspective of Vault versus agent force or even the back end of IQVIA? Like what do you think puts you at an advantage?

[Answer] Our main advantage is that we have the deep applications. So if we just take a clinical example again, we have the clinical trial management application. That houses all the people that deal with clinical, all the data about clinical, all the business rules, all the content, and all the security about clinical trials. So with Veeva AI, when we build an application agent, that’s built inside of the Vault platform. So it inherently knows all the security rules and how to deal with them. And it is running in the Vault application server, so it also has transaction control. So it can update the data and the content. It can act on behalf of the user inside of a workflow in a transactionally sound way. So that’s a structural advantage if you have the application.

Veeva’s management thinks AI agents will be doing some of the things humans do, which will either free up time for humans to be more productive, or reduce the need for humans

If you look at areas within safety and clinical, there are some areas where there are hundreds of millions of dollars of outsourced labor used to do processing-type things. I think agentic AI can maybe remove the need for half of that. If you look at a clinical trial master file, an agent is going to be pretty good at putting a document where it should go and telling you if you have all the documentation you need for that trial based on the protocol. And is any document blurry, is any document illegible, et cetera; it’s going to be really darn good at that stuff. So it will be different by each area, but agentic AI is going to do some of the things that humans can do. That either frees up more human time for humans to be more productive on what they need to do, or reduces the need for humans.

Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Adyen, Alphabet, Amazon, Meta Platforms, Microsoft, MongoDB, Nu Holdings, Okta, Salesforce, Sea Ltd, Tencent, and Veeva Systems. Holdings are subject to change at any time.

What We’re Reading (Week Ending 14 September 2025)

Here are the articles for the week ending 14 September 2025:

1. Secular Bull Market Peaks – Are We There Yet? – Tyler Grason

The concentration and valuation of the market today often draw parallels to the tech bubble. We analyzed the tech bubble and the Nifty Fifty period (late 1960s to early 1970s), both of which marked the end of secular bull markets, to assess the similarities. As to the end of this secular bull market, as Mark Twain said, “The reports of my death are greatly exaggerated.”…

…The Nifty Fifty and tech bubble periods both coincided with a Fed hiking cycle. In January 1973, the market peaked 3 days prior to the first hike, and the Fed did not cut until December 1974. During the tech bubble, the Fed began hiking rates in June 1999, about 9 months prior to the market peak. While the Fed continues to be on pause, fed funds futures prices suggest a 0% probability that the Fed hikes by year-end 2026. Odds now show the Fed is 88% likely to cut rates in September, with 2 rate cuts expected this fall…

…The market today is cheaper than at the tech bubble peak despite better fundamentals. The S&P 500, at 22x forward twelve-month earnings, is ~15% cheaper than the peak of the tech bubble at 25.5x, despite having 60% higher profit margins and 10% better ROE. When compared to the 10yr, which traded at 15.9x at the height of the tech bubble, equities were 10 turns more expensive then versus 1 turn less expensive today. On a justified P/E basis, fundamentals and bond yields would suggest the market today should trade at 24x, or slightly above the current multiple of 22x…
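As a quick sanity check of the excerpt's arithmetic (all inputs copied from the passage above, not independently sourced):

```python
# Valuation figures quoted in the excerpt above.
pe_today = 22.0        # S&P 500 forward twelve-month P/E today
pe_bubble = 25.5       # forward P/E at the tech-bubble peak
ten_year_bubble = 15.9 # 10yr's bubble-era multiple

# Discount of today's multiple versus the bubble peak.
discount = 1 - pe_today / pe_bubble
print(f"~{discount:.0%} cheaper than the bubble peak")  # ~14%, roughly the ~15% cited

# "Turns" of P/E by which equities exceeded the 10yr at the bubble peak.
print(pe_bubble - ten_year_bubble)  # ~9.6 turns, roughly the "10 turns" cited
```

The quoted figures hold up to within rounding.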

…Market concentration today looks much more aligned with fundamentals. During the tech bubble, the concentration of the top 10 largest stocks at 27% was nearly 2x above its earnings contribution. The expected earnings growth that was priced in failed to materialize. Today, the weight of the top 10 stocks relative to their earnings contribution is much more aligned at 35% and 32%, respectively. While the top 10 stocks in the S&P at 38.3x is above the tech bubble at 34.4x, Tesla at 145x is meaningfully skewing the data. Excluding Tesla, the top 10 today trade at 26.5x, or ~25% below the tech bubble peak despite returns on capital that are >2x higher.

2. This Is Why America Is Losing to China – Ross Douthat, Sophia Alvarez Boyd, and Dan Wang

Wang: I decided to take two friends and go on a lengthy bike ride in China’s southwestern province of Guizhou. This is a land where a local said, “Not three feet of land is flat, not three days go by without rain and not a family has three silver coins.”

China’s fourth-poorest province, I was surprised to see, had much better levels of infrastructure than one could find in much wealthier places in the United States, like New York State or California.

We saw very tall bridges all around us. We saw a guitar-making hub. We saw a lot of fancy new roads that were a cyclist’s dream. And it was only afterward when I realized how bizarre it was that China’s fourth-poorest province — about the level of G.D.P. per capita of Botswana, much less than Shanghai or Guangdong — was able to build all of these things.

It is a province with 11 airports, 50 of the highest bridges in the world and brand-new, spiffy highways — and that’s because China was just building a lot in its equivalent of a South Dakota or West Virginia…

…Wang: I think that the first and most important part of China’s technological success has to do with something I call process knowledge.

Process knowledge is also known as tacit knowledge, also known as industrial expertise. In a kitchen analogy, it is something like the recipe, and the hardware is something like the stoves and the pots and the pans.

But let’s say, Ross, we give someone who’s never cooked a day in his life the most well-equipped kitchen, as well as the most exquisitely detailed recipe. Are we sure that this person will be able to do something as simple as frying an egg for breakfast?

I’m not sure if that person will burn the kitchen down in some big way.

Douthat: My children have often given evidence for that hypothesis.

Wang: Yes. And I think the crucial part of technology is actually all of this tacit knowledge, process knowledge that we can’t really write down.

That is the core part of what has been driving China’s technological advantage. It started when China started making pretty simple things — socks, T-shirts, all these things that we think and know are not terribly important — before they get to slightly more complex things, like shoes.

Then they get to everything that now includes iPhones and electric vehicle batteries, and they are really good at climbing this ladder.

China’s hardware capital, Shenzhen, was mostly a backwater — making textiles all the way up until 2008, when Shenzhen started producing Steve Jobs’s iPhones.

iPhones started rolling off the line and you had this enormous work force, hundreds of thousands of people making the most sophisticated consumer electronics in the world, making the next consumer drones, more sophisticated electronics. And I think that is really the basis of China’s technology advantage: It’s just these gigantic investments and work force.

The state sometimes gets in the way; the state sometimes harnesses this work force. You also have a lot of entrepreneurial energy. I’m not sure if I wanted to define it as state capitalism with Chinese characteristics, but I just view it as technological catch-up.

Douthat: Right, but what is the difference, then, between that model and ours? Part of your argument is that America has lost a lot of that knowledge through the process of outsourcing and allowing factories to move overseas and allowing deindustrialization to happen, and becoming an information and financial services and service economy — a very rich one, but not an industrial economy in the way that China is.

I want to understand how much of this is saying there are engineering minds in the Politburo who made these choices that maybe you can only make in an authoritarian society, or maybe we could have made different choices ourselves in the U.S.?

How much of it is that versus some other element of competition or culture in China right now?

Wang: I think the crucial mistake in the U.S. was that it wasn’t even a choice that the U.S. made to outsource a lot of manufacturing. Now, there is this line that politicians like to trot out that China stole all the jobs — and sure, that’s one framing of it.

But I think a more accurate framing is that since the 1990s, big American manufacturers had been actively moving their production to China, and the U.S. government did almost nothing to restrain them.

I’m not sure whether that was actually a really deliberate choice plotted out by the Council of Economic Advisers advising Bill Clinton. Maybe it was, but I think this was just a process of business lobbying saying: Well, we need to tap into this market and produce at these cheaper places.

And something that the Communist Party actively decided was that they were going to import big American manufacturers in the 1990s and 2000s, Apple, Tesla.

If they want to build their products here, we are going to completely welcome Steve Jobs and Elon Musk to train our workers and make them as good as they can be.

That was a more conscious decision, I think, made by engineers who realized they had to catch up to the global frontier. They couldn’t do it with China’s existing level of technology, and they were going to have Americans help them…

…Wang: I think you’re absolutely right that America is highly dynamic, and I don’t want to count out America in this stage of competition. I think at various points the U.S. will look weak. At various points it will look strong.

But what are the stakes here? Because I think there is still a broad view in the U.S. that deindustrialization has been pretty bad — not just for regions like Pennsylvania or Michigan, where the deindustrialization has been felt pretty badly.

There’s also a pretty clear loss of manufacturing expertise that is represented in the declining fortunes of American apex manufacturers. Companies like Intel, Boeing, Detroit automakers and now, increasingly, Tesla.

They’ve had mostly bad news over the last few quarters, last few years. In the case of Detroit, the last few decades. Apex manufacturers are not working very well.

If we take a look at the early days of the Covid pandemic, the U.S. manufacturers were not very good at making simple products either — necessary products, like cotton swabs and cotton masks. And they weren’t able to really rejig their supply lines in order to build out critical materials.

If we take a look at the U.S. defense industrial base, after the U.S. shipped a lot of munitions to Ukraine for its self-defense against Russia, the U.S. hasn’t really been able to rebuild its munition stockpiles.

If we take a look at naval ships with the U.S. Navy, every class of ships is now behind schedule…

…Douthat: As a potential scenario for Chinese success. How could China, how could this model fail? What do engineers get wrong?

Wang: Engineers are meddling extensively in the economy. And maybe we will wake up and find one day that central planning is a ginormous failure and the Chinese will not be able to fundamentally overcome these contradictions in the model of state capitalism with Chinese characteristics.

That is a potential scenario in which the extensive meddling that has scared the living daylights out of a lot of venture capital investors in China, as well as a lot of entrepreneurs who would really prefer not to suffer through a lot of the edicts of the Politburo — they decide to not contribute so much to the great rejuvenation of the Chinese people.

I think that a lot of people have been pretty extensively burned out by the mistakes and some of the foibles of the Communist Party. A lot of what I have seen is that many young Chinese are willing to take leave of the great rejuvenation that is conducted in their name.

We have a lot of data on Chinese entrepreneurs, a lot of wealthy Chinese people who would much rather live their lives in Chinese communities like Irvine, Calif., by buying some property and just having their businesses be established in Singapore, and still not really quite trusting the Communist Party to respect everything that they want to do.

Young Chinese creative types are interested in smoking dope, just as young California types may be. They are smoking dope in Chiang Mai. I’ve spent a little bit of time seeing these people who are just as into marijuana, as well as cryptocurrencies, as folks are in Silicon Valley.

We also see a lot of Chinese migrants who are not necessarily rich, who are not necessarily the creative types, dare to fly to Ecuador, which has been visa-free for a period of time to the Chinese, and try to walk across the Darién Gap — a perilous journey to cross to the southwestern border of the United States.

At its peak in 2024, the U.S. was apprehending something like 30,000 to 40,000 Chinese who were trying to cross over into Texas. It still blows my mind that many people would try to do that to escape the regime…

…Douthat: Let’s end with advice for the United States. What are the actual implications of your analysis — and especially the bull’s case that we started with, the Chinese century case for what the U.S. should do right now? What should we be doing differently if China is poised to be as powerful as you think it might be?

Wang: I think that the U.S. should first and foremost rebuild its manufacturing base. That follows quite naturally from a lot of my analysis of China’s greatest strength, which is that China is a manufacturing superpower and China is poised to further deindustrialize Europe and it is poised to further deindustrialize the United States as well.

I am skeptical that President Trump’s efforts to reindustrialize America through the tariffs have been very effective. I am more positive about the Biden administration’s policies on efforts to reshore through industrial policy. But we can still see a lot of flaws with that approach as well.

Douthat: Do you think tariffs — essentially trade war — can’t work, in your view, because China has become too strong and resilient?

Wang: I think that the trade war, as prosecuted right now through the tariffs, is not going to be very effective. If we just take a look at the manufacturing employment data since Liberation Day in April — with the next jobs release, I’m not sure if we’ll get that data probity back — the U.S. has lost about 40,000 manufacturing workers.

It is not a natural fit if the U.S. is to become a technological, scientific superpower to advance its science by denying a lot of funding to scientific agencies like the National Science Foundation and the National Institutes of Health.

I think that universities, flawed as they are, are still driving a lot of American innovation and scientific advancements, and it also doesn’t make a lot of sense to attack universities in order to save the scientific base.

And it really doesn’t make sense to try to deport a lot of workers who may be working in the construction industry or the manufacturing industry, or to frighten away a lot of high-skilled researchers who may want to be in the U.S. from Europe or Asia to do a lot of their work here. So I think that as prosecuted, the trade war is not making a lot of sense.

The industrial push in the U.S. is not making a lot of sense. Maybe there’s something positive to be said about Trump’s energy agenda in terms of building more nuclear power, in terms of building more facilities online. Maybe there’s something positive about the deregulatory agenda. I can certainly see that case, but I certainly see more headwinds than tailwinds.

3. Are We at Bubble-Level Valuations? – Ben Carlson

Here’s the monkey wrench — Bernstein also wrote about why regression to the mean can be so tricky outside of science:

There are three reasons why regression to the mean can be such a frustrating guide to decision-making. First, it sometimes proceeds at so slow a pace that a shock will disrupt the process. Second, the regression may be so strong that matters do not come to rest once they reach the mean. Rather, they fluctuate around the mean, with repeated, irregular deviations on either side. Finally, the mean itself may be unstable, so that yesterday’s normality may be supplanted today by a new normality that we know nothing about…

…This is the CAPE ratio going all the way back to a time when Francis Galton was still alive: [Average of 17.6x since 1881, and average of 28.3x over past 30 years]

What’s more relevant here — the 150+ year full history or the past 30 years? Which average is more relevant?…

…Last week I wrote A Short History of the S&P 500 which looked at the composition change to the index over time in terms of the types of stocks. The S&P 500 was full of capital-intensive industrials and railroad stocks for much of its history. These were relatively low-margin businesses that required a large number of employees and lots of physical assets that needed to be replaced over time.

Today’s companies have more intangible assets and are far more efficient.

Take a look at average margins by decade going back to the 1990s and you can see this shift happening:

Every decade the average moves a little higher.

This was supposed to be the most mean-reverting series in all of finance. Market historians have been shouting it from the rooftops for the past 15 years. And they were wrong…

…It’s interesting to note that the biggest crash on this list – the Great Financial Crisis – started at relatively muted valuation levels. Stocks were not insanely overvalued heading into the fall of 2007. It’s just that no one saw earnings were about to fall off a cliff.

Picking tops is not easy.

4. Finding Fraud – Farrer 36 Asset Management

One of the first things I do when reading an annual report is search the PDF for the term “Material Weakness” – you’d be surprised how often you get a positive hit. A material weakness is a flaw, or combination of flaws, in a company’s internal controls over financial reporting that creates a “reasonable possibility” of a significant error occurring in the financial statements. For example, take Evolv Technologies, which declared a material weakness in its 2024 annual report.

The discovery of the accounting mishap (it turns out an employee was overstating sales) sent the stock tumbling 50%…

…Many ‘material weakness’ declarations get remedied, or don’t turn out to be much, but their existence is cause for more work…
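The screen described above, scanning an annual report for "material weakness" mentions, can be sketched in a few lines. Here the text extraction is stubbed with plain strings; with a real filing you would extract page text first (for example via a PDF library such as pypdf). All names and the toy "report" are illustrative:

```python
def find_term(pages, term="material weakness"):
    """Return (page_number, snippet) for each page containing the term."""
    hits = []
    for i, text in enumerate(pages, start=1):
        pos = text.lower().find(term.lower())
        if pos != -1:
            # Grab a little surrounding context for manual review.
            snippet = text[max(0, pos - 40): pos + len(term) + 40].strip()
            hits.append((i, snippet))
    return hits

# Toy "annual report" with one positive hit.
pages = [
    "Revenue grew 12% year over year.",
    "Management identified a material weakness in internal controls over financial reporting.",
    "Forward-looking statements are subject to risks.",
]
print(find_term(pages))  # hit on page 2
```

A hit is a prompt for more work, not a verdict; as the author notes, many declarations are remedied.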

…Swedish small cap Intellego has been on a tear recently – with the stock up more than 300% this calendar year. The stock is being driven by impressive revenue (+152% yoy in Q1 2025) and profit growth (+162%). Given this, you would expect that operating cash flow would have also exploded. But would it surprise you to learn that it has instead decreased over the same time?

This is because much of Intellego’s revenue, while recorded, has not actually been received by the company. Receivables have increased 6x over the same period.

The above raises the obvious question – are the revenues real? Let me be clear, I am not stating that this is fraud – the company has explained that some of their older contracts gave overly loose terms to their clients, and newer contracts have stricter terms. However, such a large mismatch between profits and cash should give any investor pause…
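The mismatch described here can be reduced to a simple screen. The sketch below is illustrative only: the −10% cash-flow figure is a stand-in (the article says only that operating cash flow decreased), and the 0.5 gap threshold is an arbitrary choice.

```python
def accrual_red_flag(rev_growth, ocf_growth, recv_growth, gap=0.5):
    """Flag when reported revenue growth far outruns cash generation while
    receivables grow even faster than revenue (growth rates as decimals)."""
    return (rev_growth - ocf_growth > gap) and (recv_growth > rev_growth)

# Intellego per the article: revenue +152% yoy, receivables up ~6x (+500%);
# operating cash flow decreased, so -10% is used here as a stand-in.
print(accrual_red_flag(1.52, -0.10, 5.00))  # → True
```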

…Many of Enron’s troubles lay with CFO Andy Fastow’s creation of SPVs which he and his family owned. These vehicles had the dual purpose of raising billions for Enron (and thus allowing the consolidated balance sheet to appear debt-free) and paying himself millions of dollars…

…Going back to the Enron example, even though they showed positive operating cash flow in three annual reports prior to declaring bankruptcy, their working capital assumptions raised alarms. You can see from the above table that from 1998 to 2000 (read right to left), receivables jumped (see the previous example for what that implies), but to compensate, there was also a significant jump in payables…

…For years Yes Bank had posted numbers too good to be true. Their loan book grew much faster than peers, margins and profits were higher than its comparable set, and all this despite exposure to troubled sectors like real estate, airlines, and telecoms. It turns out that Yes Bank was underreporting stressed loans (they reported NPAs under 1%, whereas the RBI showed a 400-500bp difference). When the truth was revealed we saw a 96%+ drop in stock price and jail for the founder.

5. Why retention is so hard for new tech products – Andrew Chen

Just as there are laws of physics, weirdly there are some constant patterns that keep cropping up over time. Here are a few that I’ll share:

  • You can’t fix bad retention. No, adding more notifications will not fix your retention curve. You can’t A/B test your way to good retention
  • Retention goes down, it doesn’t go up. And weirdly, it decays (oh, does it decay) at a predictable half life. Early retention predicts later retention.
  • Revenue retention expands, while usage retention shrinks. Good news: You lose people over time, but the ones that remain sometimes spend more money!
  • Retention is relative to your product category. There’s nature, and there’s nurture. Sorry, you’ll never make a hotel booking app a daily use product
  • Retention gets worse as users expand and grow. The best users are early and organic. The worst users come after that
  • Churn is asymmetric. It’s far easier to lose a user forever than to re-win them back
  • Retention is weirdly hard to measure. Seasonality is a real thing. New tests throw things off. Bugs happen. D365 (day-365 retention) is a real metric but you can’t wait a year for it
  • Crazy viral growth with shitty retention fails. We’ve run this experiment many many times already, across multiple platforms and categories
  • Great retention is magic. When you see it out in the wild, it’s amazing…
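The “predictable half life” point above can be made concrete with a toy decay model. The 40% day-1 retention and 14-day half-life below are assumed numbers for illustration only (real retention curves usually flatten to a plateau rather than decaying all the way to zero):

```python
def retention(day, d1=0.40, half_life=14):
    """Toy cohort-retention curve: fraction of a cohort still active on `day`,
    anchored at day-1 retention `d1` and halving every `half_life` days."""
    return d1 * 0.5 ** ((day - 1) / half_life)

for day in (1, 15, 29):
    print(day, round(retention(day), 2))  # 1 0.4, 15 0.2, 29 0.1
```

This is why early retention predicts later retention: fitting `d1` and the half-life from the first few weeks pins down the rest of the curve.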

…You might read all of this and still have a big question: So wait, how do you get to great retention? (If I knew the answer in a deterministic way, my job as a startup investor would be so much easier, wouldn’t it?)

But let’s try our best. In my points above, there’s a few clues:

  • The idea really matters.
  • If you want a high retention product, you need to pick a category that is high retention already.
  • You need to pick a product category where you already use an existing product every day.
  • You’re going to build something that directly competes against that.
  • If you win, then you’ll stop using that other product and use your product instead.

That’s a high bar, but I think it’s a good start…

…The natural counterpoint is that new markets are often more exciting than existing ones. Isn’t tech about building brand new things rather than innovating 20% on old stuff? Of course this is true, but I think this is the tiny tiny minority of products.

My counterpoint to this counterpoint is that most products actually have some kind of prior lineage, even if those prior products are quickly forgotten.

Before Instagram there was Hipstamatic, which had become the #1 paid photo app in the early App Store. It demonstrated the success of photo filters. Of course Google was not the first search engine, it was actually #10 or whatever, after Lycos, Excite, Infoseek, etc., which demonstrated consumers wanted search but that it was impossible to monetize. Tesla was not the first electric car, nor iPhone the first smartphone. Sometimes it’s the 10th iteration that matters. Some call this “last mover advantage” rather than first mover. I think that’s an important point.

Yet sometimes new things do happen. Uber was created to turn an existing offline action — calling a cab — into an app, not because there was already a hugely successful ridehailing app. (And no, not Lyft — it was a weird bus booking thing at the time). The same is largely true of ChatGPT: OpenAI spent five years between inception and v3, which really took off, without any real blueprints for what it might replace. These types of journeys are remarkable, and the tech industry is better off for it, because they involve real risk as part of new category creation.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Apple, Meta Platforms (parent of Instagram), and Tesla. Holdings are subject to change at any time.

Problems With Oracle’s AI Growth?

Oracle’s management is projecting a 14x increase in AI revenue over the next five years. But the picture is not as rosy as it seems.

Oracle Corporation’s (NYSE: ORCL) management stunned stock market participants earlier this week during the company’s conference call for the release of its first-quarter earnings for FY2026 (fiscal year ending 31 May 2026). Management announced stupendous future growth for Oracle’s Cloud Infrastructure business, driven by an enormous increase in RPO (remaining performance obligations) because of AI-related demand:

“We have signed significant cloud contracts with the who’s who of AI, including OpenAI, xAI, Meta, NVIDIA, AMD and many others. At the end of Q1, remaining performance obligations, or RPO, now stand at [US]$455 billion. This is up 359% from last year and up [US]$317 billion from the end of Q4. Our cloud RPO grew nearly 500% on top of 83% growth last year…

…The enormity of this RPO growth enables us to make a large upward revision to the Cloud Infrastructure portion of our financial plan. We now expect Oracle Cloud Infrastructure will grow 77% to [US]$18 billion this fiscal year and then increase to [US]$32 billion, [US]$73 billion, [US]$114 billion and [US]$144 billion over the following 4 years. Much of this revenue is already booked in our [US]$455 billion RPO number, and we are off to a fantastic start this year.”

For context, Oracle ended FY2025 with total revenue of US$57.4 billion, and Cloud Infrastructure revenue of merely US$10.2 billion. The newly expected windfall for Cloud Infrastructure drove Oracle’s stock price 36% higher the day after its FY2026 first-quarter earnings.

But when I looked at the details of Oracle’s RPO and financials, I found potentially serious problems with the company’s AI-growth story. 

Problem 1: Risky customer?

During the earnings conference call, Oracle’s management did not name the customers responsible for the massive increase in the company’s RPO. But a subsequent article from the Wall Street Journal revealed that OpenAI had recently signed a US$300 billion, five-year deal with Oracle – in other words, nearly 95% of Oracle’s sequential US$317 billion increase in RPO in the first quarter of FY2026 came from just OpenAI.

Intense customer-concentration alone can be a headache for any company. But when the customer is itself burning lots of cash, it can be a thunderclap headache. OpenAI’s leaders expect the company to earn around US$13 billion in revenue this year, but its deal with Oracle works out to an annual average spend of US$60 billion. Moreover, The Information reported earlier this month that OpenAI’s leaders are now forecasting significantly higher cash burn over the next few years than recently expected:

“OpenAI projected its cash burn this year through 2029 will rise even higher than previously thought, to a total of [US]$115 billion. That’s about [US]$80 billion higher than the company previously expected…

…The company projected it will burn more than [US]$8 billion this year, or roughly [US]$1.5 billion higher than its prior projection from earlier this year. Cash burn will more than double to more than [US]$17 billion next year—[US]$10 billion higher than what the company earlier projected.

And in 2027 and 2028, the company projects to burn roughly [US]$35 billion and [US]$45 billion, respectively. In the prior projection, the company said its 2028 cash burn would be [US]$11 billion, meaning the new estimate is more than four times higher.”

OpenAI’s spending plans with Oracle will have to depend on the largesse of would-be investors and lenders, so there’s no guarantee that OpenAI will have access to funding in the future. In the meantime, Oracle will have to procure the AI hardware (mostly AI chips) ahead of time. This brings me to the second potential problem.
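The customer-concentration arithmetic above is simple enough to spell out (figures in US$ billions, taken from the article):

```python
deal_total, years = 300.0, 5          # US$300 billion Oracle deal over five years
annual_spend = deal_total / years     # 60.0 → US$60 billion per year
openai_revenue = 13.0                 # OpenAI's expected 2025 revenue, US$ billions
# Average annual committed spend is several times OpenAI's current revenue.
print(annual_spend, round(annual_spend / openai_revenue, 1))  # 60.0 4.6
```

In other words, the committed annual spend is roughly 4.6 times OpenAI’s expected revenue this year, which is why the funding question matters.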

Problem 2: Risky finances? 

Purchasing AI hardware requires capital. Lots of capital. And Oracle’s not in the best financial shape for this.

As of 31 August 2025, Oracle had US$11.0 billion in cash and marketable securities, but a staggering US$91.3 billion in debt, giving a high net-debt position of US$80.3 billion. If Oracle’s operating lease liabilities are included, the net-debt position rises further to US$94.4 billion. Oracle’s trailing operating cash flow and net income are US$21.5 billion and US$12.4 billion, respectively. Using the lower net-debt figure gives Oracle net-debt-to-operating-cash-flow and net-debt-to-net-income ratios of 3.7 and 6.5. These ratios suggest Oracle is unable to increase its debt significantly without risking its financial health. To be clear, the ratios are high not because Oracle’s trailing operating cash flow and net income are temporarily compressed; Table 1 below shows Oracle’s operating cash flows and net incomes for FY2021-FY2025.

Table 1; Source: Oracle earnings releases
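The leverage ratios quoted above can be reproduced from the reported figures (all in US$ billions; the operating-lease figure is backed out from the article as 94.4 minus 80.3):

```python
cash, debt = 11.0, 91.3               # US$ billions, as of 31 August 2025
net_debt = debt - cash                # 80.3
leases = 94.4 - net_debt              # operating lease liabilities, backed out: ~14.1
ocf, net_income = 21.5, 12.4          # trailing twelve months, US$ billions
# Net-debt-to-OCF and net-debt-to-net-income ratios:
print(round(net_debt / ocf, 1), round(net_debt / net_income, 1))  # 3.7 6.5
```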

Oracle’s management was asked during the FY2026 first-quarter earnings conference call about the capital expenditures needed to fulfill the company’s RPO. Management was coy and suggested that Cloud Infrastructure’s projected growth would happen in an asset-light way: 

“As I mentioned in the prepared remarks, and as I’ve said very clearly beforehand, we do not own the property. We do not own the buildings. What we do own and what we engineer is the equipment. And that’s equipment that is optimized for the Oracle Cloud. It has extremely special networking capabilities. It has technical capabilities from Larry and his team that allow us to run these workloads much, much faster. And as a result, it’s much cheaper than our competitors, depending on the workload.

Now because of that, what we do is we put in that equipment only when it’s time and usually very quickly, assuming that our customer accepts it, we’re already generating revenue right away. The faster they accept the system and that it meets their needs, the faster they start using it, the sooner we have revenue. This is, in some ways, I don’t want to call it asset-light from the finance world, but it’s asset pretty light.”

I disagree with management’s “asset pretty light” characterisation. Earlier, I mentioned that Cloud Infrastructure’s revenue was expected to increase from US$10.2 billion in FY2025 to US$18 billion in FY2026. During the earnings conference call, management projected US$35 billion in capital expenditure in FY2026, up 65% from US$21.2 billion in FY2025. I think it’s reasonable to assume that most of the US$35 billion in expected capital expenditure for FY2026 will be for the Cloud Infrastructure business, so we’re looking at a capital-expenditure-to-revenue ratio of nearly 2 (US$35 billion over US$18 billion). That’s hardly “asset pretty light”.

Exacerbating the problem for Oracle is that its operating cash flow in FY2025 was just US$20.8 billion, meaning it had negative free cash flow during the year. Unless Oracle’s operating cash flow increases by nearly 70% in FY2026, the company will have to raise capital externally for its projected capital expenditures. I already mentioned that Oracle’s heavy net-debt position is an obstacle to any large future increases in debt. This said, issuing shares could work, given Oracle’s current market capitalisation of US$922 billion. Oracle’s high price-to-earnings (P/E) ratio of 76 also makes issuing shares a palatable option. Nonetheless, there could still be material dilution given the potentially significant capital expenditures needed to support Oracle’s RPO. Coming back to the possibility of Oracle’s operating cash flow increasing by nearly 70% in FY2026, I think it’s very, very unlikely because of the lower margin of Cloud Infrastructure, which brings me to the third potential problem. 
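The capital-intensity and implied cash-flow-growth figures follow directly from the guidance numbers:

```python
capex, cloud_rev = 35.0, 18.0    # FY2026 guidance, US$ billions
# Capex-to-revenue ratio for Cloud Infrastructure: nearly 2x.
print(round(capex / cloud_rev, 2))       # 1.94
ocf_fy2025 = 20.8                # FY2025 operating cash flow, US$ billions
# Growth in operating cash flow needed just to cover FY2026 capex internally:
print(round(capex / ocf_fy2025 - 1, 2))  # 0.68, i.e. ~68%
```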

Problem 3: Margin pressure?

Cloud Infrastructure has been Oracle’s fastest-growing business in the past few years. Table 2 shows the changes in Cloud Infrastructure revenue and  Oracle’s total revenue for FY2023-FY2025. 

Table 2; Source: Oracle earnings releases

Cloud Infrastructure revenue is likely all reported under Oracle’s Cloud services and license support segment. What has happened over the same period shown in Table 2 is that the Cloud services and license support segment’s operating expense has grown much faster than its revenue, as illustrated in Table 3, suggesting that Cloud Infrastructure is a lower-margin business for Oracle.  

Table 3; Source: Oracle earnings releases

This brings into question how much Oracle’s net income and cash flow can benefit from the rapid projected growth in Cloud Infrastructure revenue. If Cloud Infrastructure’s revenue indeed grows as management expects, there’s no doubt that Oracle’s net income will grow – but to what extent remains to be seen. It’s worth noting that with Oracle’s shares carrying a P/E ratio of 76 at the moment, the market is expecting stellar net income growth.

Conclusion

Larry Ellison, Oracle’s founder, chairman, and chief technology officer, once said:

 “Why do we do these things? George Mallory said the reason he wanted to climb Everest was because it’s there. I don’t think so. I think Mallory was wrong. It’s not because it’s there. It’s because we’re there, and we wonder if we can do it…

…So how do I get off this merry-go-round? How do I stop when I’m winning? It’s hard for me to quit when I’m losing, and it’s hard for me to quit when I’m winning. It’s just hard for me to quit. I’m addicted to competing.” 

I wouldn’t count out any business leader with such a ferocious competitive spirit. But there are potential problems with Oracle’s AI-growth story, namely, (1) high revenue-concentration from a risky customer in OpenAI, (2) having a debt-laden balance sheet while having to invest heavily in AI chips, and (3) margin-compression from lower-margin AI-related services. I wonder how this will all work out. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Meta Platforms. Holdings are subject to change at any time.

What We’re Reading (Week Ending 07 September 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 07 September 2025:

1. The ROI Question – Abdullah Al-Rezwan

A friend recently DM-ed me to highlight one of the quotes from Nvidia’s CFO in their recent earnings: “New NVFP4 4-bit precision and NVLink 72 on the GB300 platform delivers a 50x increase in energy efficiency per token compared to Hopper, enabling companies to monetize their compute at unprecedented scale. For instance, a $3 million investment in GB200 infrastructure can generate $30 million in token revenue, a 10x return.”

10x return? That’s an eye-popping number. My friend was understandably a bit skeptical of this claim, so he asked ChatGPT to show the math and some reasonable assumptions behind it…

…Clearly hyperscalers aren’t realizing such revenue from their investments in Nvidia chips yet…

…Batch size, which is the number of concurrent user requests processed simultaneously, is the single most important operational factor for maximizing throughput. The highest throughput numbers are always achieved with the largest possible batch sizes.

However, large batch sizes increase latency…

…To maintain low latency, providers must deliberately use smaller batch sizes. This inherently sacrifices aggregate throughput to ensure a good user experience. The 1M tokens/sec benchmark mentioned in the ChatGPT screenshot above is likely achieved at latencies that would be unacceptable for real-time use…
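The throughput-versus-latency tension can be sketched with a toy serving model. The constants below (50 ms fixed overhead per step, 8 ms per concurrent request) are invented for illustration, not measurements of any real system:

```python
def step_time_ms(batch):
    # Toy latency model: fixed per-step overhead plus a per-request cost.
    return 50 + 8 * batch

for batch in (1, 8, 64):
    throughput = batch * 1000 / step_time_ms(batch)  # requests per second
    # Larger batches raise aggregate throughput, but each request waits longer.
    print(batch, step_time_ms(batch), round(throughput, 1))
```

Even in this toy model, going from batch 1 to batch 64 multiplies throughput several times over while roughly decupling per-step latency, which is the trade-off that forces latency-sensitive providers toward smaller batches.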

…While 20% utilization may seem conservative, achieving this average utilization consistently (24/7/365) with monetized workloads may not be super easy in inference. AI inference demand is often “peaky.” Infrastructure built for peak load sits idle during off-hours…

…Hyperscalers do not only run the most expensive models. A likely material portion of their workload involves smaller, cheaper models (often <$1 per 1M tokens), reducing the actual blended revenue shown in my screenshot above. If the average realized price drops to $2/M tokens, the idealized revenue drops from $31.5M to $12.6M in my example (ceteris paribus).
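The price sensitivity described above can be reproduced by scaling the idealized revenue linearly with blended token price. The US$5 per 1M tokens baseline is an inference backed out from the article’s US$31.5M and US$12.6M figures, not a number the author states directly:

```python
baseline_revenue = 31.5   # US$ millions, idealized revenue in the example
baseline_price = 5.0      # implied US$ per 1M tokens (since 31.5 * 2/5 = 12.6)
for price in (5.0, 2.0, 1.0):
    # Revenue scales linearly with the blended price per 1M tokens.
    print(price, round(baseline_revenue * price / baseline_price, 1))
```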

2. AI Agents and the Future of Grocery Delivery – Thomas Reiner

Whether it’s OpenAI, Gemini, Siri, or some other tool, every consumer will have a personal agent in their pocket. It will book travel for you, make dinner reservations, manage your schedule, and for purposes of this discussion it will order your groceries for you.

For the average American family that gets groceries once per week, it’ll know what you often order, it’ll recommend recipes, it’ll monitor past usage and wastage of food products, and it’ll know your consumer preferences around store loyalty. If you change your plans and tell your agent that you’re hosting a dinner party for 8 people, it can assist by recommending what to serve and automatically ordering it from the grocery store…

…Looking across these models there are two key areas where there are middlemen to be disrupted: 1) Grocery Delivery Marketplaces and 2) White Label Solutions. The Age of AI is going to be the age of efficiency and of wringing middlemen out of the equation. Grocery Delivery Marketplaces are definitely middlemen and I’d argue that white label solutions from providers like Instacart Storefront sort of are…

…AI agents mean that the importance of brand goes down, while the importance of service goes up, and that’s where all the incremental dollars from both players are going. They can best position themselves to win in a world where AI agents make decisions based on best outcome (cost, quality, speed).

DoorDash with their DashMart concept is trying to take themselves out of the middleman equation and focus on being the 1P provider, which is a lot more defensible in an AI agentic world.

The biggest challenge will be crossing the trust chasm. While consumers have high trust in Amazon for shelf-stable goods, it’s non-existent for fresh goods. Early returns from Amazon’s same-day perishables trial showed 75% of consumers were first-time perishables shoppers at Amazon, but only 20% reordered multiple times within the first month. “U.S. shoppers have shown they prefer to buy fresh goods from retailers that run brick-and-mortar stores, as evidenced by the struggles of online-only grocers like Peapod and FreshDirect.”…

…Generally, the rise of AI agents paints a much more mixed picture for the delivery marketplaces. On one hand, an AI that can spontaneously order anything might increase demand for delivery; on the other hand, these services might be commoditized as the consumer UX slowly fades away and the importance shifts to the underlying speed, convenience, and price.

3. An Interview with Cloudflare Founder and CEO Matthew Prince About Internet History and Pay-per-crawl – Ben Thompson and Matthew Prince

The reason to talk now, and we’ve talked offline about this a few times, both this year and last year, is your push for this pay-per-crawl concept. Why don’t you give me the high level overview, the pitch from your perspective, which I think has evolved? I would like to think partially based on some of my feedback, but what’s the pitch in September 2025?

MP: Let’s take Cloudflare out for a second and just talk about—

Talk about Matthew, the English student? The student newspaper editor.

MP: This is me channeling inner law professor. Let me give you the history of the Internet and why the Internet exists the way that it does and what’s changing.

This is usually my job, but go ahead.

MP: And you can tell me where I’m wrong, but this is my quick history of the Internet, and apologies to Michelle who hates history lessons.

For the last 25 years, the interface of the Internet has been search, and Google has dominated that space, and Google, their incentives as a company were to have the Internet grow as much as possible because if you have chaos, then the search becomes the organizer of the chaos. But you need incentives for people to actually create content and so Google not only had to create the thing that organized the Internet, but they then had to take the thing that took the traffic of where people went and then helped people monetize that, largely through advertising, although they also helped with subscriptions, and Google was the great patron of the Internet for the last 25 years. The web would not exist the way it does if there were not something like Google out there to create the incentives around.

There were a lot of problems with incentivizing around traffic, we created systems where people would just literally try and create rage-baity headlines to get people to click on things so that they could put ads against them and so not perfect, but we don’t have the Internet that we have today unless we have Google and search funding that.

That is changing. The world is shifting where the interface of the web is shifting from search engines and search engines give you a treasure map and say, “Hey, go figure out what your answer is by clicking on these 10 blue links”, to what are effectively answer engines. So if you look at OpenAI, if you look at Anthropic, if you look at Perplexity, even if you look at modern Google, they are not a search engine, they don’t give you a treasure map. Instead, they give you an answer right at the top of that page. That answer, for most users, 95% of the users, 95% of the time, it’s a better user interface. I’m not anti-answer engines, I’m not anti-AI, I think it’s better in every possible way for that to be what the interface is that we all interact with.

But the problem is that if you get the answer and you don’t get a treasure map, then you don’t generate traffic and if you don’t generate traffic, then the entire business model of the web, which has been based on traffic starts to break down and you can see that, not so much in e-commerce sites, not so much in things that actually sell you the physical thing because if you asked what’s the best camera to buy, even if you get an answer, you’ve still got to go buy it from somewhere. It’s going to take the e-commerce and the people who are selling things that’s going to work but the person who wrote the review—

The great thing about physical products is by definition they are scarce and the problem with text on the Internet is it is not scarce.

MP: It’s not scarce, that’s exactly right, and Google set this expectation that everybody can scrape the Internet for free, but it was never free. The Internet has never been free. Google paid for it for a really long time and the quid pro quo with the content creators was, “We get a copy of your content and in exchange we’ll send you traffic and help you monetize that traffic”.

That quid pro quo breaks down as we shift from search engines to answer engines and so something is going to change. I see three possible outcomes for that. And again, none of this involves — if Cloudflare disappeared tomorrow, this is still happening, one of these three things will happen. One, all of the journalists, academics, and researchers in the world will starve to death and die. And it’s crazy, like when you post this stuff on Twitter, how many people were like, “Well, we don’t really need journalists anymore, we have drones”, and I’m like, “I think we still need journalists”…

If it’s inevitable though, then why does Cloudflare need to be so aggressive? You’re instituting these policies of doing your best to block bots, putting together protocols for recognizing what it’s worth, payments, etc., all very nascent to be sure, a lot to be figured out. But you are not taking the posture of a company that this is inevitable and it’s going to be great, you are being pretty forceful in trying to make something happen.

MP: Well, I think if we weren’t doing it, someone else would. But what I think we have a unique ability to do is we’re really good at stopping things like bots because we do it every day.

So again, it wasn’t like we were sitting around being like, “Hey, what should we do next? Let’s go change the business model of the web”, it was our customers who were publishers were coming to us being like, “We’re dying and we don’t have the technical wherewithal to step in front of it, but we need to stop this, please help”. And honestly, when Neil [Vogel] at Dotdash Meredith was telling me this, I rolled my eyes and I was like, “Publishers, they’re such Luddites, they’re always complaining about the new technology, they’re always complaining about the next thing, this isn’t a big deal”. And Neil and a bunch of others finally said, “Just go pull the data”, and it was only when we actually saw the data, when we saw that over the course of the last 10 years, it’s become 10 times harder to get a click from Google for the same amount of content on that same kind of basis, it’s now 750 times harder with OpenAI, it’s 30,000 times harder with Anthropic.

The business of traffic on the Internet as being the currency is going away and so something either again, either content creation is going to die, it’s going to become futile, or we’ve got to create a new business model. Again, if our mission is to help build a better Internet, this seems squarely in the line with what we should be working on.

So why does Garry Tan say that you are an axis of evil with Browserbase and you should legalize AI agents?

MP: I really don’t understand. I mean, I’m confused by Garry, I think part of it might be that he’s an investor in Perplexity.

Every story needs four characters, you need to have a victim, you need to have a villain, you need to have a hero, and you need to have the village idiot or the stooge. And if you think about it, any news story has those four characters. Right now, the people who have most been the villains have been Perplexity, where they’re doing just actively nefarious things in order to try and get around content companies.

I’ll give you an example of something that we’ve seen them do, which is that if they’re blocked from getting the content of an article, they’ll actually query against services like Trade Desk, which is an ad serving service, and Trade Desk will provide them the headline of the article and a rough description of what the article is about. They will take those two things and they will then make up the content of the article and publish it as if it were fact: “This was published by this author at this time”.

So you can imagine if Perplexity couldn’t get to Stratechery content, they would say, “Oh, Ben Thompson wrote about this”, and then they would just make something up about it and they put your name along it. Forget copyright, that’s fraud, just straight up and that’s the sort of bad behavior of some tech companies that again, I think needs to be called out and punished.

4. Bitcoin TreasuryCos: Lessons From The 1929 Crash – Be Water

The explosive proliferation of Bitcoin treasury companies mirrors that of the 1920s investment trusts, and both gold rushes stem from a perfect storm of greed: intense investor demand for exposure to a scarce asset creates mNAV premiums that promoters rush to monetize. If Goldman Sachs could extract enormous profits from its trust in the 1920s, why couldn’t everyone else? If MicroStrategy can monetize its mNAV premium, why shouldn’t every other company follow suit?

Galbraith documented the explosive growth of trusts in the 1920s:

During 1928, an estimated 186 investment trusts were organized. By the early months of 1929, they were being promoted at the rate of approximately one each business day, and a total of 265 made their appearance during the course of the year…

…The renowned Yale economist Irving Fisher famously declared that stock prices had reached a “permanently high plateau” just prior to the 1929 Crash. Fisher’s declaration exemplified the kind of euphoric confidence that typically marks a market top…

… Fisher’s plateau quote is now infamous, but the lesser-known context that gave rise to it tells a more revealing story. He was actually defending investment trusts as a key support for stock valuations, much as Bitcoiners cite built-in demand from Bitcoin treasuries today. The New York Times reported at the time:

Professor Fisher spoke on the subject of investment trusts and presented a defense for them against recent attacks in which they have been charged with responsibility for many present evils.

Fisher defended trusts on the grounds that these vehicles were awakening people to the superiority of stocks over bonds and providing investors with a superior structure for gaining equity exposure—much as Bitcoin treasury advocates today claim MicroStrategy offers turbocharged “torque” over direct Bitcoin ownership, and Bitcoin itself offers superiority over TradFi assets like fiat currency, stocks, bonds, and real estate:

I believe the principle of the investment trusts is sound, and the public is justified in participating in them, with due regard to the character and reputation of those conducting them. Largely through the influence of the investment trust movement, the public has been waking up to the superior attraction of stocks over bonds. And I believe the operation of the investment trusts, as a whole, has acted to stabilize the stock market rather than to make its fluctuations more violent…

…Saylor’s confidence in monetizing NAV discounts—which is perhaps reasonable for MicroStrategy in isolation—mirrors the same logic 1920s trust managers used to justify buybacks—only to find that such support strategies are ineffective when liquidity across the ecosystem vanishes and selling pressure dominates.

The trusts discovered that buying back shares when investors are selling and credit is tightening is vastly different from issuing shares when investors are buying. Desperate to prop up their stock prices, the trusts began buying back shares at a discount to NAV—a strategy Bitcoin treasury companies will likely adopt with equally disappointing results for most:

The stabilizing effects of the huge cash resources of the investment trusts had also proved a mirage. In the early autumn the cash and liquid resources of the investment trusts were large…But now, as reverse leverage did its work, investment trust managements were much more concerned over the collapse in the value of their own stock than in the adverse movements in the stock list as a whole…

Under these circumstances, many of the trusts used their available cash in a desperate effort to support their own stock. However, there was a vast difference between buying one’s stock now when the public wanted to sell and buying during the previous spring—as Goldman Sachs Trading Corporation had done—when the public wanted to buy and the resulting competition had sent prices higher and higher. Now the cash went out and the stock came in, and prices were either not perceptibly affected or not for long. What six months before had been a brilliant financial maneuver was now a form of fiscal self-immolation. In the last analysis, the purchase by a firm of its own stock is the exact opposite of the sale of stocks. It is by the sale of stock that firms ordinarily grow.

As the crisis deepened and trusts continued to trade at discounts to NAV, they depleted their remaining cash reserves in a desperate—and ultimately self-defeating—effort to support collapsing share prices:

However, none of this was immediately apparent. If one has been a financial genius, faith in one’s genius does not dissolve at once. To the battered but unbowed genius, support of the stock of one’s own company still seemed a bold, imaginative, and effective course. Indeed, it seemed the only alternative to slow but certain death. So to the extent that their cash resources allowed, the managements of the trusts chose faster, though equally certain death. They bought their own worthless stock. Men have been swindled by other men on many occasions. The autumn of 1929 was, perhaps, the first occasion when men succeeded on a large scale in swindling themselves.

5. Technology vs Platform Shift, Portfolio Change – Abdullah Al-Rezwan

Casey Winters made this point almost two years ago, and I think it still holds up pretty well:

What I realized having gone through the internet and mobile platform shifts is that the technological and distribution shifts did not happen at the same time. Platform shifts that create both technological and distribution opportunities happen in a sequence, not all at once…AI has come out and definitely created a technological shift that enables new ways to solve problems that couldn’t be done before. But AI lacks a new distribution channel. ChatGPT is “not it”, as the kids would say. At least not yet…

… Sameer also points out that in a technology shift, users may not even be aware of the tech (it just works), whereas in a platform shift, the change is front and center for the user:

In a technology shift, form factor does not and should not matter. For example, scaling Snapchat’s picture messaging functionality would not have been possible without the shift to cloud computing. While Snapchat’s cloud hosting costs were significant, it would not have been possible to scale it as quickly if it relied on large, operationally complex investments into server infrastructure. The most important part — Snapchat’s end users did not know or care about this in any way. The user interface did not change to call out Snapchat’s “Cloud powered” technology. The biggest changes happened in the backend, not the frontend. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google) and Amazon. Holdings are subject to change at any time.

3 Things That May Drag a REIT’s Distribution Per Unit Lower

Look under the hood at a REIT’s financials to understand if its future distributions are at risk of a decline.

Singapore-listed REITs, or real estate investment trusts, are a favourite investment vehicle for many Singaporean investors.

REITs provide investors with the opportunity to invest in property and also provide much more liquidity than investing directly in real estate. In addition, many REITs in Singapore have juicy trailing distribution yields that can be as high as 9%. And with interest rates likely on the decline, REITs can also benefit from lower interest expenses and so can have higher distributions to unitholders.

Given the above, I did some research on REITs recently to see if there were any that might be attractive opportunities. But as I conducted my study, I noticed some common issues with REITs that may end up being a drag on their future distributions per unit.

Here’s what I found.

Management fees that are paid in units 

A common theme I noticed about REITs in Singapore is that most of them pay the bulk of the REIT manager’s fees with units in the REIT. 

Take a look below at Keppel DC REIT’s (SGX: AJBU) financial statement for the first half of 2025. It shows the adjustments made to Keppel DC REIT’s net profit to determine the income available for distribution. One of the adjustments is the management fees paid in units.

Source: Keppel DC REIT 2025 first-half financial statement

To prop up a REIT’s distribution per unit (DPU), a REIT’s manager can choose to receive its fees in units of the REIT instead of cash, since doing so means more cash can be distributed to unitholders. But this will be a drag on the REIT’s DPU over the longer term for two reasons.

First, once the REIT’s manager opts to receive all or a bigger portion of its fees in cash, the amount available for distribution to unitholders will decline (all other things remaining constant). Second, because the REIT manager opted to receive its fees in units in the past, the REIT had to issue new units, which resulted in a higher unit count; future distributions are thus divided across a larger unit base.
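A hypothetical numerical sketch of these two effects may help. All figures below—the net income, fee, unit count, and the $1.00 unit issue price—are invented for illustration and do not reflect any actual REIT:

```python
# Hypothetical sketch of the two effects described above.
# All figures, including the $1.00 unit issue price, are invented.

def dpu(net_income, fee, fee_in_units_pct, units, issue_price=1.0):
    """DPU when a fraction of the manager's fee is paid in new units."""
    fee_in_units = fee * fee_in_units_pct
    # Fees paid in units are added back to income available for distribution...
    distributable = net_income - fee + fee_in_units
    # ...but paying in units enlarges the unit base
    units_after = units + fee_in_units / issue_price
    return distributable / units_after, units_after

# Year 1: 100% of a $5m fee is paid in units, on 1,000m units outstanding
d1, units_after = dpu(net_income=100.0, fee=5.0, fee_in_units_pct=1.0, units=1000.0)

# Year 2: the manager switches to a 100% cash fee; the unit base is now larger
d2, _ = dpu(net_income=100.0, fee=5.0, fee_in_units_pct=0.0, units=units_after)

print(f"DPU with fee paid in units: {d1:.4f}")  # 0.0995
print(f"DPU with fee paid in cash:  {d2:.4f}")  # 0.0945
```

Both effects show up at once: the switch to cash fees cuts the income available for distribution, and the units issued in earlier periods mean the smaller pot is split across a larger unit base.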

Keppel DC REIT is not the only REIT that does this. In fact, it is common practice across REITs in Singapore. 

Sasseur REIT (SGX: CRPU) is another example. Take a look at the REIT’s condensed income statement for the first half of 2025:  

Source: Sasseur REIT 2025 first-half earnings presentation slide

The base fee of Sasseur REIT’s manager that is paid in cash increased by S$0.4 million from the first half of 2024 to the first half of 2025. This is because the REIT manager opted to receive 30% of its base fee in cash in the first half of 2025, rather than the 20% it chose for the first half of 2024. This is a real case of how the DPU of a REIT can decline simply because a REIT’s manager chooses to receive less of its fees in cash.

The way I see it, the DPU of Sasseur REIT is being propped up by the REIT manager’s decision to receive some or all of its fees in units rather than cash. Once this goes away, DPU will be pressured.

Another example is Frasers Logistics & Commercial Trust (SGX: BUOU). These are the adjustments the REIT made to obtain its distributable income:

Source: Frasers Logistics & Commercial Trust’s FY2025 first-half financial statement

For Frasers Logistics & Commercial Trust, the column on the right of the table refers to the first half of FY2024 and the column on the left refers to the first half of FY2025. In both periods, the REIT added a significant amount of “management fees paid in units” to net income, which puffed up distributable income. But while 100% of the REIT’s management fees were paid in units in the first half of FY2024, only 43% were in the first half of FY2025. Because of this change, Frasers Logistics & Commercial Trust had a smaller upward adjustment to distributable income in the first half of FY2025, which was one of the reasons its DPU shrank year-on-year.

Watch for capital distributions

Another thing to look out for is whether the DPU is being bumped up by one-off or short-term capital distributions. We can return to Frasers Logistics & Commercial Trust as an example.

Source: Frasers Logistics & Commercial Trust’s FY2025 first-half earnings presentation

The image above shows that Frasers Logistics & Commercial Trust’s total distributable income was bumped up by capital distributions in both the first half of FY2025 and the first half of FY2024.

The problem is that capital distributions are one-off or short-term distributions, and are dependent on gains from divestments. A REIT’s manager may decide to retain or distribute capital gains depending on the REIT’s performance for the year in order to “smoothen” out distributions. But as investors, we should note that these are not long-term solutions and the capital distribution buffer may eventually run out.

We can look at Far East Hospitality Trust* (SGX: Q5T) as an example:

Source: Far East Hospitality Trust’s 2025 first-half earnings presentation

Far East Hospitality Trust’s distribution to stapled security holders includes distributions from other gains. The stapled trust divested one of its properties in 2022 and has since been distributing the divestment gains to unitholders at around S$8 million annually; the manager of the trust had decided to distribute the divestment gains over a few years to smoothen the trust’s distribution per stapled security (DPS). But as with all capital distributions, the well will eventually run dry, and absent the capital distribution, the DPS will likely drop.

*Far East Hospitality Trust is technically not a real estate investment trust. Instead, it is a stapled trust consisting of Far East Hospitality Real Estate Investment Trust and Far East Hospitality Business Trust. But for the purposes of this article, there’s no need to split hairs.

Interest rate sensitivity

When looking at how sustainable a REIT’s DPU is, we also need to look at its interest rate sensitivity. REITs have been reporting rising finance costs in the last few years as their debt gets refinanced at higher rates or as their floating-rate debt gets repriced. The rising finance costs have been a drag on their DPU.

This is why I prefer a REIT with a high interest coverage ratio. A higher interest coverage ratio means that a change in interest rates has a smaller impact on distributable income.

Imagine a REIT with an interest coverage ratio of 5, meaning its earnings before interest are five times its finance costs. All else equal, a 20% increase in finance costs will lead to only a 5% decline in distributable income, and hence DPU. Comparatively, a REIT with an interest coverage ratio of 3 will suffer a 10% decline in DPU.
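This sensitivity can be sketched in a few lines of Python. It is a deliberately simplified model that assumes distributable income is simply earnings before interest minus finance costs:

```python
# Simplified model: distributable income = earnings before interest - interest.
# ICR = earnings before interest / interest expense.

def dpu_decline(icr, cost_increase):
    """Fractional fall in distributable income (and hence DPU, all else
    equal) when finance costs rise by `cost_increase`."""
    interest = 1.0
    earnings = icr * interest
    before = earnings - interest
    after = earnings - interest * (1 + cost_increase)
    return (before - after) / before

print(f"{dpu_decline(5, 0.20):.2f}")  # prints 0.05 -> a 5% decline in DPU
print(f"{dpu_decline(3, 0.20):.2f}")  # prints 0.10 -> a 10% decline in DPU
```

The intuition: the lower the coverage ratio, the thinner the cushion of earnings left over after interest, so the same rise in finance costs takes a bigger bite out of distributions.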

CapitaLand Integrated Commercial Trust (SGX: C38U) has an interest coverage ratio of 3.3, shown in the table below, which looks fairly low to me and suggests that the REIT’s DPU is sensitive to interest rate hikes. But we are in a fairly high interest rate environment at the moment, so it is perhaps more common for interest coverage ratios to be on the lower end of the spectrum during this time.

Source: CapitaLand Integrated Commercial Trust 2025 first-half earnings presentation

Final Takeaways

Singaporean investors invest in REITs for steady income.

Although Singapore-listed REITs have historically performed decently, investors still need to assess if a REIT’s DPU can be sustained in the long-term. To do so, they can peer under the hood and determine if the REIT’s DPU could be pressured by (1) a higher unit base, (2) the end of capital distributions, and (3) the end of fees being paid in units.

It is also important to assess how sensitive a REIT’s DPU is to interest rate spikes. Rates may be on a downtrend in the near future, but there may be times ahead when interest rates spike again.

Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 31 August 2025)

Here are the articles for the week ending 31 August 2025:

1. Monetary policy is not about interest rates, it’s about the money supply – Steve H. Hanke and John Greenwood

The ongoing feud between President Trump and Fed Chairman Jerome Powell centers on interest rates. This tells us more about the near-universal view of what constitutes monetary policy than it does about Trump or Powell. While Trump and Powell might quibble over the proper level for the Fed funds rate, they both think monetary policy is all about interest rates…

…Why the obsession over interest rates? One reason hinges on the fact that for the past 30 years or so, macroeconomic models have been neo-Keynesian extensions of dynamic stochastic general equilibrium (DSGE) models. These put interest rates front and center…

…But that’s not what monetarists, who embrace the quantity theory of money, tell us. Unlike the neo-Keynesian macroeconomic models that exclude money, the quantity theory of money states that national income or nominal GDP is primarily determined by the movements of broad money, not by changes in interest rates…
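The quantity theory the authors invoke is conventionally summarized by the equation of exchange; in growth-rate form, it says that if velocity is stable, nominal GDP growth tracks broad money growth:

```latex
MV = PY
\qquad\Longrightarrow\qquad
\%\Delta M + \%\Delta V \;\approx\; \%\Delta P + \%\Delta Y
```

Here \(M\) is broad money, \(V\) its velocity of circulation, \(P\) the price level, and \(Y\) real output, so \(PY\) is nominal GDP.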

…First, let’s consider the case of Japan between 1996 and 2019. Throughout this period, the Bank of Japan’s (BOJ) overnight policy rate lingered at negligible levels, averaging 0.125%. As a result, most economists concluded that monetary policy in Japan was very “easy”. But monetarists, who focused on Japan’s anemic broad money (M2) growth of only 2.8% per year, concluded that monetary policy was “tight”…

…Japan’s inflation averaged a de minimis 0.2% per year in the 1996-2019 period. It is clear that the monetarists were correct…

…Let’s consider the U.S. between 2010 and 2019. During most of this decade, the Fed funds rate was held down at 0.25%. In addition, the Fed engaged in three episodes of quantitative easing (QE). Many concluded that this amounted to very “easy” monetary conditions. They warned that inflation would result. In fact, broad money growth (M2) remained low and stable at 5.8% per year. In consequence, inflation also remained low, averaging just 1.8% per year between 2010 and 2019. As was the case with Japan, interest rates turned out to be a highly misleading indicator of the stance of monetary policy. The growth in the money supply was a much better guide to economic activity and inflation than the course of the Fed funds rate…

…The reason why central bank policy rates are a misguided mechanism for steering and forecasting the course of the economy is because interest rates are, in large part, symptoms of past money growth, not necessarily drivers of future money growth. Changes in the quantity of money, on the other hand, directly fuel spending, and therefore correctly signal the direction of spending and inflation…

 …By ignoring the quantity theory of money and employing neo-Keynesian macroeconomic models, central bankers are often wrong-footed. They think that by managing policy rates, they are controlling monetary policy when in reality, they are just reacting to changes in the quantity of money that occurred in a prior period.

2. Global Crossing Is Reborn… – Praetorian Capital

Let’s start with total datacenter spend for 2025. Insiders think it’s going to clock in at around $400 billion…

…What’s a datacenter made of?? There are three main components: the building and land at roughly a quarter of the cost, all the power systems, wiring, cooling, racking, etc. at about 40% of the cost, and then the GPUs themselves at about 35% of the cost. I am sure I’m off by a few percent in these categories, but I’m relying on AI and we all know it’s still imperfect. I’m assuming that the building depreciates over 30 years, the chips are obsolete in 3 to 5 years, and then the other stuff lasts about 10 years on average. Call it a 10-year depreciation curve on average for an AI datacenter. Which leads you to the first shocking revelation: the AI datacenters to be built in 2025 will suffer $40 billion of annual depreciation, while generating somewhere between $15 and $20 billion of revenue. The depreciation is literally twice what the revenue is…

…With nothing to go on, I’m going to take an optimistic guess here, and say that ultimately, the margins get to positive, and then gradually creep up towards 25%. Why 25%?? I have no idea. It just sounds right because electricity is really expensive and you need a lot of expensive tech nerds to manage the equipment. Honestly, no one really knows where gross margins eventually land, so let’s just run with it, so that we can do some simple math…

…By my math, you need $160 billion of revenue at that 25% gross margin, which gives you $40 billion of gross margin against $40 billion of depreciation. Now, remember, revenue today is running at $15 to $20 billion. You need revenue to grow roughly ten-fold, just to cover the depreciation. Except, no one does anything to break even in business. For a new technology like this, with huge obsolescence risk, what unlevered ROIC would you demand?? Would you want a 20% ROIC?? That’s still dilutive to the ROIC for most of the largest capex spenders. Even at that dilutive ROIC, you’d need $480 billion of AI revenue to hit your target return…
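The excerpt’s back-of-envelope arithmetic can be reproduced directly from its stated assumptions ($400 billion of capex, the author’s rounded 10-year average depreciation life, a 25% gross margin, and a 20% target ROIC):

```python
# Reproducing the excerpt's back-of-envelope math from its own assumptions.
capex = 400.0          # $bn of AI datacenter spend in 2025
avg_life = 10.0        # blended depreciation life in years (author's round figure)
gross_margin = 0.25    # assumed eventual gross margin
target_roic = 0.20     # unlevered return demanded on the capex

depreciation = capex / avg_life                                  # $40bn per year
breakeven_rev = depreciation / gross_margin                      # $160bn
roic_rev = (depreciation + target_roic * capex) / gross_margin   # $480bn

print(depreciation, breakeven_rev, roic_rev)  # prints 40.0 160.0 480.0
```

Against the $15–20 billion of current revenue the author cites, the break-even figure implies roughly a ten-fold revenue increase, and the ROIC-target figure roughly a twenty-five-fold one.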

…$480 billion is a LOT of revenue for guys like me who don’t even pay a monthly fee today for the product. To put this into perspective, Netflix had $39 billion in revenue in 2024 on roughly 300 million subscribers, or less than 10% of the required revenue, yet having rather fully tapped out the TAM of users who will pay a subscription for a product like this. Microsoft Office 365 got to $95 billion in commercial and consumer spending in 2024, and then even Microsoft ran out of people to sell the product to. $480 billion is just an astronomical number…

…While we all remember Pets.Com and the hundreds of other Dot Com startups that flamed away, it was companies like Global Crossing, spending tens of billions on fiber, that facilitated all of this. That fiber, amazingly, is still in use. Global Crossing went bankrupt along the way, as did many of its peers. They overestimated what people would pay for this fiber, not that it would eventually be used or valuable.

Today, as I watch in awe (stupefaction, really) as companies continue to throw endless resources at AI, I remember back to the Dot Com bubble and Global Crossing—fiber was the datacenter of that cycle, and Corning was the NVIDIA of its day (it lost 97% of its share price in the two years after it peaked).

3. Bitcoin TreasuryCos & The Roaring 20s – Be Water

The Bitcoin Treasury craze is either genius or madness—and very possibly some combination of both…

…This is not the first time leveraged financial vehicles promised to democratize access to scarce assets using leverage and the accretive magic of mNAV premiums: the 1920s investment trust and holding company bubble followed a similar script in the run-up to the 1929 Crash…

…During the Roaring Twenties common stocks occupied a cultural position remarkably similar to Bitcoin (and arguably the S&P) today—they were viewed as the revolutionary investment of their era, and there was widespread belief that supply of stocks was too scarce to meet surging demand.

In the 1920s, mutual funds were introduced under the name “investment trusts,” and—like Bitcoin treasury companies—formed to capitalize on this scarcity. A major difference between modern mutual funds and these trusts was that the trusts were leveraged: like Bitcoin treasuries, they invested using borrowed money that was considered “safe” because—like MicroStrategy—they issued preferreds and long-term debt securities to the public to buy portfolios of stocks. Galbraith:

The most notable piece of speculative architecture of the late twenties, and the one by which, more than any other device, the public demand for common stocks was satisfied, was the investment trust. The investment trust did not promote new enterprises or enlarge old ones. It merely arranged that people could own stock in old companies through the medium of new ones…

…Like Bitcoin Treasuries, the 1920s trusts had the added appeal of mNAV premiums that seemed to offer something for nothing.

Just as Bitcoin treasury companies today boast of their mNAV and ‘bitcoin yield,’ a key feature of the 1920s bubble was the tendency for investment trusts to trade at significant premiums to mNAV during their heyday. Galbraith:

The measure of this respect for financial genius was the relation of the market value of the outstanding securities of the investment trusts to the value of the securities they owned.

Normally, the securities of the trust were worth considerably more than the property it owned—sometimes even twice as much. There should be no ambiguity on this point: the only property of the investment trust was the common and preferred stocks, debentures, mortgages, bonds, and cash that it held. (Often, it had neither an office nor office furniture; the sponsoring firm ran the investment trust out of its own quarters.)

Yet, had these securities all been sold on the market, the proceeds would invariably have been less—and often much less—than the current value of the outstanding securities of the investment company. The latter, obviously, had some claim to value that went well beyond the assets behind them…

…As with today’s Bitcoin TreasuryCos, this persistent mNAV premium created a powerful financial engine for both the trusts and the underlying stocks they were buying: the ability to conduct immediately accretive share issuances. When a trust trades at a premium to its underlying stock values, it can issue new units at the inflated market price and instantly increase the NAV for its existing shareholders.

This reflexive accretion mechanism created a self-reinforcing feedback loop similar to today’s “Bitcoin Leverage Loop”. The cycle worked as follows:

  • Investor optimism drove a trust’s price to an mNAV premium.
  • The trust would issue new units at this premium price, which was immediately accretive to the NAV per share.
  • The new capital raised was used to purchase more stocks, adding buying pressure to the overall market and increasing the value of the trust’s own portfolio.
  • The rising NAV and apparent success of the strategy further fueled investor optimism, widening the premium and allowing the cycle to repeat.
  • Meanwhile, investors in the trusts and individual stocks amplified their exposure to a sure thing by using margin loans to leverage their positions, adding extra “juice” to the trade and further driving up NAVs and mNAVs for the trusts…
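The accretion step in this loop can be made concrete with a toy example (all numbers invented): a trust trading above NAV that issues new shares at the market price raises more per share than its existing NAV per share, lifting NAV per share for existing holders.

```python
# Toy illustration of premium-to-NAV share issuance being accretive.
# All numbers are invented for illustration.

nav = 100.0      # $m of assets held by the trust
shares = 10.0    # million shares outstanding
premium = 1.5    # market price = 1.5x NAV per share (the "mNAV premium")

nav_per_share = nav / shares            # 10.0
price = nav_per_share * premium         # 15.0

# Issue 1 million new shares at the premium price; buy assets with the proceeds
raised = 1.0 * price
nav += raised
shares += 1.0

print(f"{nav / shares:.2f}")  # prints 10.45 -> NAV per share rose ~4.5%
```

As long as the premium persists, each issuance mechanically raises NAV per share, which is why the loop is self-reinforcing on the way up—and why it works in reverse once the shares trade at a discount.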

…Goldman Sachs Trading Corporation (GSTC) was perhaps the proto-MicroStrategy of the day. Launched by the influential Goldman Sachs partner Waddill Catchings in December 1928, it was, at its inception, the largest investment trust yet established—boasting an initial capitalization of $100 million. Its units, offered to the public at $104, were immediately oversubscribed and quickly soared in value, doubling to $226 within a short period and trading at a massive premium to the underlying value of its stock holdings…

…In Brad DeLong and Andrei Shleifer’s The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds, they noted:

If [investment trust mNAV premia] indeed reflect excessive investor optimism rather than skill at management, there will be a tendency for funds to pyramid on top of one another. If each fund can be sold for 50 percent more than its own net asset value, promoters can more than double their profits by establishing a fund that owns funds that hold stocks, rather than just establishing funds that hold stocks…

This prediction is confirmed by one of the largest funds: the Goldman Sachs Trading Corporation. This was a closed-end fund organized in December 1928 with a net asset value of around $100 million. In 1929, one of its largest holdings was the Shenandoah Corporation, another closed-end fund organized by Goldman Sachs. Another large holding was in its own stock.

Nor is this all. In the same year, Shenandoah organized a new closed-end fund called the Blue Ridge Corporation and became a large investor in its stock. All these funds traded at premia; at the top of the pyramid, the Goldman Sachs Trading Corporation traded at a premium to a premium to a premium to net asset value…

…If history serves as any guide, we can expect Bitcoin treasury companies to begin investing in other Bitcoin treasury companies before this cycle concludes.

4. Whatever Happened to the Self Driving Semi? – Chris Paxton

There are almost three million semi trucks in the United States alone, to the point that trucker is the most common job in 29 states. Most of these are driving 400-600 miles per day along long, straight, predictable highways — a use case that, at a glance, seems perfect for autonomy.

And yet, on-road autonomy looks guaranteed to start not with semis but with taxis, operating over much shorter distances in much less of the United States…

…Fully-loaded trucks are massive, with a legally-mandated maximum of 80,000 lbs. This makes everything a truck does notably less responsive. Planning becomes more difficult; learning methods are less effective, too, when there’s not a clear, immediate mapping between input and output.

If we want to discuss how serious a problem this is, we should look at stopping distance; i.e. the distance a semi truck needs to come to a complete stop because, say, there was an accident on the road ahead of it.

Stopping distance for a fully-loaded semi truck traveling at 65 mph is approximately 525 feet to about 600 feet. Even though most US highways have higher speed limits, trucking companies usually limit speed to 65 mph for safety and fuel efficiency reasons; it seems reasonable to expect that autonomous truckers would do the same. But note that this is under ideal conditions; stopping distances can as much as double on icy roads.

Now, a good long-ranged lidar could have 1000 feet of range. Aurora has a particularly good in-house lidar, with about 450 meters (~1500 feet) of range – much farther than many other options. But maximum range isn’t effective range, which is far more important. This is hard to estimate — it varies depending on conditions, on objects, and of course on the quality of the particular classifiers being used to interpret objects. This quantity is notably shorter than the maximum range on practically any sensor, by as much as about half; and we’ll also need to classify if this was a spurious detection (a plastic bag blowing onto the road, a cardboard box) or a serious issue.
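A rough sketch of this margin argument, using the figures in the passage. Note the halving of effective range is the author’s rule of thumb, and the 1.5-second perception-and-decision delay is an invented assumption for illustration:

```python
# Rough sketch: does a semi's lidar see far enough ahead to stop in time?
# Range and stopping figures are from the passage; the 1.5 s
# perception-and-decision delay is an assumed illustrative value.

speed_mph = 65.0
speed_fps = speed_mph * 5280 / 3600      # ~95.3 feet per second

stopping_distance = 600.0                # ft, fully loaded, ideal conditions
max_lidar_range = 1500.0                 # ft (Aurora's ~450 m lidar)
effective_range = max_lidar_range / 2    # "as much as about half" rule of thumb

perception_delay_s = 1.5                 # assumed time to classify and decide
reaction_distance = speed_fps * perception_delay_s

margin = effective_range - reaction_distance - stopping_distance
print(f"margin under ideal conditions: {margin:.0f} ft")  # prints 7 ft

# On ice, stopping distance can roughly double -> the margin goes negative
margin_ice = effective_range - reaction_distance - 2 * stopping_distance
print(f"margin on icy roads: {margin_ice:.0f} ft")
```

Under these assumptions the margin is only a few feet even in ideal conditions, and deeply negative on ice—which is the crux of why highway autonomy for heavy trucks is harder than it first appears.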

And that’s setting aside other concerns: what if there’s a patch of black ice ahead on the road? The lidar can’t detect this at all, and it’s a huge issue for highway driving. There was a famously horrific 133-car pileup in Fort Worth, Texas in 2021, caused by black ice, which led to 65 injuries and six fatalities.

5. SITALWeek #459 – Brad Slingerlend

Investing is a form of storytelling. CEOs spin tales about their companies and try to rally the workforce to manifest them over a long time horizon. Investors decide if they too believe the stories or not. Most of the time, the stories are fiction, fantasy, or even fairy tales. Occasionally, visionary entrepreneurs pen a nonfiction, or even a compelling fiction that turns out to be so predictive of the future that it serves as prior art for reshaping reality (think of the Steve Jobs Reality Distortion Field!). There are also stories about economics, politics, and the world at large that influence the stories about companies and investments. Investors create their own stories about businesses as well, and the resulting investment ideas can end up in either a canonized history book or a throwaway dime novel. Even trying to unravel the truth of past stories can be fraught, as hindsight is only as good as the incomplete and unreliable human narratives on which history is based…

…Today, it’s not clear how much, if any, impact investors’ stories have on the daily prices of stocks. And, in some cases, it appears to me companies are losing complete control of their own narratives as well…

…And, now, we have something very different happening: all of that volume in the market, previously programmed in some form or another by humans guiding machine learning algorithms (or retail investor brains programmed by social media news cycles, etc.), is slowly being taken over by LLMs and agentic AI. I suspect autonomous AI trader bots are writing their own signal algorithms and creating their own stories. They are telling those stories to each other and executing trades. We can see clues that this shift is happening in a recent study that found meaningful drops in trading activity during ChatGPT outages. I think that tidbit of information gives us, well, the rest of the story as to what will soon define the stock market on a day-to-day basis (if it’s not already the dominant force, which I suspect it is). This agentic investing evolution will create even more noise and less signal in the daily price of any given stock. Again, this turn of events spells good news for us active investors who still think we can find stories that, with any luck, will turn out to be superior nonfictional investments.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Microsoft and Netflix. Holdings are subject to change at any time.

Market View: Looking ahead to Nvidia’s earnings; Asian markets up on Powell’s speech at Jackson Hole, rate cut hopes; Oil prices edge higher after Ukraine attacks hit Russian energy sites, and more

Yesterday, I was invited for a short interview on Money FM 89.3, Singapore’s first business and personal finance radio station, by Chua Tian Tian, the co-host of the station’s Money Matters show. We discussed a number of topics, including:

  • The highlights of Federal Reserve Chair Jerome Powell’s latest speech at the Jackson Hole Economic Symposium (Hints: I don’t have many highlights because I do not watch the Federal Reserve’s actions in my investing activities. But I noticed that some market participants have the mistaken notion that the Federal Reserve has completely stopped looking at inflation levels when making its monetary policy decisions; in reality, it continues to see inflation of 2% as the appropriate level to meet its dual mandate)
  • Singapore’s latest inflation readings, which slowed, contrary to economists’ expectations (Hints: I don’t watch macro numbers and I also think inflation numbers are not that important for long-term investing in stocks because inflation has historically not had a significant influence on stock market valuations)
  • The movement of oil prices after Ukraine stepped up attacks on Russia (Hints: Oil prices are very important for investors in companies related to the oil & gas industry, because the movement of oil prices can have a significant impact on those companies’ long-term business results. But I’m not one of those investors, because I’m very aware that I have no ability to predict the movement of oil prices, and thus I’m unable to form any judgement on the long-term trajectory of their businesses)
  • What to watch regarding NVIDIA’s upcoming earnings and whether it would include China revenues in its guidance (Hints: The Trump administration’s latest stance is that NVIDIA can sell certain AI chips to Chinese customers if it pays 15% of its China chip revenues to the US government, but given the Trump administration’s mercurial nature, who knows what’s going to happen in the near future; it will also be interesting to hear about the status of NVIDIA’s next-generation GPU platform, Rubin, in the upcoming earnings)

You can check out the recording of our conversation below!


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.