
What We’re Reading (Week Ending 20 July 2025)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve been regularly sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 20 July 2025:

1. Sweatshop data is over – Tamay Besiroglu, Matthew Barnett, Ege Erdil

Historically, the importance of data has been underrated in the field of AI. Decades ago, many assumed the key to AGI would come from devising the right “theory of intelligence”, which we could then implement by hand; the role of training data was sidelined.

Despite being trained on more compute than GPT-3, AlphaGo Zero could only play Go, while GPT-3 could write essays, code, translate languages, and assist with countless other tasks. The main difference was training data. AlphaGo Zero learned from Go games, whereas GPT-3 learned from natural language. This meant that while Google was playing games, OpenAI was able to seize the opportunity of a lifetime. What you train on matters.

We may soon witness a similar lesson if AI labs continue to scale up their models without similarly scaling up the quality of their training environments. Many have observed that pretraining is already saturating. GPT-4.5, while impressive in its own right, didn’t feel like a major generational leap in the way GPT-4 did over GPT-3.5.

The recent reinforcement learning with verifiable rewards (RLVR) paradigm seeks to revive progress by getting AIs to learn how to perform formally checkable reasoning inside contained environments. What we’ve seen so far is necessary for progress, but it is far from sufficient. Current methods will get us to the point where AIs can prove theorems and solve hard puzzles, but it won’t be enough to get models to deal with the open-ended nature of reality, where the quality of our actions cannot be so easily “verified” as either correct or incorrect.
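To make the notion of a “verifiable” reward concrete, here is a toy sketch (entirely hypothetical, not any lab’s actual setup): the environment poses arithmetic tasks whose answers can be checked mechanically, so the reward signal is a simple, unambiguous binary.

```python
import random

# Toy RLVR-style environment (hypothetical, for illustration only).
# The task has a mechanically checkable answer, so "reward" needs no
# human judgment - unlike open-ended real-world tasks.

def make_task(rng: random.Random) -> tuple[str, int]:
    """Generate an arithmetic prompt and its ground-truth answer."""
    a, b = rng.randint(1, 99), rng.randint(1, 99)
    return f"{a}+{b}", a + b

def reward(model_answer: int, true_answer: int) -> float:
    """Verifiable reward: correctness is a plain equality check."""
    return 1.0 if model_answer == true_answer else 0.0

rng = random.Random(0)
task, truth = make_task(rng)
print(task, reward(truth, truth), reward(truth + 1, truth))
# The second reward is 0.0: a wrong answer is detected automatically.
```

The hard part the article points to is that most real-world tasks have no such `reward` function: the quality of an essay, a negotiation, or a research plan cannot be reduced to an equality check.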

To make progress, there’s no way around designing better rewards, and ultimately better RL environments.

2. Silk, Porcelain, Tea, Opium: 2000 Years of Trade Deficit with China – Tomas Pueyo

The West has had deficits with China for over 2,000 years, and they have had a massive impact on world history, from the opening of global trade routes, to the establishment of colonies, colonial policies, international wars, the emergence of nation-states, the politics of present-day China and the US…

…Romans loved luxury goods:

India, China and the Arabian peninsula take one hundred million sesterces from our empire per annum at a conservative estimate: that is what our luxuries and women cost us—Pliny the Elder, Natural History (77–79 AD).

Of these, silk was the biggest import from China. In 14 AD the Senate prohibited the wearing of silk by men!

To pay for it, Romans traded glassware, amber, wine, carpets, and other goods, but these didn’t make up for the value of what Romans bought from China. And in general, Chinese traders preferred money—mostly gold and silver—over other goods…

…Europeans obsessed about producing silk locally, but they didn’t know how to make it and didn’t have silkworms: China had protected its near-monopoly on silk for many centuries thanks to imperial orders to execute anybody caught trying to export silkworms or their eggs. The only way to succeed was by stealing them, and that’s precisely what two Christian monks did around 550 AD, risking their lives to smuggle silkworms hidden inside their canes.

This started silk production in the Eastern Roman Empire, which would slowly permeate through the rest of Europe.

This might have been the first time Chinese manufacturing prowess caused a trade imbalance in the West that required political intervention…

…Porcelain could only start reaching Europe in the 1500s, which is not a coincidence either: porcelain was too heavy and fragile for overland routes, so it needed a maritime route to reach Europe. The Portuguese found a path to the Indies circumventing Africa just around 1500…

…Chinese porcelain was so much thinner, whiter and more translucent than local wares that European nobility really prized it…

…You know how nowadays Westerners design some products and then they send those designs to China for manufacture?

Porcelain is another example of China manufacturing products that Europeans craved, but again China didn’t need anything Europeans produced, except for silver. So silver flowed from Europe to China. From 1500 to 1800, Bolivia and Mexico’s mines produced about 80% of the world’s silver; 30% of that eventually ended up in China!

Europeans hated that flow, as the silver disappeared as fast as it was produced, so they tried to stop it. Of course, the most incentivized were the countries who didn’t have access to either silver or trade with China. This is why the Italians tried to copy porcelain in the late 1500s with Medici porcelain, although they largely failed. By the early 1700s, Germans succeeded. A few years later, in 1712, the French Jesuit father Francois Xavier d’Entrecolles published the secrets of porcelain making in Europe, which he had read about and witnessed in China. In the following decades, the local production of porcelain increased and the import of Chinese porcelain fell…

…Tea’s ever-escalating trade imbalance with China became a serious economic problem, so much so that the British King George III sent an envoy to the Chinese Emperor to ask for more trade liberalization. These are excerpts of the Emperor’s response:

Our Celestial Empire possesses all things in prolific abundance and lacks no product within its own borders. There is therefore no need to import the manufactures of outside barbarians in exchange for our own produce. But as the tea, silk and porcelain which the Celestial Empire produces, are absolute necessities to European nations and to yourselves, we have permitted, as a signal mark of favor, that foreign merchants should be established at Canton, so that your wants might be supplied and your country thus participate in our beneficence.

So what did the British do to solve the trade imbalance? Two things. One is that the East India Company sent Scottish botanist Robert Fortune to China to purchase and export Chinese tea plants in the 1850s. This kick-started tea production in India, which grew over the following decades, reducing the share of Chinese tea consumed. Here we have, for the third time, a smuggling of Chinese production know-how to reduce trade imbalances…

…When the British conquered India in the late 1700s, they were very conscious of their trade imbalance with China, so they looked for any way to reduce it. They found the right tool in opium. They devised a plan to produce it in India and sell it in China. So the British drove local farmers in eastern India out of crop production and into poppies, from which opium is derived.

Then, the British introduced opium smoking in China…

…The Emperor Jiaqing noticed all this so he published an edict to stop it in 1810:

Opium has a harm. Opium is a poison, undermining our good customs and morality. Its use is prohibited by law.

But the government couldn’t enforce it. When the Chinese government finally cracked down on opium in 1839, the opium trade was paying for all the tea trade and then some, so the British reacted to protect the trade and attacked China; this was the First Opium War.

Britain won and bent China’s arm: Britain would be allowed to sell opium in China. It also took over Hong Kong.

There would be another Opium War, after which the British, and then other Westerners, could reach far inland in China to sell opium. The deficit to China became a surplus. Over the following decades, opium addiction became widespread. By 1949, 4.4% of Chinese people were addicted. Local farmers replaced their crops with opium. Governments used opium taxes to finance themselves, and this lasted until the Communist Party had a strong enough chokehold on society and culture to finally ban opium.

This is what the Chinese call the century of humiliation, when China went from the richest and most advanced nation of the world to a dirt poor backwater.

3. The Codes AI Can’t Crack – Taras Grescoe

Since 2018, neural networks trained on cuneiform, the writing system of Mesopotamia, have been able to fill in lost verses from the story of Gilgamesh, the world’s earliest known epic poem. In 2023, a project known as the Vesuvius Challenge used 3D scanners and artificial intelligence to restore handwritten texts that hadn’t been read in 2,000 years, revealing previously unknown works by Epicurus and other philosophers. (The scrolls came from a luxurious villa in Herculaneum, buried during the same eruption of Mount Vesuvius that destroyed Pompeii. When scholars had previously tried to unroll them, the carbonized papyrus crumbled to dust.)

Yet despite these advances, a dozen or so ancient scripts — the writing systems used to transcribe spoken language — remain undeciphered. These include such mysteries as the one-of-a-kind Phaistos Disk, a spiral of 45 symbols found on a single six-inch clay disk in a Minoan palace on Crete, and Proto-Elamite, a script used 5,000 years ago in what is now Iran, which may have consisted of a thousand distinct symbols. Some, like Cypro-Minoan — which transcribes a language spoken in the Late Bronze Age on Cyprus — are tantalizingly similar to early European scripts that have already been fully deciphered. Others, like the quipu of the Andes — intricately knotted ropes made of the wool of llamas, vicuñas, and alpacas — stretch our definitions of how speech can be transformed into writing…

…Cracking these ancient codes may seem like the kind of challenge AI is ideally suited to solve. After all, neural networks have already bested human champions at chess, as well as the most complex of all games, Go. They can detect cancer in medical images, predict protein structures, synthesize novel drugs, and converse fluently and persuasively in 200 languages. Given AI’s ability to find order in complex sets of data, surely assigning meaning to ancient symbols would be child’s play.

But if the example of Ithaca shows the promise of AI in the study of the past, these mystery scripts reveal its limitations. Artificial neural networks might prove a crucial tool, but true progress will come through collaboration between human neural networks: the intuitions and expertise stored in the heads of scholars, working in different disciplines in real-world settings…

…Ithaca was trained on ancient Greek, a language we’ve long known how to read, and whose entire corpus amounts to tens of thousands of inscriptions. The AI models that have filled in lost verses of Gilgamesh are trained on cuneiform, whose corpus is even larger: hundreds of thousands of cuneiform tablets can be found in the storerooms of the world’s museums, many of them still untranslated. The problem with mystery scripts like Linear A, Cypro-Minoan, Rongorongo, and Harappan is that the total number of known inscriptions can be counted in the thousands, and sometimes in the hundreds. Not only that, in most cases we have no idea what spoken language they’re meant to encode…

… Two of the greatest intellectual feats of the 20th century involved the decipherment of ancient writing systems. In 1952, when Michael Ventris, a young English architect, announced that he’d cracked the code of Linear B, a script used in Bronze Age Crete, newspapers likened the accomplishment to the scaling of Mount Everest. (Behind the scenes, the crucial grouping and classifying of characters on 180,000 index cards into common roots — the grunt work that would now be performed by AI — was done by Alice Kober, a chain-smoking instructor from Brooklyn College.)

The decipherment of the Maya script, which is capable of recording all human thought using bulbous jaguars, frogs, warriors’ heads, and other stylized glyphs, involved a decades-long collaboration between Yuri Knorozov, a Soviet epigrapher, and American scholars working on excavations in the jungles of Central America.

While the interpreting of Egyptian hieroglyphics is held up as a triumph of human ingenuity, the Linear B and Mayan codes were cracked without the help of a Rosetta Stone to point the way. With Linear B, the breakthrough came when Ventris broke with the established thinking, which held that it transcribed Etruscan — a script scholars can read aloud, but whose meaning still remains elusive — and realized that it corresponded to a form of archaic Greek spoken 500 years before Homer. In the case of ancient Mayan, long thought to be a cartoonish depiction of universal ideas, it was only when scholars acknowledged that it might transcribe the ancestors of the languages spoken by contemporary Maya people that the decipherment really began. Today, we can read 85% of the glyphs; it is even possible to translate Shakespeare’s Hamlet into ancient Mayan.

Collaborating across cultures and disciplines, and carrying out paradigm-shedding leaps of intuition, are not the strong points of existing artificial neural networks. But that doesn’t mean AI can’t play a role in decipherment of ancient writing systems. Miguel Valério, an epigrapher at the Autonomous University of Barcelona, has worked on Cypro-Minoan, the script used on Cyprus 3,500 years ago. Two hundred inscriptions, on golden jewelry, metal ingots, ivory plaques, and four broken clay tablets, have survived. Valério was suspicious of the scholarly orthodoxy, which attributed the great diversity in signs to the coexistence of three distinct forms of the language.

To test the theory that many of the signs were in fact allographs — that is, variants, like the capital letter “G” and “g,” its lower-case version — Valério worked with Michele Corazza, a computational linguist at the University of Bologna, to design a custom-built neural network they called Sign2Vecd. Because the model was unsupervised, it searched for patterns without applying human-imposed preconceptions to the data set.

“The machine learned how to cluster the signs,” says Valério, “but it didn’t do it simply on the basis of their resemblance, but also on the specific context of a sign in relation to other signs. It allowed us to create a three-dimensional plot of the results. We could see the signs floating in a sphere, and zoom in to see their relationship to each other, and whether they’d been written on clay or metal.”…
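The clustering idea Valério describes can be caricatured in a few lines (this is a toy sketch, not the actual Sign2Vecd architecture, and the “inscriptions” below are invented): signs that appear in near-identical contexts are candidate allographs, even if they don’t look alike.

```python
# Toy illustration of context-based sign clustering (NOT Sign2Vecd,
# whose details differ): signs occurring in similar contexts may be
# allographs, i.e. variants of the same underlying sign.
from collections import defaultdict
from math import sqrt

# Hypothetical inscriptions, each a sequence of sign labels.
# "g1" and "g2" are intended as variants of one sign.
inscriptions = [
    ["g1", "a", "b"], ["g2", "a", "b"],
    ["c", "g1", "d"], ["c", "g2", "d"],
    ["e", "f", "a"],  ["e", "f", "b"],
]

# Count each sign's neighbours (a crude stand-in for learned embeddings).
context = defaultdict(lambda: defaultdict(int))
for seq in inscriptions:
    for i, sign in enumerate(seq):
        for j in (i - 1, i + 1):
            if 0 <= j < len(seq):
                context[sign][seq[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Signs with near-identical context profiles are allograph candidates.
signs = sorted(context)
pairs = [(cosine(context[s], context[t]), s, t)
         for i, s in enumerate(signs) for t in signs[i + 1:]]
best = max(pairs)
print(best[1], best[2], round(best[0], 2))  # prints: g1 g2 1.0
```

The real model works on learned embeddings rather than raw neighbour counts, but the principle is the same: clustering by distributional context, not visual resemblance, is what lets an unsupervised model challenge a human-imposed classification.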

…A generation ago, most people were taught that writing was invented once, in Mesopotamia, about 5,500 years ago, as a tool of accountancy and state bureaucracy. From there, the standard thinking went, it spread to Egypt, and hieroglyphics were simplified into the alphabet that became the basis for recording most European languages…

…Monogenesis, the idea that the Ur-script diffused from Mesopotamia, has been replaced by the recognition that writing was invented independently in China, Egypt, Central America, and — though this remains controversial — in the Indus Valley, where 4,000 inscriptions have been unearthed in sites that were home to one of the earliest large urban civilizations.

4. A 37,000-Year Chronicle of What Once Ailed Us – Carl Zimmer

On Wednesday, a team of scientists unveiled a new genetic chronicle, documenting the rise of 214 diseases across Europe and Asia over the past 37,000 years…

…The researchers examined the remains of 1,313 ancient individuals for the project. The large scale enabled the researchers to do more than just push back the earliest known occurrence of different diseases. They could also track the rise and fall of epidemics across centuries.

The oldest remains the researchers studied belonged to hunter-gatherers. Their bones and teeth contained a host of pathogens, such as hepatitis B, herpes virus and Helicobacter pylori, a stomach-dwelling bacterium.

“As far back as we go, humans have had infectious diseases,” said Eske Willerslev, a geneticist at the University of Copenhagen and an author of the new study…

…Initially, Dr. Willerslev and his colleagues assumed that they would see such diseases rise to prominence starting about 11,000 years ago. That’s when people started domesticating animals, from which new diseases could spread more easily…

…But the ancient DNA defied that expectation. The scientists found that plague and a number of other diseases jumped to people from animals thousands of years later, starting about 6,000 years ago. And those microbes did not jump into early farmers.

Instead, the new study points to nomadic tribes in Russia and Asia. Thousands of years after the dawn of agriculture, those nomads started rearing vast herds of cattle and other livestock.

Why diseases would have attacked those herders instead of earlier farmers, the scientists can’t say for sure. “We haven’t been able to come up with anything conclusive,” Dr. Willerslev said…

…The nomads expanded over the next few centuries across the steppes of Asia and eastern Europe. In that time, their pathogens thrived; the scientists frequently found several individuals in a single grave with DNA from plague or other diseases.

Those epidemics were so intense that they changed the genetic profile of the nomads. Last year, Dr. Willerslev and his colleagues found that the nomads experienced a spike in mutations that boosted their immune system and that may have helped them resist the diseases they contracted. But their active immune systems may have also attacked their own bodies, producing chronic diseases such as multiple sclerosis.

5. AI is killing the web. Can anything save it? – The Economist

Similarweb, which measures traffic to more than 100m web domains, estimates that worldwide search traffic (by humans) fell by about 15% in the year to June. Although some categories, such as hobbyists’ sites, are doing fine, others have been hit hard (see chart). Many of the most affected are just the kind that might have commonly answered search queries. Science and education sites have lost 10% of their visitors. Reference sites have lost 15%. Health sites have lost 31%.

For companies that sell advertising or subscriptions, lost visitors means lost revenue…

…Google has insisted that its use of others’ content is fair. But since it launched its AI overviews, the share of news-related searches resulting in no onward clicks has risen from 56% to 69%, estimates Similarweb. In other words, seven in ten people get their answer without visiting the page that supplied it…

…To keep the traffic and the money coming, many big content producers have negotiated licensing deals with AI companies, backed up by legal threats: what Robert Thomson, chief executive of News Corp, has dubbed “wooing and suing”. His company, which owns the Wall Street Journal and the New York Post, among other titles, has struck a deal with OpenAI. Two of its subsidiaries are suing Perplexity, another AI answer engine. The New York Times has done a deal with Amazon while suing OpenAI. Plenty of other transactions and lawsuits are going on…

…Reddit, an online forum, has licensed its user-generated content to Google for a reported $60m a year…

…The bigger problem, however, is that most of the internet’s hundreds of millions of domains are too small to either woo or sue the tech giants. Their content may be collectively essential to AI firms, but each site is individually dispensable. Even if they could join forces to bargain collectively, antitrust law would forbid it. They could block AI crawlers, and some do. But that means no search visibility at all…

…All of Cloudflare’s new customers will now be asked if they want to allow AI companies’ bots to scrape their site, and for what purpose. Cloudflare’s scale gives it a better chance than most of enabling something like a collective response by content sites that want to force AI firms to cough up. It is testing a pay-as-you-crawl system that would let sites charge bots an entry fee…

…An alternative is offered by Tollbit, which bills itself as a paywall for bots. It allows content sites to charge AI crawlers varying rates: for instance, a magazine could charge more for new stories than old ones. In the first quarter of this year Tollbit processed 15m micro-transactions of this sort, for 2,000 content producers including the Associated Press and Newsweek…

…One of Tollbit’s highest per-crawl rates is charged by a local newspaper.

Another model is being put forward by ProRata, a startup led by Bill Gross, a pioneer in the 1990s of the pay-as-you-click online ads that have powered much of the web ever since. He proposes that money from ads placed alongside AI-generated answers should be redistributed to sites in proportion to how much their content contributed to the answer. ProRata has its own answer engine, Gist.ai, which shares ad revenue with its 500-plus partners, which include the Financial Times and the Atlantic…

…As for the idea that Google is disseminating less human traffic than before, Mr Stein says the company has not noticed a dramatic decline in the number of outbound clicks, though it declines to make the number public. There are other reasons besides AI why people may be visiting sites less. Maybe they are scrolling social media. Maybe they are listening to podcasts.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (the company behind AlphaGo Zero and Google). Holdings are subject to change at any time.

An Important Perspective on US Government Debt

The US government has a lot of debt, but what about its assets?

I’ve noticed that when there’s public discussion on US government finances, the prevailing stance is that the government is heavily in debt and it is a terrible situation for the country to be in. For example:

  • CNN quoted Maya MacGuineas, President of the Committee for a Responsible Federal Budget in January 2024: “Though our level of debt is dangerous for both our economy and for national security, America just cannot stop borrowing”
  • In June 2025, Market Watch wrote: “America’s current debt level stands at roughly 121% of GDP… The debt burden is no longer just a distant concern. It is a present and pressing problem”
  • Ray Dalio, who is the founder of one of the largest – if not the largest – hedge funds in the world, Bridgewater, commented in June 2025 on American government debt: “[The US government] has accumulated a big debt—approximately six times the amount that it is bringing in each year (about $30 trillion), which equals about $230,000 per household that you have to take care of”

The thing about debt is that there are two sides to every coin. A balance sheet for a company has both assets and liabilities, and the same goes for a country. So while the US government has plenty of debt, which sits on the liability side, it also has assets.

And what do the US government’s assets look like? According to the Federal Reserve, the US government’s assets had a value of just US$5.6 trillion as of September 2024, far lower than its liabilities of US$45.5 trillion, of which US$28.3 trillion was government debt. This does not look good.

But, according to the Institute for Energy Research, the US government owns a huge mineral estate, consisting of natural resources such as oil, natural gas, and coal, which had an estimated value of US$150 trillion as of January 2013. The value of these assets is not recorded in the Federal Reserve’s accounting of the US government’s balance sheet. The prices of oil, natural gas, and coal today are within the same ballpark as in January 2013, which means the US government’s US$150 trillion in mineral assets back then would have around the same value today. In other words, the US government’s assets are much higher than its liabilities.

One more point worth noting is that Federal Reserve data show American households have a total net worth – that is, household assets minus household liabilities – of US$170 trillion as of the first quarter of this year. This net worth is again much higher than the US government’s liabilities. And the US$230,000 in debt per US household that Ray Dalio says the US government has saddled the country’s population with turns out to be far lower than the average US household’s net worth.
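The comparison can be put into a quick back-of-envelope calculation, using only the figures quoted in this section (the household count is merely implied by Dalio’s per-household number):

```python
# Back-of-envelope check of the figures quoted above.
# All values are in trillions of US dollars unless noted.

recorded_assets = 5.6        # Federal Reserve, Sep 2024
liabilities = 45.5           # Federal Reserve, Sep 2024
mineral_estate = 150.0       # Institute for Energy Research, Jan 2013 estimate
household_net_worth = 170.0  # Federal Reserve, Q1 2025

# On the recorded numbers alone, the government's net position is deeply negative.
recorded_net = recorded_assets - liabilities
print(f"Recorded net position: {recorded_net:+.1f} trillion")        # -39.9

# Adding the (unrecorded) mineral estate flips the sign.
adjusted_net = recorded_assets + mineral_estate - liabilities
print(f"Net position incl. minerals: {adjusted_net:+.1f} trillion")  # +110.1

# Dalio's ~$230,000 of debt per household implies roughly 130 million
# households; household net worth per household is far larger.
households_m = 30e12 / 230_000 / 1e6
net_worth_per_household = household_net_worth * 1e12 / (households_m * 1e6)
print(f"Implied households: {households_m:.0f}m; "
      f"net worth per household: ${net_worth_per_household:,.0f}")
```

None of this changes the deficit trajectory, but it shows why the liability side alone gives an incomplete picture of the balance sheet.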

When it comes to the idea of the US government being heavily in debt, I think the reality is different. Yes, the US government has been borrowing like a drunken sailor, with a budget deficit that currently runs at around 7% of GDP – this is absolutely not sustainable in the long run. But right now the balance sheet of the US government is still really healthy when the true value of its assets is considered and this gives the government plenty of buffer time to right the ship. 

In public discussions of US government debt, I find that the asset side of the balance sheets of the US government and households is often missing – and this is an important perspective we should all be aware of.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 13 July 2025)


Here are the articles for the week ending 13 July 2025:

1. Jim Chanos on the Nuttiness of ‘Bitcoin Treasury Companies’ | Odd Lots (Transcript Here) – Tracy Alloway, Joe Weisenthal, and Jim Chanos

Joe: All right, first question: Are Bitcoin treasury companies the stupidest thing you’ve ever seen in your entire life?

Jim Chanos: It’s rare that I have to increase my personal security after a podcast, which I had to do after our last podcast together, when I said some intemperate things about Bitcoin treasury companies.

Here’s the thing. I get people very agitated about this and they point out just what a genius idea this is and I keep trying to point out to them I’m doing the same thing that guys like Michael Saylor are doing. I’m on the same side of the trade and I keep pointing out to my critics, “You’re on the opposite side of that trade and you don’t want to be on the opposite side of the trade, and the Bitcoin treasury paradox being that you are the one buying the pieces of paper that have infinite supply so that Michael Saylor and I can buy the digital asset with the limited supply and it makes kind of no sense.” So what will inevitably happen is happening, in that there’s nothing proprietary here – this is just simply raising capital to buy a financial asset and other companies will do this. In fact even since the podcast we last did, I think the number of companies that have announced this strategy is scores more. I think there’s over a hundred in the US and over 200 globally now…

…Jim: Because there’s a wonderful sales job that’s being done about the fact that this is an economic engine in and of itself, therefore terms like Bitcoin Yield are used and I’ve called them financial gibberish – because they are. In fact, this will get arbed away ultimately by companies that will do this to try to capture that spread. In the case of Micro Strategy, it’s substantial. It’s still $50 billion, something like that, of the difference between the enterprise value of the company and the value of their Bitcoin holdings. But the thing that really shot me into orbit on all this was when Saylor and others then said, “You can’t really value us on an NAV basis, a so-called MNAV, multiple of NAV. You actually have to also give us additional value for the amount of profit that we make every quarter from the appreciation in the asset.” I said, “Well that’s like saying my whole net worth is in a house that’s worth $400,000 that is now worth $500,000 a year or two later, and my net worth is not $500,000 now – it’s $2.5 million because it’s the value of the house plus a multiple on the increase in the profitability of the asset.”…

…Tracy: I have one more question why did Micro – I have to remember to call them Strategy but I can’t bring myself to do it. Why did they switch from issuing the convertible debt to preferred shares?

Jim: Because he realized that as he began to issue more and more common, it was putting pressure on the premium. Now the latest iteration is, “We’re going to do this quasi equity security, quasi debt, preferred stock and then we can lever up the balance sheet.” This is a company whose selling point a year ago was “We’re not going to lever, because we have this wonderful equity that we can issue at a premium.” Now they’re saying, “Maybe if it trades above 2x we’ll issue equity, but if it’s between 1x and 2x, we’ll do preferred, and then if it’s below 1x we’ll buy back common and then what is Chanos going to do?” To which I said, “I’ll be out of the trade by then.” If it’s 1x NAV it’s not a trade. That’s the latest game plan – but stay tuned, it’ll change, I think. The narrative keeps changing…

…Jim: The legacy data centers – and there’s only a couple companies in the United States that really have legacy data centers. There’s Equinix, there’s Digital Realty, and then there’s old Colony Capital – it’s now called Digital Bridge and they own these things in fund format.

When we took a look at this with our partner back in ‘22 the idea was pretty simple. We did not see the AI explosion in mid-’22, but the idea was it was a pretty crummy business then, working on the cloud and SaaS demand. But it became a really bad business with the advent of AI because it just moved the hyperscalers to invest more in state-of-the-art data centers. These are older data centers that we’re short, the idea being that the new GPU-centric data centers need liquid cooling – they basically need all the infrastructure ripped out and replaced – and the business was not a high return on capital business before this. It’s getting even worse now.

What Equinix said yesterday at their Analyst Day was that revenues were not going to be quite what people thought they would be, but more ominously, capex was going to keep increasing. That’s what we’ve been saying, that these are not like warehouses where you just collect a check. These are actually operating businesses where you have to service the servers, you have to make sure there’s redundancy. It’s a business, a tech business, and they’re traded as REITs – and that was the opportunity. That was the dichotomy in valuation. People added back the depreciation, as they do with REITs, and valued them on a so-called FFO or AFFO, which is a cash flow metric. But in fact, unlike warehouses, shopping centers and, to a lesser extent, office buildings, the capex was real. Depreciation was a real expense. To give you an example, Equinix said yesterday, “Our capex is now going to bump up to between $4 billion and $5 billion a year.” The problem is their EBITDA this year is expected to be $4.5 billion, so all of that’s going to go to capex, meaning they’re going to have to basically borrow or issue equity to pay their interest and dividends. That’s just a definition of a bad business, and it’s a business that’s not growing very fast. Unlike true AI companies, which are growing 25%, 30%, 40% a year, these guys are growing 3%, 5%, 6% – sort of with GDP. So there’s no growing their way out of this. They’re just really bad businesses trading at nosebleed valuations.

Tracy: On the topic of idiosyncratic opportunities, I’ve got to ask about Carvana, because when my husband and I moved back to the States in 2022, we bought a used car through Carvana and that was a mistake. It took us about six months to actually get the car, they lost all our paperwork, and it was just an absolute nightmare. I thought at the time: this is a company whose entire business model is basically built on regulation – that’s what they’re doing – and I thought they’re not going to have a future if they are this bad at it. Yet the stock is up.

Joe: It’s done insanely well.

Jim: It’s done a double round trip. It crashed 99% and now it’s up 100x, so it’s pretty interesting again. The reason it’s interesting is that if you go through the numbers, they are making more than 100% of their pre-tax profit from gain on sale of subprime loans and gain on sale of equity stakes in other companies. If you ex those two out, they’re losing money – and they’re losing money now, right after the rebound, after the restructuring from 2022-2023. This is a company that is being valued again as a secular growth stock that saw its used car revenues drop 30% between 2022 and 2023, so it’s not necessarily a secular growth company. The accounting is abysmal. What people are really missing is what’s happening in subprime auto securitizations right now – and you can track it on your Bloomberg terminal – delinquencies are starting to skyrocket.

Tracy: We actually did an episode on this recently with Jim Egan.

Jim: So a huge amount of their profits comes from generating paper from customers and then selling it into the open market or to affiliates. This is a company that was spun out of a company called Drive Time Finance, their affiliated finance company, which was originally called Ugly Duckling in the late ‘90s and was run by the current CEO’s father. That company collapsed in the first subprime blowup – which was not the GFC; it was actually in the late ‘90s, in subprime auto credit and consumer loans. It didn’t go bankrupt, but it came close. He had to restructure it: he took it private, restructured it, and renamed it Drive Time Finance. But that’s the genesis of Carvana. That’s its DNA. It’s basically a subprime finance-led company, if you will. Those companies should not trade at 40x and 50x expected earnings – and they don’t, by and large. They’re consumer finance companies. So it’s an odd bird. It’s still heavily leveraged, and the stock is up a ton.

But what really got us interested again recently was the vast amount of insider selling that started in May and June. If you go look at the insider selling in the company, it is now a torrent – everybody selling pretty much every day. We just don’t think that’s a good sign given what’s happening in the subprime securitization market…

…Jim: Every once in a while. There’s one other thing though I do want to mention. I was talking to someone earlier today, and I think one of the things that’s underappreciated by investors right now – and one of the things that’s been most interesting to me – is how corporate profit margins have held up, which used to be very mean-reverting, as you know. The more work we’ve done on this, the more we’re convinced that the capital spending boom we’re seeing due to tech, and specifically AI, is looking very much akin to the global internet and networking buildout in the late ‘90s. The problem there, of course, is that if you buy my chips from NVIDIA, or you were buying my networking equipment at Cisco and Lucent, that’s revenue and profit for me. But for you it’s a capitalized expense, written off over time, and that adds a big, big boost – until people pull their orders. That’s what we saw in 2001-2002: GDP dropped about 1% to 2% in the recession of ‘01-’02. Does anybody know what corporate profits did in that? That was an investment-driven recession. Consumers didn’t feel it at all. Earnings were down about 45%, I think, from peak to trough in the S&P. They were down about the same, a little bit more, in the global financial crisis – but of course GDP collapsed then.

Here’s a little interesting thought experiment. Right now NVIDIA’s revenues are about one-half of 1% of US GDP, about $140 billion and our GDP is about $29 trillion. Anyone tell me what Cisco and Lucent – the two companies that you needed when building out your internet network in ‘99, 2000 – did anybody know what their combined revenues as a percent of GDP was in 2000?

Tracy: No using your phones.

Joe: And ChatGPT.

Jim: It was half a percent. It was roughly $50 billion total on GDP of $10 trillion. So those revenues stopped growing at some point shortly thereafter and actually shrank a little bit. The investment boom we’re seeing right now, we’ve seen before. And it’s not just chips. It’s Caterpillar, it’s people building the data centers, it’s people building new utilities. There is an ecosystem around the AI boom that is considerable, as there was for TMT back in ‘99 and 2000. But it is a riskier revenue stream, because if people pull back, they can pull back capex very easily – projects can get put on hold for six or nine months – and that immediately shows up in disappointing revenues and earnings forecasts if it happens. We’re not there yet, but that’s one of the risks out there that I think a lot of people are underestimating.

2. Creating therapeutic abundance – Jacob Kimmel

Jack Scannell infamously predicted in 2012 that the number of drugs per billion dollars would decline two-fold every nine years. Unfortunately, our therapeutics industry has largely followed through…

…Drug program success rates are equally complex. Failures can be attributed to safety issues, failure of a drug to hit the desired biological target, or improper selection of the target for a given disease…

…We can bucket the failures into two broad categories – safety and efficacy – and make informed estimates.

1. Safety failures – ~20-30% of all candidates
A molecule was developed, but proved unsafe in patients. These are typically detected as failures in Phase 1 trials.

2. Efficacy failures – 70-80% of all candidates
The remaining drug candidates that fail – 63% of all drugs placed into trials, period – fail due to a lack of efficacy. Even though the drugs are safe, they don’t provide benefit to patients by treating their disease.

From these coarse numbers, it’s clear that the highest leverage point in our drug development process is increasing the efficacy rate of new candidate medicines…

…Efficacy failures can broadly occur for two reasons:

  1. Engagement failures: We chose the right biology (“target”) to manipulate, but our drug candidate failed to achieve the desired manipulation. This is the closest thing drug development has to an engineering problem.
  2. Target failures: The drug candidate manipulated our chosen biology exactly as expected. Unfortunately, the target failed to have the desired effect on the disease. This is a scientific or epistemic failure, rather than an engineering problem. We simply failed to understand the biology well enough to intervene and benefit patients.

It’s difficult to know the exact frequency of these two failure modes, but we can infer from a few sources that target failures dominate.

  • Success rates for biosimilar drugs hitting known targets are extremely high, >80%
  • Drugs against targets with genetic evidence have a 2-3 fold higher success rate than those against targets lacking this evidence, suggesting that picking good targets is a high source of leverage
  • Among organizations with meaningful internal data, picking the right target is considered the first priority of all programs (e.g. “Right target” is the first tenet of AstraZeneca’s “5Rs” framework).

The predominance of target failures has likewise led most companies working on new modalities to address a small set of targets with well-validated biology. This has led to dozens of potential medicines “crowding” on the same targets, and this trend is increasing over time…

…If searching for targets is the limiting reagent in our medicine production function, the difficulty of finding targets must increase over time in order to explain part of Eroom’s law. How could this be the case given all the improvements in underlying biomedical science?

In an influential paper “Are ideas getting harder to find?”, Nicholas Bloom and colleagues argue that many fields of invention suffer from diminishing returns to investment. Intuitively, the low hanging fruit in a given discipline is picked early and more investment is required merely to reap the same harvest from higher branches on the tree of ideas…

…Targets are getting harder to find not because we are getting worse at selection, but because many of the easy and obvious therapeutic hypotheses have already been exploited….

…While promising, human genetics can only reveal a certain class of targets. The larger the effect size of a genetic variant, the less frequently it appears in the population due to selective pressure. In effect, this means that the largest effects in biology are the least likely to be discovered using human genetics. Many of the best known targets have minimal genetic signal for this reason.

Our current methods are good at discovering individual genes that associate with health, but discovering combinations of genes is nascent at best. Human genetics cannot help us discover the combinatorial medicines or gene circuits to install in a cell therapy…

…Even with the best possible experimental methods, some of the most promising target biologies will never be searched exhaustively. There are a nearly infinite number of combinatorial genetic interventions we might drug, synthetic circuits we might engineer into cells, and changes in tissue composition we might engender.

Artificial intelligence can learn general models from the data generated in functional genomics experiments of many flavors, predicting outcomes for the experiments we haven’t yet run. If we manage to construct a performant model for a given class of target biologies, we may be able to increase the efficiency of target discovery by many orders of magnitude. The cost of discovering a target could conceivably go from >$1B to <$1M.

There’s growing interest in the idea of combining these technologies to build “virtual cells,” models that can predict the outcomes of target discovery experiments in silico before they’re ever executed in the lab. The grand version of this vision spans all possible target biologies, from gene inhibitions to polypharmaceutical small molecule treatments. In the maximal form, it may take many years to realize.

More limited realizations though are tractable today. The initial versions of these models are already emerging within early Predictive Biology companies. As a few examples, Recursion is building models of genetic perturbations in cancer cells, Tahoe Tx is building models in oncology with a chemical biology approach, and NewLimit has developed models for reprogramming cell age across human cell types. Focused models like these represent an early demonstration that this general approach can yield therapeutic value…

…We are entering an epoch of abundant intelligence. With these tools, we have the opportunity to discover & design target biologies at a rate that’s too cheap to meter. The therapies that emerge could serve as the counterexample that downgrades Eroom’s law to a historic conjecture.

3. What I learned watching 78 videos from Tesla’s Austin robotaxis – Timothy B. Lee

I’ve watched 78 videos posted by pro-Tesla influencers who got early access to the service. Those videos documented more than 16 hours of driving time across nearly 100 rides.

These videos exceeded my expectations. Tesla’s robotaxi rollout wasn’t perfect, but it went as well as anyone could have expected. A handful of minor glitches got outsized attention online, but a large majority of trips were completed without incident…

…Tesla’s robotaxis drove flawlessly during the vast majority of the 16 hours of driving footage I watched. They stayed in their lane, followed traffic laws, and interacted smoothly with other vehicles…

…Tesla’s most widely discussed error occurred around seven minutes into this video. The robotaxi approached an intersection and got into the left turn lane. But the robotaxi couldn’t make up its mind whether it wanted to turn left or go straight. The car’s steering wheel jerked back and forth several times. On the car’s display, the blue ribbon showing the car’s intended path jumped back and forth erratically between turning left and continuing straight. Finally, the Tesla decided to proceed straight but ended up driving the wrong way in the opposite left turn lane…

…But in a piece last year, I argued that they were misunderstanding the situation.

“Tesla hasn’t started driverless testing because its software isn’t ready,” I wrote. “For now, geographic restrictions and remote assistance aren’t needed because there’s always a human being behind the wheel. But I predict that when Tesla begins its driverless transition, it will realize that safety requires a Waymo-style incremental rollout.”

That’s exactly what’s happened:

  • Just as Waymo launched its fully driverless service in 50 square miles near Phoenix in 2020, so Tesla launched its robotaxi service in about 30 square miles of Austin last month.
  • Across 16 hours of driving, I never saw Tesla’s robotaxi drive on a freeway or go faster than 43 miles per hour. Waymo’s maximum speed is currently 50 miles per hour.
  • Tesla has built a teleoperation capability for its robotaxis. One job posting last year advertised for an engineer to develop this capability. It stated that “our remote operators are transported into the device’s world using a state-of-the-art VR rig that allows them to remotely perform complex and intricate tasks.”

The launch of Tesla’s robotaxi service in Austin is a major step toward full autonomy. But the Austin launch also makes it clear that Tesla hasn’t discovered an alternative path for testing and deploying driverless vehicles. Instead, Tesla is following the same basic deployment strategy Waymo pioneered five to seven years ago.

Of course, this does not necessarily mean that Tesla will scale up its service as slowly as Waymo has. It took almost five years for Waymo to expand from its first commercial service (Phoenix in 2018) to its second (San Francisco in 2023). The best informed Tesla bulls acknowledge that Waymo is currently in the lead but believe Tesla is positioned to expand much faster than Waymo did…

…Last month, Waymo published a study demonstrating that self-driving software benefits from the same kind of “scaling laws” that have driven progress in large language models.

“Model performance improves as a power-law function of the total compute budget,” the Waymo researchers wrote. “As the training compute budget grows, optimal scaling requires increasing the model size 1.5x as fast as the dataset size.”

When Waymo published this study, Tesla fans immediately seized on it as a vindication of Tesla’s strategy. Waymo trained its experimental models using 500,000 miles of driving data harvested from Waymo safety drivers driving Waymo vehicles. That’s a lot of data by most standards, but it’s far less than the data Tesla could potentially harvest from its fleet of customer-owned vehicles…

…I posed this question to Dragomir Anguelov, the head of Waymo’s AI foundations team and a co-author of Waymo’s new scaling paper. He argued that the paper’s implications are more complicated than Tesla fans think.

“We are not driving a data center on wheels and you don’t have all the time in the world to think,” Anguelov told me in a Monday interview. “Under these fairly important constraints, how much you can scale and what are the optimal ways of scaling is limited.”

Anguelov also pointed to an issue that will be familiar to anyone who read last month’s explainer on reinforcement learning.

Waymo’s scaling paper—like OpenAI’s famous 2020 scaling law paper—focused on models trained with imitation learning…

…Anguelov was a co-author of a 2022 Waymo paper finding that self-driving models trained with a combination of imitation and reinforcement learning tend to perform better than models trained only with imitation learning.

Imitation learning is “not the most sophisticated thing you can do,” Anguelov told me. “Imitation learning has a lot of limitations.”

This is significant because demonstration data from human drivers—the kind of data Tesla has in abundance—isn’t very helpful for reinforcement learning. Reinforcement learning works by having a model try to solve a task and then judging whether it succeeded. For self-driving, this can mean having a model “drive” in simulation and then judging whether it caused a collision or other problems. Or it can mean running the software on real cars and having a safety driver intervene if the model makes a mistake. In either case, it’s not obvious that having vast amounts of human driving data is especially helpful.

One finding from that 2022 paper is particularly relevant for thinking about the performance of Tesla’s robotaxis. The Waymo researchers noted that models trained only with imitation learning tend to drive well in common situations but make mistakes in “more unusual or dangerous situations that occur only rarely in the data.”

In other words, if you rely too much on imitation learning, you can end up with a model that drives like an expert human most of the time but occasionally makes catastrophic mistakes…

…Since its 2018 launch, Waymo has acknowledged that it has remote operators who sometimes provide real-time assistance to its vehicles. But Waymo has also said that these remote operators never drive the vehicles in real time. Instead, they provide high-level feedback, while the vehicle always remains in control of second-by-second decisions.

In contrast, Tesla’s job posting stated that teleoperators can be “transported into the device’s world” so that they can “remotely perform complex and intricate tasks.” Could those “complex and intricate tasks” include driving the car for seconds or even minutes at a time?

In the videos I watched, a number of Tesla’s early customers commented on how human-like Tesla’s driving was. That might just be a tribute to the quality of Tesla’s AI model. But it’s also possible that sometimes a human driver is literally driving the vehicle from a remote location.

4. No Bad Risks, Only Bad Rates — And Other Lessons From National Indemnity Founder Jack Ringwalt – Kingswell

There are no bad risks in insurance — only bad rates

This maxim was Ringwalt’s north star, the iron-clad principle that allowed him to fearlessly pursue unusual and unwanted risks without driving himself right out of business. Almost anything can be intelligently insured, so long as you charge enough for the coverage.

(It’s also reminiscent of one of my favorite Warren Buffett lines. “I can go into an emergency ward and write life insurance,” he said in 1990, “if you let me charge enough of a premium.”)

When evaluating potential opportunities, Ringwalt’s open mind welcomed the weird and the wild — and he wrote many policies on offbeat ventures that others wouldn’t touch with a ten-foot pole. But, when it came to pricing, that flexibility vanished. If the market would not meet his rate, Ringwalt never blinked. He just waved goodbye to the deal with an indifferent shrug.

“When business is unprofitable to the companies in general,” wrote Ringwalt, “our premium volume has taken a very sharp spurt and when business has been profitable for most companies, we have run into very unintelligent competition and have had to cut down temporarily on our writings.”

The insurance merry-go-round is always the same: profitability lures rivals who slash rates to grab market share, only to crater when losses inevitably pile up. And when the industry bleeds, fly-by-night competitors vanish, prices climb back to normal, and the cycle starts spinning anew. “This pattern will keep repeating,” he wrote. “It makes no sense, but it’s human nature.”

Ringwalt steadfastly refused to play that sucker’s game — a tradition that continued under Berkshire’s aegis. From 1986 to 1999, National Indemnity’s revenue nosedived 85% as profitable premiums evaporated. But, rather than succumb to the pressure to write more business at any price, Buffett and co. urged employees to wait patiently for the right pitch (so to speak). Some things never change.

5. Why I don’t think AGI is right around the corner – Dwarkesh Patel

Sometimes people say that even if all AI progress totally stopped, the systems of today would still be far more economically transformative than the internet. I disagree. I think the LLMs of today are magical. But the reason that the Fortune 500 aren’t using them to transform their workflows isn’t because the management is too stodgy. Rather, I think it’s genuinely hard to get normal humanlike labor out of LLMs. And this has to do with some fundamental capabilities these models lack…

…But the fundamental problem is that LLMs don’t get better over time the way a human would. The lack of continual learning is a huge, huge problem. The LLM baseline at many tasks might be higher than an average human’s. But there’s no way to give a model high-level feedback. You’re stuck with the abilities you get out of the box. You can keep messing around with the system prompt. In practice this just doesn’t produce anything even close to the kind of learning and improvement that human employees experience.

The reason humans are so useful is not mainly their raw intelligence. It’s their ability to build up context, interrogate their own failures, and pick up small improvements and efficiencies as they practice a task.

How do you teach a kid to play a saxophone? You have her try to blow into one, listen to how it sounds, and adjust. Now imagine teaching saxophone this way instead: A student takes one attempt. The moment they make a mistake, you send them away and write detailed instructions about what went wrong. The next student reads your notes and tries to play Charlie Parker cold. When they fail, you refine the instructions for the next student.

This just wouldn’t work. No matter how well honed your prompt is, no kid is going to learn how to play the saxophone just from reading your instructions. But this is the only modality we as users have to ‘teach’ LLMs anything…

…When we do solve continuous learning, we’ll see a huge discontinuity in the value of the models. Even if there isn’t a software only singularity (with models rapidly building smarter and smarter successor systems), we might still see something that looks like a broadly deployed intelligence explosion. AIs will be getting broadly deployed through the economy, doing different jobs and learning while doing them in the way humans can. But unlike humans, these models can amalgamate their learnings across all their copies. So one AI is basically learning how to do every single job in the world. An AI that is capable of online learning might functionally become a superintelligence quite rapidly without any further algorithmic progress…

…But here are the timelines where I’d take a 50/50 bet:

  • AI can do taxes end-to-end for my small business as well as a competent general manager could in a week: including chasing down all the receipts on different websites, finding all the missing pieces, emailing back and forth with anyone we need to hassle for invoices, filling out the form, and sending it to the IRS: 2028
    I think we’re in the GPT 2 era for computer use. But we have no pretraining corpus, and the models are optimizing for a much sparser reward over a much longer time horizon using action primitives they’re unfamiliar with. That being said, the base model is decently smart and might have a good prior over computer use tasks, plus there’s a lot more compute and AI researchers in the world, so it might even out. Preparing taxes for a small business feels like it is for computer use what GPT 4 was for language. It took 4 years to get from GPT 2 to GPT 4. Just to clarify, I am not saying that we won’t have really cool computer use demos in 2026 and 2027 (GPT-3 was super cool, but not that practically useful). I’m saying that these models won’t be capable of end-to-end handling a week-long and quite involved project that involves computer use.
  • AI learns on the job as easily, organically, seamlessly, and quickly as a human, for any white collar work. For example, if I hire an AI video editor, after six months, it has as much actionable, deep understanding of my preferences, our channel, what works for the audience, etc as a human would: 2032
    While I don’t see an obvious way to slot in continuous online learning into current models, 7 years is a long time! GPT 1 had just come out this time 7 years ago. It doesn’t seem implausible to me that over the next 7 years, we’ll find some way for models to learn on the job.

You might react, “Wait you made this huge fuss about continual learning being such a handicap. But then your timeline is that we’re 7 years away from what would at minimum be a broadly deployed intelligence explosion.” And yeah, you’re right. I’m forecasting a pretty wild world within a relatively short amount of time.

AGI timelines are very lognormal. It’s either this decade or bust. (Not really bust, more like lower marginal probability per year – but that’s less catchy.) AI progress over the last decade has been driven by scaling the training compute of frontier systems (over 4x a year). This cannot continue beyond this decade, whether you look at chips, power, or even the fraction of raw GDP used on training. After 2030, AI progress has to mostly come from algorithmic progress. But even there the low-hanging fruit will be plucked (at least under the deep learning paradigm). So the yearly probability of AGI craters.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (the company behind Waymo), and Tesla. Holdings are subject to change at any time.

Does Grab Holdings’ Recent Convertible Note Offering Make Sense?

Management teams that can make use of opportune pricing of stocks and debt can greatly increase the returns of shareholders.

Grab Holdings (NASDAQ: GRAB) recently announced that it would be raising cash through a convertible note offering. This came as a surprise to investors as Grab still has lots of cash on its balance sheet.

But if you look at recent history, it is not uncommon to see companies raise cash when the cost of capital is relatively low, even when they have sufficient cash on their balance sheets.

Companies such as Zoom Communications (NASDAQ: ZM) and Tesla (NASDAQ: TSLA) raised cash through secondary offerings in 2021 when their stock prices went hyperbolic.

With stock prices rising to new all-time highs again, we could potentially see more companies taking advantage of favourable market conditions to raise cheap capital. With that in mind, I thought it would be a good time to share some quick thoughts on such capital raises.

Understanding cost of capital

The question of whether a company should raise capital boils down to whether the returns earned on the capital exceed the cost of capital.

But there is a lot of confusion over what the cost of capital is. For debt issuance, the cost of capital is simply the interest that is paid on the debt. For equity issuance, the cost of capital is a lot more complicated.

There are a few schools of thought when it comes to calculating an equity’s cost of capital. I like to keep things simple – and the simplest way to think about it is by assessing the impact on future returns to shareholders on a per-share basis. For instance, if a company with 1,000 shares outstanding needs to issue 300 new shares, the cost of capital is 30% of the company. To make the share issuance worthwhile, the company needs to ensure that the money raised will be able to increase the future stream of cash returned to shareholders by at least 30% – enough to keep per-share returns at least flat after dilution.

So a company that is projected to return $1 per share to shareholders for eternity will require the cash that is raised to lift that figure to at least $1.30 on the pre-issuance share count to justify a share issuance that dilutes shareholders by 30% ($1.30 × 1,000 shares = $1,300 in total returns, which works out to exactly $1.00 across the enlarged base of 1,300 shares).
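To make the arithmetic concrete, here is a minimal Python sketch of the per-share test described above, using the hypothetical numbers from the example (1,000 shares outstanding, 300 new shares issued, $1 per share returned):

```python
# Minimal sketch of the per-share cost-of-capital test described above.
# Numbers mirror the hypothetical example in the text.

def required_growth(new_shares: int, existing_shares: int) -> float:
    """Growth in total shareholder returns needed to keep per-share returns flat."""
    return new_shares / existing_shares

def per_share_return(total_cash: float, shares: int) -> float:
    """Cash returned to shareholders, expressed per share."""
    return total_cash / shares

existing, issued = 1_000, 300
growth_needed = required_growth(issued, existing)   # 0.30, i.e. 30%

# Before: $1,000 returned across 1,000 shares -> $1.00 per share.
before = per_share_return(1_000, existing)

# After issuing 300 shares, total returns must rise 30% (to $1,300)
# just to keep per-share returns at $1.00 across 1,300 shares.
after = per_share_return(1_000 * (1 + growth_needed), existing + issued)

print(growth_needed, before, after)  # 0.3 1.0 1.0
```

If the newly raised cash can grow total returns by more than 30%, existing shareholders come out ahead despite the dilution; anything less and the issuance destroys per-share value.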

Does Grab’s issuance make sense?

With this in mind, let’s take a look at Grab’s recent note offering. Last month, Grab announced that it would be raising US$1.5 billion through a zero coupon convertible note offering. 

Convertible note offerings are debt offerings in the sense that the money needs to be paid back. But because they are convertible, these notes can potentially be turned into equity. In Grab’s convertible note offering, the debt can be turned into equity if its stock price trades above the conversion price of US$6.55. If the conversion happens, Grab does not need to pay back cash to the note holders, but the new shares will dilute existing shareholders.

Let’s assume that all these notes will be turned into equity. As of end-2024, Grab had a fully diluted share count of 4.3 billion shares (including warrants, unvested restricted stock units, and options). The note offering, if converted to shares, will result in 229 million new shares being created, which means 5.3% dilution. In other words, for the convertible note offering to make sense for Grab, it needs to use the proceeds to increase its future cash returned to shareholders by at least 5.3% per share. 

This can be done in two ways: (1) Grow the cash generated by the company by more than 5.3% or (2) decrease the share count by more than 5.03% (a 5.03% reduction in share count will lead to 5.3% per share growth – you can do the math)
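As a rough sanity check on the figures above, here is a short Python sketch of the note-conversion dilution and the offsetting buyback it implies. The inputs are the figures quoted in the text; small differences from the rounded percentages above are rounding artifacts:

```python
# Sketch of the dilution arithmetic for Grab's convertible note,
# using the figures quoted in the text (all values approximate).

notes_raised = 1.5e9        # US$1.5 billion raised via the notes
conversion_price = 6.55     # US$6.55 conversion price per share
diluted_shares = 4.3e9      # fully diluted share count, end-2024

new_shares = notes_raised / conversion_price   # ~229 million new shares
dilution = new_shares / diluted_shares         # ~5.3% dilution

# A buyback that shrinks the share count by x produces 1/(1-x) - 1
# per-share growth; the reduction needed to offset the dilution is:
required_buyback = 1 - 1 / (1 + dilution)      # ~5% reduction in share count

print(f"{new_shares/1e6:.0f}M new shares, {dilution:.1%} dilution, "
      f"{required_buyback:.2%} buyback needed")
```

The key relationship is the asymmetry in the last step: because per-share figures are a ratio, a roughly 5% cut in the share count is enough to offset the 5.3% per-share dilution from conversion.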

Grab mentioned that it could potentially use the cash to buy back shares. If it manages to do so at the current share price of around US$4.70, the company will be able to buy back around 319 million shares, or around 7.0% of its fully diluted share count (this includes the 5.3% dilution from the conversion of the convertible notes). This would be a massive win for the company. In essence, Grab would be able to reduce its share count even after the conversion of the convertible notes to shares, simply by using the cash raised to buy back its shares at current prices.

Grab doing a massive buyback may not be far-fetched, as the company also announced, alongside its note offering, that it is buying back around US$273.5 million of its shares at US$4.68 each from buyers of the notes.

Bottom line

As investors, it is challenging to assess whether equity raises make sense or not. The theory mentioned above may be simple but in most cases, there are many moving parts.

Cash is also fungible, and we usually do not know where the capital went. If the company had not raised cash, which aspect of its expenses or investments would it have cut?

As an investor, instead of trying to assess where the money went, one thing that we can do is to dissect whether management teams are raising capital at opportune times; opportune times are when stock prices are high or when interest rates are low. This is when the cost of capital is the cheapest. Likewise, companies should be buying back stock when stock prices are low and holding back on debt issues when interest rates are high.

As investors, owning a strong business is one thing, but we also need management teams that are savvy with capital allocation and capital raising. Management teams that can make use of opportune pricing of stocks and debt can greatly increase the returns of shareholders.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have a vested interest in Tesla Inc. Holdings are subject to change at any time.

What We’re Reading (Week Ending 06 July 2025)


Here are the articles for the week ending 06 July 2025:

1. Etched is Making the Biggest Bet in AI – Etched team

We’ve spent the past two years building Sohu, the world’s first specialized chip (ASIC) for transformers (the “T” in ChatGPT).

By burning the transformer architecture into our chip, we can’t run most traditional AI models: the DLRMs powering Instagram ads, protein-folding models like AlphaFold 2, or older image models like Stable Diffusion 2. We can’t run CNNs, RNNs, or LSTMs either.

But for transformers, Sohu is the fastest chip of all time. It’s not even close.

With over 500,000 tokens per second in Llama 70B throughput, Sohu lets you build products impossible on GPUs. Sohu is an order of magnitude faster and cheaper than even NVIDIA’s next-generation Blackwell (B200) GPUs…

…No one has ever built an algorithm-specific AI chip (ASIC). Chip projects cost $50-100M and take years to bring to production. When we started, there was no market.

Suddenly, that’s changed:

  • Unprecedented Demand: Before ChatGPT, the market for transformer inference was ~$50M, and now it’s billions. All big tech companies use transformer models (OpenAI, Google, Amazon, Microsoft, Facebook, etc.).
  • Convergence on Architecture: AI models used to change a lot. But since GPT-2, state-of-the-art model architectures have remained nearly identical! OpenAI’s GPT-family, Google’s PaLM, Facebook’s LLaMa, and even Tesla FSD are all transformers.

When models cost $1B+ to train and $10B+ for inference, specialized chips are inevitable. At this scale, a 1% improvement would justify a $50-100M custom chip project.

In reality, ASICs are orders of magnitude faster than GPUs. When bitcoin miners hit the market in 2014, it became cheaper to throw out GPUs than to use them to mine bitcoin…

…We believe in the hardware lottery: the models that win are the ones that can run the fastest and cheapest on hardware. Transformers are powerful, useful, and profitable enough to dominate every major AI compute market before alternatives are ready:

  • Transformers power every large AI product: from agents to search to chat. AI labs have spent hundreds of millions of dollars in R&D to optimize GPUs for transformers. The current and next-generation state-of-the-art models are transformers.
  • As models scale from $1B to $10B to $100B training runs in the next few years, the risk of testing new architectures skyrockets. Instead of re-testing scaling laws and performance, time is better spent building features on top of transformers, such as multi-token prediction.
  • Today’s software stack is optimized for transformers. Every popular library (TensorRT-LLM, vLLM, Huggingface TGI, etc.) has special kernels for running transformer models on GPUs. Many features built on top of transformers aren’t easily supported in alternatives (ex. speculative decoding, tree search).
  • Tomorrow’s hardware stack will be optimized for transformers. NVIDIA’s GB200s have special support for transformers (TransformerEngine). ASICs like Sohu entering the market mark the point of no return. Transformer killers will need to run on GPUs faster than transformers run on Sohu. If that happens, we’ll build an ASIC for that too!…

…Isn’t inference bottlenecked on memory bandwidth, not compute?

Actually, for modern models like Llama-3, no!

Let’s use NVIDIA and AMD’s standard benchmark: 2048 input tokens and 128 output tokens. Most AI products have much longer prompts than completions (even a new Claude chat has 1,000+ tokens in the system prompt).

On GPUs and on Sohu, inference is run in batches. Each batch loads all of the model weights once, and re-uses them across every token in the batch. Generally, LLM inputs are compute-bound, and LLM outputs are memory-bound. When we combine input and output tokens with continuous batching, the workload becomes very compute bound…

…We can scale up the same trick to run Llama-3-70B with 2048 input tokens and 128 output tokens. Have each batch consist of 2048 input tokens for one sequence, and 127 output tokens for 127 different sequences.

If we do this, each batch will require about (2048 + 127) × 70B params × 2 FLOPs per param = 304 TFLOPs of compute, while only needing to load 70B params × 2 bytes per param = 140 GB of model weights and about 127 × 64 × 8 × 128 × (2048 + 127) × 2 × 2 = 72 GB of KV cache. That’s far more compute than memory bandwidth: an H200 would need 6.8 PFLOPS of compute in order to max out its memory bandwidth. And that’s at 100% utilization – if utilization were 30%, you’d need 3x more.
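As a sanity check on the passage's arithmetic, here is a short sketch of mine (not from the article; the ~4.8 TB/s H200 memory bandwidth is my assumption from NVIDIA's public specs):

```python
# Arithmetic intensity of the Llama-3-70B batch described above.
params = 70e9
tokens = 2048 + 127                    # input tokens plus output slots per batch

flops = tokens * params * 2            # 2 FLOPs per param per token ~= 304.5 TFLOPs
weight_bytes = params * 2              # fp16 weights ~= 140 GB
kv_bytes = 127 * 64 * 8 * 128 * tokens * 2 * 2   # KV cache ~= 72 GB

intensity = flops / (weight_bytes + kv_bytes)    # ~1,434 FLOPs per byte moved

h200_bandwidth = 4.8e12                # bytes/s (assumed H200 spec)
compute_to_saturate = intensity * h200_bandwidth # ~6.9e15 FLOP/s, close to the quoted 6.8 PFLOPS
```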

Since Sohu has so much compute with very high utilization, we can run enormous throughputs without bottlenecking on memory bandwidth…

…On GPUs and TPUs, software is a nightmare. Handling arbitrary CUDA and PyTorch code requires an incredibly complicated compiler. Third-party AI chips (AMD, Intel, AWS, etc.) have together spent billions on software to little avail.

But since Sohu only runs transformers, we only need to write software for transformers!

Most companies running open-source or internal models use a transformer-specific inference library like TensorRT-LLM, vLLM, or HuggingFace’s TGI. These frameworks are very rigid – while you can tweak model hyperparameters, changing the underlying model code is not really supported. But this is fine – since all transformer models are so similar (even text/image/video ones), tweaking the hyperparameters is all you really need.

2. Lots More on What’s Going On in Iran’s Markets (Transcript Here) – Tracy Alloway, Joe Weisenthal, and Maciej Wojtal

Maciej: If I can just comment on one thing, because the way you introduced Iran is the perfect way to show the country. It’s the size of Turkey in terms of population, and actually geographical size as well. But if you compare the economies of Turkey and Iran, Iran’s is around five times smaller. And if you look at the composition of the economy, Turkey has no natural resources, so they have to import all the energy commodities they consume. So Iran has a similar amount of potential non-commodity GDP that it could grow into – from the current, let’s say, $250 billion to the $1.1 trillion GDP that Turkey has. But on top of this, Iran has resources – if you combine gas and oil, they are bigger than Saudi Arabia’s, and Saudi Arabia is another, I think, $1.3 trillion economy. This is a good way to frame Iran: it’s a big country that should really have a much bigger economy. Because of sanctions and various other reasons, it’s been underdeveloped. But the scale of this underdevelopment is like 10x.

Tracy: And because of the sanctions we can’t actually go and look up what’s happening in Tehran’s stock market. So why don’t you give us an overview of what it’s been like for the past week given geopolitical events?

Maciej: So for the past week it was difficult for everyone to check what was going on in Iran, because the internet was basically shut down. I could communicate with my team on the ground in Tehran once a day when they had signal – sometimes it was WhatsApp that was working, sometimes Telegram. But it was maybe once or twice per day. And what was going on in the market was simply nothing. The stock market hasn’t opened. The exchange of fire between Iran and Israel happened on a Friday, which is the weekend in Iran, and the following Saturday was an important religious holiday, so the market, and actually the whole economy, was supposed to be closed anyway. Economic activity and the market were supposed to resume on Sunday, but they didn’t open. So the stock market, and pretty much most of the currency market, has been closed for the last two weeks…

…Maciej: For example right now, when the stock market and the currency market were shut down, you could still track what was going on with the exchange rate of the Iranian rial versus the dollar, either on Telegram chats or on cryptocurrency exchanges. There is a liquid market in stablecoins versus the Iranian rial inside Iran – liquidity was limited during the last period anyway, but we could see the changes. So we knew that $1 before the war was at around 830,000 rials, then it went up roughly 15% to 950,000, and now, after the ceasefire, it’s back down at 850,000. You can track the market, and you can actually make transactions depending on the vol and the liquidity. To be honest, when I saw the exchange rate move 15% during a war that a lot of commentators were saying could turn into a massive worldwide conflict – 15%, in a country like Iran, I would say is your usual volatility on the currency market…

…Maciej: In Tehran, a lot of residents were just relocating out of Tehran. Tehran is a big city – 12 million people – and they were moving mainly north to some smaller cities by the Caspian Sea. You had massive congestion. People were spending hours in traffic jams trying to get out of Tehran. There was not enough petrol at gas stations just because of this peak in demand. You had some petrol rationing.

Then I was asking them: is the economy working or not working? Everything that was non-essential was basically closed. So you couldn’t buy building materials or anything like this. But groceries, pharmaceuticals, gas stations, banks – these were all open and working properly, with some disruptions. For example, if you wanted to buy groceries in the north of Iran, where everyone had just relocated, there were some logistical bottlenecks. Distribution was not fast enough, so you had some shortages just for a little while. With banks, some branches were not operating at 100% capacity. Two banks got hacked – you had cyber attacks on two banks in Iran and one cryptocurrency exchange. The rest of the banking sector was working without any disruptions. You could get cash from any ATM. There were no problems like these…

…Maciej: It’s interesting, because there is very up-to-date, detailed information on Iranian stocks available in Iran. But the majority of this information is not accessible if you’re trying to access it from a computer with an IP address outside of Iran – a lot of it is restricted to Iranian IPs only. You cannot find it anywhere on the whole internet. There is no website that shows the stock market index in dollars. When we send it out to our investors, or just people who want to read news about the stock market in Iran, we are the only source of this information. This is quite amazing. It’s a country of 90 million people, with a stock market of 700 companies, and there is no single place on the internet that would show you the one important index…

…Maciej: In terms of oil, it is not really publicly traded. There is one Iranian monopoly, the National Iranian Oil Company, that is responsible for production. I think this is all centralized in one company, and it is held by the government, so it’s not publicly listed. You have some exposure to oil through oil refineries that are listed, but refineries are not sensitive to the price of oil. They are sensitive to the crack spread, which defines their refining margin, so they are not really a proxy for oil prices.

The whole stock market is actually well diversified. You have large sectors such as chemicals – mainly petrochemicals – companies that use natural gas, which is in large supply as a cheap commodity, to produce fertilizers and similar products. This is probably 20% of the stock market.

Then you have steel companies. The largest steel company in the Middle East is in Iran. You have carmakers that produce more than 1 million cars a year, and with the car manufacturers you have all the related industries and suppliers to the car manufacturing businesses. You have banks – financials is an important sector – plus some consumer exposure and some building materials; cement companies have actually been among the best performers over the last few years…

…Maciej: I’ll get back to the potential for GDP in a moment. But the catalyst is absolutely clear. It must be the opening up of Iran as a country, the opening up of the economy, and the US sanctions being lifted. There must be an agreement between the US and Iran. What needs to happen? Some sort of political change. Political attitudes must change on both sides. But to be honest, many analysts were expecting some big dramatic event to happen in Iran before the country could properly open up.

When you look at Iran right now and compare it to, let’s say, even a few years ago when you had negotiations with the US, the biggest problems were always about two things: (1) Iran enriching uranium too much, basically at the wrong level, and (2) Iran’s regional policies – financing proxies from Hezbollah to Hamas, Assad in Syria, and so on. These two things were always the problems they couldn’t negotiate over. When you look at it right now, to a large extent both obstacles are gone…

…Joe: Are there tech companies that trade on the Tehran stock market?

Maciej: There are tech companies. The ones that are listed are related to enterprise software – like Oracle, or Germany’s SAP. But you have privately held companies that would like to IPO and are just waiting for approval from the regulator, and these are quite amazing companies. You have Snapp, which is like Uber – but Snapp has more rides in Tehran than Uber has in any city in the world. It’s a really world-class company. You have Digikala, which is basically like Amazon – also a large company, one of the biggest success stories.

3. Stablecoins might revolutionise payments, but what if they don’t? – Bryce Elder

That leaves payments:

While in a theoretical tokenized/blockchain based world, stablecoin-based payments would be faster, more efficient and interoperable, in practice at the moment these stablecoin based payments mostly start and finish with fiat, thus requiring on/off-ramps. This on/off ramp requirement adds significant friction/cost to the use of stablecoins for payments, making it less attractive compared to traditional financial systems, in particular if one takes into account the emergence of faster payment rails in the traditional financial system via fintech advancements in recent years. As a result, we find rather unrealistic the expectation of a massive increase in the use of stablecoins in payments. Indeed, our colleagues in US short-term rates research also note that market participants at the front end are skeptical of significant growth in the near term, in part due to the fact that the infrastructure/ecosystem for stablecoins remains underdeveloped. But even if one adopts an optimistic view and assumes, for example, a tenfold increase in the use of stablecoins in payments over the next couple of years, the stablecoin universe would only expand by $15bn x 10 = $150bn.

Stablecoin optimists point to the rapid adoption of the e-CNY, China’s central bank digital yuan, which has grown to a more than Rmb300bn market cap from Rmb13.6bn at the end of 2022. There’s no comparison, JPMorgan says:

First, the digital yuan is a central bank liability and thus it effectively replaces banknotes in circulation. While there does not appear to be a published target share of M0, there have been suggestions that a 10-15% share of M0 is a plausible medium-term goal, which would imply around RMB 1.3-2tr using current M0 levels. By contrast, stablecoins are a form of a tokenized MMF with zero interest, effectively a private sector liability rather than a central bank liability.

Second, the digital yuan does not operate through a fully decentralized blockchain-based ledger. Instead, it operates via a centralized network supervised by the PBoC and competes with other mobile/electronic payment options in China such as Alipay and WeChat Pay.

Then is it better to think of stablecoins as global equivalents of Alipay and WeChat Pay? JPMorgan says no. That fintech payment companies offer collateralised electronic private money on their own platforms hasn’t proven the need for public blockchains; if anything, it proves the opposite:

Alipay/WeChat Pay digital money are private liabilities and are perhaps more similar to bank deposits in that regard which are also private liabilities. The difference between bank deposits and Alipay/WeChat balances is that the latter are backed by reserve funds that in turn hold public liabilities i.e. central bank reserves, while bank deposits are matched on the asset side by a mix of loans and debt securities, though they do have an additional guarantee via deposit protection arrangements.

In our mind, the strong expansion of Alipay and WeChat Pay should be viewed through the lens of a fintech payments revolution over the past decade in China that utilizes and increases the efficiency of traditional banking/financial system networks, rather than through the lens of a blockchain/crypto ecosystem revolution. In fact, it could be argued that the success and continued advancements in payments by fintechs, such as Alipay and WeChat Pay reduce the need for blockchain-based payment systems in the future.

4. Meet Project Rainier, Amazon’s one-of-a-kind machine ushering in the next generation of AI – Kirsteen Rodger

Project Rainier is designed as a massive “EC2 UltraCluster of Trainium2 UltraServers.” The first part refers to Amazon Elastic Compute Cloud (EC2), an AWS service that lets customers rent virtual computers in the cloud rather than buying and maintaining their own physical servers.

The more interesting bit is Trainium2, a custom-designed AWS computer chip built specifically for training AI systems. Unlike the general-purpose chips in your laptop or phone, Trainium2 is specialized for processing the enormous amounts of data required to teach AI models how to complete all manner of different and increasingly complex tasks—fast.

To put the power of Trainium2 in context: a single chip is capable of completing trillions of calculations a second. If, understandably, that’s a little hard to visualize: consider that it would take one person more than 31,700 years to count to one trillion. A task that would require millennia for a human to complete can be done in the blink of an eye with Trainium2…
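The 31,700-year figure checks out if you assume one number counted per second:

```python
# How long counting to one trillion takes at one count per second.
seconds_per_year = 365.25 * 24 * 3600     # ~31.6 million seconds in a year
years_to_count = 1e12 / seconds_per_year  # ~31,700 years
```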

…Traditionally, servers in a data center operate independently. If and when they need to share information, that data has to travel through external network switches. This introduces latency (i.e., delay), which is not ideal at such a large scale.

AWS’s answer to this problem is the UltraServer. A new type of compute solution, an UltraServer combines four physical Trainium2 servers, each with 16 Trainium2 chips. They communicate via specialized high-speed connections called “NeuronLinks.” Identifiable by their distinctive blue cables, NeuronLinks are like dedicated express lanes, allowing data to move much faster within the system and significantly accelerating complex calculations across all 64 chips.

When you connect tens of thousands of these UltraServers and point them all at the same problem, you get Project Rainier—a mega “UltraCluster.”…

…Communication between components happens at two critical levels: the NeuronLinks provide high-bandwidth connections within UltraServers, while Elastic Fabric Adapter (EFA) networking technology (identified by its yellow cables) connects UltraServers inside and across data centers. This two-tier approach maximizes speed where it’s most needed while maintaining the flexibility to scale across multiple data center buildings.

5. OpenAI has started to form a “moat” – Rihard Jarc

I think anyone who follows the AI space knows about OpenAI and, more specifically, about ChatGPT. Even outside of investors and tech enthusiasts, “ChatGPT” has become a verb, much as “Google” did. What is even more surprising is that, despite ChatGPT being out for more than two years, adoption hit another acceleration point just recently, at the end of March, when the Ghibli photo trend emerged on ChatGPT:

The number of MAUs doubled from 400 million to 800 million in a matter of a few weeks. Looking at the adoption curves of other widely adopted technology platforms, such as TikTok, Facebook, and Instagram, ChatGPT is on a slope of its own.

Another factor to consider is that this is not just an “I must try it” moment. Looking at the number of minutes a user spends on ChatGPT, the figure is constantly growing and has now reached the 29-minute daily mark.

Remember that at the start of ChatGPT and LLMs, many critics said that people tried it, had fun, and then didn’t use it again. This trend shows that that is not the case and that with each enhanced model version and UX improvement, the stickiness factor becomes bigger…

…OpenAI also now has serious hardware ambitions. In late May of this year, it acquired the startup of Jony Ive, the famous former Apple designer, for nearly $6.5 billion; Ive will now lead OpenAI’s hardware efforts. What is now almost a consensus opinion among big tech leaders is that AI will unlock the next computing platform, one that is not tied to the smartphone.

And if you listen to those conversations, everyone is calling for a similar device: one that will be more like a companion system and less dependent on a screen – a proactive assistant that runs even when you don’t ask it to.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (the company behind AlphaFold), Amazon (the company behind AWS), Apple, Meta Platforms (the company behind Facebook and Instagram), and Tencent (the company behind WeChat Pay). Holdings are subject to change at any time.

Can The (Micro)Strategy Bitcoin Playbook Last Forever?

Strategy’s amazing financial engineering.

Strategy (recently renamed from Microstrategy) is one of the top performing companies in the US stock market in recent years. The stock price of the highly controversial “Bitcoin holding company” is up 210% in the last year alone and up a staggering 3,300% in the last five years.

One reason why Strategy has done so well is because it is one of the best at raising cheap capital. How does this work?

Self-fulfilling cycle

Strategy’s Bitcoin playbook is pretty simple and yet quite ingenious. The “Bitcoin holding company” basically takes advantage of its stock price trading at a premium to book value by selling new shares for cash. 

Imagine a company that has a book value of $1 million and has 1 million shares. Each share, hence, has a book value of $1. But let’s say that for some reason, someone is willing to buy the shares at $2 each. The company can take advantage of this and sell new shares to this buyer. Let’s say the company sells 1 million new shares for $2 million. After the share issuance, the company now has 2 million shares outstanding and $3 million in book value. The book value per share is also now magically $1.50. The process can become a self-fulfilling cycle where the company raising shares above book value actually leads to the book value per share increasing.
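The worked example above, as a few lines of Python:

```python
# Selling new shares above book value raises book value per share.
book_value = 1_000_000       # $1m of book value
shares = 1_000_000           # 1m shares -> $1.00 of book value per share

issue_price = 2.00           # a buyer pays 2x book
new_shares = 1_000_000
book_value += new_shares * issue_price   # +$2m of cash on the balance sheet
shares += new_shares

bvps = book_value / shares   # $3m / 2m shares = $1.50 per share
```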

This is exactly what Strategy has done. Its book value per share has risen by using this simple financial engineering trick. But Strategy then also uses proceeds from its share issuance to buy Bitcoin. If Bitcoin’s price rises, Strategy’s book value per share will increase yet again.

In 2023, Strategy raised US$2.0 billion from issuing shares. In 2024, the company raised an even larger sum of US$16.3 billion from ordinary share sales. As of its last quarterly earnings update for the first quarter of 2025, it has raised another US$5.7 billion through sales of common shares and preferred shares.

But Strategy has gone yet one step further. The company has also raised capital through debt markets to buy more Bitcoin, in effect leveraging up its balance sheet and increasing its exposure to Bitcoin. Strategy’s total debt has increased from US$2.2 billion in 2023 to US$7.2 billion in 2024, and US$8.1 billion in the first quarter of 2025.

What the bulls believe

Investors who are bullish on Strategy believe that this virtuous cycle can continue forever. They believe that Strategy’s premium to book value will exist for many years as there are sufficient buyers of the stock who believe in this self-fulfilling cycle. 

If true, Strategy will become a compounding machine simply by issuing new shares at a premium and juicing its book value per share. There are also the Bitcoin purchases, which add another source of growth for Strategy’s book value per share.

But as I mentioned earlier, there’s also leverage at play, because Strategy has used debt to buy more Bitcoin than it can actually afford. Strategy’s book value will therefore swing more than Bitcoin’s price: if Bitcoin’s price rises, Strategy’s book value will go up faster.

When will the party end?

I applaud Strategy’s playbook. But there are some risks that shareholders need to be wary of. The obvious one is if Bitcoin’s price falls. When this happens, Strategy’s book value per share will fall faster because of the leveraged nature of the company’s balance sheet. As of 31 March 2025, Strategy had US$43.5 billion worth of Bitcoin but only US$32.2 billion in equity. If Bitcoin’s price falls by 50%, Strategy’s book value would drop to US$10.5 billion, or roughly a two-thirds fall. For Strategy to enter negative book value territory, Bitcoin would need to fall by around 74% from the 31 March price.

The other major risk is if stock market participants decide that Strategy’s stock simply does not deserve to trade at a premium to book value – in other words, buyers of the stock only want to pay book value for the shares. This throws Strategy’s ability to raise capital cheaply out the window. It also means that Strategy’s shareholders who first invested at a premium to book value could face a heavy loss.

As of Bitcoin’s price at the time of writing, Strategy’s book value is around US$38 billion. But based on the company’s current stock price, its market capitalisation is around US$108 billion, or a roughly 180% premium to book value. Even if Bitcoin’s price remains stable, a reversion of Strategy’s stock to no premium over book value would mean a painful 64% fall in the stock price.
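These downside scenarios are easy to verify with the article's figures (a sketch; equity absorbs Bitcoin's fall dollar-for-dollar because the debt is fixed):

```python
btc = 43.5           # US$ billions of Bitcoin held, 31 March 2025
equity = 32.2        # US$ billions of book value

equity_after_50pct = equity - 0.5 * btc   # ~10.45: roughly a two-thirds fall in book value
wipeout_fall = equity / btc               # ~0.74: a ~74% Bitcoin fall zeroes book value

market_cap, book = 108.0, 38.0            # US$ billions at the time of writing
premium = market_cap / book - 1           # ~1.84, i.e. roughly a 180% premium
fall_to_book = 1 - book / market_cap      # ~0.648: a ~64% stock fall if the premium vanishes
```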

For now, momentum and the current environment suggest that market participants are unlikely to bid down Strategy’s stock price so drastically so soon. But things can change during “risk-off” environments, when market participants become more cautious.

A double whammy for Strategy shareholders can happen if both Bitcoin’s price falls and Strategy’s premium to book value narrows.

The bottom line

Whatever you think about Michael Saylor and his Bitcoin views, he certainly has mastered the dark arts of financial manoeuvring. In most assets, fundamentals drive price. Saylor has managed to turn the script around, making price drive fundamentals.

But this comes with risks. If Strategy’s stock price collapses, the virtuous engine stops running. Saylor seems to be wary of these risks. While Strategy continues to issue shares to buy Bitcoin, Saylor is constantly selling his Strategy shares.

Despite the risks, market participants seem hungry for more of such companies. Besides Strategy, there are now a number of copycats around the world, such as Metaplanet in Japan, which has seen a meteoric rise in its share price this year. Its stock price is at an eye-popping 7 times book value.

For such companies, the party will end when there are no more greater fools to sell to (both for Bitcoin and for new shares of the company). Whether – or more likely, when – that happens is anybody’s guess. Just be careful not to be the last one holding the bag.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 29 June 2025)


Here are the articles for the week ending 29 June 2025:

1. China’s rare earth choke hold – Amber Zhang

Rare earths comprise a group of 17 elements, typically categorized into light, medium, and heavy groups. These materials are indispensable for making high-performance magnets used in both civilian and military technologies. Among them, medium and heavy rare earths — critical for aerospace, defense, and other cutting-edge sectors — are particularly scarce and difficult to source.

Don’t be fooled by their size. Rare earth magnets are no larger than a stick of chewing gum, yet pack magnetic force 15 times stronger than traditional iron magnets. Heat-resistant and cost-efficient, they are essential components in electric motors — not only in EVs and hybrid vehicles, but also in robots, drones, offshore wind turbines, missiles, and fighter jets…

…According to the International Energy Agency, China accounted for over 60% of global rare earth mining output in 2023 — and an even more dominant 92% of the world’s refining capacity…

…Between 2020 and 2023, 70% of the rare earth compounds and metals used in the U.S. were imported from China, according to the U.S. Geological Survey…

…Ford recently halted production for a week at its Chicago plant due to rare earth shortages, affecting its Explorer SUV line. In early June, the Motor & Equipment Manufacturers Association (MEMA), along with General Motors, Toyota, Volkswagen, Hyundai, and other major automakers, issued a joint letter warning that without a stable supply of rare earth magnets, production of essential components could come to a standstill…

…The U.S. once boasted the world’s largest rare earth magnet industry. Its Mountain Pass mine in California had supplied most of the global market since 1965. But in 1998, the mine was shut down following a pipeline leak that released trace heavy metals and radioactive materials into the Mojave Desert. Chinese firms made three separate attempts to acquire the mine — all blocked by U.S. authorities.

Alarmed by Japan’s supply crisis, the Obama administration supported Hitachi Metals’ investment in a rare earth magnet plant in North Carolina, operational from 2011 to 2013. But the costs were prohibitively high compared to China’s vertically integrated, state-backed operations in cities like Ganzhou. U.S. buyers, ultimately unwilling to pay a “made-in-America premium,” continued sourcing from Chinese suppliers. In 2020, Hitachi shut down the facility and mothballed its equipment…

…Back in 2010, Mountain Pass — the U.S.’s only remaining rare earth mine — received over $1 billion in Pentagon funding just to stay afloat. But lacking commercial competitiveness, it shut down again the following year. In 2017, MP Materials acquired the site, restarted mining operations, and began exporting raw ore to China for processing. The company now plans to begin producing rare earth magnets at a new facility in Texas by the end of this year. Still, even at full capacity, its annual output would match just a single day of production in China…

…Domestically, the Round Top project in Texas has emerged as a cornerstone of America’s rare earth strategy. Operated by U.S. Rare Earths Inc., the site holds estimated reserves of 130,000 metric tons across 16 different elements and aims to supply 20% of U.S. rare earth demand by 2027. The company is also building a $100 million magnet manufacturing facility in Oklahoma, which is expected to process up to 2,000 metric tons of rare earth materials annually.

Meanwhile, the U.S. Department of Energy has launched the ReElement initiative, allocating $50 million to recover up to 90% of rare earth elements from electric vehicle batteries by 2025. But these recycling systems have yet to achieve commercial scale and remain economically marginal.

The National Defense Authorization Act for fiscal year 2025 earmarks $1.2 billion for strategic stockpiling and $350 million for domestic development. These funds are being channeled into American firms like MP Materials, aimed at accelerating the construction of a domestic rare earth processing infrastructure…

…According to the Center for Strategic and International Studies (CSIS), the Pentagon has invested over $439 million since 2020 to develop a rare earth industrial base — but most U.S. production remains in its early stages.

RAND Corporation estimates that it would take at least 10 years and $10–15 billion in investment to establish a fully independent domestic rare earth supply chain, factoring in infrastructure, permitting, environmental compliance, and workforce training.

2. The Great Decoupling (or Why Your Clicks Are Down and Impressions Up) – Ryan Law

Impressions are increasing because AI Overviews now give companies two chances to log an impression for a given keyword: once as a “traditional” blue link in the search results, and again as a citation in an AI Overview…

…At the same time, clicks are decreasing because AI Overviews are increasing zero-click searches. Searchers can get all the information they need to resolve their query without leaving the search results page.

When we studied this at scale across 300,000 keywords, we found that the presence of an AI Overview correlates with a 34.5% reduction in clickthrough rate…

…While our clicks are tanking because of AI search, recent data from Patrick Stox shows that—at least on the Ahrefs website—visits from AI search convert 23x better than visits from traditional search.

The way content marketing functions is very different, but guess what? There are more potential customers in the world, more demand for products and services. That is the real determinant of growth, not clicks to a blog. We’ll find different ways to reach those people.

3. A Cheeky Pint with Meta CFO Susan Li (Transcript Here) – John Collison and Susan Li

Susan: When I think about it, I go back to when I was IC4 and I joined in 2008, I’m building these first revenue models. I’d gone from banking – which is super organized, super structured, they don’t even need to know your name, they just train you to immediately figure out how to find the backup to everything, so that two years later someone else can do this and so on and so forth – to there was no infrastructure. So I’m hunting down the exact engineer who has built some ad server so that he can tell me what the parameters mean. And of course, the next time he changes them, he’s not gonna tell me, and I have to go find him again, and he’s like, “Oh, she’s coming. Don’t look her way.” A few months in, I got a meeting invite for power users of SQL and I thought, “My gosh. I’d been getting a good amount of feedback about how things could be better, and here was finally this moment of recognition that – I didn’t even know how to write queries in SQL when I started.” I show up to this meeting and there are five other people and the meeting organizer tells us that we have been called because we are the five users of SQL who consume too much power. And we have just been churning with our massive join tables through the…

John: I love that you were all called to the principal’s office.

Susan: Basically, yes. But I often think back to this because this was a data analyst who didn’t know any of us that well, but had just generated his reports of who’s using the most infrastructure and looked at the top people on the list and thought, “Okay, this person in finance, it doesn’t make sense why she’s the third highest person on the list,” and called us in and then taught us to write better queries. No one I think specifically told him to do that. I think it’s a little awkward when you call people in to do this, but he did it because it would make us all better at our jobs…

…Susan: So, there’s this very measurable part of the company and we generally try to trade those things off against each other when we’re evaluating things within that bucket and we generally try to fund the things that are positive ROI. I’m usually the person who’s trying to make sure we understand, for every individual experiment, the expected return is something, but that’s where we are on the curve today, but what about 50 experiments later? Does the curve still have the same slope?

Then there’s a set of things which we constrain more in terms of, there’s some envelope of investment that we’re willing to make that’s not in this really ROI-driven bucket. It is very difficult to pencil out what the annual revenue forecast for Reality Labs is gonna look like over the next 20 years. For bets like that, we invert the problem. But when we talk about the return on the investment, the question that we pose, as a finance organization, to Mark – and make sure that Mark and the board understand – is what does this have to be worth to pencil out at the end? Does that pass the sanity check, the intuition, about what the size of these markets can be based on maybe some comparisons to markets that exist today, but of course in another 10, 20 years, you expect that the world will look different and maybe those markets should be bigger or smaller for whatever reason. That’s the guide, which is, for this thing to succeed at the rate at which we’re investing, it needs to be worth this at the end and does that make sense?…

…Susan: I am not a tech visionary. There are many things I’m good at, but envisioning the future of the world and what I want it to be like is not one of them. I’m a very happy beneficiary of the technology built by the world around me.

But Mark very much has a vision for what he wants that world to be. And for him, I think the strategic imperative is that we have to be building these next states of the world for us to again, be a good business, but also just be a compelling company that builds technology and puts it out in the world and builds incredible experiences for people.

I remind people in the finance organization all the time, we are very good at skeptically evaluating each bet. But the point is not that we have to look at every bet and be like, “This bet is going to work.” The point is there is a portfolio of bets, and some of them are going to pay off massively beyond, in fact, what the case on paper looks like when you make the bet. Many of them are going to not work out, but the ones that pay off are gonna more than justify the overall investment strategy or the overall roadmap that you’re building toward. If we just allowed ourselves to nix everything that the paper-case didn’t seem high-confidence, then we would never make a lot of the important bets that have been really important over the history of the company…

…Susan: That is the question that I assume all of my counterparts at these companies and I are all thinking about. For us, there are the drivers of the way we’re investing in capex today. Of course, we have, first of all, just a massively-scaled consumer business and core AI infrastructure that powers all the ranking and recommendations work and so on and so forth. That’s always been a reasonably big number for us, but also because it was getting more mature that we were driving to be more efficient over time. Then now you have, among many of our peers and ourselves, this big investment to train what we all aspire to be, frontier models. If you use those models to build great and scaled consumer experiences, then how much inference compute you’re gonna need on top of that? If compute required continues to scale up in this way forever, then you’re gonna run into some true problems of physics. But hopefully, there will be different kinds of research innovations along the way that will unlock things like being able to distribute the training so you don’t need one extremely large cluster somewhere and that will help with a lot of the energy and other challenges. So there’s some question about what that looks like over time.

Then there’s this question about, “Great, you can build all this capacity, and what do you do with them if it turns out you don’t need as much compute for either training or inference as you thought?” I think a lot of us have different backup use cases. So, up to some point, we would use a lot of compute very happily still, in the core business and what we expect the core business to be, three years from today. But frankly, we’d use more compute in the core business. Now, that doesn’t scale forever. So the real question is what happens in like two years if you’ve built so much compute that you cannot envision a reasonable ROI on the backup use case if what you’re building doesn’t come to fruition. That’s something I think we’re all gonna learn in the next few years…

…Susan: As part of not wanting to miss the boat, we built out enough capacity for Reels but also for future things. We found that we were in fact able to put that capacity towards very good use – exactly as you said. So I do think an interesting question in the future will be allocating compute as a resource. It’s a muscle we’ve built later as a company, because we had gotten very good at allocating headcount as a resource, and headcount’s really easy to account for because you have org charts, you know exactly this person reports to this person, to this person, this person is incontrovertibly working on Facebook Marketplace, for example. GPUs don’t have that property. In fact, you often want to build out your infrastructure for it to be very fungible. Because you need to divert capacity to where – suddenly something has happened in India and you want a lot of compute to be available to be used there. So it’s not like this GPU is labeled for Facebook Marketplace, and this is labeled for – it’s actually quite a bit more difficult to account for where the capacity is being used at any given point in time. That means it’s harder to manage, and it’s harder to create the incentives around whether you’re using GPUs efficiently.

John: You allow people to trade between people and GPUs, right?

Susan: In the budgeting process, we have allowed people to trade. Not too surprisingly, even though you’ll find that groups are often asking for compute, when that particular trade is on offer, people almost never trade for compute for exactly the reason I described, which is that if they get allocated 100 new headcount, there is no chance that 26 of those headcount will accidentally be working for something else.

4. My Trip to Washington to Get in Sync with Republican and Democratic Leaders on the Budget and Debt Situation – Ray Dalio

Everyone I spoke with on both sides agreed that:

  • We are likely to have a big debt-economic crisis if we don’t get the budget deficit down to 3 percent of GDP, so 3 percent should be an agreed-on goal;
  • Getting the deficit to 3 percent will require both spending cuts and tax revenue increases, because if they came from spending cuts or tax revenue increases alone, the cuts or increases would be too big and shocking; and
  • It’s not possible for politicians to say these things publicly even though they believe them, because they would be thrown out of office…

…So, our biggest problem is that our country’s political representatives can’t even say, let alone do, what they need to do to fix our debt issues because their constituents would throw them out of office if they did that. Such is the condition of our political decision-making system.

We discussed my idea of a “3 percent 3-part solution,” which would be to cut the budget deficit to 3 percent of GDP through a mix of spending cuts, tax revenue increases, and interest rate cuts. For example, cutting spending by 4 percent, increasing tax revenue by 4 percent, and lowering the real interest rate by 1 percentage point, so that the adjustments wouldn’t be unbearably large to achieve that 3% deficit goal. The leaders I spoke with said that they’d love to do this or something like it — in fact, they thought it would be wonderful if the “meme” of reducing the deficit in this way took hold in the electorate and there was public pressure to get it done.
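As a rough illustration of how the three levers combine (the spending, revenue, and debt starting values below are assumed for the sketch, not taken from Dalio’s note):

```python
# Stylized economy, GDP normalized to 100 so every figure reads as % of GDP.
# All starting values are assumptions for illustration only.
gdp = 100.0
spending = 23.0          # assumed government spending, % of GDP
revenue = 17.5           # assumed tax revenue, % of GDP
debt = 100.0             # assumed debt held by the public, % of GDP

deficit_before = spending - revenue                 # 5.5% of GDP

spending_after = spending * (1 - 0.04)              # cut spending by 4%
revenue_after = revenue * (1 + 0.04)                # raise tax revenue by 4%
interest_saving = (debt / gdp) * 1.0                # 1 pp lower real rate ~ 1% of GDP

deficit_after = spending_after - revenue_after - interest_saving
print(f"{deficit_before:.1f}% -> {deficit_after:.2f}% of GDP")   # 5.5% -> 2.88%
```

Under these assumed numbers, each lever contributes roughly a percentage point of GDP, which is the intuition behind the argument: any single lever would have to be several times larger on its own to reach the 3 percent target.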

As for where things are likely to go, there won’t be big enough changes to the current proposed budget to change the overall picture this tax year.

5. The Speed of Patience – Paul Higgins

To understand how patient preparation creates decisive speed, I’ll show you three different maps of the same territory that I’ve found practical.

  1. Pace layers reveal where to be patient and where to be urgent, showing how businesses operate across multiple timescales simultaneously, from seasonal fashion to generational culture.
  2. S-curves illuminate when those layers will hit their inflection points, helping you recognize which growth curve you’re actually betting on.
  3. Trust as a leading indicator – what emerges from ongoing interactions across and between layers (employees, communities, customers and processes), the invisible asset that compounds for decades…

…I like Stewart Brand’s pace layering framework for understanding how businesses operate across time. It reveals why this matters so profoundly. In most complex systems, different elements change at different speeds. Fashion moves seasonally, commerce shifts yearly, infrastructure evolves over decades, governance changes generationally, and culture moves so slowly it appears frozen in time…

…Layers don’t exist separately, they form a single, interconnected living system which is sometimes hard to see. We tend to see layers as independent parts to optimize separately, but in living systems, layers are how the whole organism breathes – each rhythm nested within another, each movement part of a larger dance. The fast movements at the surface and slow currents in the depths aren’t separate phenomena but the system’s way of being alive at every scale simultaneously. Speed doesn’t come from stability – they arise together from the coherence of the whole system…

…Apple masters this temporal arbitrage. New iPhone colors arrive every season to satisfy the fashion layer, while annual product cycles drive the commerce layer with reliable predictability. But the iOS ecosystem, which represents their true competitive moat, took twenty years to build in the infrastructure layer, creating switching costs and network effects that compound with each passing year. Their App Store governance evolves with glacial deliberation, each change carefully considered for its long-term implications, while their design philosophy – the cultural layer that infuses everything they create – hasn’t fundamentally changed since Jobs articulated it decades ago. You just have to look at their cumulative cash reserves to see whether they have the capacity to keep it up or not.

Competitors try to destroy Apple’s fashion layer moat and assume that’s the game being played. They miss the insight that Apple’s speed in the fashion layer comes from stability in the infrastructure layer, that the layers aren’t independent but deeply interdependent, with the slow layers enabling the fast ones to move with confidence and clarity…

… In business, you’re never riding just one S-curve. You’re managing a portfolio of them, each operating at different speeds across different layers of your organization. Your product adoption might be hitting exponential growth (measured in months) while your infrastructure build-out is still in early grind (measured in years) and your culture formation hasn’t even begun its curve (measured in decades)…

…Netflix understood this with brutal clarity. In 2010, they were shipping 2 million DVDs daily – a massive operation at the peak of its S-curve. But Reed Hastings saw streaming was at the bottom of its S-curve, barely functional, with terrible selection and constant buffering. While Blockbuster optimized their mature retail model, Netflix deliberately cannibalized their profitable DVD business to ride the next wave. They moved $200 million from DVD operations into streaming content when streaming represented less than 20% of revenue. Today Netflix is worth $240 billion; Blockbuster is a cautionary tale…

…Kerry Group’s transformation from Ireland’s smallest dairy cooperative to a €6.3 billion ingredients empire illustrates how patience creates opportunities invisible to those focused on shorter horizons. Every dairy producer faced the same challenge with whey, the protein-rich liquid left over from cheese-making that represented both a disposal cost and a compliance headache. While the entire industry treated this as expensive waste, Kerry’s leadership recognized something profound: they were looking at two different S-curves operating on completely different timescales.

The dairy business that consumed everyone’s attention was approaching the top of its S-curve, with margins thinning and consolidation inevitable, while the ingredients business hadn’t even begun its exponential climb. For fifteen years, Kerry invested in extraction technology and scientific capabilities while competitors focused on optimizing dairy margins. By the time health consciousness and specialized nutrition exploded into mainstream consciousness, Kerry had spent two decades perfecting protein extraction, understanding molecular structures, and building relationships with food manufacturers who needed exactly these capabilities…

…Warren Buffett’s 2008 moves exemplified how trust operates across all three maps simultaneously. While others mocked Berkshire’s growing cash pile – $40 billion sitting “idle” – he was building in the infrastructure layer (pace layers), preparing for the inevitable down-cycle in financial services’ S-curve, and accumulating trust with every patient year. That cash pile represented more than financial capacity; it was trust crystallized into capital. Every year Buffett didn’t chase returns, every quarter he resisted leverage, every deal he walked away from, he was depositing into an invisible trust account. When 2008 hit, that patient accumulation enabled lightning-fast execution: $8 billion deployed to Goldman Sachs with one phone call. The $7.7 billion total return exceeded Coca-Cola’s entire 20-year dividend stream to Berkshire. Trust had compressed decades into days.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (the company behind AI Overviews), Apple, Meta Platforms, and Netflix. Holdings are subject to change at any time.

Market View: Asian shares up, oil down after Trump announces Israel-Iran ceasefire; Trump says US interest rates should be lowered, and more

Yesterday, I was invited for a short interview on Money FM 89.3, Singapore’s first business and personal finance radio station, by Chua Tian Tian, the co-host of the station’s Money Matters show. We discussed a number of topics, including:

  • What a potential ceasefire between Israel and Iran would mean for the stock market and oil prices (Hints: Peace is a big positive for equity markets because more people can channel their energies into improving the world, and over the long run, that’s really what fuels the global economy; oil prices have experienced five major crashes over the past four decades despite demand for the commodity being higher than supply in each year, so it’s really difficult to tell what will happen to oil prices)
  • What OCBC’s announcement that it will not convert its Class C non-voting Great Eastern shares into ordinary shares when they come up for conversion in five years means for investors of OCBC (Hints: OCBC is attempting to privatise Great Eastern, and its decision not to convert the Class C shares implies that it intends for Great Eastern to remain a public-listed entity if the upcoming delisting resolution fails; whether or not Great Eastern is successfully privatised will not move the needle for OCBC, because nearly 94% of the economics of Great Eastern already belongs to OCBC, and the remaining 6% of Great Eastern’s S$8.7 billion in shareholders’ equity is tiny next to OCBC’s shareholders’ equity of S$59 billion)
  • How will Lum Chang benefit from the upcoming spin-off of its interior fit-out business, Lum Chang Creations (Hints: Lum Chang’s management appears to be aiming for the market to be able to better recognise the value of Lum Chang Creations, since Lum Chang Creations has “demonstrated strong growth in recent years”; whether the spin-off is a long-term positive for Lum Chang or a non-event will depend on the future business performance of Lum Chang Creations)
  • Why does US President Donald Trump want the Federal Reserve to lower interest rates in the USA by at least two to three percentage points (Hints: Trump appears to think that US government bond yields will decrease if the Federal Reserve lowers interest rates, but the problem is the Federal Reserve controls only one interest rate, which is the federal funds rate, and most US government bond yields depend on market forces)
  • What Federal Reserve Chair Jerome Powell’s testimony before Congress means (Hints: I don’t watch the Federal Reserve’s actions in my investing activities because the Federal Reserve does not exert as much power over the stock market as some people think)
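For the Great Eastern point in the hints above, the back-of-the-envelope arithmetic (using the S$8.7 billion and S$59 billion figures quoted there) looks like this:

```python
# Quick check of why the minority stake in Great Eastern barely matters to OCBC.
ge_equity = 8.7         # Great Eastern shareholders' equity, S$ billions
minority_share = 0.06   # roughly 6% of Great Eastern not already owned by OCBC
ocbc_equity = 59.0      # OCBC shareholders' equity, S$ billions

minority_equity = ge_equity * minority_share         # ~ S$0.52 billion
print(f"{minority_equity / ocbc_equity:.1%}")        # ~0.9% of OCBC's equity
```

The slice of Great Eastern that OCBC does not already own amounts to well under 1% of OCBC’s own shareholders’ equity, which is why the outcome of the privatisation attempt should not move the needle either way.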

You can check out the recording of our conversation below!


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 22 June 2025)


Here are the articles for the week ending 22 June 2025:

1. Message from CEO Andy Jassy: Some thoughts on Generative AI – Andy Jassy

Today, in virtually every corner of the company, we’re using Generative AI to make customers’ lives better and easier. What started as deep conviction that every customer experience would be reinvented using AI, and that altogether new experiences we’ve only dreamed of would become possible, is rapidly becoming reality. Technologies like Generative AI are rare; they come about once-in-a-lifetime, and completely change what’s possible for customers and businesses…

…You can see it in Advertising where we’ve built a suite of AI tools that make it easier for brands to plan, onboard, create and optimize campaigns. In Q1 alone, over 50K advertisers used these capabilities…

…We’re also using Generative AI broadly across our internal operations. In our fulfillment network, we’re using AI to improve inventory placement, demand forecasting, and the efficiency of our robots—all of which have improved cost to serve and delivery speed. We’ve rebuilt our Customer Service Chatbot with GenAI, providing an even better experience than we’d had before. And, we’re assembling more intelligent and compelling product detail pages by leveraging GenAI…

…First, we have strong conviction that AI agents will change how we all work and live. Think of agents as software systems that use AI to perform tasks on behalf of users or other systems. Agents let you tell them what you want (often in natural language), and do things like scour the web (and various data sources) and summarize results, engage in deep research, write code, find anomalies, highlight interesting insights, translate language and code into other variants, and automate a lot of tasks that consume our time. There will be billions of these agents, across every company and in every imaginable field. There will also be agents that routinely do things for you outside of work, from shopping to travel to daily chores and tasks. Many of these agents have yet to be built, but make no mistake, they’re coming, and coming fast.

Second, and what makes this agentic future so compelling for Amazon, is that these agents are going to change the scope and speed at which we can innovate for customers. Agents will allow us to start almost everything from a more advanced starting point…

…Today, we have over 1,000 Generative AI services and applications in progress or built, but at our scale, that’s a small fraction of what we will ultimately build. We’re going to lean in further in the coming months. We’re going to make it much easier to build agents, and then build (or partner) on several new agents across all of our business units and G&A areas.

As we roll out more Generative AI and agents, it should change the way our work is done. We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs. It’s hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company.

2. Experiencing the Real “Belt and Road” – Nina Chen

In early June, I traveled in Central Asia for 9 days, visiting two countries. I spent 2 days in Almaty, Kazakhstan, and 7 days in Uzbekistan, covering Tashkent, Samarkand, and Bukhara…

…We flew from Almaty, Kazakhstan, to Tashkent, the capital of Uzbekistan. Even before landing, it was clear that Uzbekistan and China have a close partnership. On the flight, there were many Chinese merchants and workers traveling in groups…

…When we arrived at the airport, the sense of close cooperation was even stronger. The airport signs had Chinese translations, and there was a billboard in the walkway advertising the “UZ-China Silk Road Free Trade Special Zone.”…

…While we didn’t meet any locals in Uzbekistan who’d been to China, in Kazakhstan, we met a Kazakh girl with fluent Chinese. We joined a day tour to the lakes and canyons near Almaty. With many Chinese tourists in our group, she translated for us when we couldn’t understand the guide. She studied in Chongqing and worked in Yiwu, Zhejiang province, where her Chinese boss ran a company exporting goods from China to former Soviet markets such as Moscow, Azerbaijan, and cities across Central Asia.

This made me feel that trade between China and Central Asia is largely a one-way flow, from China to Central Asia, with China’s economic influence in the region being substantial…

…At the Tashkent City Mall, the premier shopping destination in Uzbekistan’s capital, I was surprised to find stores for well-known Chinese sportswear brands Anta, Li-Ning, and Xtep all located in close proximity.

I decided to explore the Anta store first. Picking up a pair of PG 7 running shoes (the PG 7 refers to the midsole technology), I noticed the price tag read 1,103,000 Uzbekistani som (approximately 612 Chinese yuan, US$87), which is significantly higher than the price in China (where it’s around 200-300 yuan, US$29–43 on Tmall). However, the store was running a promotion: buy one pair and get the second at 50% off (effectively 459 yuan per pair, US$66), or buy two pairs and get the third free (bringing the cost down to 408 yuan per pair, US$58). Even with the discounts, the price is still higher than in China. When I asked the store manager if Anta is considered a premium brand in Uzbekistan, he confirmed it is. Surprised, I inquired if only the wealthy can afford it. He explained that due to the popularity of digital payments, many people, especially the youth, opt for installment plans…
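The per-pair figures in the promotion follow directly from the quoted base price. A minimal sketch reproducing the article's arithmetic (the base price of roughly 612 yuan per pair is taken as given from the som-to-yuan conversion above):

```python
# Reproducing the Anta promotion arithmetic from the article.
base_price_yuan = 612  # 1,103,000 som at the quoted exchange rate

# Promotion 1: buy one pair, get the second at 50% off.
two_pair_avg = (base_price_yuan + 0.5 * base_price_yuan) / 2

# Promotion 2: buy two pairs, get the third free.
three_pair_avg = (2 * base_price_yuan) / 3

print(round(two_pair_avg))    # 459 yuan per pair, as stated
print(round(three_pair_avg))  # 408 yuan per pair, as stated
```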

… Central Asia has many Chinese-made beauty and skincare products that aren’t available in China.

An example is “Shanghai Song,” with packaging featuring a classic Chinese vintage design. The brand’s slogan states: “Inspired by myths and legends, it’s about Shanghai in the Song period, which ruled one of China’s most glorious cultural eras in the long-flowing Eastern cultural river.”

I found this puzzling. First, the specific myths or legends that served as inspiration aren’t clear, giving it a mysterious and abstract feel. Second, to the best of my knowledge, during the Song Dynasty, the economic and cultural centers of the Northern Song were in Kaifeng, and those of the Southern Song were in Hangzhou, not in Shanghai. Perhaps “Shanghai Song” represents a blend of the modern and the classical, or maybe the company behind the brand has a special affection for Shanghai.

When I picked up a bottle of cream and examined it closely, I found that the company is based in Guangzhou. Well, it’s likely that “Shanghai Song” is a brand from Guangzhou that embodies what Chinese people think Central Asians imagine about China and the East.

3. The Capital Cycle Way – Omar Malik

The capital cycle approach explains how changes in the amount of capital employed within an industry affect profits and future returns on capital.

Central to the capital cycle approach is the observation that an industry with high returns on capital tends to attract new entrants. For incumbents, high profitability loosens discipline because management incentives often align with growth. Therefore, both groups will increase spending to capture those high returns. The behavioural pattern of herding often means all the players in an industry invest simultaneously…

…A key characteristic of this cycle is the delay between the investment decision and the new supply coming online. By the time the new supply arrives, historical demand forecasts are often shown to have been overly optimistic, creating an overhang. This causes returns on capital to fall below the cost of capital. As profits collapse, management teams are changed, spending is slashed, and the industry begins to consolidate. That contraction in supply eventually paves the way for a recovery in returns…

…Supply dynamics are more certain than demand and therefore easier to forecast. This is because increases in industry supply are often well-flagged by management teams. In certain industries, such as aircraft manufacturing and shipbuilding, the supply pipelines are well-known. New entrants will noisily announce their arrival into an industry…

… Studying the supply side can help you identify companies that are likely to sustain their high returns for decades to come. The lack of competition due to a competitive moat prevents the supply side from shifting in response to high profitability and defies the typical mean reversion in returns…

…Buffett’s investment cases are often predicated on a supply-side focus, and his acquisition of BNSF Railway is a good example. In his own words, the railroad industry had a ‘terrible century’ leading up to his investment. But after following the industry from a young age, he became interested in 2006. Why?

The industry had rationalised from over 100 players in the 1960s to just five. In the 1990s, a final wave of consolidation led to the formation of today’s giants. The relative competitive position of railroads versus trucking had improved as oil prices rose, making the railroads the lowest-cost way to move heavy freight. No new capacity was being built. And after consolidating, driving efficiency became the focus, with the labour force falling by 90% and the introduction of new innovations, such as double stacking.

Putting that all together, as long as you believed that the US economy would grow over the coming decades, the structurally improved supply-side dynamics would lead to higher returns on capital in the future. He was not focused on demand because he acquired BNSF during the global financial crisis (GFC), the worst economic crisis since the Great Depression…

…We have held TSMC since Hosking Partners’ inception in 2013 — in fact, it dates back even earlier, if you include the years at Marathon.

The semiconductor industry is highly cyclical, and the news flow around the cycle is immense. Analysts are obsessed with questions such as: Are we at a peak or trough earnings cycle? Was that the last cut or the last beat? How many quarters will the trough last?

Our thesis for the last 15 years has been based on a simple insight: the foundry business would consolidate over time, given the ever-rising cost of advancing Moore’s law, and TSMC had the superior model as a pure-play foundry, creating true alignment with the customer while remaining completely agnostic to the end market. Today, we feel that insight still holds. The scale advantage of TSMC’s model has only grown as the industry has gone from over 20 players to just three…

…How a management team responds to the capital cycle in their industry is critical. If they can act counter-cyclically, pull back when others are adding supply, and take advantage of downturns, they can create significant value.

The way I think about it is if you find one of these outlier teams, you can subcontract the capital allocation decisions to them. You can trust them to navigate the cycles instead of trying to time the buy and sell decisions…

…Even if you have a fix on the supply side for the next decade and you trust management to allocate capital well, you still need to buy at the right price! That brings me to the fourth tenet – remember replacement value.

It is a simple concept: how much would it cost to reproduce or replicate this asset? It is the driving force of the capital cycle. When companies are valued at a premium to replacement cost in the equity market, it creates an incentive to invest and capture that arbitrage. That is why venture capital and private equity funding is tied to equity market valuations.

It is far easier to calculate replacement value in asset-intensive industries with readily available data. But it is more of an art in other sectors, where the model is asset-light with a greater share of intangibles. In such cases, a question I often think about is, “Should we compete with this business instead of buying it?”…

…The final point I’ll leave you with is that we are all guilty, including myself today, of singling out the parts of Buffett’s approach that appeal to us. It is natural, as we all look for confirmation in the tough pursuit of outperforming. I am convinced that the capital cycle lens is one of Buffett’s big mental models for the world.

But my ultimate takeaway from studying Buffett and attending these annual meetings is that he is the Swiss Army Knife of investing. Over his long career, Buffett has successfully invested in great compounders across a wide range of industries (i.e., Coke, Amex, Apple); deep value (i.e., PetroChina on a 3x P/E, as well as all the early partnership investments); activism (i.e., Sanborn maps, Berkshire Hathaway); baskets (i.e., Korean stocks, railroads, airlines, Japanese trading houses); merger arbitrage (i.e. Activision Blizzard); bonds (i.e., high-yield bonds in the fallout of the tech bubble); commodities (i.e., oil futures, silver, and more recently Occidental), among others.

4. A Moody’s Ratings Downgrade for the US: What now? – Aswath Damodaran

Through time, governments have often been dependent on debt to finance themselves, some in the local currency and much in a foreign currency. A large proportion of sovereign defaults have occurred with foreign currency sovereign borrowing, as the borrowing country finds itself short of the foreign currency to meet its obligations. However, those defaults, and especially so in recent years, have been supplemented by countries that have chosen to default on local currency borrowings. I use the word “chosen” because most countries have the capacity to avoid default on local currency debt, being able to print money in that currency to pay off debt, but choose not to do so, because they fear the consequences of the inflation that would follow more than the consequences of default…

…Researchers who have examined the aftermath of default have come to the following conclusions about the short-term and long-term effects of defaulting on debt:

  1. Default has a negative impact on the economy, with real GDP dropping between 0.5% and 2%, but the bulk of the decline occurs in the first year after the default and seems to be short-lived.
  2. Default does affect a country’s long-term sovereign rating and borrowing costs. One study of credit ratings in 1995 found that the ratings for countries that had defaulted at least once since 1970 were one to two notches lower than otherwise similar countries that had not defaulted. In the same vein, defaulting countries have borrowing costs that are about 0.5 to 1% higher than countries that have not defaulted. Here again, though, the effects of default dissipate over time.
  3. Sovereign default can cause trade retaliation. One study indicates a drop of 8% in bilateral trade after default, with the effects lasting for up to 15 years, and another one that uses industry level data finds that export-oriented industries are particularly hurt by sovereign default.
  4. Sovereign default can make banking systems more fragile. A study of 149 countries between 1975 and 2000 indicates that the probability of a banking crisis is 14% in countries that have defaulted, an eleven percentage-point increase over non-defaulting countries…

…If sovereign ratings are designed to measure exposure to default risk, how well do they do? The answer depends on how you evaluate their performance…

…In sum, the evidence suggests that while sovereign ratings are good measures of country default risk, changes in ratings often lag changes on the ground, making them less useful to lenders and investors.

If the key limitation of sovereign ratings is that they are not timely assessors of country default risk, that failure is alleviated by the development of the sovereign CDS market, a market where investors can buy insurance against country default risk by paying an (annualized) price. While that market still has issues in terms of counterparty risk and legal questions about what constitutes default, it has expanded in the last two decades, and at the start of 2025, there were about 80 countries with sovereign CDS available on them…

…At the start of 2025, the market was drawing a distinction between the safest Aaa-rated countries (Scandinavia, Switzerland, Australia and New Zealand), all with sovereign CDS spreads of 0.20% or below, and more risky Aaa-rated countries (US, Germany, Canada). During 2025, the market shocks from tariff and trade wars have had an effect, with sovereign CDS spreads increasing, especially in April. The US, which started 2025 with a sovereign CDS spread of 0.41%, saw a widening of the spread to 0.62% in late April, before dropping back a bit in May, with the Moody’s downgrade having almost no effect on the US sovereign CDS spread…

…The ramping up of US debt since 2008 is reflected in total federal debt rising from 80% of GDP in 2008 to more than 120% in 2024. While some of the surge in debt can be attributed to the exigencies caused by crises (the 2008 banking crisis and the 2020 COVID bailouts), the troubling truth is that the debt has outlasted the crises and blaming the crises for the debt levels today is disingenuous.

The problem with the debt-to-GDP measure of sovereign fiscal standing is that it is an imperfect indicator…

…Many of the countries with the highest debt to GDP ratios would be classified as safe and some have Aaa ratings, whereas very few of the countries on the lowest debt to GDP list would qualify as safe. Even if it is the high debt to GDP ratio for the US that triggered the Moody’s downgrade, the question is why Moody’s chose to do this in 2025 rather than a year or two or even a decade ago, and the answer to that lies, I think, in the political component. A sovereign default has both economic and political roots, since a government that is intent on preserving its credit standing will often find ways to pay its debt and avoid default. For decades now, the US has enjoyed special status with markets and institutions (like ratings agencies), built as much on its institutional stability (legal and regulatory) as it was on its economic power. The Moody’s downgrade seems to me a signal that those days might be winding down, and that the United States, like the rest of the world, will face more accountability for lack of discipline in its fiscal and monetary policy…

…The ratings downgrade was after close of trading on Friday, May 16, and there was concern about how it would play out in markets, when they opened on Monday, May 19. US equities were actually up on that day, though they lost ground in the subsequent days…

…If equity markets were relatively unscathed in the two weeks after the downgrade, what about bond markets, and especially the US treasury market? After all, an issuer downgrade for any bond is bad news, and rates should be expected to rise to reflect higher default risk…

…While rates did go up in the first few days after the downgrade, the effect was muddled by the passage of a reconciliation bill in the House that could potentially add to the deficit in future years. In fact, by May 29, 2025, almost all of the downgrade effect had faded, with rates close to where they were at the start of the year…

…The expected return on the S&P 500 as of May 30, 2025, reflecting the index level then and the expected cash flows, is 8.64%. Incorporating the effects of the downgrade changes the composition of that expected return, resulting in a lower riskfree rate (4.01% instead of 4.41%) and a higher equity risk premium (4.63% instead of 4.23%). Thus, while the expected return for the average stock remains at 8.64%, the expected return increases slightly for riskier stocks and decreases slightly for safer stocks, but the effects are so small that investors will hardly notice. If there is a lesson for analysts here, it is that the downgrade’s effects on the discount rates (costs of equity and capital) are minimal, and that staying with the conventional approach (of using the ten-year US treasury bond rate as the riskfree rate and using that rate to compute the equity risk premium) will continue to work.
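Damodaran's point about riskier versus safer stocks falls out of the standard CAPM relation, E[r] = riskfree rate + beta × equity risk premium. A minimal sketch using the rates and premiums from the article (the betas are illustrative, not from the source):

```python
# CAPM expected return: riskfree rate plus beta times the equity risk premium.
def expected_return(rf, erp, beta):
    return rf + beta * erp

before = {"rf": 0.0441, "erp": 0.0423}  # pre-downgrade composition
after = {"rf": 0.0401, "erp": 0.0463}   # post-downgrade composition

for beta in (0.8, 1.0, 1.2):  # safer, average, riskier stock
    print(beta,
          round(expected_return(before["rf"], before["erp"], beta), 4),
          round(expected_return(after["rf"], after["erp"], beta), 4))
```

The average stock (beta = 1.0) stays at 8.64% under both compositions, while a beta-1.2 stock moves up a few basis points and a beta-0.8 stock moves down a few, which is exactly why the article calls the effect too small for most investors to notice.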

5. Contrary Research Rundown #140 – Contrary Research

Tesla has taken a fundamentally different approach. It does not use lidar or radar and instead relies entirely on eight cameras to make driving decisions. In contrast, Waymo’s fifth-generation car has 29 cameras, six radar sensors, and five lidar sensors…

…As early as 2013, Elon expressed skepticism about the need for lidar in autonomous vehicles. Elon framed the reason in a rather intuitive way in 2021: if humans can rely on their eyes and brain, then self-driving cars can rely on cameras and AI…

…Another reason Tesla has avoided using lidar is the cost. One 2024 report estimated Tesla’s sensor suite costs just $400 per vehicle, compared to an estimated $12.7K per vehicle for Waymo’s sensors on its fifth-generation Jaguar SUVs…

…Companies like Waymo follow a multi-step process where they first deploy vehicles with safety drivers to record and map the area, which can take months for each new city and requires continuous updates. Waymo and companies like it then use these predefined maps to complement their real-time sensor data from lidar/radar about the surrounding area. Tesla, by contrast, claims its software can operate anywhere without pre-mapped data, relying entirely on real-time camera input to understand road conditions…

…At Google I/O in May 2025, Waymo showed a few examples where its full suite of sensors successfully avoided pedestrians and where it claims a camera-only approach would have struggled.

In one example, Waymo’s lidar picked up the presence of a pedestrian in a Phoenix dust storm that was not visible on the camera…

…In another example, Waymo’s sensors were able to detect a pedestrian who was behind a bus and avoid a collision:

“We are detecting a pedestrian on the other side of the bus. That would be completely occluded to a human driver. So what’s happening here is that our sensors are able to pick up the movement of the person’s feet under the bus. And just that little bit of noisy and sparse signal is enough for the Waymo Driver to detect that there’s a pedestrian there and, furthermore, to predict what they’re going to do in the future, allowing us to take a defensive action early.”…

…Waymo has only had one fatal accident in its history, and not due to a Waymo error. In January 2025, a Tesla struck an unoccupied Waymo and other cars at a red light, killing one person. As we wrote in our last piece, one study by Swiss Re shows Waymo saw an 88% reduction in property damage claims and a 92% reduction in bodily injury claims when compared to human-driven vehicles…

…In 2023, a Tesla in Full Self Driving mode (FSD) hit a 71-year-old woman at highway speed, killing her. Video of the crash shows a sun glare appearing to blind the camera, and the National Highway Traffic Safety Administration (NHTSA) opened an investigation into Tesla in October 2024 for four total FSD collisions that occurred in low visibility situations…

…When Elon first called lidar too expensive in the early 2010s, it cost ~$75K per unit. Since then, costs have fallen dramatically, and some lidar units sold for personal vehicles (not robotaxis) are being priced in the hundreds of dollars…

…By one estimate, lidar costs have fallen by 99% since 2014.
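The two cost figures in the article are consistent with each other: a 99% decline from the ~$75K early-2010s price implies a unit cost in the hundreds of dollars. A quick sanity check (the 99% figure is the article's estimate; the implied price is just arithmetic on it):

```python
# Implied lidar unit cost after the estimated 99% decline since 2014.
early_cost = 75_000  # dollars per unit, early 2010s, per the article
decline = 0.99       # estimated decline since 2014

implied_cost = early_cost * (1 - decline)
print(round(implied_cost))  # 750, i.e. "hundreds of dollars"
```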


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon (company where Andy Jassy is the CEO), Apple, Tesla, and TSMC. Holdings are subject to change at any time.

More Of The Latest Thoughts From American Technology Companies On AI (2025 Q1)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2025 Q1 earnings season.

Last month, I published The Latest Thoughts From American Technology Companies On AI (2025 Q1). In it, I shared commentary in earnings conference calls for the first quarter of 2025, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2025’s first quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

Here they are, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management sees the Firefly App as a place for creative professionals to generate images, video, audio and vectors from a single place with unmatched creative control; the Firefly app also supports 3rd-party models from Google, OpenAI, and others, with more coming soon; Firefly is attracting new customers for Adobe with first-time subscribers up 30% sequentially in 2025 Q1 (FY2025 Q2); management recently rolled out new Firefly offerings such as the (1) Firefly Image Model 4 for life-like images, (2) Firefly Image Model 4 Ultra for impeccable detail, (3) Firefly Video Model; users of Firefly can now collaborate with other users through Firefly Boards; management is monetising Firefly through new Firefly App subscription plans (ranging from US$10 per month to US$200 per month) and the Creative Cloud Pro plan; traffic to the Firefly App was up 30% sequentially in 2025 Q1 (FY2025 Q2); paid subscriptions to the Firefly App nearly doubled sequentially in 2025 Q1 (FY2025 Q2); Firefly has powered 24 billion generations (20 billion in 2024 Q4) since its launch in March 2023; management believes that the only commercially safe way to build AI models is to build them with content whose creators are willing participants in the process, and this is how Firefly was trained; companies are choosing Firefly because of its commercial safety; management thinks Firefly will be the ultimate creative destination because even if it’s used only for ideation, users will want intellectual property that is safe for production; management sees Creative Cloud Pro (CC Pro) as the place where Adobe’s AI and generative capabilities will increasingly be best available

The Firefly App is a new destination for AI-assisted content ideation, creation and production with Adobe’s comprehensive family of commercially safe Firefly creative models and an expansive ecosystem of third-party models. Firefly empowers creative professionals to generate images, video, audio and vectors from a single place with unmatched creative control, iterate on their creations through Adobe’s creative apps and seamlessly deliver them into production. Our support for third-party models, including from Google, OpenAI and Black Forest Labs gives creators the flexibility to choose the AI that works best for them, with Firefly upholding our standards for IP safety and transparency…

…The Firefly App is attracting new users to the Adobe franchise with first-time subscribers growing 30% quarter-over-quarter…

…Earlier this quarter, we launched the new Firefly Image Model 4 for life-like images and the Firefly Image Model 4 Ultra for impeccable detail in complex visuals. We also made the Firefly Video Model generally available for the first time, empowering creators to generate 4K footage from text prompts and images with unprecedented creative control and extend video clips in our tools like Premiere Pro…

…In addition to supporting our own Firefly Models, the Firefly App now supports a growing family of third-party models for creative ideation. Firefly offers the flexibility to explore the diverse aesthetic styles of Google’s Imagen and Veo models OpenAI’s GPT-image model and Black Forest Labs’ Flux image model with Runway, Ideogram, Fal.ai, Luma and Pika coming soon. With the release of the Firefly Boards public beta earlier this quarter, creators can now ideate and collaborate when generating content with Firefly and our third-party models.

To monetize this incredible innovation, we have introduced a comprehensive set of offerings aimed at new and existing creators and creative professionals across all routes to market. The new Firefly App subscription plans are ideal for creators starting their creative journey and are now globally available. Creative Cloud Pro, which combines Creative Cloud All Apps and the Firefly App represents the best value for content creation and is now available in North America. Creative Cloud Pro will be released in other geographies over the next few months…

…Traffic to the Firefly App grew over 30 percent quarter over quarter and paid subscriptions nearly doubled in the same period…

…Excitement for and adoption of generative AI innovation, such as Generative Fill in Photoshop Generative Remove in Lightroom, Generative Expand in Illustrator, Generative Extend in Premiere Pro, video generation in the Firefly App and production workflows in Firefly Services, continues to accelerate with over 24 billion cumulative generations exiting Q2…

…One of the core things that we believe from the very beginning is that the right transparent and really the only commercially safe way to build these models is to do it on a set of content that — where the contributors are themselves excited and willing participants in the process. And so we have trained our Firefly models, as many of you know, on Stock and other content that we have access to. We do have a contributor fund that pays out to those individuals. And as a result, we feel like we’re in a very advantaged position when it comes to people choosing models. I’ll say, especially in enterprises, we see a lot of companies selecting Firefly partially because of the quality, partially because of the controllability of it but also very, very strongly because of the commercial safety of it…

…I think Firefly, with the support for all of those models, will be the ultimate creative destination. And to, I think, punctuate what David said, in the enterprise, the value proposition that we have resonates because even if you use it for ideation, you’re not going to use something that’s not being designed to be the intellectual property being correct for production…

…We can meet the needs of creators with the new Firefly plans that we’ve released recently, whether it’s Firefly Standard for, in the U.S., $10; Firefly Pro for $20; or Firefly Premium, which is unlimited access to video generation as well, for $200 a month…

…All of the AI and generative capability will increasingly be best available for our customers through the CC Pro application.

Adobe’s management sees marketing professionals being required to create huge amounts of personalised content and this is where Adobe’s AI-powered vertical solutions can help; management is seeing increasing demand from customers for personalisation capabilities in Adobe’s Digital Experience suite of products

Marketing professionals need to create an unprecedented volume of compelling content and optimize it to deliver personalized digital experiences across channels, including mobile apps, e-mail, websites, social media and advertising platforms. They’re looking for agility and self-service as well as integrated workflows with their creative teams and agencies. To achieve this, enterprises require custom commercially safe models and purpose-built agents tailored to address the inefficiencies of the content supply chain. Marketing practitioners, Chief Marketing Officers and Chief Digital Officers need solutions that enable them to acquire, engage and delight customers across a variety of channels and geographies. 

Adobe’s strategy is to deliver a comprehensive marketing technology platform leveraging AI to offer vertical solutions that integrate content, customer data and profiles across journeys in both B2B and B2C industries. Adobe GenStudio and Firefly Services are revolutionizing the content supply chain across enterprises, empowering marketers to activate personalized on-brand content across millions of touch points. For marketing professionals, Adobe Experience Platform and apps and purpose-built agents are redefining the future of customer connection by enabling real-time orchestration of content, data and journeys…

At our scale, the bigger metric that we track is in DX. Let’s talk about DX. And in DX, how much of this technology that we have been delivering is being adopted? What is the scale at which we’re driving, whether it’s campaigns, whether it’s engagement through e-mail or SMS, the amount of transactions that are going through AEP and apps? And all of that is because the agility of marketing and the ability to personalize these experiences with customers is dramatically increasing. So that’s one underlying trend that we clearly see. And the demand for that is only increasing and not decreasing. 

Adobe’s AI-influenced ARR (annual recurring revenue) is in the billions; the ARR from Adobe’s AI-first products is tracking ahead of management’s target of $250 million in ending ARR by end-FY2025; management thinks Adobe is still very early in AI monetisation and feels good about it

While our AI influenced ARR is already contributing billions of dollars, our AI book of business from AI-first products, such as Acrobat AI assistant, Firefly App and Services and GenStudio for Performance Marketing is tracking ahead of the $250 million ending ARR target by the end of fiscal 2025…

…It’s very early in terms of the AI monetization, but we’re very advanced in terms of how much innovation we’ve delivered. And so it feels really good right now.

Adobe’s management thinks the infusion of conversational experiences in Adobe Acrobat and generative AI models in Express is allowing users to combine the 2 products in novel ways; Adobe’s Acrobat and Express products have combined monthly active users of more than 700 million, up 25% year-on-year; Express capabilities within Acrobat saw adoption grow 3x sequentially and 11x year-on-year in 2025 Q1 (FY2025 Q2); there was a more-than-75% year-on-year increase in students gaining access to Acrobat AI Assistant and/or Express premium plans; Acrobat AI Assistant and Express added 35,000 new businesses in 2025 Q1 (FY2025 Q2), with Express adding 8,000; monthly active users (MAUs) in Acrobat’s AI Assistant and Express’s generative AI grew 3x year-on-year in 2025 Q1 (FY2025 Q2); Acrobat AI Assistant saw the number of questions asked nearly double sequentially in 2025 Q1 (FY2025 Q2)

Our investments in conversational experiences in Acrobat and generative AI models in Express allow users to combine the 2 products in novel ways that empower users to accelerate their time to insight and ability to create compelling presentations. Sales professionals can gather industry reports on a prospect, use AI system to quickly identify effective sales conversations and automatically generate a pitch deck with Express. A social media marketer can ask AI Assistant for help identifying buying behaviors in market research documents and use that information to create better TikTok videos in Express…

…We’re seeing steady growth across our family of Acrobat and Express products with combined monthly active user growth accelerating to over 25% year-over-year and crossing 700 million monthly active users as Acrobat users increasingly rely on Acrobat AI Assistant to enhance content consumption and Express to create richer PDFs, customized presentations and animated designs. Due to increasing customer demand for creative functionality through Acrobat, we saw an approximately 3x quarter-over-quarter and approximately 11x year-over-year increase in the adoption of Express capabilities within Acrobat…

…With students, we’re driving over 75% year-over-year increase in students gaining access to Acrobat AI Assistant and/or Express premium plans. These products are also seeing strong adoption by businesses with over 35,000 new businesses added in Q2. Express alone added around 8,000 new businesses this quarter, approximately 6x growth year-over-year including companies such as Microsoft, ServiceNow, Workday, Intuit and top sports leagues like MLB, the NFL and Premier League…

…Use of generative AI features continues to grow quickly with AI Assistant MAU in Acrobat and generative AI MAU in Express growing over 3x year over year; Acrobat AI Assistant engagement continues to accelerate with the number of questions asked nearly doubling quarter over quarter;

Adobe’s management has launched GenStudio Foundation to provide visibility and actionable insights into campaign plans, projects and assets; Adobe has GenStudio for Performance Marketing for users to create on-brand content for websites and social media; GenStudio for Performance Marketing grew 45% sequentially in 2025 Q1 (FY2025 Q2); management thinks Adobe can work well with Meta even though Meta is increasing usage of AI to automate advertising creation

We launched GenStudio Foundation, a unified interface to bring together data from our full suite of content supply chain applications providing visibility and actionable insights into campaign plans, projects and assets. GenStudio for Performance Marketing empowers teams to create their own on-brand content, supporting ad creation and activation for Google, LinkedIn, Meta, Microsoft, Snap and TikTok…

…Momentum for GenStudio for Performance Marketing with growth of over 45 percent quarter over quarter…

…[Question] With respect to outside of your traditional competitive environment, maybe just coopetition with vendors like Meta where it — at least it’s a little harder for some investors to understand given their increasing usage of AI to automate kind of ad creation and campaign optimization. To what extent does that overlap versus partner with some of the GenStudio offerings?

[Answer] In terms of the ad platforms, obviously, their primary goal is to grow the ad revenue. The best way to do that is to make sure that the creative is optimized and the ROI from the advertisers’ perspective, is clear to the advertisers, which is where our marketing stack and everything that we’re doing around GenStudio for Performance Marketing comes together really well.

Adobe’s management is seeing high enterprise demand for and adoption of Firefly Services and Custom Models for marketing use cases; solutions that customers desire from Firefly Services and Custom Models include video reframe and support for 3rd-party models; Adobe collaborated with Coca-Cola to develop the AI-powered Project Fizzion on Firefly Services and Custom Models; Project Fizzion can scale creative output up to 10x faster while reducing misinterpretation of brand guidelines in AI content; Firefly Services and Custom Models within the GenStudio solution had 4x year-on-year growth in ARR (annual recurring revenue) in 2025 Q1 (FY2025 Q2)

We’re seeing high enterprise demand for and adoption of Firefly Services and Custom Models to automate and scale on-brand content production for marketing use cases…

…We are building on the momentum behind Firefly Services and Custom Models, addressing additional highly desired solutions, including video reframe and support of third-party models for automation and cost efficiency.

With The Coca-Cola Company, we co-developed a new AI-powered design intelligence system called Project Fizzion, built on Firefly Services and Custom Models. Project Fizzion is designed to scale creative output up to 10x faster while tackling the common challenge of misinterpreting brand guidelines in AI-powered content…

…Continued demand for Firefly Services and Custom Models as part of the GenStudio solution, resulting in 4x year-over-year ARR growth.

The Adobe Experience Platform (AEP) has the AEP AI Assistant that allows users to interact with data through natural language; management has introduced native AI agents into AEP that can orchestrate customer journeys in real time; the NFL (National Football League) in the USA is using AEP to enable all 32 clubs in the league to scale personalized fan touch points across different channels; management recently introduced 11 AI agents, including the most recent Product Support Agent, to improve the customer experience for Adobe’s customers; the AI agents leverage the Adobe Experience Platform; companies such as Wegmans Food Markets and dentsu Merkle are already using Product Support Agent; AEP’s subscription revenue grew 40% year-on-year in 2025 Q1 (FY2025 Q2)

Adobe Experience Platform and native applications are central to delivering unified, personalized customer experiences. With the introduction of AEP AI Assistant, we’ve extended the platform’s value by enabling teams across the business to interact with data through natural language, streamlining ingestion, insight generation, audience segmentation and experience delivery. Building on this momentum, we are now expanding AEP with native AI agents that intelligently orchestrate customer journeys in real time. These innovations empower our customers to leverage their first-party customer data and deliver more relevant high-impact advertising experiences rooted in direct customer relationships.

The National Football League expanded our global partnership combining content data and journeys to deliver a new level of AI-powered fan experiences. Adobe will enable all 32 clubs to scale personalized fan touch points across NFL channels through project management, audience and campaign development, creative production and performance optimization.

At Adobe Summit in March, we introduced the Adobe AI platform with an agentic layer to scale Customer Experience Orchestration. We unveiled 10 agents purpose built for creative, marketing and technology teams that leverage Adobe Experience Platform to act intelligently and in alignment with business goals. These agents coordinate across systems to accelerate the delivery of exceptional experiences. We recently launched a Product Support Agent to help enterprises anticipate, troubleshoot and resolve operational issues.

Customers like Wegmans Food Markets and dentsu Merkle are already using it to streamline onboarding and feature deployment and drive faster resolutions and greater efficiency…

…Strong demand for AEP and native apps, with Q2 subscription revenue growing over 40 percent year over year.

Adobe’s core creative business subscription revenue has been accelerating over the past few quarters, driven by AI features

In terms of the pricing part of that equation, we talked about the increased value that we have in Creative Cloud Pro. That gives us some opportunity to match the value we’re providing with the pricing, and then in terms of the value is around Firefly Services and GenStudio. So that’s really the growth algorithm. The thing to note is that as we go down this path, some of this will take some time to play out because we have — for the quantity side, we have premium and lower-priced offers. But we’re starting to see the early signs of that. And if you do the math — and I’ll maybe turn it over to Dan. If you do the math, our core creative business subscription revenue has been accelerating over the past few quarters…

…If you take a look at the supplemental disclosure that we provided between the subscription revenue for creative and marketing professionals, the subscription revenue for DX, you can pretty quickly derive what the subscription revenue is for the Creative and Creative Pro audience that we serve. And I think what you’ll see is, in the current quarter, it growing 10.1% year-over-year, which is up from 10% in Q1. And when you think about the acceleration over the last 4 or 5 quarters, in the year ago period, that same 10.1% would have been about 7.9%, so just over 2% acceleration over the last 4 quarters.

Adobe’s management is not looking to increase Adobe’s headcount dramatically because employees are using AI to become more efficient

We’re not really looking to grow our head count very dramatically. We are finding a lot more efficiency. People are using AI to be more efficient within the enterprise.

Meituan (OTC: MPNGY)

Meituan’s management sees 3 layers in AI, which are infrastructure, products, and work; Meituan has made good progress in 2025 Q1 in all 3 AI layers

When we talk about AI, I think there are at least 3 layers, the AI infrastructure and the AI in products and AI at work. So that’s how we view AI. And this quarter, we iterate our foundation large language model, and we have launched a new AI application and services for external users. At the same time, we also enhanced the suite of employee productivity to boost our own efficiency and improve the work experience. So it’s fair to say we have made good progress on all 3 fronts.

Meituan continued upgrading its foundational LLM (large language model) in 2025 Q1; Meituan launched a new AI application and services for external users in 2025 Q1; Meituan’s in-house large language model, LongCat, can now seamlessly switch between reasoning and non-reasoning modes; LongCat’s performance in both reasoning and non-reasoning modes is on par with China’s leading models; Meituan updated its voice interaction model, LongCat F, in 2025 Q1 and its performance now closely approaches OpenAI’s GPT-4o

On AI infrastructure, we continue to increase our investment in large language models, allocating resources not only to infrastructure CapEx but also to recruiting top-tier AI talents, to ensure our foundation large language model is among the best tier in China. And during this quarter, we made continuous upgrades to our LongCat large language model. The enhanced model can now seamlessly switch between reasoning and non-reasoning modes, with the performance in both modes reaching the caliber of China’s leading models. We have also updated our end-to-end voice interaction model, LongCat F. This updated model demonstrates advanced capabilities in understanding nuanced information, including emotion and contextual environments, and engaging in natural voice conversation. Its performance closely approaches that of GPT-4o.

Meituan will soon launch an AI-powered business assistant for the food service industry; the food service industry’s AI business assistant will help with dish selection, new store location selection, menu development, and store operations

Next month in June, we plan to launch Kangaroo [foreign language]. It will be an AI-powered business decision assistant for the food service industry. It will act as an intelligent operational assistant for food service merchants and industry professionals covering 4 key scenarios, the cuisine dish selection and the new store location selection and menu development and store operations.

A key priority in Meituan’s AI initiatives is to use AI to enhance employee productivity and the workplace experience; about 52% of new code in Meituan is generated by AI, with over 90% of team members in some teams using AI coding tools intensively; management’s goal is to gradually achieve 100% adoption of AI coding tools across all engineers; Meituan has a no-code platform that is widely adopted internally, with 62% of product managers and 28% of business analysts using it; management has launched the no-code platform for public users free of charge; public users have created 9,410 applications with the no-code platform, with 1,600 of them published and used actively

We view developing internal AI tools as AI at work. We want to use AI to enhance employee productivity and the workplace experience. That remains a key priority in our AI initiatives. So in the last quarter, we continued to improve the AI coding capabilities for engineers and actively promote internal adoption of AI coding. Currently, about 52% of new code in our company is generated by AI. And in some R&D teams, over 90% of the team members use AI coding tools intensively. And our goal is to gradually achieve 100% adoption across all engineers.

And we have our own no-code platform, and it’s for all employees and has been widely adopted internally. The no-code platform allows users to quickly generate applications through natural language dialogue without requiring prior coding experience. And no-code is now used by all professional roles within our company, including product managers, user experience designers, business analysts, HR and finance staff. They leverage no-code for creating product prototypes, interactive pages and efficiency tools, with 62% of product managers and 28% of business analysts using the no-code platform internally. Last week, we launched the no-code platform for public users free of charge. The URL is nocode.cn, and users can bring various creative ideas to life without any coding skill…

…On nocode.cn, users have created 9,410 applications, with more than 1,600 of them published and in active use. 

MongoDB (NASDAQ: MDB)

MongoDB’s management thinks the company’s document model database more accurately reflects the messiness of real-world data and provides customers with greater flexibility, faster time to market and the ability to scale without re-architecting; management thinks MongoDB is exceptionally well-positioned as AI changes application-development and business operations, because AI applications require unstructured data; management sees MongoDB as having 3 things that modern AI applications need – which are (1) real-time data, (2) powerful search, and (3) smart retrieval – all in 1 platform; management thinks MongoDB’s integration of embeddings, text search, vector search, and operational data is a unique differentiator for developers when building AI applications

MongoDB’s document model and the associated platform enables developers to more easily represent the messiness of real-world data, which includes understanding relationships between structured and unstructured data and managing data that is constantly evolving and changing. This fundamental architectural advantage provides customers greater flexibility, faster time to market and the ability to scale without re-architecting…

…As AI redefines how applications are built and how businesses operate, MongoDB is exceptionally well positioned. Real-world AI applications require high-quality, context-rich and often unstructured data to deliver trustworthy outputs…

…MongoDB now brings together 3 things that modern AI-powered applications need: Real-time data, powerful search and smart retrieval. By combining these into one platform, we make it dramatically easier for developers to build intelligent, responsive apps without stitching together multiple systems…

…We have best-in-class Voyage embeddings to improve the accuracy of these results to help people get comfortable with using AI. And by integrating text search, vector search and embeddings and operational data, that’s a unique differentiator. It makes the developer’s life easy, reduces cost and complexity. And so we feel we’re well positioned for this, but it’s still early as most enterprises are still early in the adoption of AI.

MongoDB’s management sees competitors retrofitting JSON and vector support on existing relational (or tabular) databases but the retrofits fail in production for AI, unlike MongoDB’s approach of being a native JSON and document-model database; management thinks that the fact that the retrofitting is happening indicates that tabular architecture databases do not suit AI applications; management thinks that recent Postgres acquisitions made by Databricks and Snowflake show that OLTP (online transaction processing) or operational data stores are the strategic high ground for AI applications, and they are where AI inference happens; management thinks inference is the big market for AI applications; management thinks the acquisitions by Databricks and Snowflake show that it is really hard to build an OLTP datastore; management thinks the acquisitions by Databricks and Snowflake are not a big deal; management thinks that both relational databases and document databases can win; management sees the popularity of Postgres as a function of the consolidation of the SQL database market; management thinks that comparing MongoDB purely with Postgres is incomplete: MongoDB should instead be compared with Postgres plus many other services

In their desire to keep up with evolving customer needs, some vendors are retrofitting their products such as adding JSON or Vector support as afterthoughts, which are superficial and brittle. This is a passive admission that MongoDB’s approach of using JSON and the document model is the best way to model real-world data. These features may check the box, but they fall apart in production, leading to performance bottlenecks, operational headaches and spiraling infrastructure costs. Fundamentally, these vendors are constrained by their relational underpinnings. It’s important to understand that superficial compatibility with modern data types is not the same as deeply integrated production-grade functionality. MongoDB, by contrast, was purpose-built to address these needs natively…

…[Question] If you look this week at — we saw Snowflake kind of moved and — make the move towards Postgres. We saw Databricks kind of doing something there. Can you kind of frame that?

[Answer] I think the moves by both Databricks and Snowflake, I think, validate one thing that OLTP or the operational data store is the strategic high ground, especially for AI. That’s where inference happens. Inference is the big market. That’s where everyone wants to go, and you need to have an operational data store to do that. And I think the other thing it points out is building organically an OLTP store is really hard, especially when you need to meet the requirements of enterprise scale, availability, resiliency and security. And both organizations had signaled that they were working on organic approaches. Snowflake talked about Unistore, Databricks have talked about their own organic efforts, and it’s clear that they couldn’t make it happen. So this is not an easy task.

The second point I’d make is that just because they’re buying small Postgres companies — and Neon, I would say, was in the vibe coding space, and I would say Crunchy Data is a small relational company based in South Carolina — I would say that it’s not clear to me why the world needs a 15th or 16th Postgres derivative database. I think we’ll find that out. And I think there’s also some noise about how 80% of Neon’s instances are provisioned via code. I should point out that nearly 80% of MongoDB instances on Atlas are provisioned via code. And so we do that to help our customers provision and scale clusters very, very quickly…

…We believe that the fact that Postgres and other relational platforms are now adding JSON is a tacit admission that the core tabular architecture just doesn’t get the job done in the world of AI. Developers need to be able to model real-world data, which is complex, messy, nested, which means it has highly interdependent relationships and is constantly evolving and changing. And then when you look at the fact that they’ve bolted on these capabilities, if you have a document size greater than 2 kilobytes, it’s going to deliver very poor performance…

…[Question] A key part of the bull narrative for Mongo has been that document databases would steadily take share from relational and then Mongo would become the default general-purpose database for modern apps. I guess my question is, does the rising popularity of Postgres among developers and a strong ecosystem it has, as we see from stuff like what Databricks did and what the cloud guys were doing. Does that suggest that relational just may have greater long-term relevance than initially anticipated?

[Answer] One is that this is a big market. It’s a $100 billion-plus market, so there can be multiple winners, right? Second, the Postgres popularity is really a function of the consolidation of the SQL market. People are leaving Oracle, leaving SQL Server, leaving MySQL and going to Postgres…

…A lot of people compare MongoDB to Postgres, and that’s actually a false comparison. By us embedding keyword search, by us embedding a native vector search, by us embedding embedding models, you’re really comparing MongoDB to Postgres plus Elastic plus Pinecone plus something like Cohere…

…I would tell you that Postgres is a tabular database, much like all relational databases.

MongoDB’s management is hearing from customers that high-accuracy is important in AI adoption; MongoDB’s acquisition of Voyage helps MongoDB meet customers’ need for accuracy in AI applications; Voyage has leading embedding and reranking models that allow users to feed their data into AI models; Voyage’s latest release, Voyage 3.5, outperforms the next best embedding model and reduces storage costs by more than 80%; management will soon enable MongoDB users to seamlessly generate embeddings from data sitting within MongoDB in a private preview

We continually hear from large enterprises that high accuracy is a critical requirement to drive wide-scale adoption of AI. Our recent acquisition of Voyage AI enhances our ability to serve this need. Embeddings are the bridge between a large language model and a customer’s private data. Voyage’s leading embedding and reranking models allow customers to feed precise and relevant context into LLMs, significantly improving the accuracy and reliability of the output of AI applications…

…With the release of Voyage 3.5, we’ve taken another step forward, meaningfully outperforming the next best embedding models while reducing storage costs by more than 80%…

…We acquired Voyage. That’s going to be natively part of the platform. We’re going to — later this month, we will enable people to seamlessly generate embeddings from data sitting inside MongoDB, and that will be in private preview. So that’s within 4 months of the acquisition.
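As a rough illustration of what “embeddings are the bridge” means in practice: private documents and a user’s query are each mapped to vectors, and the documents closest to the query (by cosine similarity) are retrieved as context for the LLM. The toy sketch below uses hand-written 3-dimensional vectors in place of a real embedding model such as Voyage’s; the document names, vectors, and helper function are invented for illustration only.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings": in a real system these come from an embedding model.
doc_vectors = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "api rate limits": [0.0, 0.2, 0.9],
}
query_vector = [0.85, 0.15, 0.05]  # pretend embedding of "how do I get my money back?"

# Rank documents by similarity to the query; the top hits become LLM context.
ranked = sorted(doc_vectors, key=lambda d: cosine(query_vector, doc_vectors[d]), reverse=True)
print(ranked[0])  # "refund policy" is the closest match
```

Better embedding models place semantically related text closer together in this vector space, which is why embedding quality translates directly into retrieval accuracy for AI applications.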

Startups and enterprises are using MongoDB for their AI applications; MongoDB has some high-profile AI customers using its platform

Start-ups and mature companies are using MongoDB to help to deliver the next wave of AI-powered applications to their customers, including Cursor, Haleon, Vonage, the Financial Times and LG Uplus…

…We have some high-profile AI customers already on our platform and lots of other smaller customers.

MongoDB’s management continues to see enterprises being early in the adoption of AI; management thinks that the barriers to adoption of AI are limited skills with AI and lack of trust in AI because of the risk of hallucination; some early use cases that management has seen with AI are around operating efficiency, chatbots, and domain-specific software; management thinks that the real enduring value will come when enterprises build custom AI apps, because there is no competitive advantage for an enterprise in using an AI application that the enterprise’s competitors can also use

We see thousands of customers building thousands of apps on MongoDB, and that’s growing quarter-over-quarter. We are seeing some high-profile, well-known AI companies. I mentioned Cursor on the call, and there’s some — a few other high-profile companies who are building on top of MongoDB. And obviously, those businesses are really taking off. But what we see is that enterprises are still early in the adoption of AI. The barriers include there’s a limited set of skills and experience with AI, trust with AI systems that are probabilistic, which is another way of saying the risk of hallucinations. And so we see obviously some early use cases around operating efficiency, chatbots, codegen and domain-specific ISVs like Harvey, that customers are using…

…But the real enduring value will come when people start building custom AI apps. And the point I want to make is that anyone can use an ISV to run their business, but that doesn’t give them a competitive advantage because their competitors could use the same ISV. What really gives them a competitive advantage is building custom solutions around using AI to transform their business, whether it is to seize new opportunities to respond to new threats to drive more operating efficiency.

Examples of messy real-world data that are really difficult to work with in relational databases but that are easy with MongoDB

If you want to model a message that has attachments or reactions or is part of a threaded conversation, how do you do that in a structured table? If you want to deal with adding new fields or new values, how do you do that? For example, if you have a user who has multiple phone numbers, how do you model that quickly? How do you deal with nested structures, where a customer record could include past orders, each with their own line items and order history? It’s much more difficult, whereas you can model all of that so much more easily in MongoDB. And how do you deal with messy, inconsistent data that has no uniformity?
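The nested-record pattern described above can be sketched with plain Python dicts standing in for MongoDB documents (the field names are invented for illustration, not taken from the source). In a relational schema the same record would span separate customer, phone, order, and line-item tables joined by foreign keys; as a single document it is just one nested structure, and new fields can appear without a schema migration.

```python
# One customer record as a single nested document (MongoDB-style),
# versus the several joined tables a relational schema would need.
customer = {
    "name": "Ada Lopez",
    "phones": ["+1-555-0100", "+1-555-0101"],  # multiple values, no extra table
    "orders": [                                # nested one-to-many relationship
        {
            "order_id": 1,
            "line_items": [
                {"sku": "A1", "qty": 2, "price": 9.99},
                {"sku": "B7", "qty": 1, "price": 24.50},
            ],
        },
    ],
}

# Evolving the schema is just adding a field to new documents:
customer["loyalty_tier"] = "gold"

# Reading the nested order history needs no joins:
total = sum(
    item["qty"] * item["price"]
    for order in customer["orders"]
    for item in order["line_items"]
)
print(round(total, 2))  # 2*9.99 + 1*24.50 = 44.48
```

With an actual MongoDB driver such as PyMongo, essentially the same dict would be inserted and queried as-is, which is the flexibility the document model argument rests on.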

NVIDIA (NASDAQ: NVDA)

NVIDIA’s Data Center revenue again had incredibly strong growth in 2025 Q1, driven by AI factory build outs and the ramp of the Blackwell family of chips

Data Center revenue of $39 billion grew 73% year-on-year…

…AI factory build-outs are driving significant revenue…

…Our Blackwell ramp, the fastest in our company’s history, drove a 73% year-on-year increase in Data Center revenue. Blackwell contributed nearly 70% of Data Center compute revenue in the quarter with the transition from Hopper nearly complete.

AI workloads on NVIDIA’s chips have now transitioned strongly to inference; NVIDIA’s management is seeing a huge jump in inference demand; major NVIDIA customers, such as OpenAI, Microsoft, and Google, are seeing huge leaps in AI token generation; Microsoft processed 100 trillion tokens in 2025 Q1, up 5x year-on-year; inference-serving startups have tripled their token generation rate and revenues

AI workloads have transitioned strongly to inference…

…We are witnessing a sharp jump in inference demand. OpenAI, Microsoft and Google are seeing a step-function leap in token generation. Microsoft processed over 100 trillion tokens in Q1, a fivefold increase on a year-over-year basis…

…Inference-serving startups are now serving models using B200, tripling their token generation rate and corresponding revenues for high-value reasoning models such as DeepSeek-R1, as reported by Artificial Analysis.

The US government recently issued export controls to China on NVIDIA’s H20 chips, which caused the company to write off the value of the chips the company can no longer sell; NVIDIA’s management believes China’s AI accelerator market will grow to nearly US$50 billion; management thinks NVIDIA’s loss of access to the Chinese market will harm the company’s business, and benefit the company’s competitors in China and elsewhere; as a percentage of total Data Center revenue, NVIDIA’s Data Center revenue in China was below management’s expectations in 2025 Q1 and was down sequentially; management expects a large decline in China data center revenue in 2025 Q2; Singapore is used by many of NVIDIA’s large customers for centralized invoicing and the NVIDIA products billed under Singapore are shipped elsewhere; nearly all of NVIDIA’s H100, H200, and Blackwell Data Center revenue billed to Singapore was for orders from US customers; management sees that half of the world’s AI researchers are based in China; management thinks that the AI platform that wins China will lead globally; because of the US government’s latest export controls, the Chinese AI market is effectively closed to the US; management sees China moving on with AI with or without the US, and the export controls weakening the US’s position; management thinks the US government’s assumption that China cannot make AI chips is clearly wrong; management sees China’s DeepSeek and Qwen as among the best open-source AI models, and these models have gained traction outside of China; management thinks the US wins when top open-source models, even those from China, run best on American infrastructure

On April 9, the U.S. government issued new export controls on H20, our data center GPU designed specifically for the China market. We sold H20 with the approval of the previous administration. Although our H20 has been in the market for over a year and does not have a market outside of China, the new export controls on H20 did not provide a grace period to allow us to sell through our inventory. In Q1, we recognized $4.6 billion in H20 revenue, which occurred prior to April 9, but also recognized a $4.5 billion charge as we wrote down inventory and purchase obligations tied to orders we had received prior to April 9. We were unable to ship $2.5 billion in H20 revenue in the first quarter due to the new export controls. Losing access to the China AI accelerator market, which we believe will grow to nearly $50 billion, would have a material adverse impact on our business going forward and benefit our foreign competitors in China and worldwide…

…China as a percentage of our Data Center revenue was slightly below our expectations and down sequentially due to H20 export licensing controls. For Q2, we expect a meaningful decrease in China data center revenue. As a reminder, while Singapore represented nearly 20% of our Q1 billed revenue as many of our large customers use Singapore for centralized invoicing, our products are almost always shipped elsewhere. Note that over 99% of H100, H200, and Blackwell Data Center compute revenue billed to Singapore was for orders from U.S.-based customers…

…With half of the world’s AI researchers based there, the platform that wins China is positioned to lead globally. Today, however, the $50 billion China market is effectively closed to U.S. industry…

…China’s AI moves on with or without U.S. chips. It has the compute to train and deploy advanced models. The question is not whether China will have AI, it already does. The question is whether one of the world’s largest AI markets will run on American platforms. Shielding Chinese chip makers from U.S. competition only strengthens them abroad and weakens America’s position. Export restrictions have spurred China’s innovation and scale…

…The U.S. has based its policy on the assumption that China cannot make AI chips. That assumption was always questionable, and now it’s clearly wrong. China has enormous manufacturing capability. In the end, the platform that wins the AI developers wins AI. Export controls should strengthen U.S. platforms, not drive half of the world’s AI talent to rivals…

…DeepSeek and Qwen from China are among the best open source AI models. Released freely, they’ve gained traction across the U.S., Europe and beyond…

…DeepSeek also underscores the strategic value of open source AI. When popular models are trained and optimized on U.S. platforms, it drives usage, feedback and continuous improvement, reinforcing American leadership across the stack. U.S. platforms must remain the preferred platform for open source AI. That means supporting collaboration with top developers globally, including in China. America wins when models like DeepSeek and Qwen runs best on American infrastructure.

Blackwell’s ramp is the fastest product ramp in NVIDIA’s history; management believes the introduction of the GB200 NVL architecture within the Blackwell family allows users to achieve the lowest cost per inference token; management has seen a significant improvement in manufacturing yields for the GB200 NVL; GB200 NVL is now generally available; major hyperscalers are each deploying nearly 1,000 NVL72 racks, or 72,000 Blackwell GPUs, per week, and are on track to increase their deployment pace in 2025 Q2; Microsoft has already deployed tens of thousands of Blackwell GPUs for OpenAI, and Microsoft is ramping up to hundreds of thousands of Blackwell GPUs; major CSPs (cloud services providers) are already sampling GB300 systems, with production expected later in 2025 Q2; the GB300’s design will allow CSPs to seamlessly transition their systems and manufacturing used for GB200; software optimisations have already improved the performance of the Blackwell family by 1.5x in May 2025; NVIDIA has brought the Blackwell family of chips to mainstream gaming; compared to the Hopper family, the Blackwell family of chips has 40x higher speed and throughput, which is critical in driving down the cost of inference

Our Blackwell ramp, the fastest in our company’s history, drove a 73% year-on-year increase in Data Center revenue. Blackwell contributed nearly 70% of Data Center compute revenue in the quarter with the transition from Hopper nearly complete. The introduction of GB200 NVL was a fundamental architectural change to enable data center-scale workloads and to achieve the lowest cost per inference token. While these systems are complex to build, we have seen a significant improvement in manufacturing yields, and rack shipments are moving to strong rates to end customers. GB200 NVL racks are now generally available for model builders, enterprises and sovereign customers to develop and deploy AI. On average, major hyperscalers are each deploying nearly 1,000 NVL72 racks or 72,000 Blackwell GPUs per week and are on track to further ramp output this quarter. Microsoft, for example, has already deployed tens of thousands of Blackwell GPUs and is expected to ramp to hundreds of thousands of GB200s with OpenAI as one of its key customers…

…Sampling of GB300 systems began earlier this month at the major CSPs, and we expect production shipments to commence later this quarter. GB300 will leverage the same architecture, same physical footprint and the same electrical and mechanical specifications as GB200. The GB300 drop-in design will allow CSPs to seamlessly transition their systems and manufacturing used for GB200 while maintaining high yields…

…While Blackwell is still early in its life cycle, software optimizations have already improved its performance by 1.5x in the last month alone…

…This past quarter, we brought Blackwell architecture to mainstream gaming with its launch of GeForce RTX 5060 and 5060 Ti, starting at just $299. The RTX 5060 also debuted in laptops, starting at $1,099. These systems doubled the frame rate and slashed latency. These GeForce RTX 5060 and 5060 Ti desktop GPUs and laptops are now available…

…Compared to Hopper, Grace Blackwell is some 40x higher speed and throughput, compared. And so this is going to be a huge, huge benefit in driving down the cost while improving the quality of response with excellent quality of service at the same time.

NVIDIA Dynamo can increase the AI inference throughput of Blackwell NVL72 by 30x for AI reasoning models; Capital One reduced its AI chatbot’s latency by 5x with Dynamo

NVIDIA Dynamo on Blackwell NVL72 turbocharges AI inference throughput by 30x for the new reasoning models sweeping the industry. Developer engagements increased, with adoption ranging from LLM providers such as Perplexity to financial services institutions such as Capital One, who reduced agentic chatbot latency by 5x with Dynamo.

In the latest MLPerf inference results, we submitted our first results using GB200 NVL72, delivering up to 30x higher inference throughput compared to our 8-GPU H200 submission on the challenging Llama 3.1 benchmark. This feat was achieved through a combination of tripling the performance per GPU as well as 9x more GPUs all connected on a single NVLink domain.

NVIDIA’s CUDA software ecosystem has improved the inference performance of the Hopper family of chips by 4x over 2 years

We increased the inference performance of Hopper by 4x over 2 years. This is the benefit of NVIDIA’s programmable CUDA architecture and rich ecosystem.

There were nearly 100 NVIDIA-powered AI factories in flight in 2025 Q1, up 2-fold year-on-year; the number of GPUs in each AI factory also doubled from a year ago; management has line of sight to tens of gigawatts of AI data center projects requiring NVIDIA AI infrastructure; there are many more AI factories that have yet to be announced

The pace and scale of AI factory deployments are accelerating with nearly 100 NVIDIA-powered AI factories in flight this quarter, a twofold increase year-over-year, with the average number of GPUs powering each factory also doubling in the same period…

…We have a line of sight to projects requiring tens of gigawatts of NVIDIA AI infrastructure in the not-too-distant future…

…In the remarks, Colette mentioned there’s some 100 AI factories being built. There’s a whole bunch that haven’t been announced.

NVIDIA’s management sees AI agents as a new digital workforce that can handle simple as well as very complex tasks; management has used the Llama model architecture to build the Llama Nemotron family of open reasoning models for agentic AI; the Nemotron models are available as NVIDIA inference microservices (NIMs); management has improved the accuracy and inference speed of the Nemotron models by 20% and 5x, respectively; large enterprises including Accenture and Microsoft are using Nemotron

We envision AI agents as a new digital workforce capable of handling tasks ranging from customer service to complex decision-making processes. We introduced the Llama Nemotron family of open reasoning models designed to supercharge agentic AI platforms for enterprises. Built on the Llama architecture, these models are available as NIMs, or NVIDIA inference microservices, with multiple sizes to meet diverse deployment needs. Our post-training enhancements have yielded a 20% accuracy boost and a 5x increase in inference speed. Leading platform companies, including Accenture, Cadence, Deloitte, and Microsoft are transforming work with our reasoning models.

Cisco used NVIDIA NeMo microservices to improve its code assistant’s accuracy by 40% and its response time by 10x; NASDAQ used NVIDIA NeMo to improve the accuracy and response time of its AI platform’s search capabilities by 30% each; Shell used NVIDIA NeMo to reduce the training time of its custom LLM by 20% and improve its accuracy by 30%

NVIDIA NeMo microservices are generally available across industries that are being leveraged by leading enterprises to build, optimize and scale AI applications. With NeMo, Cisco increased model accuracy by 40% and improved response time by 10x in its code assistant. NASDAQ realized a 30% improvement in accuracy and response time in its AI platform’s search capabilities. And Shell’s custom LLM achieved a 30% increase in accuracy when trained with NVIDIA NeMo. NeMo’s parallelism techniques accelerated model training time by 20% when compared to other frameworks.

Yum! Brands will use NVIDIA AI in 500 of its restaurants this year, before expanding to 61,000 restaurants over time, to improve operations; cybersecurity companies such as CrowdStrike are using NVIDIA AI for agentic workflows; CrowdStrike achieved 2x faster detection triage with 50% less compute cost through NVIDIA AI

We also announced a partnership with Yum! Brands, the world’s largest restaurant company to bring NVIDIA AI to 500 of its restaurants this year and expanding to 61,000 restaurants over time to streamline order-taking, optimize operations and enhance service across its restaurants. For AI-powered cybersecurity, leading companies like Check Point, CrowdStrike and Palo Alto Networks are using NVIDIA’s AI security and software stack to build, optimize and secure agentic workflows, with CrowdStrike realizing 2x faster detection triage with 50% less compute cost.

NVIDIA’s networking revenue increased sequentially in 2025 Q1; NVLink 72 offers 14x the bandwidth of PCIe Gen 5; NVLink 72 can carry 130 terabytes per second of bandwidth in a single rack (the world’s peak internet traffic is also around 130 terabytes per second); NVLink shipments in 2025 Q1 exceeded $1 billion; NVIDIA recently announced NVLink Fusion, which (1) allows hyperscalers to connect semi-custom CPUs and accelerators to NVIDIA racks, (2) allows ASIC and CPU providers to connect to NVIDIA racks; management thinks Spectrum-X (NVIDIA’s Ethernet networking solution) offers the highest throughput and lowest latency networking solution for AI; Spectrum-X had strong sequential and year-on-year growth; Spectrum-X is widely adopted by major CSPs and consumer internet companies; Google Cloud and Meta became Spectrum-X customers in 2025 Q1; NVIDIA has introduced silicon photonic switches to Spectrum-X and Quantum-X, which increase an AI factory’s power efficiency by 3.5x, network resiliency by 10x, and time-to-market by 1.3x; management sees NVIDIA as having 3, maybe 4, networking platforms right now; latency matters a lot in AI, so achieving low latency in AI networking is important; Spectrum-X has improved the utilisation of Ethernet in AI clusters from as low as 50% to as high as 85%-90%

Sequential growth in networking resumed in Q1 with revenue up 64% quarter-over-quarter to $5 billion. Our customers continue to leverage our platform to efficiently scale up and scale out AI factory workloads. 

We created the world’s fastest switch, NVLink, for scale up. Our NVLink compute fabric in its fifth generation offers 14x the bandwidth of PCIe Gen 5. NVLink 72 carries 130 terabytes per second of bandwidth in a single rack, equivalent to the entirety of the world’s peak Internet traffic. NVLink is a new growth vector and is off to a great start with Q1 shipments exceeding $1 billion.

At COMPUTEX, we announced NVLink Fusion. Hyperscale customers can now build semi-custom CPUs and accelerators that connect directly to the NVIDIA platform with NVLink. We are now enabling key partners, including ASIC providers such as MediaTek, Marvell, Alchip Technologies and Astera Labs as well as CPU suppliers such as Fujitsu and Qualcomm, to leverage NVLink Fusion to connect our respective ecosystems.

For scale out, our enhanced Ethernet offerings deliver the highest throughput, lowest latency networking for AI. Spectrum-X posted strong sequential and year-on-year growth and is now annualizing over $8 billion in revenue. Adoption is widespread across major CSPs and consumer Internet companies, including CoreWeave, Microsoft Azure and Oracle Cloud and xAI. This quarter, we added Google Cloud and Meta to the growing list of Spectrum-X customers. We introduced Spectrum-X and Quantum-X silicon photonics switches, featuring the world’s most advanced co-packaged optics. These platforms will enable next-level AI factory scaling to millions of GPUs through the increasing power efficiency by 3.5x and network resiliency by 10x, while accelerating customer time to market by 1.3x…

…We now have 3 networking platforms, maybe 4. The first one is the scale-up platform to turn a computer into a much larger computer. Scaling up is incredibly hard to do. Scaling out is easier to do, but scaling up is hard to do. And that platform is called NVLink… In addition to InfiniBand, we also have Spectrum-X… the last one is BlueField, which is our control plane…

…In the case of AI, you have a lot of computers working together. And the traffic of AI is insanely bursty. Latency matters a lot because the AI is thinking and it wants to get work done as quickly as possible, and you’ve got a whole bunch of nodes working together…

…We enhanced Ethernet, added capabilities like extremely low latency, congestion control, adaptive routing, the type of technologies that were available only in InfiniBand to Ethernet. And as a result, we improved the utilization of Ethernet in these clusters, these clusters are gigantic, from as low as 50% to as high as 85%, 90%. And so the difference is, if you had a cluster that’s $10 billion, and you improved its effectiveness by 40%, that’s worth $4 billion. It’s incredible. And so Spectrum-X has been really, quite frankly, a home run.
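The savings arithmetic in that last quote is easy to check; here is a minimal back-of-the-envelope sketch, assuming (as Huang implies) that a cluster’s effective value scales linearly with its network utilisation:

```python
# Back-of-the-envelope check on the Spectrum-X utilisation claim.
# Assumption: a cluster's effective value scales linearly with its
# utilisation, so lifting utilisation from 50% to 90% recovers the
# 40-percentage-point difference times the cluster's cost.

cluster_cost = 10_000_000_000  # a $10 billion AI cluster, per the quote
baseline_utilization = 0.50    # plain Ethernet
improved_utilization = 0.90    # Ethernet enhanced with Spectrum-X

recovered_value = (improved_utilization - baseline_utilization) * cluster_cost
print(f"${recovered_value / 1e9:.0f} billion recovered")  # → $4 billion recovered
```

Note that the "40%" in the quote is a 40-percentage-point jump in utilisation (50% to 90%), which on a $10 billion cluster is the $4 billion Huang cites.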

NVIDIA’s GeForce is the largest AI personal computing footprint for developers; NVIDIA added AI laptop models in 2025 Q1 that can run Microsoft’s Copilot+; NVIDIA’s DGX Spark and DGX Station deliver 1 petaflop and 20 petaflops, respectively, of AI compute in a desktop form factor; DGX Spark and DGX Station will be available later in 2025

With a 100 million user installed base, GeForce represents the largest footprint for PC developers. This quarter, we added to our AI PC laptop offerings, including models capable of running Microsoft’s Copilot+…

…DGX Spark delivers up to 1 petaflop of AI compute while DGX Station offers an incredible 20 petaflops and is powered by the GB300 Superchip. DGX Spark will be available in calendar Q3 and DGX Station later this year.

NVIDIA’s Omniverse is being adopted even more widely by leading software companies; TSMC used Omniverse to save months of work by designing fabs virtually; Foxconn used Omniverse to accelerate thermal simulations by 150x; Pegatron used Omniverse to reduce assembly line defect rates by 67%; GE Healthcare is using Omniverse to develop robotic imaging and surgery systems

We have deepened Omniverse’s integration and adoption into some of the world’s leading software platforms, including Databricks, SAP and Schneider Electric. New Omniverse Blueprint such as Mega for at-scale robotic fleet management are being leveraged in KION Group, Pegatron, Accenture and other leading companies to enhance industrial operations. At COMPUTEX, we showcased Omniverse’s great traction with technology manufacturing leaders, including TSMC, Quanta, Foxconn, Pegatron. Using Omniverse, TSMC saves months in work by designing fabs virtually, Foxconn accelerates thermal simulations by 150x, and Pegatron reduced assembly line defect rates by 67%…

…GE Healthcare is using the new NVIDIA Isaac platform for health care simulation built on NVIDIA Omniverse and using NVIDIA Cosmos for platform speed, development of robotic imaging and surgery systems.

NVIDIA’s automotive revenue had strong growth in 2025 Q1, driven partly by the ramp of self-driving technologies; NVIDIA is partnering with GM (General Motors) to build next-gen vehicles with NVIDIA AI, simulation, and accelerated computing; NVIDIA is now in production with its full-stack solution for Mercedes-Benz

With our Automotive group, revenue was $567 million, down 1% sequentially but up 72% year-on-year. Year-on-year growth was driven by the ramp of self-driving across a number of customers and robust end demand for NEVs. We are partnering with GM to build the next-gen vehicles, factories and robots using NVIDIA AI, simulation and accelerated computing. And we are now in production with our full-stack solution for Mercedes-Benz starting with the new CLA, hitting roads in the next few months.

NVIDIA recently announced Isaac GR00T N1, the world’s first open fully customizable foundation model for humanoid robots; NVIDIA recently launched Cosmos World Foundation models; leading robotics companies have begun using Isaac and Cosmos; management is very bullish on the development of robotics and thinks future manufacturing plants in the US will deeply incorporate robotics

We announced Isaac GR00T N1, the world’s first open fully customizable foundation model for humanoid robots, enabling generalized reasoning and skill development. We also launched new open NVIDIA Cosmos World Foundation models. Leading companies, including 1X, Agility Robotics, Figure AI, Uber and Waabi, have begun integrating Cosmos into their operations for synthetic data generation, while Agility Robotics, Boston Dynamics, and XPENG Robotics are harnessing Isaac’s simulation to advance their humanoid efforts…

…The era of robotics is here, billions of robots, hundreds of millions of autonomous vehicles and hundreds of thousands of robotic factories and warehouses will be developed…

…Regarding onshore manufacturing, President Trump has outlined a bold vision to reshore advanced manufacturing, create jobs and strengthen national security. Future plants will be highly computerized in robotics. We share this vision.

NVIDIA’s management sees reasoning models as being compute-intensive and requiring hundreds to thousands of times more tokens per task than one-shot inference models; management thinks reasoning models are driving a step-function surge in inference demand

Reasoning AI enables step-by-step problem-solving, planning and tool use, turning models into intelligent agents. Reasoning is compute-intensive, requires hundreds to thousands more — thousands of times more tokens per task than previous one-shot inference. Reasoning models are driving a step-function surge in inference demand.

NVIDIA’s management sees AI scaling laws as being firmly intact, with inference now being a new driver

AI scaling laws remain firmly intact, not only for training, but now inference too requires massive scale compute.

TSMC’s new plants in the USA are for manufacturing of NVIDIA’s chips; other important chip-manufacturing partners of NVIDIA, besides TSMC, are also investing in US manufacturing; NVIDIA has made substantial long-term purchase commitments for US-made chips; management’s goal for US-manufacturing of AI chips is “From chip to supercomputer, built in America, within a year”; management sees the USA as always being NVIDIA’s largest market and home to the largest installed base of NVIDIA’s infrastructure

TSMC is building 6 fabs and 2 advanced packaging plants in Arizona to make chips for NVIDIA. Process qualification is underway with volume production expected by year-end. SPIL and Amkor are also investing in Arizona, constructing packaging, assembly and test facilities. In Houston, we’re partnering with Foxconn to construct a 1 million square foot factory to build AI supercomputers. Wistron is building a similar plant in Fort Worth, Texas. To encourage and support these investments, we’ve made substantial long-term purchase commitments, a deep investment in America’s AI manufacturing future. Our goal: From chip to supercomputer built in America within a year. Each GB200 NVL72 rack contains 1.2 million components and weighs nearly 2 tons. No one has produced supercomputers on this scale. Our partners are doing an extraordinary job…

…The U.S. will always be NVIDIA’s largest market and home to the largest installed base of our infrastructure.

NVIDIA’s management is seeing the US government, under the Trump administration, changing its tune on AI diffusion rules; the US government now has a new policy to promote US AI technology with trusted partners; NVIDIA’s management is seeing the US government as wanting US AI technology to lead

On AI Diffusion Rule, President Trump rescinded the AI Diffusion Rule, calling it counterproductive, and proposed a new policy to promote U.S. AI tech with trusted partners. On his Middle East tour, he announced historic investments. I was honored to join him in announcing a 500-megawatt AI infrastructure project in Saudi Arabia and a 5-gigawatt AI campus in the U.A.E. President Trump wants U.S. tech to lead. The deals he announced are wins for America, creating jobs, advancing infrastructure, generating tax revenue and reducing the U.S. trade deficit.

NVIDIA’s management thinks every country now sees AI as a core technology for the next industrial revolution

Every nation now sees AI as core to the next industrial revolution, a new industry that produces intelligence and essential infrastructure for every economy. Countries are racing to build national AI platforms to elevate their digital capabilities. At COMPUTEX, we announced Taiwan’s first AI factory in partnership with Foxconn and the Taiwan government. Last week, I was in Sweden to launch its first national AI infrastructure. Japan, Korea, India, Canada, France, the U.K., Germany, Italy, Spain and more are now building national AI factories to empower start-ups, industries and societies.

NVIDIA’s management is seeing plenty of enterprise-data living on-premises, so NVIDIA is moving AI into enterprises instead of waiting for enterprises to shift to the cloud

We’re going to see AI go into enterprise, which is on-prem. Because so much of the data is still on-prem, access control is really important, it’s really hard to move all of — every company’s data into the cloud. And so we’re going to move AI into the enterprise. And you saw that we announced a couple of really exciting new products: our RTX Pro enterprise AI server that runs everything enterprise and AI; our DGX Spark and DGX Station, which is designed for developers who want to work on-prem. And so enterprise AI is just taking off.

NVIDIA’s management thinks 6G technology will be built on AI

Telcos, today, a lot of the telco infrastructure will be, in the future, software-defined and built on AI. And so 6G is going to be built on AI.

NVIDIA’s management thinks agentic AI has really dispelled a lot of worries people had over AI hallucinations

AI really busted through. Concerns about hallucination or its ability to really solve problems, I think a lot of people are crossing that barrier and realizing how incredible, incredibly effective agentic AI is and reasoning AI is.

Okta (NASDAQ: OKTA)

Okta’s new products, including Identity Threat Protection with Okta AI, had strong contribution in 2025 Q1

New products such as Okta Identity Governance, Okta Privileged Access, Okta Device Access, Fine Grained Authorization, Identity Security Posture Management and Identity Threat Protection with Okta AI had another quarter of strong contribution.

Okta’s latest advancements help organisations protect AI systems; Okta has been protecting nonhuman identities, or NHIs, for a long time, but NHIs have boomed in recent times with the rise of AI agents; in 2024, only 15% of organisations were confident in their ability to secure NHIs, and Okta has products that help solve this problem; Okta’s products to secure NHIs also help secure human identities; Okta’s products to secure NHIs ensure AI interactions remain governed under Zero Trust policies; Okta’s Auth0 platform now has Auth for GenAI, which solves the problem of AI agents creating unsecured NHIs; Auth for GenAI has a successful developer preview, and general availability (GA) is expected in the coming months; Auth for GenAI currently uses a usage-based pricing model; Auth for GenAI is useful for both large and small companies; management is seeing a lot of interest in the Auth for GenAI developer preview from small companies; management thinks the problem of NHIs will become even more prominent as more and more AI projects enter production mode; management thinks Okta will win with NHIs in the AI age because it is the only company with a complete solution

Our newest advancements help organizations protect their employees, customers and AI systems. The key themes at Showcase this year were: one, how Okta is protecting nonhuman identities or NHIs; and two, how Auth0 is helping developers build secure AI agents. NHIs have been around for a long time. What’s new is how the recent boom in AI agents has resulted in exponential growth in NHIs. NHIs include service accounts, shared accounts, machines and tokens. NHIs often operate outside traditional identity governance frameworks and can leave organizations vulnerable to security risks. In fact, last year, only 15% of organizations said they are confident in their ability to secure NHIs. Okta addresses this problem with Identity Security Posture Management and Okta Privileged Access. By combining these 2 products, customers can discover, secure and manage NHIs with an end-to-end secure identity fabric to secure both human identities and NHIs across a single system. This integrated approach protects non-federated and privileged identities, ensuring AI-driven automation and machine-to-machine interactions remain governed under Zero Trust policies while continuously monitoring NHI risks and vulnerabilities across the enterprise…

…Auth for GenAI addresses the problem of AI agents creating unsecured NHIs by enabling developers to integrate secure identity into their Gen AI applications. This helps ensure that AI agents have built-in authentication, fine grained authorization, async workflows and secure API access. Auth for GenAI secures AI agents at every step without slowing them down, providing developers with the trusted tools and flexibility they need. The product has had a successful developer preview, and we expect the GA launch this summer…

…Auth for GenAI is a usage-based pricing model. So it’s the number of requests to Auth0. So it’s monetized in a similar way to the way Auth0 is now…

…I think that space is — there are big companies building things that could be taking advantage of Auth for GenAI, but it’s also a lot of smaller companies, too. Every small company start-ups trying to innovate around AI agents. And I know a lot of the interest in the developer preview around Auth for GenAI has been from small companies…

…When you look at our Identity Security Posture Management, its ability to detect these NHI and you look at our privileged solution and our general access management solution, which allows companies to secure those nonhuman identities, it’s very relevant for a company even if they’re just POC in these agents. And they’re in a proof of concept. They’re not really in production. It just puts us — shines a light on this problem as they think about moving to production. So that’s a very important aspect of this dynamic in the market. Now we do think as more of these projects move into production, it’s really, really going to force this issue even more. And so I think we’re going to see further acceleration as more and more companies move into production…

…[Question] Follow up on, as you say, the nonhuman side of the business. And the broader question is why do you think Okta will win in that environment? And I think a lot of investors assume it is going to be a big market. Pricing may be different. But why does Okta win versus when we were at RSA talking to CyberArk or SailPoint or Saviynt, whoever it is, all think that they’re in a position to win, particularly since our take, it sounds like governance will be part of identity with agents, more so than, say, just access.

[Answer] I think today, it’s because we’re the only one with a complete solution. And we have this breadth of products that can help solve this problem from detection to vaulting to governance workflows. And I’m talking specifically about NHIs. And I think — but that’s — I mean, that’s only kind of entry to the race. Now we have to execute well, and we have to keep innovating.

Adversaries are now conducting IT contracting scams with AI, and Okta has recommendations to counter these threats

I encourage you to check out a blog post we shared that highlighted Okta threat intelligence’s in-depth research on how adversaries are conducting IT contracting scams using AI and our recommendations to help mitigate these threats.

Okta’s management has been having conversations with customers that are moving AI projects from POC (proof-of-concept) to production, and how Okta can help them; management is seeing that only the most advanced enterprises are in production with AI projects right now

There’s just the conversations we’re having with customers about how important what we do is to them and how much they’re investing in everything from the traditional things we’ve helped them with cloud transformation and of course, security. But now with what’s going on with all these AI projects and moving from POCs to production and how we can help with that and how we can help them build Auth for GenAI applications…

…Only the most advanced forward-leaning enterprises are actually doing production AI right now and use cases at scale where they’re seeing tangible business benefit at scale in production.

Okta’s management thinks that MCP (model context protocol) is a big deal for AI, but also recognises that it’s still very early; management sees MCP as a way for AI agents to use technology resources; management is very excited about the possibility of adding OAuth to MCP; the pricing model for OAuth within MCP is to-be-determined (TBD)

The MCP is a big deal, as you all know. And the way I think about it is it’s basically a way to — it’s almost like a new Internet. It’s a new way to communicate with tools and technology in a way that these LLMs and these emerging set of browsers and user agents on the AI Internet can use all these resources. And that’s very exciting. People don’t — people forget that if you look at the internals of the web, HTTP, the tag for a browser is actually called a user agent and it uses HTTP to connect to web resources. Well, MCP could be a new kind of Internet where the clients are actually AI agents, not user agents and they can talk to these MCP servers. So it’s very exciting from a shifting of the industry and a shifting the capabilities of what these kinds of software systems could do. But it’s also very early. We’re talking about a protocol that was announced, I think, 6 weeks ago. And everyone’s running around, adding MCP servers to their capabilities and developers are experimenting with what this means. We’re very excited about the ability to work with the standards bodies and the community to add actual OAuth to the MCP, so authentication and OAuth protocol to the MCP protocol and handshake there…

…The way MCP will be monetized and how — if we add product capabilities to extend what an authentication handshake is to an MCP server, that’s — we haven’t built that yet, and we haven’t released that yet. So that will be TBD there.

A lot of large global companies are still using on-premise identity technologies and this is an opportunity for Okta, especially when these companies want to take advantage of AI, as cloud-migration is necessary for AI

We still have tons of room to grow inside the Global 2000 and really the top 5,000 biggest companies and organizations in the world is a tremendous opportunity for us. A lot of those organizations are invested a lot in on-premise technology and a lot in on-premise identity with big identity teams that they spend a lot of money on, a lot of cost there. And those companies are with all the change around cloud migration, which has been going on for years and years and years and the focus on security. And now with all of them trying to take advantage of the AI revolution, there’s another catalyst for them to change and upgrade their identity system.

Okta’s management does not see AI-agent apps as being a big accelerant for Okta’s Customer Identity business, but the overall trend is still towards buying instead of building when it comes to customer identity solutions

[Question] When you first started talking about the customer identity opportunity, I think to us, it kind of made a lot of sense why your customers would choose to buy this stuff instead of building it out of the box. That was, I guess, more for the traditional SaaS world. So what I’m trying to understand is there seems to be a lot of newfound excitement on the customer identity side as we head into this agentic world. Is there anything about a future of agent-based apps that is going to make it even more of a no-brainer to go with buying this out of the box from you guys on the customer identity side instead of trying to develop it themselves compared to maybe the old school SaaS world?

[Answer] In general, the trend is toward more buy, less build. And I think AI probably is — I’m not sure it’s a huge accelerant of that. I think it’s probably on trend just because I think it’s mostly like the solutions are getting better. If you go 10 years ago, there wasn’t really good customer identity solutions that were easy to use, reliable, scalable. And now with Auth0 had an amazing developer experience and were easy to start using and then upsell over time. And that continues. And I think I think the moving to the world of AI and agents and embedding customer identity inside of those apps, I don’t know if it’s material different, but it’s on a trend line that’s toward buying these solutions versus building.

Okta’s management is building a whole set of capabilities and products for AI agents that are not released or announced yet

This whole agentic revolution and agents working on your behalf, I think that’s a whole other set of capabilities and products that we’re thinking about and building, and we haven’t released and announced them yet. But there’s a whole layer on top of what we talk about service accounts and tokens and API access. That’s actually tracking the agent and knowing what that means and knowing what security posture you want and what governance, life cycles, et cetera, et cetera.

Salesforce (NYSE: CRM)

Salesforce’s management thinks Informatica will enhance Salesforce’s data advantage in AI; Informatica is important in helping Salesforce customers harmonise their data for AI applications

If you can imagine this idea that you want to deploy all of this incredible agentic data, well, you’ve got to get your data right. And Informatica combined with Salesforce’s Data Cloud, combined with Tableau, combined with other key assets that we’re going to bring to bear, this is what is creating this incredible data business…

…Today, for our customers, they all want to get there. They all have the hunger to do that. They all want to have this great success, but it takes some time for them to start to build their data sets. And that is why the Informatica acquisition is so important, because they all need to not only translate their data to build their master data management, they need to harmonize their data. They need to do all these things. And we see that and we go into these customers and say, “Let’s go.” And they’re like, “We can do some, but we can’t do all.” And the reason they can’t do all is because their whole enterprise data set is not fully harmonized, which is why…

In enterprise AI, especially agentic AI, preparation of data sets is very important; management sees the existence of data-silos in enterprises as a key obstacle in enterprises more widely adopting AI

I think everyone who is going through an AI transformation, every business, including mine, we’re going to talk about some great businesses that are going through transformations whether it’s Pepsi or Falabella or OpenTable, et cetera, but every AI transformation is a data transformation. And you don’t see it on the consumer side because when you’re using a consumer AI, you have to remember that the data set has kind of been prefabricated for you. That is the training data and everything is put together. It’s an amalgamated data set applied to this consumer AI model. That’s not how an enterprise AI really works. You have to have your enterprise data together to get the result that you want…

…If you can imagine this idea that you want to deploy all of this incredible agentic data, well, you’ve got to get your data right…

…The enterprise has data sets that are highly controlled, highly governed and highly secured. And these data sets are everything from your customer data set to your financial data set to your HR data set, and the reality is that not all enterprise data is available to all users. Like, for example, you work, Kash, at Goldman Sachs. You can’t see all the Goldman Sachs customer information. There’s regulations around that. You can’t see all the employees’ salary information. You don’t have access to all the Goldman Sachs financial data. So when you’re using these models, they’re not just giving you access to all of this stuff. Are they, Kash? No, they have to be tightly controlled. But if I’m a Goldman Sachs customer, and I want to come in and I want to ask about my account balance or information about my — who I am and what my portfolio looks like or what my opportunities are or even if I’m a Goldman Sachs employee and I want information on — the general information on benefits or how to enable myself or how to sell products more efficiently to customers, all of those things could easily happen right now with the agentic platform. However, there’s a lot of things that could not happen as I kind of just amplified, and that is kind of the constraint.

Salesforce has closed 8,000 Agentforce deals since launch, of which half are paid; Agentforce has handled over 750,000 requests on Salesforce’s help site, lowering cases by 7% year-on-year; 800 customers are already in production with Agentforce; management has launched hundreds of prebuilt Agentforce templates; management has introduced the new Flex Credits consumption-based pricing model for Agentforce after customer feedback; management will add FedRAMP High authorisation for Agentforce in June 2025; Agentforce is delivering AI agents to both employees and consumers; management thinks Salesforce is already delivering more agents than any other company in the world; Agentforce reached $100 million in AOV (annual order value) in only a few months, the fastest product to do so in Salesforce’s history, even without being fully deployed; 30% of Agentforce’s bookings in 2025 Q1 (FY2026 Q1) came from customers increasing consumption; Salesforce’s internal use of Agentforce has already reduced its hiring needs, driving $50 million in savings; Agentforce is growing faster than any product management has seen before; Agentforce helps pull customers into other Salesforce products; all Agentforce deals in 2025 Q1 (FY2026 Q1) included 4 other clouds on average; Salesforce’s top 6 deals in 2025 Q1 (FY2026 Q1), which have an average TCV (total contract value) of $34 million, mostly have Agentforce and Data Cloud as anchors

Salesforce has closed over 8,000 deals since launching Agentforce, of which half are paid. On help.salesforce.com, Agentforce has handled over 750,000 requests, cutting case volume by 7% Y/Y…

…We’ve got 800 customers already in production with Agentforce, including amazing companies like ENGIE, and that has been an incredible success story, with incredible velocity, and conversations in OpenTable, Finnair, Grupo Globo, Falabella…

…We have launched hundreds of prebuilt Agentforce templates for different industries, roles, tasks, making it faster and easier for customers to deploy Agentforce…

…Earlier this month, we introduced our Flex Credits. It’s a new consumption-based pricing model. That’s how we’ve tuned our pricing after a huge amount of customer feedback…

…Next month, we’re going to add FedRAMP High authorization for Agentforce, so the U.S. public sector can also experience this incredible success…

…Agentforce does agentic augmentation for employees. Agentforce is also doing it directly to consumers. I think that we are really delivering at this point probably more agents and more conversations and more capability to more enterprises than any other vendor in the world. I really see us as the #1 agent platform already…

…It’s only been a few months. In fact, Agentforce reached more than $100 million in AOV. It’s much faster than any product in our history, and we’re not even fully deployed on all geographies, currencies or languages…

…Even though Agentforce is only in its second quarter, 30% of its bookings also came from customers increasing their consumption…

…In customer support, Agentforce has handled 750,000 cases and is on track to surpass 1 million help portal requests this quarter, cutting case volume by 7% year-over-year. As a result, we have reduced some of our hiring needs, enabling us to rebalance and redeploy 500 customer support employees to higher impact data plus AI roles by year-end, driving $50 million in savings…

…I don’t think the word agent was even on our earnings call a year ago. Maybe it wasn’t even on our earnings call 9 months ago. But it started to appear, and when we released the product end of October, it’s November, December, January, February, March, April, here we are in May. So just think about in a relatively short period of time, I’ve never seen in my career over 45 years in enterprise software this idea that we now have 8,000 customers, 4,000 of whom are paying, many of them who are at scale deployments where this is working in months. It just makes no sense actually to me…

…When we sell an Agentforce, we’re not just dropping some box off and saying, okay, we sold an Agentforce. We’re pulling all of our clouds in. And I’m sure that you heard, for example, in the example of Pepsi, they have 11 of our clouds. So when we’re pulling in Agentforce, all the other products are coming along with it…

…We took all the deals, all the Agentforce deals for the quarter. On average, there were 4 other clouds on those deals…

…I look at the top 6, which on average have $34 million of TCV each. Of those 6, 5 of them have Data Cloud as an anchor and also Agentforce as an anchor. The 1 customer in the top 6 that didn’t buy Data Cloud is because they bought a multimillion-dollar Data Cloud deal in Q4. They set the data foundation before they went to adding more clouds and Agentforce. On the top 6 deals, 5 bought Agentforce. The one that didn’t buy is the one that, Srini, you know very well. We are negotiating now the extension to Agentforce.

Data Cloud surpassed 22 trillion records in 2025 Q1 (FY2026 Q1), up 175% year-on-year; 60% of Salesforce’s top 100 deals in 2025 Q1 (FY2026 Q1) included Data Cloud; 50% of Data Cloud’s new bookings in 2025 Q1 (FY2026 Q1) came from existing customers; Salesforce’s Data Cloud and AI ARR (annual recurring revenue) exceeded $1 billion in 2025 Q1 (FY2026 Q1), up 120% year-on-year; Salesforce closed 30 net new bookings exceeding $1 million that included Data Cloud and AI; Salesforce’s top 6 deals in 2025 Q1 (FY2026 Q1), which have an average TCV (total contract value) of $34 million, mostly have Agentforce and Data Cloud as anchors; Salesforce had 3x more Data Cloud deals in 2025 Q1 (FY2026 Q1) compared to a year ago

In this quarter, our Data Cloud, just our Data Cloud surpassed 22 trillion records, up 175% year-over-year. Nearly 60% of our top 100 deals included investments in both Data Cloud and AI…

…50% of Data Cloud’s Q1 new bookings came from existing customers. I think that’s really important because it really speaks to the adoption of the product and the incredible usage by the customers who have it…

…Data Cloud and AI ARR grew more than 120% year-over-year, and it’s more than a $1 billion part of our business…

…In Q1, we closed more than 30 net new annual bookings over $1 million that include both data and AI…

…I look at the top 6, which on average have $34 million of TCV each. Of those 6, 5 of them have Data Cloud as an anchor and also Agentforce as an anchor. The 1 customer in the top 6 that didn’t buy Data Cloud is because they bought a multimillion-dollar Data Cloud deal in Q4. They set the data foundation before they went to adding more clouds and Agentforce. On the top 6 deals, 5 bought Agentforce. The one that didn’t buy is the one that, Srini, you know very well. We are negotiating now the extension to Agentforce…

…We had 3x more Data Cloud deals in Q1 than we had the year before.

Salesforce’s management has the ADAM framework for thinking about agents, data, apps, and metadata for AI; management thinks the ADAM framework is necessary for companies to achieve success with agentic AI; a new Tableau product, named Tableau Next, is an example of Salesforce’s ADAM framework

When I talk about agents and data and apps and metadata, that’s what we really call our ADAM framework. It’s in our experience to see now these 4 elements, the app, the data, the agents and the metadata, that make Salesforce unique, that companies need to achieve the real promise of agentic AI…

…If you were in San Diego, you saw Tableau Next. And what you saw was the DataFam. That’s the Tableau community kind of fully inspired because not only were they looking at Tableau Next, this incredible new product, but what they saw was Tableau, the Tableaus they love. And they also saw an agentic layer, and they saw it deeply integrated into our data cloud and all running on our metadata platform. That’s our ADAM framework, the agents, the data, the apps, the metadata all together…

…In this new agentic AI era, every company is going to say that they have agents. Well, I think every company does say that they have agents. But without these 4 parts of what we call ADAM, the — really the agents, the data, the apps, the metadata framework, you’re just not really able to deliver this complete experience for the enterprise, including delivering digital labor.

Salesforce’s management continues to see Slack as the interface for users to converse with Salesforce’s AI agents; every Slack user gets a digital teammate when Agentforce is deployed in Slack; Salesforce’s own sales agent within Slack is improving the efficiency of Salesforce’s sales teams by saving 44,000 hours of work annually; pairing Data Cloud with the sales agent has led to a significant reduction in lead-routing time from 20 minutes to 19 seconds

Slack is, of course, where I believe you’re going to really begin and end every Agentforce conversation. It’s the conversational interface for managing all of your work across apps, systems, teams. And Service Cloud, Sales Cloud, Tableau Next, any Salesforce app can live inside Slack…

…With Agentforce in Slack, every employee has a digital teammate that can make notes for your meeting, summarize your Slack channels. And you really see like AI taking place on Slack when you look at Slack recap or you look at agents just coming right into your channels to talk to you in real time…

…Our sales agent in Slack is transforming how our teams sell. Our AEs have already logged over 21,000 interactions, simplifying everyday sales activity, saving our teams over 44,000 hours annually. Further, Data Cloud is amplifying that impact, cutting lead routing from 20 minutes to 19 seconds in Slack.

Finnair is using Agentforce for customer service; Agentforce is in thousands of conversations a week with Finnair customers; using Agentforce, Finnair aims to automate 80% of customer service queries and reduce rep onboarding time by 25%; management sees the airline industry as a big opportunity for Agentforce

Finnair is using Agentforce to help manage customer service for 12 million passengers. Agentforce is already having thousands of conversations a week with Finnair customers, and the airline is aiming to automate 80% of customer service queries and reduce new rep onboarding time by 25% with Agentforce…

…We’re talking to so many airlines about how they not only can use all our Customer 360 apps, not just the Data Cloud, not just our meta platform but build this agentic capability around the airline. This is going to be a huge opportunity for that entire industry, which is so customer service obsessed.

Latin America retailer Falabella started using Agentforce in Colombia, deployed through WhatsApp, a few months ago; Falabella’s Agentforce experience was very successful and a six-figure Agentforce deal has now become a $1 million deal

Here’s this company that’s pioneering Agentforce just a couple of months ago in their Colombia business. And then it’s so successful, they’re actually deploying it on WhatsApp, which we hadn’t really seen before. And they’re using WhatsApp. The customers are coming in. They’re coming in and, “Hey, what’s my order? What’s going on?” And this what’s my order use case is the main thing that’s driving Falabella, and boom, all of a sudden, they go, “You know what, this is working so well. We’re going all over Latin America,” and what was kind of, I think, a low 6-figure deal. I mean, Miguel is going to have to come in here and tell me, turned into like a $1 million deal overnight…

…Yes, it was $300,000, right, from just Colombia.

OpenTable is using Agentforce and started with restaurants, before deploying to employees, and now consumers

OpenTable, we’ve been talking about this story for a while, which is [ Glenn ] is doing a great job deploying Agentforce. And he started with the restaurants. Then, he did employees. And now he’s like doing the consumers, and this is an incredible thing that OpenTable has been so successful.

Brazilian media conglomerate Grupo Globo bought Agentforce in 2024 Q4 (FY2025 Q4); Agentforce has since increased Grupo Globo’s customer retention rate by 22%

Another Latin American success is Grupo Globo. The Brazilian media conglomerate purchased Agentforce in Q4. In less than 3 months, Agentforce basically boosted Globo’s retention rate by 22%, driving revenue upgrades, cross selling, converting nonsubscribers.

Large Japanese enterprises are very excited about Agentforce and are using it to build agentic layers around their businesses

We’ve talked about the speed at which Agentforce has grown, but it’s not just a U.S. phenomenon. It’s an international phenomenon. And as I mentioned last week, I was in Japan, and one of our customers in Japan, Fujitsu, is really doing some amazing things. When I heard the rate and scale and speed at which they want to deploy the product, and their vision in terms of how it can be all-encompassing for an agentic layer around the entire company, I really just could not believe it. I sat with 5 of the largest Japanese companies. And I think somehow every company’s imagination has been captured by this idea that they can build an agentic layer around their company.

Salesforce’s management is seeing that the rate of innovation in AI is far exceeding customer adoption

This idea that agents are kind of starting to provision to become digital labor, this is exceeding my expectation that it crosses industries. It’s crossing geographies. And as I said, all of this is really just happening in only 6 months. By the time we get to Dreamforce, which is still another 6 months ahead, I expect another huge massive transformation. We’re starting to cut the code right now on what will be one of the main releases of Dreamforce. And when we look at what will come as the release after Dreamforce, our technology, our product doesn’t look at all like what it looked like just a few months ago. So we’re moving very, very fast. And I think that I really would say this hasn’t really happened too many times in the last 30, 40 years. The rate of innovation far exceeds the rate of customer adoption.

Salesforce’s management thinks that most of the AI models are within 3-6 months of innovation of each other; management thinks the models have not improved a lot in accuracy because they are all trained on the same datasets

When we all are using ChatGPT or Gemini, or You.com or Perplexity or Anthropic or any of these models or an open source model or DeepSeek, okay, all of these models are mostly the same. They’re within 3 to 6 months of innovation of each other. We all know that. And then all these models are trained on mostly the same datasets because there’s only so much data that they can be trained on. Now there’s some synthetic data, but it doesn’t mean very much to a lot of these models. That’s why, by the way, that these models still have not improved a lot of their accuracy in the consumer side.

Salesforce’s management thinks Salesforce has been the best technology company in the world at building an agentic layer around itself; Salesforce’s AI agents are on track to pass 1 million customer support conversations this quarter, which has led to a dramatic reduction in the number of people needed to handle customer issues; Salesforce is Agentforce’s Customer Zero

What is it going to take to get this transformation to happen, where we have a much bigger agentic wrapper around Goldman Sachs, your company, or around all companies? We’ll look at my company to start. I think we’ve probably done the best of maybe any tech company. We’ve done now — this quarter, we’ll pass through 1 million conversations in customer support. It’s a dramatic reduction in the amount of human beings who have had to get involved to answer customers’ issues. I don’t think any other tech company at scale has delivered this capability. It is a proof point without any doubt that Salesforce has been able to deliver on its vision of digital labor, and Agentforce’s #1, Customer Zero, Salesforce. So we eat our own dog food, and this is amazing.

Salesforce’s management thinks the proclamation from some AI experts that AI will very soon cause massive job losses in white-collar work is alarmist given the current state of AI

[Question] The CEO of Anthropic recently commented that AI could wipe out 50% of entry-level white-collar jobs and drive unemployment a lot higher, unfortunately. And since you’ve been very astute and very ahead of the curve on commoditization of LLMs and you’ve been very outspoken on the topic of digital labor, I’m curious just to get your thoughts on that concept.

[Answer] In terms of the amount of white-collar jobs that are going to disappear, you’re all experts at this point in the current generation of AI. You’re using it every day. We’re all using it. It doesn’t matter who I speak to. Probably all of your children, all of your family members are using it, and you can see how it’s impacted. Like people are smarter. They get their medical labs. They ask, “Well, what do you think about this?” But then when you call your doctor, sometimes the doctor goes, “Well, actually, that’s not completely true.” And we’re kind of at this point where it’s very good on some things but not for everything. And because of that, even in the enterprise, while there’s a lot of things that we can do, edit this press release or write me this speech or whatever, but the reality is, oh, you’re probably still going to want to get in there and work on it. And I think we all know that. So look, we’re at an exciting moment in AI, and maybe we’re moving into this world where there’s going to be like these AI prophets and obviously, I’m a huge fan of Dario’s. He’s great, amazing person, incredible company, wonderful. But some of these comments, I think, are alarmist and get a little aggressive in the current form of AI today.

Sea Ltd (NYSE: SE)

Sea’s management thinks AI will help Sea’s business on the consumer-facing side and in internal product improvement; on the consumer-facing side, Sea has used AI to improve search recommendations and advertising efficiency, help sellers create better product descriptions, and help sellers create videos based on images or descriptions of products; management measures the returns of Sea’s AI-related investments through click-through rates and conversion rates; management is seeing that most of Sea’s large AI-related investments on the consumer-facing side have delivered a positive return on investment (ROI); for internal product improvements, management is using AI to filter counterfeit products and detect fraud, among other areas; management measures the ROI of AI investments for internal product improvements through cost savings, and most of these AI investments have positive ROI

For the AI investment, we believe that AI will make a big change to our industry, both from a consumer-facing side and also from our internal product improvement…

…One of the big improvements that we made is on our search recommendations and our ads. We’re deploying AI solutions to help us target our users a lot more efficiently when users search and when people come to our app, so we can recommend more accurate products to them and also have better efficiency on the ad product. That’s why we can improve the ad take rate over time. Another example is the AIGC production that helps our sellers create their product descriptions. We have been increasing the video coverage for our product descriptions a lot over time, and part of that is driven by us enabling sellers to create videos based on images or based on some of the descriptions. And typically, for this investment, we always have a very clear ROI measurement, as I shared before. Whether we are spending our AI resources on bettering our ads or on bettering the product descriptions, we measure the return on investment through our click-through rate and through our conversion rate. And most of our investments so far, anything of meaningful size, have delivered a positive return for any investment with AI resources…

…We are also investing quite a lot in improving our internal productivity. For example, we’re using AI to help our internal listing team filter products in our marketplace a lot more efficiently, so we can discover counterfeits, fraud, et cetera, in a much cheaper way. And again, for all those things, we measure our AI investments against the savings they generate, and they typically bring a positive return.

Tencent (OTC: TCEHY)

Tencent’s management is seeing AI having tangible contributions to Tencent’s businesses, such as performance advertising and evergreen games

During the first quarter of 2025, our high-quality revenue streams sustained their solid growth trajectory. AI capabilities already contribute tangibly to businesses such as performance advertising and evergreen games.

Tencent’s management has stepped up Tencent’s spending on AI opportunities, such as the Yuanbao app and AI in Weixin; management believes the operating leverage from Tencent’s existing revenues will absorb the costs associated with AI investments and contribute to the company’s growth; Tencent’s AI investments are in the form of both capital expenditures and operating expenses; some of Tencent’s AI investments are already generating revenue, such as through (1) improved advertising targeting, (2) improved content recommendation which increases user time-spent, (3) more time spent in games from usage of AI, and (4) cloud revenue from the deployment of GPUs, or graphics processing units; other AI investments will need more time to deliver a return on investment (ROI), and in the short term these investments will temporarily narrow the gap between revenue and operating profit growth compared to recent quarters

We also stepped up our spending on new AI opportunities such as Yuanbao application and AI in Weixin. We believe the operating leverage from our existing high-quality revenue streams will help absorb the additional costs associated with these AI-related investments and contribute to healthy financial performance during this investment phase. We expect this strategic AI investment will create value for users and society and generate substantial incremental returns for us over the longer term…

…As we have highlighted in the prior quarter earnings call, we are stepping up investments in AI in the form of capital expenditures as well as operating expenses. Some of these GPU and AI investments already generate revenue for us, such as improved ad targeting, which boosts ad revenue; improved content recommendation, which boosts user time spent and thus ad revenue; usage of AI within evergreen games, which boosts user engagement and thus game revenue; and deployment of GPUs and AI across our computing infrastructure, APIs and platform solutions, which generates cloud revenue.

For our other GPU and AI investments, which are more long cycle in nature, there’s a natural time lag between making the investments and those investments starting to generate significant revenue for us. During this time lag period, we expect the costs of those GPU and AI investments to offset our underlying operating leverage, resulting in a temporary smaller gap between our revenue and operating profit growth rate than we have achieved in recent quarters. That said, we’re confident that our stepped-up investment in longer-cycle AI projects will create substantial long-term value for our users, business and shareholders.  

Tencent is in the early stages of rolling out AI features for Weixin, such as (1) Yuanbao (Tencent’s AI chatbot) within Weixin chat, (2) AI answers within Weixin Search, (3) AI tools for content creators for easier content production, and (4) an AI coding assistant to make it easier to create Mini Programs in Weixin

We’re in the early stages of rolling out AI features within Weixin. Users can now add Yuanbao as a Weixin contact for seamless AI interaction within Weixin Chat, providing context-aware responses and facilitating content discovery while leveraging the Weixin ecosystem and the worldwide web. Weixin Search is now starting to include results powered by large language models, including the fast-thinking model Hunyuan Turbo S and the chain-of-thought reasoning models Hunyuan T1 and DeepSeek R1. We provide AI tools so that content creators can generate images matching the text of their Official Accounts articles and generate video effects for Video Accounts videos utilizing preset templates. We reduced Mini Programs development time via an AI coding assistant for creating Mini Programs that supports natural language prompts and image inputs.

The Marketing Services segment’s revenue was up 20% year-on-year in 2025 Q1 because of higher user engagement and AI upgrades to the advertising platform; Marketing Services revenue grew across all major advertising categories; management has upgraded the Marketing Services segment’s advertising platform with enhanced generative AI capabilities to accelerate advertising creation and live-streaming content; management is using LLMs (large language models) to deliver better advertising recommendations

For Marketing Services, our revenue grew 20% year-on-year to RMB 32 billion, benefiting from higher user engagement, ongoing AI upgrades to our ad platform and a strengthening transaction ecosystem within Weixin…

…On the ad tech front, we upgraded our advertising platform with enhanced generative AI capabilities, such as ad generation and video editing tools to accelerate ad creation, and digital human solutions to facilitate live-streaming activities for content creators and merchants. We’re using large language models to deepen our systems’ understanding of merchandise and of user interests across our apps, and so deliver better ad recommendations.

AI-related revenue within Tencent Cloud grew quickly in 2025 Q1, driven by demand for GPUs (graphics processing units), APIs (application programming interfaces), and platform solutions; Tencent Cloud’s growth was constrained by GPU availability

AI-related revenue within Tencent Cloud grew quickly year-on-year, driven by increased customer demand for GPUs, APIs and platform solutions, although constrained by limited GPU availability. 

Tencent’s management thinks there’s room for both a general AI agent and a Weixin-specific AI agent that sits within the Weixin ecosystem; management believes that as Tencent’s AI chatbots Yuanbao and iMA improve and evolve over time, they can answer questions better and will be able to interact with other apps and external APIs (application programming interfaces); management thinks that Yuanbao and iMA are similar to AI agents developed by peers; management believes that Tencent can create a unique AI agent that connects with users within Weixin’s ecosystem

So on Agentic AI, it’s a very hot concept, right? And the idea is that the AI can actually help you complete very complicated tasks that involve many different steps as well as the use of tools, maybe in connection with other apps. So if we look at that concept, then there is a general Agentic AI, which everybody can do. Essentially, you create this agent and it goes out to the world and tries to complete tasks for your user. But at the same time, there’s also an Agentic AI that can sit within Weixin and the unique ecosystem of Weixin. And I think those are two different products…

…I think we are creating that capability within some of our AI-native products such as Yuanbao and iMA. Over time, these AIs continue to evolve and increase in terms of their capability. In the very beginning, these AIs answer questions very quickly, so those are the sort of quick responses. Then over time, they start including chain-of-thought, long-thinking reasoning models and can answer complicated questions. And over time, their capability can actually allow them to start doing more complicated tasks. So they start evolving to have Agentic capability, and they will be interacting with all other apps and programs and external APIs to help the users. So that will continue to evolve. And it’s not that much different from other Agentic AIs provided by our peers.

But on the other hand, right, within the Weixin ecosystem, I think there is the opportunity for us to create a pretty unique Agentic AI that connects with the unique components of the Weixin ecosystem, including the social graph, including the communications and community capability, including the content ecosystem, such as our Official Accounts and Video Accounts and all the millions of Mini Programs that exist within Weixin, which actually sort of gets into all kinds of information as well as transactional and operative capabilities across many different verticals of applications. So I think that would be extremely unique compared to other more general Agentic AIs, and that’s sort of a very differentiated product for us. 

Tencent’s management thinks AI business models include (1) increasing advertising revenue through AI targeting, and (2) GPU rentals; management sees GPU rentals as a low-priority business; management thinks the subscription model for AI services will not be an important business model within China

In terms of your question on AI business models, I think if you look at advertising, it’s directly augmented by AI because AI can actually help to improve the targeting capability of our ads. And when we deliver better results, then it translates directly into additional advertising revenue. And I think that is a big opportunity that we are already realizing in our performance ads, but there’s more opportunity to develop over time. Now I think transaction is actually very closely tied to advertising, right? When you have advertising that leads to direct transactions and then advertising value actually goes up significantly. And I think that’s the way we are actually also trying to increase our advertising revenue. That’s another component and pillar of our advertising revenue growth driver. 

GPU rental is sort of directly related to cloud business, and that’s more like a reselling business mostly. And to a large extent, right now, we are putting it on a lower priority because — especially when there’s a short supply of GPUs, right, then GPU rental is a lower priority for us.

And subscriptions, I think it’s not the most likely business model for AIs in China, right? Now everybody is actually providing AIs for free. So the subscription model, which exists outside of China, I think it’s not going to be mainstream business model for AI in China.

Tencent’s management sees a long runway for growth in both the Domestic and International Games businesses; one driver for growth is the use of AI in ways such as deploying an AI coach for new players and helping to prevent cheating

We do believe we have a long runway for our domestic and indeed international game revenue growth looking forward. And there’s many reasons, but just to pick on three for now. First of all, we talked extensively this time last year about some of the changes we were making to how we envisage and therefore, how we operate and therefore, who operates our biggest Domestic Games. And you can see that we have made those changes and they’re bearing the fruit that we hoped they would bear and we see them bearing more fruit going forward.  A second driver or enabler of that long runway, is the utilization of AI, which we think is particularly beneficial to the big competitive multiplayer games that we’ve talked about extensively and that represent the majority of our domestic game revenue. And that’s the case because while there’s many ways that we can and we’re starting to deploy AI within games some of the most interesting include using AI to help coach new players, to help accompany existing players, to help prevent cheating and hacking and so forth. And all of those are particularly important within competitive multiplayer games.

Tencent’s management is seeing users of Tencent’s new AI services use them for asking questions, following up with more questions, and analysing photos

At this stage, I think we’re trying to create functionalities and user experiences that would leverage AI and try to see what may or may not stick with the users. So as I said, right, the users sort of like to ask questions, like to interact with the AI with further follow-up questions. And when we put in various functionalities such as allowing photos to be analyzed and sort of people use it. So there are a lot of functionalities, which right now we have put in, and we’re starting to get to see people like them a lot or are not using it that much

The NVIDIA H20 chip was banned in 2025 Q1 and there are now new BIS guidelines (effectively new chip controls), but Tencent has a good stockpile of AI chips; management will use the stockpile of AI chips to generate immediate returns and also to train Tencent’s models; management believes that Tencent can achieve very good training results even with small chip clusters, so the company’s current stockpile of chips will be sufficient to train models for a few more generations; management thinks that the concept of scaling laws embraced by American technology companies, where AI models need to be trained on ever-larger chip clusters, is outdated; management sees Tencent having a larger need for GPUs around inference, especially if the company moves toward agentic AI; to improve inference efficiency and reduce GPU reliance, management thinks Tencent can leverage software optimisations, customise AI models depending on the use case, and use other chips (such as ASICs, or application-specific integrated circuits) that are available in China

On the GPU front, it’s actually a very dynamic situation, right? So there — since the last earnings call, we have seen an H20 ban. And then after that, there was the BIS new guidelines that just came in overnight. So it’s a very dynamic situation, and we just sort of have to manage the situation, on one end, sort of in a completely compliant way, and on the other end, sort of we try to figure out the right solution for us to make sure that our AI strategy can still be executed. So the good thing that we are in is that, number one, I think we have a pretty strong stockpile of chips that we acquired previously, and that would be very useful for us in executing our AI strategy. And if you look at the allocation of the usage of these chips, obviously, they will be used for the applications that will generate immediate return for us. So for example, in the advertising business as well as content recommendation product, right? We actually would be using a lot of these GPUs to generate results and generate return for us. Secondly, in terms of the training of our large language models, they will be of the next priority. And the training actually requires higher-end chips. And the good thing on that front is that over the past few months, right, we start to move off the concept or the belief of the American tech companies, which they call the scaling law, which require continuous expansion of the training cluster. And now we can see even with a smaller cluster, you can actually achieve very good training results. And there’s a lot of potential that we can get on the post-training side, which do not necessarily meet very large clusters. So that actually sort of help us to look at our existing inventory of high-end chips and say, we should have enough high-end chips to continue our training of models for a few more generations going forward.

And then the larger need for GPUs are actually sort of around inferences and especially sort of when you see a growth in demand for inference on the user side as well as when we move into the chain of thoughts reasoning model, it actually requires many more tokens to answer a complicated question. And if we move into Agentic AI, right, it requires even more tokens, there’s actually a lot of need on the inference side. But on the inference side, there’s actually a lot of work that could be done for us to manage the need.

One is just sort of leveraging software optimization. I think there’s still quite a bit of room for us to keep on improving the inference efficiency, right? So if you can improve inference efficiency 2x, then basically, that means the amount of GPUs get doubled in terms of capacity. So that’s actually a very good way of investing our resources to improve on the inference efficiency. And the other approach is we can customize different sizes of models, especially some applications do not require very large models, right, and we can tailor-made models and distill models so that they can be used for different use cases, and that can actually save on the inference usage of GPUs. And finally, we actually sort of can potentially make use of other chips, compliant chips available in China or available for us to be imported as well as ASICs and GPUs in some cases for smaller models inferences. So I think there are a lot of ways to which we can fulfill the expanding and growing inference needs, and we just need to sort of keep exploring these venues and spend probably more time on the software side rather than just force buying GPUs.
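The capacity arithmetic behind the software-optimisation point is simple: a 2x inference-efficiency gain effectively doubles the serving capacity of a fixed GPU fleet. A minimal sketch, with all figures purely hypothetical (they are not Tencent's numbers):

```python
# Illustrative only: how effective inference capacity scales with a software
# efficiency multiplier on a fixed GPU fleet. All numbers are hypothetical.

def effective_capacity(num_gpus: int, tokens_per_gpu: float, efficiency_gain: float) -> float:
    """Total tokens/sec the fleet can serve after an efficiency multiplier."""
    return num_gpus * tokens_per_gpu * efficiency_gain

baseline = effective_capacity(1000, 500.0, 1.0)   # 1,000 GPUs at 500 tokens/sec each
optimized = effective_capacity(1000, 500.0, 2.0)  # same fleet after a 2x software speedup

# Doubling efficiency doubles capacity without buying a single extra GPU.
assert optimized == 2 * baseline
```

This is the sense in which management frames software work as a substitute for GPU purchases: the fleet size stays fixed while served demand grows.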

Tencent’s management is unsure of when some of Tencent’s investments in AI will pay off because they see the whole world as being in uncharted territory when it comes to AI investments, but Tencent has historically experienced a pay-off within a 1-2 year timeframe for investments in new areas; management expects the gap between Tencent’s revenue growth and operating profit growth to narrow, but operating leverage is still expected

[Question] You guys mentioned earlier in your opening remarks, smaller gap between revenue and operating profit. Can you kind of elaborate a bit more on this, the magnitude and what kind of extensive period that we’re talking about?

[Answer] We’re at uncharted territory, not only for Tencent, but for the whole world in terms of the deployment of artificial intelligence. So I don’t have necessarily a very high degree of confidence in these statements. But if you’re thinking about measuring the duration, then the past may be the best guide to the future in that Tencent has been through many time periods where we have cultivated a new product toward critical mass and substantial popularity ahead of monetizing that product. And typically, the duration of those gaps between investment to cultivate versus monetization and revenue generation would be in the sort of 1-year to 2-year time range. So obviously, it will depend on what our peer companies do in China, obviously, will depend on consumer habits, on advertiser habits. But I think that’s a reasonable time frame to think about.  In terms of magnitude, I won’t go beyond what we said earlier, which is referring to a narrowing. So we don’t expect the delta between revenue growth and operating profit growth that we experienced this quarter to continue. There will be a narrowing. But on the other hand, we don’t expect our operating leverage to turn negative either.

Tencent’s AI investments are mostly in the form of capital expenditures, but there’s also incremental marketing expenses for Yuanbao and salaries for AI engineers

In terms of what costs other than CapEx or really depreciation could cause that narrowing, then CapEx depreciation is, by far, the most important. We do have some incremental marketing expenses for Yuanbao, although not so much for AI within Weixin. And then we referenced the fact that engineers with expertise in AI are expensive, but that’s more of a sort of mix comment rather than an aggregate headcount comment. We don’t see a step-up in headcount. We’ll continue to manage headcount closely, but we observe that engineers with that AI expertise are rightly well paid.

Historically, Tencent’s banner ads had 0.1% click-through rates while feed ads had 1%, but with AI, management has seen certain ad inventories reach a 3% click-through rate; management thinks no one knows the upper limit of AI-powered advertising click-through rates; AI can benefit Tencent’s advertising revenue by showing more appealing content to consumers to increase their time spent, but the increase in click-through rates is still the most important improvement from AI

A big part of the uplift that AI is providing to advertising revenue today can be quantified in the form of the click-through rate on ads. And historically, banner ads achieved a roughly 0.1% click-through rate. Feed ads achieved roughly 1.0% click-through rate. With the benefit of AI, we have seen that the click-through rate on certain ad inventories can improve toward 3.0%, for example.  And then the question is, what’s the upper limit on that click-through rate. And at this point, no one knows the answer because it almost becomes philosophical if you had complete information or insight into a consumer if you had the ability to infer what the consumer wants or the consumer given their prior behaviors should want and then deliver an ultra targeted ad to that consumer, then it’s very hard to say that the upper limit should be X percent rather than Y percent…

We can use AI to target more appealing content to the consumer, which means they spend more time in the feed, which means they then view more ads, but I think that ad click-through rate is perhaps the most important.
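Holding impressions and price-per-click fixed, the click-through rates quoted in the passage (0.1% for banner, 1% for feed, 3% for AI-targeted inventory) translate directly into proportional click volume, and hence performance-ad revenue. A quick sketch using a hypothetical impression count:

```python
# Illustrative only: expected clicks at the click-through rates cited in the passage.
impressions = 1_000_000  # hypothetical number of ad impressions

ctr = {"banner": 0.001, "feed": 0.01, "ai_targeted": 0.03}
clicks = {fmt: impressions * rate for fmt, rate in ctr.items()}

# Moving inventory from feed-level to AI-targeted CTR roughly triples click
# volume; feed already delivers 10x the clicks of legacy banner inventory.
assert clicks["ai_targeted"] == 3 * clicks["feed"]
assert clicks["feed"] == 10 * clicks["banner"]
```

This is why management frames CTR as the single most important lever: each percentage point of CTR scales revenue linearly before any change in pricing or traffic.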

Veeva Systems (NYSE: VEEV)

Veeva’s management announced the Veeva AI initiative in April 2025; Veeva AI will see the company build AI into its applications across clinical and commercial; management thinks the addition of AI will significantly improve productivity for customers; Veeva AI agents will have application-specific context and direct access to Veeva data; the first release of Veeva AI will happen in December 2025; the first 2 of Veeva’s AI Agent solutions, CRM Bot and MLR Bot, are planned for December 2025; Veeva AI is part of Veeva’s overall AI strategy, which includes the Veeva Direct Data API and the Veeva AI Partner Program; management sees the Vault CRM product as a fast path to AI productivity for many customers; management thinks Veeva AI can improve the efficiency of the life sciences industry by 15% in the next few years; management thinks Veeva AI’s efficiency gains will manifest because the AI technology will be deeply embedded in core applications; early reception from customers to Veeva AI has been very positive; management will charge an appropriate license fee for Veeva AI that balances revenue growth for the company and broad customer adoption; customer response to a demo of the CRM Bot was great

Announced in April, Veeva AI is a major initiative for us with a clear vision that’s focused on delivering tangible value. We’re building AI into Vault Platform and Veeva applications across all major areas from clinical to commercial. Adding AI – through AI Agents and AI Shortcuts – to our core applications can significantly improve productivity for customers and the industry. Veeva AI Agents have application-specific context and direct, secure access to Veeva application data, documents, and workflows. AI Shortcuts enable end users to set up personal AI-powered automations for their most frequent user-specific tasks. The first release of Veeva AI is planned for December 2025. Our first two AI Agent solutions, CRM Bot and MLR Bot in commercial, are also planned for year end. Veeva AI is part of our overall AI strategy which also includes the Veeva Direct Data API and the Veeva AI Partner Program, which are both available and operating well today…

…We showed our AI Agents – CRM Bot, Voice Control, Compliant Free Text, and MLR Bot. Vault CRM will be a fast path to highly productive AI for many of our customers…

…I think if you look over the next 3, 4, 5 years out to 2030, I think Veeva can help increase life sciences efficiency by 15% or so with Veeva AI…

…Why I’m so bullish on it? Because Veeva has the core applications, and we’re building the AI very deeply embedded in the core applications. So when we build AI, we’re not building a generic AI. We’re building a medical legal regulatory approval agent, a CRM agent that does pre-call planning, a safety AI agent that can transcribe pretext into a safety case, so deep AI applications. And you need the deep core applications and the AI working together. And that’s where the magic will happen. It’s just very, very, very clear to me…

…[Question] Understanding it’s Veeva AI just recently kind of rolled out. Any initial feedback from customers and how you think in the coming years, how it might impact your overall business?

[Answer] The reception from customers is very positive because it just makes sense. It’s not a lot of hype. They need the AI working with the core applications…

…I think Veeva AI is something that we will charge an appropriate license fee for. So I think it will be a net positive for Veeva. We don’t have that packaging worked out yet. We do want to price it so that it can be very reasonable and broadly adopted, help the industry move forward. And yes, that certainly help our revenue all the time…

…When I showed the demo of Veeva AI, one example of what Veeva AI can do with CRM Bot, you can just see the aha moment go with the customers because what they want is they want AI to help them with the engagement planning, right, do all that work. And then all the data entry afterwards, do that work so they can focus on the engagement in their field. And you just see the light bulbs going on.

Veeva’s management sees the core AI technology as settling down a little; management thinks it’s clear that AI is a new computing paradigm that can produce new kinds of automation; management thinks that core applications will still be very relevant despite the automation that AI can deliver

I think we — the core technology has settled down a little bit in terms of large language models, what’s going on there. And then it’s very clear that this AI is a new computing paradigm. It’s something that can automate certain things that humans can do, which basic software, traditional software couldn’t do that. It doesn’t work like a human. This is nondeterministic computing. It can automate some things that a human can do, but it doesn’t obviate the need for a core application.

Veeva’s management thinks that AI can deliver the biggest positive productivity impacts in the pharma industry within the sales function

Where is AI likely to be and across sales, marketing or service. Yes, it’s a good question. Overall, the way to think about the pharma industry is the human relationships, the sales organizations, the spend on the sales force is very significant and very meaningful. And if you can provide productivity gains and effectiveness gains for the field team, you have a very significant impact. So I think there’s a — we’re seeing a lot of focus in the sales side, which is why one of our first agents will be in the CR in the core CRM space, the CRM bot. We think we can make customers significantly more productive from a field team perspective. So it’s not to say that we’re not seeing investment in other areas, certainly, customer service. Case intake as an example. There’s a lot of examples on the marketing side. But I would prioritize sales higher given the size, the importance, the relationships and the potential impact for it to have.

Veeva’s management thinks the pharma industry has problems with fragmented data (which hampers the use of AI), and has not been able to produce deep industry-specific AI yet; management thinks Veeva can help with both problem areas

The industry still has fragmented data and getting the data to work together, getting the data into the software so you can make decisions and you can get insights fast about that. That is still a challenge for the industry. It’s not a solved problem yet…

…We talk a lot about the excitement around AI, but there’s also a lot of unsolved problems in the AI space. And part of that is bringing together or the industry hasn’t yet been able to bring together very industry-specific processes with deep industry-specific AI. That’s a problem that they’ve made investments. They haven’t often seen the full return on their investment in some of the AI projects. And I think that’s another area where they’re excited about our ability to help them over time.

Wix (NASDAQ: WIX)

Wix’s management recently introduced a new AI-powered product, Wixel, which is a stand-alone visual design platform for things other than websites; Wixel is constantly choosing and optimizing the best AI models for each task behind the scenes, which is a unique feature among similar offerings; Wixel helps make image and video editing more accessible; Wix has partnered with Microsoft to integrate Wixel’s capabilities into Microsoft Copilot; it’s still very early days for Wixel and management expects Wixel to evolve meaningfully throughout 2025; management is treating Wixel as a separate subscription with its own pricing of $79, and is testing pricing now; management believes legacy players will find it hard to change their user interface and experience as they already have a big customer base, creating differentiation for Wixel

Earlier this month, we introduced Wixel, our new stand-alone visual design platform that extends Wix’ vast design expertise beyond websites for the first time. Wixel marks the beginning of our next-generation approach to visual design, combining Wix’s intuitive creation tools and user-friendly interface with the power of generative AI. This platform combines the best AI models on the market today tailored for specific image needs, including object, background editing and much more with a constant pipeline of new AI enhancements. This makes Wixel unique from everything else available on the market. It handles the complexity of today’s high-end AI technology behind the scenes, choosing and continuously optimizing the best models for each task. This allows our users to always have access to the most advanced and up-to-date tools for image generation and editing…

Our goal is to give total control over photo and video editing to everyone, the same way we did for website creation. Wixel is for Wix users, for entrepreneurs, freelances and business owners who already rely on Wix to build and grow online. It’s for the millions of DeviantArt artists, who want to add an easy to use yet powerful editing tool to their toolkit, without sacrificing the quality of their art…

… Excitingly, we partnered with Microsoft to integrate Wixel’s capabilities into Microsoft Copilot. This collaboration allows Microsoft 365 users, small business owners, students and everyday creators, to design in a smarter, more intuitive way with Wixel.  Though this launch is a cornerstone of our product road map, we are still very early in the journey with plenty of work ahead in order to achieve our vision for Wixel. In the coming year, you can expect the platform to evolve meaningfully with breakthrough capabilities. As we continue to innovate, I’m excited to see how Wixel reshapes the digital creation space…

[Question] Some thoughts on pricing, how you landed at $79 a year, and how you’re trying to strike the balance between monetization and adoption?…

…When it comes to Wixel, we don’t try to build another drag and drop editing environment, which I think all the tools that you’re referring to are a drag and drop editing environment. What we’re trying to do is really how would — if you would think in the 5 years from today, how you could edit images content with AI, how would that look like? And we’re trying to build that into Wixel. So I think the way that the tool itself behaves is very different than the traditional editing environment. Now I’m not saying that they cannot do that. I’m sure, they can, there are a lot of smart people there. I’m just saying that if you try to rebuild your tools into this thinking about how will the universe look in 5 years or how would AI look in 5 years, you’ll find that you have to change a lot of the user interface, a lot of the experience, a lot of the underlying technologies in those existing tools, which I believe is a bit of a challenge when you have a lot of users.

Wix’s management recently launched Astro, a new AI assistant embedded within the Wix dashboard; management expects Astro to improve user engagement, product upgrades, and reduce churn; management plans to launch more AI agents

We also introduced Astro, our new AI assistant embedded within the Wix dashboard. Astro simplifies the user journey by guiding users, surfacing relevant tools and insights and helping them complete key tasks. We expect Astro to improve user engagement, boost package upgrades and reduce churn over the long term. And it’s only the first in a series of AI agents we plan to roll out.

Wix’s management recently launched new AI-powered tools for website automations and customisations; the tools include (1) the creation of dynamic content based on site-visitor characteristics, (2) a no-code interface for users to drive business outcomes, and (3) the automation of advanced business workflows

Additionally, we launched new AI-powered tools for website automations and real-time site customization, including adaptive content application, Wix Functions and Wix Automations. These features are designed to make our platform smarter and more efficient while delivering highly personalized experiences to site visitors…

…. This suite includes: 

  • Adaptive content application: a tool designed to personalize website experiences for site visitors by generating dynamic content based on visitor characteristics and instructions, ultimately enhancing engagement and user experience
  • Wix Functions: a no-code interface that allows users to customize outcomes for various business scenarios, enabling businesses to operate more smoothly and effectively
  • Wix Automations: a builder designed to support advanced business workflows with a highly intuitive, fully customizable automation engine

These tools help businesses effortlessly optimize their operations for enhanced efficiency, while ensuring a seamless visitor experience without performance drawbacks like increased load times.
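The adaptive-content idea above can be pictured as a function that maps visitor characteristics to a content variant. The sketch below is a toy illustration of that concept only; it is not Wix's actual API, and the field names and variants are entirely hypothetical:

```python
# Toy illustration of adaptive content: choose a headline variant from
# visitor characteristics. Field names and copy are hypothetical; this is
# not Wix's API.

def pick_headline(visitor: dict) -> str:
    if visitor.get("returning"):
        return "Welcome back! Pick up where you left off."
    if visitor.get("referrer") == "newsletter":
        return "Thanks for reading our newsletter. Here's what's new."
    return "Welcome! Explore what's new."

assert pick_headline({"returning": True}).startswith("Welcome back")
assert pick_headline({"referrer": "newsletter"}).startswith("Thanks")
```

In a real system the rules would typically be authored by the site owner (or generated by an AI model from instructions), but the visitor-in, content-out shape is the same.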

Wix’s management recently launched Wix Model Context Protocol or MCP Server, an infrastructure advancement that lets users leverage natural-language prompts to connect Wix’s business functionality with their preferred AI tools; management demonstrated at a recent conference how Wix MCP Server can be used to generate code for fully functional payment solutions

Finally, we rolled out the Wix Model Context Protocol or MCP Server, a key infrastructure advancement that allows users to leverage natural-language prompts to seamlessly connect Wix’ comprehensive business functionality with their preferred compatible AI-powered tools. The Wix MCP Server enables AI-driven app development for users to build custom experiences on top of Wix or manage their Wix-based business using natural language and AI coding assistance. As the use case presented at Stripe’s recent conference, our team demonstrated how to use LLMs to generate reliable code for fully functional payment solutions. They built a complete website that accepts online payments via credit cards, Apple Pay and Google Pay through Wix Payments and Stripe.
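For readers unfamiliar with MCP: the Model Context Protocol exchanges JSON-RPC 2.0 messages, with tool invocation going through a `tools/call` method. The sketch below shows the general message shape only; the tool name and arguments are hypothetical, and the actual tools exposed by the Wix MCP Server would need to be looked up in Wix's documentation:

```python
import json

# Rough sketch of the JSON-RPC 2.0 message shape MCP uses for tool invocation.
# The tool name and arguments below are made up for illustration; they are not
# taken from the Wix MCP Server's real tool catalog.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_payment_link",                     # hypothetical tool name
        "arguments": {"amount": 4999, "currency": "USD"},  # hypothetical arguments
    },
}

payload = json.dumps(request)
assert json.loads(payload)["method"] == "tools/call"
```

The point of the protocol is that an LLM agent can discover a server's tools at runtime and emit calls like this one, which is what lets natural-language prompts drive business functionality such as the payment flow demonstrated at Stripe's conference.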

Wix’s management thinks that agencies, which are the customers of Wix’s Partners business, still have a big role to play today and in the future even as AI agents proliferate; management thinks AI agents today still need to evolve significantly in order to achieve big goals; management sees agencies picking up AI technologies faster than consumers and small businesses

Well, in theory, right, in theory, if we look at the far future, then why would you need an agency, right? Because in theory, you can just tell the AI, hey, build this website for me, change those things, now make it successful. And — but practically, we’re not there yet. I think there’s a big distance that we have to have for those AI agents to evolve in order to be able to help you actually achieve all of those goals. Even when we are trying to build this exact agents to do each one of those, there’s still a lot of human interactions and I think a lot of expertise that the human can bring to help it. So I think there is a lot of room for agencies even in the next — in the years to come. Currently, when we look at the AI data, I would say that agencies probably pick up technologies faster than consumers and small businesses. So we’ve actually kind of gave them a bit of a shift in terms of what they can do. 

Wix’s management is optimistic about vibe coding but it’s still a young technology that produces code that tends to break over time; management thinks vibe coding will help to expand Wix’s market reach

I think vibe coding is a super exciting concept. It’s still very early. And so things tend to break. After a while, they’re not stable. They’re not good at SEOs or search engine optimization. There’s a lot of things that need to get there to be mature in order for it to be a viable product for our customers. Just the simplest one is if you edit something, right, it takes 4 minutes for any small change to happen, right? In the best case scenario, it’s 4 minutes. So moving a button will take you a few minutes. So there’s a lot of super exciting potential in vibe coding…

…We’re going to start by — with, of course, a few things including the ability to code components into the Wix Editor, which is one of the obvious things that we’re going to be doing.  I do think that this will allow us and companies like us to expand our market reach because things that you could not have done traditionally on website building platforms, right, now you’ll be able to do because you are able to write this custom-code without coding. So I do — I’m very optimistic. I think it’s going to present to us a lot of really interesting opportunities. But I want to emphasize again, it’s really a young technology, it’s still not stable.

Wix’s management thinks websites will be structurally different in the AI age; management is using Google less when searching for information; management believes that the complexity of building websites in the age of AI will increase, which will benefit website builders such as Wix

[Question] Help us expand our mind, so to speak, on whether websites somehow kind of need to be like structurally different in the AI era, particularly from like a utility and discoverability perspective, and kind of how do you position for that?

[Answer] I do believe that there is a big change coming. I know that for myself, I’m using ChatGPT more than Google when I search for things now. So I would love to have a content — and ChatGPT digest a lot of content from the Internet and try to give you this limited version and there’s advantages and there’s disadvantages, right?…

…LLMs work today by just scrolling the Internet, of course, is not good enough. It’s not going to provide you any knowledge about will my hairdresser have an appointment in 2 days, right? And so — and we’re starting to see the first layer of protocols, right? Microsoft just announced once, Anthropic announced MCP, which is a way for an LLM to query complicated services on — in a way that the agent know how to learn, how to ask an API. We just announced that we supported and released everything that we say now is available for MCP…

…I do also believe that in many ways, that will play — help platforms like Wix because the complexity of building a website that know how to offer its services for APIs and MCP to LLM, and how to do the equivalent of SEO for LLM are just going to make building a website 10x harder, right? So if you — today, you can take somebody who know how to write HTML CSS and in theory, build a distant website, then in a year, that will be impossible. I think the complexity that will be created by those tools and the speed of innovation, right? MCP was announced 1.5 months ago, already released to [indiscernible] I think about 1.5 months ago. And so the complexity and the need to support and to accelerate, I think that is something that will actually help all the website and content-building platform because it’s going to be much harder to do it with your own internal team.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet (parent of Google), Meituan, Meta Platforms, Microsoft, MongoDB, Okta, Salesforce, Sea Ltd, Tencent, Veeva Systems, and Wix. Holdings are subject to change at any time.