
What We’re Reading (Week Ending 23 June 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve been regularly sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people.

Here are the articles for the week ending 23 June 2024:

1. The C Word – Jonathan Clements

ON SUNDAY MORNING, May 19, I was enjoying croissants and coffee with Elaine at the kitchen table, while watching the neighborhood sparrows, finches, cardinals and squirrels have their way with the bird feeder. All was right in our little world, except I was a little wobbly when walking—the result, I suspected, of balance issues caused by an ear infection.

It was going to be a busy week, and I figured that it would be smart to get some antibiotics inside me, even if visiting the urgent care clinic on Sunday might be more expensive than contacting my primary care physician on Monday and perhaps having to go in for an appointment.

Long story short, I ended the day in the intensive care unit of a local hospital, where the staff discovered lung cancer that’s metastasized to my brain and a few other spots. This, as you might imagine, has meant a few changes in my life, and there will be more to come.

I have no desire for HumbleDollar to become HumbleDeathWatch. But my prognosis is not good. I’ve had three brain radiation treatments and I started chemotherapy yesterday, but these steps are merely deferring death and perhaps not for very long. I’ll spare you the gory medical details. But as best I can gather, I may have just a dozen okay months ahead of me…

The cliché is true: Something like this makes you truly appreciate life. Despite those bucket-list items, I find my greatest joy comes from small, inexpensive daily pleasures: that first cup of coffee, exercise, friends and family, a good meal, writing and editing, smiles from strangers, the sunshine on my face. If we can keep life’s less admirable emotions at bay, the world is a wonderful place.

We can control risk, but we can’t eliminate it. I’ve spent decades managing both financial risk and potential threats to my health. But despite such precautions, sometimes we get blindsided. There have been few cancer occurrences in my family, and it’s never been something I had reason to fear. Chance is a cruel mistress.

It’s toughest on those left behind. I’ll be gone, but Elaine and my family will remain, and they’ll have to navigate the world without me. I so want them to be okay, financially and emotionally, and that’s driving many of the steps I’m now taking…

Life’s priorities become crystal clear. Even at this late stage, I believe it’s important to have a sense of purpose, both professionally and personally. I can’t do much about the fewer years, and I have no anger about their loss. But I do want the time ahead to be happy, productive and meaningful.

2. Central Banking from the Bottom Up – Marc Rubinstein

From his office a few blocks from the River Rhine in Dusseldorf, Theo Siegert had been scouring the world for investment opportunities. His research process had thrown up an under-appreciated banking stock headquartered across the border in Switzerland, and he started building a stake. Siegert knew a bit about the banking business – he was already a non-executive director of Deutsche Bank – but this stock was different. In his home country, as in many others, central banks tend not to trade freely on the stock exchange. Not so in Switzerland. Before long, Siegert had become the largest shareholder of the Schweizerische Nationalbank, the Swiss National Bank…

…It would be difficult for the Swiss National Bank to pursue its mandate – ensuring that money preserves its value and the economy develops favorably – if it also had to pander to the demands of private shareholders. So it limits private shareholders to voting just 100 of their shares – equivalent to a 0.1% position – leaving Siegert with 4,910 shares on which he is ineligible to vote. And it caps the dividend at 15 Swiss Francs a share, equivalent to a 0.4% yield at today’s price of 3,850 Swiss Francs. Of the remaining distributable net profit, a third accrues to the central government and two-thirds to regional cantonal governments.
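The ownership arithmetic above can be sanity-checked in a few lines. Note that the ~100,000 total share count is only implied by the 100-share voting cap equalling a 0.1% position; it is not stated in the text.

```python
# Back-of-the-envelope check of the SNB shareholder arithmetic.
# The ~100,000 total share count is only implied by the 100-share
# voting cap equalling a 0.1% position; it is not stated in the text.
vote_cap_shares = 100
vote_cap_fraction = 0.001
total_shares = vote_cap_shares / vote_cap_fraction   # 100,000 shares

dividend_cap_chf = 15
price_chf = 3850
dividend_yield = dividend_cap_chf / price_chf        # ~0.39%

siegert_shares = 100 + 4910                          # votable + non-votable
siegert_stake = siegert_shares / total_shares        # ~5%

print(f"implied total shares: {total_shares:,.0f}")
print(f"capped dividend yield: {dividend_yield:.2%}")
print(f"Siegert's stake: {siegert_stake:.2%}")
```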

As a result, the 10.4 kilograms of gold per share the bank carries, and its 1.2 million Swiss Francs of overall net assets per share (at March valuations), remain out of reach for private shareholders. At best, the stock is a safe haven, providing a preferred return in a strong currency, with no counterparty risk…

…The trouble was, 2022 wasn’t a good year for asset prices, leaving the Swiss National Bank highly exposed…

…Having earned 174 billion Swiss Francs cumulatively over the prior thirteen years, the Swiss National Bank lost 133 billion Swiss Francs in a single year in 2022, equivalent to 17% of GDP. It canceled its dividend for only the second time in over 30 years, signaling that there is risk in a 0.4% dividend after all.

And although asset markets recovered in 2023, strength in the Swiss Franc during the year – partly driven by the bank selling down some of its foreign assets – led to a record foreign exchange hit, triggering another overall loss (of 3 billion Swiss Francs) and another canceled dividend. Fortunately, 2024 has so far been better and, as of the first quarter, over 40% of the two-year loss has been recovered…

…In some cases, such large losses have eaten into capital, leaving many central banks operating with negative equity. To a private-sector analyst this looks frightening, but explicit government support makes it moot. Even before the current spate of losses, some central banks, including those in Chile, the Czech Republic, Israel and Mexico, carried on their business for years with negative capital. A study from the Bank for International Settlements concludes that none of them compromised their ability to fulfill their mandate.

Because it maintains both a distribution reserve to carry forward some profit and a currency reserve that is not distributable, the Swiss National Bank did not slip into negative equity despite its large loss. At the end of 2023, its equity to asset ratio stood at 7.9% and by the end of March, it was up to 14.3%. That contrasts with the Federal Reserve, which has $43 billion of capital supporting $7.3 trillion of assets, not including almost a trillion dollars of unrealized losses.
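The gap between the two capital cushions is stark when you run the numbers quoted above; this is just the arithmetic on the article's own figures.

```python
# Rough comparison of the capital cushions quoted above
# (figures as given in the article; this is just the arithmetic).
snb_equity_ratio_2023 = 0.079      # SNB, end of 2023
snb_equity_ratio_2024q1 = 0.143    # SNB, end of March 2024

fed_capital = 43e9                 # $43 billion
fed_assets = 7.3e12                # $7.3 trillion
fed_equity_ratio = fed_capital / fed_assets

print(f"Fed equity/assets: {fed_equity_ratio:.2%}")          # ~0.59%
print(f"SNB cushion vs Fed, end-2023: "
      f"{snb_equity_ratio_2023 / fed_equity_ratio:.0f}x")
```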

But going forward, the business of central banking will grow more challenging. Not only do higher rates expose central banks to losses related to assets purchased in the past, they also make it difficult to generate net interest income on the current balance sheet. Seigniorage income still persists but the falling use of cash may erode it in future years. Meanwhile, commercial bank deposits – which form the bulk of a central bank’s liabilities (449 billion Swiss Francs in the case of the Swiss National Bank, compared with 76.3 billion Swiss Francs of banknotes) – are typically remunerated at market rates, which are higher than yields on legacy securities. Central banks are paying a floating rate while locked into a (lower) fixed rate on their assets.

The challenge is evident in a closer look at the Swiss National Bank. In the era of negative interest rates, it earned income on the sight deposits it held on behalf of commercial banks. In 2021, the last full year of negative rates, that income was 1.2 billion Swiss Francs. After the bank raised rates to 1.50%, the relationship flipped and it began paying interest to commercial banks; in 2023, those payments amounted to 10.2 billion Swiss Francs. With the yield on Swiss Franc-denominated securities still low, net interest income on the book came to a negative 8.7 billion Swiss Francs…
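The fixed-versus-floating squeeze is visible in those 2023 figures. Note that the income earned on the securities book is inferred from the two numbers the article gives, not stated directly, and the average deposit rate below is a rough figure that ignores timing and tiering.

```python
# The fixed-versus-floating squeeze in the SNB's 2023 figures.
# Income earned on the securities book is inferred from the two
# numbers the article gives; it is not stated directly.
interest_paid_to_banks = 10.2e9    # CHF paid on sight deposits, 2023
net_interest_income = -8.7e9       # CHF, 2023
implied_asset_income = net_interest_income + interest_paid_to_banks

sight_deposits = 449e9             # CHF of commercial-bank deposits
# Rough average: ignores intra-year rate changes and reserve tiering.
implied_avg_rate_paid = interest_paid_to_banks / sight_deposits

print(f"implied income on assets: {implied_asset_income / 1e9:.1f}bn CHF")
print(f"rough average rate paid on deposits: {implied_avg_rate_paid:.2%}")
```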

…From its most recent high of 7,900 Swiss Francs at the beginning of 2022, the Swiss National Bank stock price has halved. Against its muted profit outlook, this is no surprise: The golden era of central bank profitability is likely over…

…For others, though, it’s fine. As the general manager of the Bank for International Settlements noted last year, “Unlike businesses, central banks are designed to make money only in the most literal sense.” Viewing central banks as stocks is instructive, but fortunately for the economy at large, there is more to them than that.

3. Reports of the petrodollar system’s demise are ‘fake news’ – here’s why – Joseph Adinolfi

Earlier this week, reports circulating widely on social-media platforms like X offered up a shocking proclamation: A 50-year-old agreement between the U.S. and Saudi Arabia requiring that the latter price its crude-oil exports in U.S. dollars had expired on Sunday.

The collapse of the accord would inevitably deal a fatal blow to the U.S. dollar’s status as the de facto global reserve currency, various commentators on X opined. Surely, financial upheaval lay ahead…

…But as speculation about an imminent end to the U.S. dollar’s global dominance intensified, several Wall Street and foreign-policy experts emerged to point out a fatal flaw in this logic: The agreement itself never existed…

…The agreement referred to by Donovan is the United States-Saudi Arabian Joint Commission on Economic Cooperation. It was formally established on June 8, 1974, by a joint statement issued and signed by Henry Kissinger, the U.S. secretary of state at the time, and Prince Fahd, the second deputy prime minister (and later king and prime minister) of Saudi Arabia, according to a report found on the Government Accountability Office’s website.

The agreement, as initially envisioned, was intended to last five years, although it was repeatedly extended. The rationale for such a deal was pretty straightforward: Coming on the heels of the 1973 OPEC oil embargo, both the U.S. and Saudi Arabia were eager to flesh out a more formal arrangement that would ensure each side got more of what it wanted from the other.

The surge in oil prices following the OPEC embargo was leaving Saudi Arabia with a surplus of dollars, and the Kingdom’s leadership was eager to harness this wealth to further industrialize its economy beyond the oil sector. At the same time, the U.S. wanted to strengthen its then-nascent diplomatic relationship with Saudi Arabia, while encouraging the country to recycle its dollars back into the U.S. economy…

…According to Donovan and others who emerged on social media to debunk the conspiracy theories, a formal agreement demanding that Saudi Arabia price its crude oil in dollars never existed. Rather, Saudi Arabia continued accepting other currencies – most notably the British pound – for its oil even after the 1974 agreement on joint economic cooperation was struck. It wasn’t until later that year that the Kingdom stopped accepting the pound as payment.

Perhaps the closest thing to a petrodollar deal was a secret agreement between the U.S. and Saudi Arabia reached in late 1974, which promised military aid and equipment in exchange for the Kingdom investing billions of dollars of its oil-sales proceeds in U.S. Treasurys, Donovan said. The existence of this agreement wasn’t revealed until 2016, when Bloomberg News filed a Freedom of Information Act request with the National Archives…

…Still, the notion that the petrodollar system largely grew organically from a place of mutual benefit – rather than some shadowy agreement established by a secret cabal of diplomats – remains a matter of indisputable fact, according to Gregory Brew, an analyst at Eurasia Group…

…Even more importantly as far as the dollar’s reserve status is concerned, the currency or currencies used to make payments for oil are of secondary importance. What matters most when it comes to the dollar maintaining its role as the world’s main reserve currency is where oil exporters like Saudi Arabia decide to park their reserves, Donovan said.

4. On the Special Relativity of Investment Horizons – Discerene Group

We believe that it is hard for corporate executives to think long-term if they are overwhelmingly rewarded for short-term results. In their paper, “Duration of Executive Compensation,”2 Radhakrishnan Gopalan, Todd Milbourn, Fenghua Song, and Anjan Thakor developed a metric for “pay duration.” It quantifies the average duration of compensation plans of all the executives covered by an executive intelligence firm’s survey of 2006-2009 proxy statements. The average pay duration for all executives across the 48 industries in their sample was just 1.22 years. We think that such performance-based compensation duration borders on the absurd for leaders of ostensibly multi-decade institutions buffeted by so many factors beyond their short-term control.
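As we understand the Gopalan et al. metric, pay duration is a vesting-weighted average across pay components. The sketch below is our illustration of that idea, not the paper's exact formula, and the component names and amounts are hypothetical.

```python
# A sketch of a "pay duration" number in the spirit of Gopalan et al.:
# each pay component weighted by its vesting horizon, with salary and
# annual bonus vesting immediately. The component names and amounts
# below are hypothetical; see the paper for the exact formula.
def pay_duration(components):
    """components: iterable of (amount, vesting_years) pairs."""
    total = sum(amount for amount, _ in components)
    return sum(amount * years for amount, years in components) / total

hypothetical_ceo_pay = [
    (1_000_000, 0),  # salary, vests now
    (1_500_000, 0),  # annual bonus, vests now
    (2_000_000, 3),  # restricted stock, 3-year vesting
    (1_500_000, 4),  # options, 4-year vesting
]
print(f"pay duration: {pay_duration(hypothetical_ceo_pay):.2f} years")  # 2.00 years
```

Even with most of this hypothetical package in multi-year equity, the duration comes out around two years, which puts the 1.22-year sample average in context.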

Perhaps unsurprisingly, incentives drive behavior.3 Executive-pay duration was longer in firms that spent more on R&D, firms with a higher proportion of independent board directors, and firms with better stock-price performance. Conversely, firms that offered shorter pay duration to their CEOs were more likely to boost short-term earnings with abnormal accruals of operating expenses.

In a survey4 of 401 US CFOs conducted by John Graham, Campbell Harvey, and Shiva Rajgopal, 80% of participants reported that they would decrease discretionary spending on R&D, advertising, and maintenance to meet earnings targets. 55.3% said that they would delay starting a new project to meet an earnings target, even if such a delay entailed a sacrifice of value. 96.7% preferred smooth to bumpy earnings paths, holding total cash flows constant. One CFO said that “businesses are much more volatile than what their earnings numbers would suggest.” 78% of participants said they would sacrifice real economic value to meet an earnings target.

Likewise, Daniel Bergstresser and Thomas Philippon have found5 that the more a CEO’s overall compensation is tied to the value of his/her stock, the more aggressively he/she tends to use discretionary “accruals” to affect his/her firm’s reported performance…

…According to the World Economic Forum and International Monetary Fund, the average holding period of public equities in the US has fallen from >5 years in 1975 to ~10 months in 2022…

…Another effect of short-termism has been to encourage firms to shed or outsource functions formerly considered to be critical to businesses, including R&D, manufacturing, sales, and distribution, thus creating atomized and fragile slivers of businesses that nevertheless often command illogically lofty valuations. For example, in recent times, aerospace, pharmaceuticals, and software companies that do not attempt to sustain going-concern investments and instead seek to continually acquire other companies in order to hollow out such companies’ engineering, R&D, and/or sales/distribution teams — thereby eliminating all possible sources of competitive advantage — have been feted as “asset-light” and “high-ROIC” poster children of their respective industries.

5. An Interview with Terraform Industries CEO Casey Handmer About the Solar Energy Revolution – Ben Thompson and Casey Handmer

But let’s dig into this solar thing. What is driving the cost curve decrease that was forecasted in 2011 and attracted you? It has absolutely manifested over the last 10 years, famously exceeding every official projection for future costs. It always ends up being cheaper, faster than people realize. What is the driver of that?

CH: Well, so actually even Ramez Naam’s predictions were too conservative. No one, back then, predicted that solar would get as cheap as it has now. If you look at the DOE’s predictions in 2012 for how long it would take for us to get to current solar costs, their best guesses were 2150, and I don’t know if I’ll live that long.

So of course their entire roadmap for decarbonization didn’t include this, but now we have it. Can we use it? Yes, we sure as hell can and we sure as hell should, because it’s a massive gift: we don’t have to de-grow in order to stop emitting pollution into the atmosphere. We can build our way out of the climate crisis by just increasing energy consumption and making energy cheaper for everyone.

In terms of how it gets cheaper, well, essentially, as I say, once the technology is inside the tent of capitalism, it’s generating value for people. It tends to attract wealth, it tends to attract capital, and that capital can be used to do things like hire manufacturing process engineers, and they’re very, very clever and they work very hard, particularly probably hundreds of thousands of engineers working at various solar factories in China right now. And sooner or later, they will find every possible configuration of matter necessary to force the price down. So same as with Moore’s law, essentially, we’ve just seen steady improvements.

Yeah, I was going to ask, is this an analogy to Moore’s law or is it actually the same sort of thing? Moore’s law is not a physical law, it is a choice by companies and individuals to keep pushing down that curve. Number one, what I get from you is that’s the same sort of concept here, but number two, are the actual discoveries actually similar to what’s going on?

CH: Yeah, actually to a large extent because it’s a silicon-based technology.

Right, exactly.

CH: There’s a lot of commonality there, but I think Moore’s law is not a law of nature, it’s what we call a phenomenological law, an emergent law. But basically all it says is there’s a positive feedback loop between cost reductions, increases in demand, increase in production, and cost reductions. So provided that the increase in demand, the induced demand as a result of the cost reduction, exceeds the cost reduction for the next generation of technology, you have a positive feedback loop. Otherwise, it’ll converge at some point, right? You’ll achieve maybe a 10x cost reduction and then it’ll stop, and we start to hit diminishing returns on all these technologies. But if you look at Moore’s law, it’s actually a series of maybe 20 or 30 different overlapping technology curves that kind of form this boundary of technology throughout time, and you see the same thing in solar technology if you really look under the hood and see what’s going on.
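The feedback loop Handmer describes is often modeled as a learning curve (Wright's law): cost falls by a fixed fraction with every doubling of cumulative production. A minimal sketch, assuming the commonly cited ~20% learning rate for solar PV:

```python
import math

# The cost/demand feedback loop described above, modeled as a simple
# learning curve (Wright's law): cost falls by a fixed fraction with
# every doubling of cumulative production. The ~20% learning rate for
# solar PV is a commonly cited estimate, assumed here for illustration.
def learning_curve_cost(initial_cost, cumulative_units, learning_rate=0.20):
    doublings = math.log2(cumulative_units)   # doublings since the first unit
    return initial_cost * (1 - learning_rate) ** doublings

# 1,024x the original cumulative volume = 10 doublings
cost = learning_curve_cost(initial_cost=100.0, cumulative_units=1024)
print(f"cost after 10 doublings: {cost:.2f}")   # ~10.74, roughly a 9x reduction
```

The loop converges or keeps running depending on whether each cost drop induces enough new demand to fund the next doubling, which is exactly the condition described above.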

But yeah, the fundamental thing is there’s just enormous demand for solar at lower and lower prices, so manufacturers are justified in investing the capital they need in order to hit those prices, and then the feedback mechanism keeps going. Solar manufacturing itself is a brutally competitive business, which is both good and bad. It means that if you decide you want to compete in solar, you don’t have to be at it for 50 years in order to compete. If you can capitalize, you can build a solar factory, and if you’re smart enough and you work hard enough, in five years you can be in the top 20 manufacturers globally, which is huge. We’re talking about billions of dollars of revenue every year, just because everyone’s existing capital stock gets depreciated really quickly.

Right. But to your point, it’s also commodity then, right? So how do you actually build a sustainable business?

CH: Well, picks and shovels essentially. So actually one of the things that we like to say at Terraform, and I’m jumping the gun slightly here, but Terraform’s product essentially is a machine that converts solar power into oil and gas, so it bridges these two technology spans. It allows you to arbitrage essentially economically unproductive land that would otherwise just be getting hot in the sun. You throw some solar panels on there; that’s your computing hardware, but that’s not very useful on its own, right? I could hand you an H100, but it doesn’t do anything for you until you’ve got software to run on it, and the software allows the raw computing power of that H100 to become useful for an end consumer…

Actually let’s run through some of the objections to solar power and then I think that will inherently get to some of these things. So we talked about the nuclear bit, what happens when the sun doesn’t shine?

CH: Yeah, so we’re actually seeing this in California right now. It creates a time arbitrage, right? If you have the ability to store power during the day and then release it during the night, you can make an incredible amount of money, and that’s why battery deployments in California, for example, have increased by, I think, a factor of 10 in the last four years. The effect is that it’s basically allowing people to transport energy through time, in much the same way that power lines, transmission lines, allow people to transport electricity through space.

So what is happening with the battery cost curve? Because if that’s sort of an essential component to make this happen-

CH: Same thing, same story.

For the same reasons?

CH: Exactly the same reasons, same story. Battery manufacturing is probably a little bit more complex and not quite as well-developed as silicon solar panel manufacturing, but we’re seeing year-on-year growth of battery manufacturing. It’s like well over 100%, so it’s actually growing faster than solar, and then the cost improvement’s not quite as steep, but it’s easily like 5% or 10% per year depending on which technology you’re looking at.

In 2021, for example, it was extremely confidently predicted that lithium ion batteries would never get under $100 per kilowatt hour at the cell level and the pack level, and of course Tesla was widely mocked for claiming that they would ultimately be able to get below $100 per kilowatt hour at the pack level. But then, I think in January this year or December last year, a Chinese manufacturer came out with a sodium ion battery cell at $56 per kilowatt hour, which is something like a 2x reduction in cost on top of what is already considered cutting edge, and we just go down from there.

Now, sodium ion batteries might not be perfectly suited for all kinds of applications, but we know they’re cheaper to produce than lithium ion batteries, and they’re more than capable of doing the sort of load shifting required to store power during the day and then use it in the evening.

Are we already in a situation, or do we still have a bit to go, where the combined weighted cost of solar, which is much cheaper than nuclear as you talked about, plus batteries, which sound like they’re still more expensive now, is already lower?

CH: Yeah, so again just look at the data, right — the market reveals its preference. CleanTechnica ran an article almost five years ago now showing that in Texas they were developing battery plants 10:1 compared to gas peaker plants. Texas runs its own grid under slightly different rules, where you can basically just build and connect and then the grid can force you to curtail if they’ve got overproduction, but that typically means it’s a more liquid market. And that’s in Texas, which is certainly not ideologically committed to solar, and which incidentally deployed more solar this year than California did.

Yeah, I was going to say.

CH: Also Texas has the cheapest natural gas in the history of the universe, but they’re deploying more battery packs than they are gas peaker plants 10:1…

…CH: But I just want to say there’s a conception that, oh, solar and batteries are only on the grid because they’re massively subsidized and they’re actually screwing everything up. That’s not true. Solar and batteries are what’s keeping the grid working right now; they’re the only thing that’s providing expanded capacity.

The major challenge with additional solar development, particularly here in the States, is we now have this ten-year backlog or kind of development queue before you can connect your solar array to the grid, and the reason for that is the grid is old and it’s kind of overwhelmed, and it’s not able to transport all that power effectively to market.

Of course, one solution to this is just to build more grid. Another solution is to put some batteries on the grid. And, you know, the third solution is basically just build batteries and solar wherever you can, it’s actually working really well.

Then obviously what Terraform is doing is taking this otherwise un-utilized capacity for solar development and then pouring it into another aspect of our civilization’s absolutely unquenchable thirst for energy. Just to give you some hard numbers here, roughly a third of U.S. energy is consumed in the form of electricity and about two-thirds in the form of oil and gas. So even if we successfully electrified huge amounts of ground transportation and also moved all of the electricity grid to say wind, solar and a bit of nuclear and some batteries and maybe some geothermal or something like that, so completely decarbonize the grid, that would only deal with about a third of the economy. Two-thirds of the economy still runs on oil and gas and so that’s what Terraform is here to try and deal with.

One more question on the batteries.

CH: Yeah.

There’s always been, or the common refrain has been: we need a battery breakthrough, we need something completely new. You mentioned sodium ion, but even in terms of lithium ion, is the expectation going forward that, sure, it’d be great to get a breakthrough, but the technology we have actually has way more improvement left in it, and that will carry us a long way?

CH: Lithium ion batteries are already amazing. I mean, they’ve been around for about 35 years now. I think they were first commercialized for Panasonic camcorders or something, and even then they were extremely compelling. They pushed NiCad [nickel-cadmium], the previous battery chemistry, out of the market almost instantaneously in numerous applications. They’re more than good enough.

You say, “Well, I’d like a battery breakthrough”. Why? “Because I want to run my supersonic electric jet off batteries.” Well, good luck with that. But for all ground transportation purposes, for static backups, for all these kinds of applications, not only is the technology already great, it’s got a 30 year history of manufacturing at scale. We know how to make it safe, we know how to make it cheap, it’s extremely compelling and the numbers speak for themselves.

Battery manufacturing capacity expansion is not just happening for no reason, there’s enormous untapped demand for batteries. The way I like to think of it is what’s your per capita lithium ion allocation? Maybe in 1995, you might have a Nokia 3210 with — actually that would be after 1995 — but with a small lithium ion battery in it. So you’ve got 10 grams per person of lithium ion battery and nowadays my family has two electric cars, and that’s probably most of our batteries.

Yeah, now we have laptops, we have computers.

CH: But in terms of the bulk mass, it’s like 400 kilograms per person or something for people to have electric cars, and then if you have a static backup battery in your house, and then maybe your per capita share of the grid scale batteries and so on. I think it could easily scale to a couple of tons of lithium ion battery per person, particularly in the more energy intensive parts of the United States.

Is that a large number? No, not really. I easily have a couple of tons per person in terms of steel just in my cars. I easily have probably 50 tons of concrete per person in terms of my built environment. I don’t actually think this is a particularly large number, I just think it’s unusual to see in such a short span of time some product go from the size of your thumb to the size of a large swimming pool, a large hot tub or something like that, in terms of your per capita allocation.

Where are we at as far as the availability of, say, lithium, or of all the various rare minerals and rare earths that go into both solar and batteries?

CH: Yeah, I mean, again, I’m not a super expert on batteries, but the cure for high prices is high prices. Lithium is the third element in the periodic table, and there’s no shortage of it. You could argue there’s a shortage of lithium refining capacity in the United States, particularly if you’re concerned about strategic vulnerability.

It’s like the rare earth thing, right? Rare earths are not actually rare. It’s just the actual ability to refine them.

CH: They’re super common, and actually solar solves that. It turns out that you can electrocatalytically separate rare earth elements using cheap solar power, with significantly lower environmental impact and much lower cost than traditional refining, and I have some friends working on that.

It is certainly true that people are concerned about cobalt in batteries. Actually, I have some cobalt here, a cube of cobalt on my desk. Cobalt is a fabulous metal, but there’s not necessarily a huge amount of it. It’s not scarce like gold, but the mining situation is not quite sorted out. At the same time, though, almost all the major battery manufacturers use almost no cobalt right now, because they’re able to adapt their processes to optimize their costs toward cheaper materials.

Capitalism solves this; we don’t have to worry too much about it. There are literally hundreds of thousands of chemists out there solving this problem right now. You don’t have to lose sleep over it; it is a completely commoditized production system…

What happens with old solar panels and old batteries? Obviously an objection to nuclear is nuclear waste, and the good thing with nuclear waste is that there really isn’t that much of it. But we’re talking about deploying massive amounts of solar panels, all these batteries. Where are we at in 10, 20 years if this build-out happens? Is that a potential issue?

CH: I’m not too worried about it. And again, you need to look at your waste stream on a per capita basis. If we deployed as many solar panels as I want to, how many solar panels will you end up disposing of? I think if you ground them up it’d be one garbage bag per year. For a suburban family, we probably have 1,000 garbage bags of trash every year that gets landfilled.

But to talk about specifics, batteries I think are prime targets for recycling because the materials in them are essentially, as Elon Musk once said, super concentrated for the raw materials you need to make batteries. There’s multiple companies out there, including Redwood Materials, that are doing exclusively battery recycling, or battery component recycling, which is super obvious. That said, as battery production increases, even if you recycle all the old batteries, it will only be 1% of the input stream or something, but I just don’t see a future where we have giant piles of batteries lying around.

Then as far as solar panels go, they’re like a layer of silicon dioxide, which is glass, a layer of silicon, which used to be sand, and then a layer of silicon dioxide, and maybe some aluminum around the edges. Well, you can strip off the aluminum and recycle that trivially, we’ve been recycling aluminum for 100 years, and the glass is glass. You can grind it up and landfill it; it’s basically sand.

People will say, “Oh, what about cadmium or something?” Well, First Solar uses a cadmium telluride process to make their solar panels, but again, the amounts involved are trivial; they’re inert, they’re solid, they can’t run or leach or anything like that, so I’m not too worried about it. As far as the sort of trash that humans routinely landfill goes, solar panels would actually significantly increase the purity of our dumps, because they’re so inert compared to everything else…

…CH: One of the things I like to say is that oil and gas is so common in our civilization, it's invisible, because every single thing that you see with your eyes is a surface that's reflecting light, it's usually pigmented or made of plastic, and that pigment or plastic is made of oil or natural gas. So unless you go outside and look at a tree, which is ultimately made of a kind of plastic also derived from sunlight and air, it's extremely difficult to lay your eyes on anything that's not made of hydrocarbons, so obviously we're extremely bullish about growth.

Now it could be the case that there’s zero growth. It could be the case that the oil and gas industry just motors along at about $8 trillion of revenue per year, which is about $1 billion per hour. So just in the time we’ve been talking, it’s $1 billion, which is just insane. But I actually think that once we unlock these cheaper forms of hydrocarbons that it will promote substantial growth, particularly in the energy-intensive industries.

So just to underscore the vision here, I get really, really fired up about this, because when I think of aviation, it's amazing: we've only had it as a species for about a hundred years, and it's only really been something that we can enjoy in jet transport for maybe 50 years. But the people who routinely fly on aircraft, and I know that you're one of them because you're an expert obviously, and myself, it's probably only 50 million people on earth who've ever had that experience of flying in a jet, I don't know, more than 10 times in their life. Wouldn't it be incredible if that number was 500 million or 5 billion? But to get there from here on fossil fuels emits a lot of CO₂ and also requires a huge amount of fuel. Aviation currently consumes about 2% of the world's oil and gas just to fly less than 1% of the world's population around, and so obviously we need to bring on a new source of fuel.

So when you think, well, what is a nice climate-positive version of aviation? Is it like the European model where we force airlines to make customers pay for carbon sequestration or carbon credits or something like that, which is either extremely expensive or extremely fraudulent or both, but in any case makes aviation more expensive and less accessible to people, just makes it more exclusive? Or do we say, “Why don’t we solve both these problems at once, and just bring online enormous new supply of high quality, cheap gas and natural gas for the future liquefied natural gas powered supersonic aircraft?”

At the same time it just happens to be carbon-neutral, so you don’t have to worry about CO₂ emissions, it’s not polluting the atmosphere with new CO₂ from the crust, and at the same time, instead of Boeing producing 500 aircraft a year, Boeing and maybe a few more startups can be producing 10,000 aircraft per year to service this kind of massive explosion in demand driven by economic expansion. That is a sick vision, that is so cool, we should absolutely do this as quickly as we can.

I think whether or not Terraform plays a huge role in this process or not, and I’m certainly intending for it to be — currently we’re leading this process — the economics is inevitable that we’re going to switch over to synthetic fuel sooner or later, and when we do, it’s going to get really, really cheap because we’re running it off solar power and when it gets really, really cheap, we’re going to do amazing aviation and other energy applications, and increase manufacturing and maybe some little bit of geo-engineering on the side to keep things in check, increase water supply in dry areas and so on. Why wait until 2060? We could have this done in 2040 if we just apply ourselves the right way and find the right business model…

How does it work? Give the non-physicist overview of how Terraform works.

CH: Yeah, sure. So from a customer’s perspective on the outside, essentially what a Terraformer does is it allows you to build your own oil and gas well in your backyard, regardless of the fact that you don’t own a drill rig, and in fact you don’t live anywhere near where oil and gas occurs naturally, which is again pretty cool. But how does it work under the hood? Well, it consumes electricity and most of that electricity gets used locally.

Actually, I should state that the Terraformer itself sits in the solar array, and that's to reduce the cost of transmission of electricity, which would be absolutely prohibitive in this case. The electricity gets used to capture CO₂ from the air and to split water into hydrogen and oxygen. We throw the oxygen away like trees do, we take the hydrogen and we react it in a classical old-school chemical reactor with the CO₂ to produce methane and water. Then we can separate the water out, because it condenses at a much higher temperature than the methane, and we're just left with methane plus a little bit of leftover CO₂ and hydrogen and a tiny bit of water vapor. That's natural gas, right?
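As a rough sanity check on the two steps described above, here is a back-of-envelope mass balance. The electrolyser energy and the methane heating value are illustrative assumptions for the sketch, not Terraform's actual figures:

```python
# The two steps Casey describes:
#   electrolysis:  2 H2O -> 2 H2 + O2      (oxygen vented, "like trees do")
#   methanation:   CO2 + 4 H2 -> CH4 + 2 H2O   (the classic Sabatier reaction)

M_H2, M_CH4, M_CO2 = 2.016, 16.04, 44.01  # molar masses, g/mol

# Per mole of CH4, the Sabatier reaction consumes 4 mol H2 and 1 mol CO2.
h2_per_kg_ch4 = 4 * M_H2 / M_CH4        # ~0.50 kg H2 per kg CH4
co2_per_kg_ch4 = M_CO2 / M_CH4          # ~2.74 kg CO2 per kg CH4

# Assumed electrolyser energy: ~50 kWh per kg H2 (a typical real-world figure).
kwh_per_kg_h2 = 50.0
kwh_electrolysis = h2_per_kg_ch4 * kwh_per_kg_h2   # ~25 kWh per kg CH4

# Methane carries ~15.4 kWh/kg (higher heating value), so even ignoring the
# direct-air-capture energy, round-trip efficiency is bounded near:
ch4_hhv_kwh = 15.4
efficiency_upper_bound = ch4_hhv_kwh / kwh_electrolysis

print(f"H2 needed:  {h2_per_kg_ch4:.2f} kg per kg CH4")
print(f"CO2 needed: {co2_per_kg_ch4:.2f} kg per kg CH4")
print(f"Electrolysis energy: {kwh_electrolysis:.0f} kWh per kg CH4")
print(f"Efficiency upper bound: {efficiency_upper_bound:.0%}")
```

The chemistry explains why cheap power matters so much: well over half the input electricity ends up as heat rather than fuel, which is only tolerable if the electricity is very cheap.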

Actually, when you get natural gas out of the ground (if you did have a drill rig and you did live in a place where natural gas occurs), you drill a hole in the ground and gas comes out. Well, now you've got to build a wellhead and a bunch of other stuff that's actually really complicated, and you might have a blowout, and what comes out of the ground is between 10 and 80% natural gas, with a bunch of other contaminants on top of that which have to be removed before you can sell it.

We don’t have that problem. What we produce is the pure product. It’s really compellingly elegant the way we do this. There’s no geology risk, and it’s plug-and-play: once you plug it in, it just generates a predictable amount of gas every day for however long the system lasts, which is most likely measured in decades.

In this case, you don’t have a battery capital cost. I presume it only runs when the sun’s out, right?

CH: Yeah, that’s absolutely correct. And I’ll say for anyone who’s considering doing a hardware tech startup, well, there is basically a recipe that we’ve stumbled upon for taking any existing industry and then applying it to solar power and getting the benefit of that extremely cheap power.

The first is you have to get the CapEx way, way down, because your utilization is low: you’re only using your plant maybe 25% of the time, so you have to get the cost down by at least a factor of four. Then on top of that, you also have to make it compatible with the sun coming up and going down, so time variability, which is difficult but not impossible. We have many processes that we can routinely throttle up and down in our everyday lives, so you understand this intuitively. It sounds impossible, of course: “I just want a chemical reactor that’s 1/10 the size and 1/4 the cost, and I can ramp it up and down.”

Well, the way you make this work is you just use more power. So you say, “Well, I don’t care about efficiency quite as much because my power is so cheap”, and that’s what makes it easy. But if you can do this, then you have —

You have to change that core assumption. Almost every invention today is about using power more efficiently, whereas the whole point of solar is, “What if we assume power is basically infinite, but it’s bounded by time, then what would we do?”

CH: It’s like cycles in your computer are basically free or on your cell phone or something…
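Casey's recipe above (cut CapEx roughly 4x to offset ~25% utilization, then trade efficiency for cheap power) can be sketched with a toy levelized-cost comparison. Every number here is an assumption chosen for illustration, not a real plant's economics:

```python
def cost_per_unit(capex, annual_capex_charge, utilization,
                  power_price_per_kwh, kwh_per_unit, units_per_year_at_full_util):
    """Levelized cost per unit of product: amortized CapEx plus electricity."""
    units = units_per_year_at_full_util * utilization
    capex_share = capex * annual_capex_charge / units
    power_share = power_price_per_kwh * kwh_per_unit
    return capex_share + power_share

# Baseline: a 24/7 plant buying grid power at $0.08/kWh.
baseline = cost_per_unit(capex=100e6, annual_capex_charge=0.10, utilization=1.0,
                         power_price_per_kwh=0.08, kwh_per_unit=30,
                         units_per_year_at_full_util=1e6)

# Solar-following plant: only 25% utilization, so it needs ~4x cheaper CapEx
# to keep the amortized share equal; it also deliberately wastes some energy
# (40 kWh/unit instead of 30) because its power is so cheap ($0.02/kWh).
solar = cost_per_unit(capex=25e6, annual_capex_charge=0.10, utilization=0.25,
                      power_price_per_kwh=0.02, kwh_per_unit=40,
                      units_per_year_at_full_util=1e6)

print(f"baseline: ${baseline:.2f}/unit, solar: ${solar:.2f}/unit")
```

With these assumed numbers both plants carry the same $10/unit CapEx share, and the cheaper power more than pays for the lower efficiency, which is exactly the trade Casey is describing.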

Desalination seems like a potentially massive win here, and very pertinent to the American West, for example. It’s this idea that if you assume energy is infinite, we’re not short of water on earth; we’re short of water without salt.

CH: That’s right, yeah. I mean there are some places where it’d be relatively difficult to transport even fresh water from the ocean, but in California that’s not the case. California is at the end of the Colorado River, which is declining, and California of course has senior water rights, we take about 5 million acre feet of water per year.

So unlike Terraform, which is definitely developing new proprietary technology in-house, it’s quite exciting, but with solar desalination, you don’t need any new technology. You just go and build a plant essentially with stuff you can buy off the shelf. How much would it cost to build a plant that is able to substitute 100% of California’s water extraction from the Colorado River, essentially doubling Southern California’s water supply, and at the same time allowing you to fix the Salton Sea and also set up a massive light metals industry and a bunch of other things?
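As a rough, assumption-laden energy check on that question: replacing 5 million acre-feet per year with seawater reverse osmosis at a typical ~3.5 kWh per cubic meter works out to roughly 22 TWh of electricity per year:

```python
# Back-of-envelope energy requirement for substituting California's
# Colorado River draw with desalinated seawater. The RO energy figure
# is an assumed typical value for modern plants, not a quote from Casey.

M3_PER_ACRE_FOOT = 1233.5          # one acre-foot in cubic meters
KWH_PER_M3 = 3.5                   # assumed seawater reverse-osmosis energy

volume_m3 = 5e6 * M3_PER_ACRE_FOOT          # ~6.2 billion m3 per year
energy_twh = volume_m3 * KWH_PER_M3 / 1e9   # convert kWh to TWh

print(f"Volume:  {volume_m3 / 1e9:.1f} billion m3/year")
print(f"Energy:  {energy_twh:.0f} TWh/year")
```

That is a large but not absurd amount of energy, on the order of a tenth of California's annual electricity consumption, which is why pairing it with dedicated cheap solar is the interesting angle.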


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple, Tencent, and Tesla. Holdings are subject to change at any time.

More Of The Latest Thoughts From American Technology Companies On AI (2024 Q1)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2024 Q1 earnings season.

Last month, I published The Latest Thoughts From American Technology Companies On AI (2024 Q1). In it, I shared commentary in earnings conference calls for the first quarter of 2024, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2024’s first quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series. For the older commentary:

Here they are, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management thinks that creativity is a human trait and that AI assists and amplifies human ingenuity without replacing it

Adobe’s highly differentiated approach to AI is rooted in the belief that creativity is a uniquely human trait and that AI has the power to assist and amplify human ingenuity and enhance productivity.

Adobe’s Firefly generative AI models within its Creative Cloud suite were trained on proprietary data; Adobe’s management has infused AI functionality into its flagship products within the Creative Cloud suite; management has built Adobe Express as an AI-first application; Firefly has generated over 9 billion images since its launch in March 2023 (was 6.5 billion in 2023 Q4); customers are excited about the commercial safety of Firefly; Firefly Services can create significantly more asset variations in a much shorter time, and the speed enables Adobe to monetise Firefly through the volume of content created; Firefly generations in May 2024 was the most generations of any month-to-date; Firefly Services has started to see customer wins; Firefly Services allows users to build customer models and access APIs and these are early in adoption, but customer-interest is better than expected; early Firefly Services usage is on (1) creating multiple variations in the ideation process, (2) creating geography-based variations on ads, (3) assets for community engagement

In Creative Cloud, we’ve invested in training our Firefly family of creative generative AI models with a proprietary data set and delivering AI functionality within our flagship products, including Photoshop, Illustrator, Lightroom, and Premiere. We’re reimagining creativity for a broader set of customers by delivering Adobe Express as an AI-first application across the web and mobile surfaces. Since its debut in March 2023, Firefly has been used to generate over 9 billion images across Adobe creative tools…

…This week’s Design Made Easy event, which focused on Express for Business, was another big step forward for us. Companies of all sizes are excited about the integrated power and commercial safety of Firefly, the seamless workflows with Photoshop, Illustrator and Adobe Experience Cloud, and enterprise-grade brand controls that are now part of Express for Business, making it the optimal product for marketing, sales and HR teams to quickly and easily create visual content to share…

… Firefly Services can power the creation of thousands of asset variations in minutes instead of months and at a fraction of the cost. This allows us to monetize the volume of content being created through automation services. The increasing availability of Firefly in Creative Cloud, Express, Firefly Services and the web app is giving us opportunities to access more new users, provide more value to existing users and monetize content automation. These integrations are driving the acceleration of Firefly generations with May seeing the most generations of any month to date…

…On the models, we released Firefly Services. We’ve started to see some customer wins in Firefly Services. So they’re using it for variations, and these are the custom models that we’re creating as well as access to APIs. I would say that’s early in terms of the adoption, but the interest as customers say how they can ingest their data into our models as well as custom models, that’s really ahead of us, and we expect that to continue to grow in Q3 and Q4…

…In terms of what I would say we’re seeing usage of, I think the initial usage of Firefly Services in most companies was all around ideation, how can they create multiple variations of them and in the ideation process really just accelerate that ideation process? Most companies are then starting with as they’re putting it into production, how can they, with the brand assets and the brand guidelines that they have, do this in terms of the variations, whether they be geographic variations or they be just variations. I mean, if you take a step back also, every single ad company right now will tell you that the more variance that you provide, the better your chances are of appropriately getting an uplift for your media spend. So I would say that most companies are starting with creating these variations for geographies. The other one that we see a fair amount of is engaging with their communities. So when they want their communities to have assets that they have blessed for usage within community campaigns, that’s the other place where Firefly Services are being used. And a company has a community portal where the community can come in, take something and then post whether it’s on whatever social media site that you want. 

Adobe’s management has introduced Acrobat AI Assistant, an AI-powered tool for users to have conversations with their documents, within Adobe’s Document Cloud suite; Acrobat AI Assistant features are available as a standalone offer or as an add-on subscription to existing Adobe products; Acrobat AI Assistant for English documents was made generally available in April; management is seeing early success in adoption of Acrobat AI Assistant; Acrobat AI Assistant can be applied to document types beyond PDFs

In Document Cloud, we’re revolutionizing document productivity with Acrobat AI Assistant, an AI-powered conversational engine that can easily be deployed in minutes. This enhances the value of the trillions of PDFs, which hold a significant portion of the world’s information. Acrobat AI Assistant features are now available through an add-on subscription to all Reader and Acrobat enterprise and individual customers across desktop, web and mobile…

The introduction of Acrobat AI Assistant made generally available in April for English documents marks the beginning of a new era of innovation and efficiency for the approximately 3 trillion PDFs in the world. Acrobat AI Assistant is empowering everyone to shift from reading documents to having conversations with them in order to summarize documents, extract insights, compose presentations and share learnings. AI Assistant is available as a stand-alone offer for use in reader and as an add-on to Acrobat Standard and Pro. We’re seeing early success driving adoption of AI Assistant as part of our commerce flows and remain optimistic about the long-term opportunities…

…Other business highlights include general availability of Acrobat AI Assistant support for document types beyond PDF, meeting transcripts and enterprise requirements. 

The Adobe Experience platform, which is part of the Digital Experience segment, is on track to become a billion-dollar annual revenue business; management has released AEP (Adobe Experience Platform) AI Assistant to improve the productivity of marketing professionals; Adobe is the #1 digital experience platform; customer interest and adoption of AEP AI Assistant is great

At the end of May, we celebrated the 5-year anniversary of Adobe Experience Platform, which we conceived and built from scratch and which is on track to be the next billion-dollar business in our Digital Experience portfolio. We released AEP AI Assistant to enhance the productivity of marketing practitioners through generative AI while expanding access to native AEP applications…

When we introduced Adobe Experience Platform 5 years ago, it was a revolutionary approach to address customer data and journeys. Today, we’re the #1 digital experience platform and AEP with native apps is well on its way to becoming a billion-dollar business…

…We are excited by the customer interest and adoption of our latest innovations, including AEP AI Assistant, a generative AI-powered conversational interface that empowers practitioners to automate tasks, simulate outcomes and generate new audiences and journeys. For example, customers like General Motors and Hanesbrands have been working with AEP AI Assistant to boost productivity and accelerate time to value while democratizing access to AEP and apps across their organizations…

…When you think about the AEP AI Assistant, it’s doing a couple of things. One, it’s really making it easier for customers to deploy use cases. When you think of use cases that they have around, for example, generating audiences and running campaigns around those audiences, these are things today that require some data engineering. They require the ability to put these audiences together. So they require marketing and IT teams to work together. The AEP AI Assistant is making it much easier for marketers to be able to do it themselves and be able to deploy a lot more use cases.

Adobe’s management’s vision for Adobe Express is to make design easy; the launch of the new Adobe Express app in 2024 Q1 (FY2024 Q2) has been well received, with monthly active users doubling sequentially; management has been deeply integrating AI features into Adobe Express; cumulative exports from Adobe Express has increased by 80% year-on-year in 2024 Q1; management is building Adobe Express to be AI-first; management thinks Adobe Express is leveraging people’s need for AI 

Our vision for Adobe Express is to provide a breakthrough application to make design easy for communicators worldwide, leveraging generative AI and decades of Adobe technology across web and mobile. Our launch of the all-new Express application on iOS and Android earlier this quarter is off to a strong start with monthly active users doubling quarter-over-quarter…

There’s a lot of buzz with Express here at Adobe coming off the event we just had earlier this week, but it’s really based on the fact that the innovation in Express is on a tear, right? A few months ago, we introduced an all-new Express for the web. This quarter, we introduced an all-new Express for mobile. We introduced Express for Business. We also now have, as we’ve just talked about, been more deeply integrating AI features, whether it’s for imaging generation or Generative Fill or text effects, character animation, design generations, more deeply into the flow for Express. And that combination has led to an incredible set of metrics over the last quarter, in particular, but building throughout the year. Express MAU is growing very quickly. We talked about in the script earlier that MAU on mobile has more than doubled quarter-over-quarter, which is fantastic to see. And cumulative exports, if you look at year-over-year, has grown by over 80%. So really feeling good about sort of the momentum we’re seeing…

Express that is now in market is built on a brand-new platform, right? And that brand-new platform lays the groundwork for the AI era. And this will be — Express will be the place that anyone can come and create through a combination of conversational and standard inputs. That’s the vision that we have. And I think it’s an opportunity for us to really leap forward in terms of what we can do on the web and mobile at Adobe…

Express is really being driven by sort of the need for AI and how people are able to describe what they want and get the final output. When David talked about exports, just to clarify, what that means is people who have successfully got what they want to get done, done. And that’s a key measure of how we are doing it, and AI is certainly facilitating and accelerating that.

Adobe GenStudio uses AI to help enterprises transform their content supply chain; enterprise customers view customer experience management and personalisation at scale as key investments to make

We’re now transforming the content supply chain for enterprises with Adobe GenStudio, enabling them to produce content at scale, leveraging generative AI through native integrations with Firefly Services and Adobe Express for Business. Enterprise customers, both B2C and B2B, view customer experience management and personalization at scale as key areas of differentiation, making it a priority investment for Chief Marketing Officers, Chief Information Officers and Chief Digital Officers.

Adobe’s management thinks the biggest opportunity in AI for Adobe is in interfaces, such as performing tasks faster, improving workflows etc; in AI interfaces, management is seeing significant usage in AI Assistant and Photoshop; management believes that (1) the real benefits from disruptive technologies such as AI come when people use interfaces to improve their work, and that (2) in the future, more people will be using these interfaces

I think the biggest opportunity for us and why we’re really excited about GenAI is in the interfaces because that’s the way people derive value, whether it’s in being able to complete their tasks faster, whether it’s be able to do new workflows. And I would say, in that particular space, Acrobat has really seen a significant amount of usage as it relates to AI Assistant and Photoshop…

… And so we’re always convinced that when you have this kind of disruptive technology, the real benefits come when people use interfaces to do whatever task they want to do quicker, faster and when it’s embedded into the workflows that they’re accustomed to because then there isn’t an inertia associated with using it…

And so net-net, I am absolutely betting on the fact that 5 years from now, there’ll be more people saying, “I’m using creative tools to accomplish what I want,” and there’ll be more marketers saying, “I can now, with the agility that I need, truly deliver a marketing campaign in an audience that’s incredibly more specific than I could in the past.” And that’s Adobe’s job to demonstrate how we are both leading in both those categories and to continue to innovate.

Adobe’s management’s primary focus for generative AI is still on user adoption and proliferation

From the very beginning, we’ve talked to you guys about our primary focus for generative AI is about user adoption and proliferation, right? And that has continued to be the primary thing on our mind.

Adobe’s management thinks there are different routes to monetise AI, such as winning new users, and getting higher ARPU (average revenue per user)

And to your point, there are many different ways that we can monetize this. First is as you think about the growth algorithms that we always have in our head, it always starts with, as Shantanu said, new users, right? And then it’s about getting more value to existing users at higher ARPU, right? So in the context of new users, first and foremost, we want to make sure that everything we’re doing generative AI is embedded in our tools, starting with Express, right?

Adobe has seen strong growth in emerging markets because users need access to the cloud for all of the AI functionality

I mean I think in the prepared remarks, Dan also talked about the strength in emerging markets. And I think the beautiful part about AI is that since they need access to the cloud to get all of the AI functionality, emerging market growth has been really strong for us.

Adobe’s management thinks that they have hit a sweet spot with pricing for generative AI credits in Adobe’s subscription plans for imaging and vector work, but they will need to explore different plans for generative AI credits when it comes to video work

When we think about what we’ve done with imaging and vectors, we’ve done the right thing by making sure the higher-value paid plans that people don’t have to think about the amount of generative capability. And so there, the balance between for free and trialist users, they’re going to run into the generative capability limits and therefore, have to subscribe. But for the people who actually have imaging and vector needs, that they’re not constantly thinking about generative, I think we actually got it right. To your point, as we move to video, expect to see different plans because those plans will, by necessity, take into account the amount of work that’s required to do video generation. So you’re absolutely right as a sort of framework for you to think about it.

Adobe’s management thinks that there’s a lot of excitement now on AI infrastructure and chips, but the value of AI will need to turn to inference in order for all the investment in AI infrastructure and chips to make sense

It’s fair to say that the interest that exists right now from investors, as it relates to AI, is all associated with the infrastructure and chips and perhaps rightly so because that’s where everybody is creating these models. They’re all trying to train them. And there’s a lot of, I think, deserved excitement associated with that part of where we are in the evolution of generative AI. If the value of AI doesn’t turn to inference and how people are going to use it, then I would say all of that investment would not really reap the benefit in terms of where people are spending the money.

Adobe’s management think it doesn’t matter what AI model is used to generate content – DALL-E, Firefly, Midjourney, or more – because the content ultimately needs to be edited on Adobe’s software; management is building products on Firefly, but they are also happy to leverage on third-party AI models

So Firefly might be better at something. Midjourney might be something at something else. DALL·E might do something else. And the key thing here is that, around this table, we get excited when models innovate. We get excited when Firefly does something amazing. We get excited when third-party models do something because our view, to Shantanu’s point, is that the more content that gets generated out of these models, the more content that needs to be edited, whether it’s color correction, tone matching, transitions, assembling clips or masking compositing images. And the reason for this is that this is not a game where there’s going to be one model. There’s — each model is going to have its own personality, what it generates, what it looks like, how fast it generates, how much it costs when it generates that, and to have some interface layer to help synthesize all of this is important. And so just sort of to note, we’ve said this before but I’ll say it again here, you will see us building our products and tools and services leveraging Firefly for sure, but you’ll also see us leveraging best-of-breed personalities from different models and integrate them all together.

Ultimately, generative AI is going to create more growth in Adobe’s category

[Analyst] Awesome, the message here is that GenAI is going to create more growth in the category. And Shantanu, you did that with the pivot to cloud. You grew the category, so here we go again.

DocuSign (NASDAQ: DOCU)

DocuSign Navigator is a new AI-powered product that allows users to store and manage their entire library of accumulated agreements, including non-DocuSign agreements

Second, DocuSign Navigator allows you to store, manage and analyze the customer’s entire library of accumulated agreements. This includes past agreements signed using DocuSign eSignature as well as non-DocuSign agreements. Navigator leverages AI to transform unstructured agreements into structured data, making it easy to find agreements, quickly access vital information, and gain valuable insights from agreements. 

DocuSign acquired Lexion, an AI-based agreements company, this May; management thinks Lexion can improve Docusign’s Agreement AI and legal workflow; the Lexion acquisition is not for revenue growth, but to integrate the AI technology into DocuSign’s products

AI is central to our platform vision, and we’re thrilled to welcome Lexion to the DocuSign family. Lexion is a proven leader in AI-based agreement technology, which significantly accelerates our IAM platform goals. We maintain a high bar for acquisitions, and Lexion stood out due to its sophisticated AI capabilities, compatible technology architecture, and promising commercial traction with excellent customer feedback, particularly in the legal community…

… With regard to capital allocation, we also closed the Lexion acquisition on May 31…

In terms of how it adds to DocuSign, I think overall, agreement AI, their extraction quantity and quality where we augment our platform. Another area where I think they’re really market-leading is in legal workflow. So workflow automation for lawyers, for example, if you’re ingesting a third-party agreement, how can you immediately use AI to assess the agreement, understand how terms may deviate from your standard templates and highlight language that you might want to propose as a counter that really accelerates productivity for legal teams. And they’ve done an excellent job with that. So overall, that’s how it fits in…

We’re not breaking it out just because of its size and materiality. It’s not material to revenue or op margin for us. The overarching message that I would like to send on Lexion is that the purchase of Lexion is about integrating the technology into the DocuSign IAM platform. That opportunity for us, we think, in the long term, can apply to the well over 1 million customers that we have.

MongoDB (NASDAQ: MDB)

MongoDB’s management wants to prioritise investments in using generative AI to modernise legacy relational applications; management has found that generative AI can help with analyzing existing code, converting existing code and building unit and functional tests, resulting in a 50% reduction in effort for app modernisation; management sees a growing list of customers across industries and geographies who want to participate; interest in modernising legacy relational applications is high, but it’s still early days for MongoDB

Second, we are more optimistic about the opportunity to accelerate legacy app modernization using AI. This is a large segment of the market that has historically been hard to penetrate. We recently completed the first two GenAI-powered modernization pilots, demonstrating we can use AI to meaningfully reduce the time, cost and risk of modernizing legacy relational applications. In particular, we see that AI can significantly help with analyzing existing code, converting existing code and building unit and functional tests. Based on our results from our early pilots, we believe that we may be able to reduce the effort needed for app modernization by approximately 50%. We have a growing list of customers across different industries and geos who want to participate in this program. Consequently, we will be increasing our level of investment in this area…

…We have an existing Relational Migrator product that allows people to essentially migrate data from legacy relational databases and does the schema mapping for them. The one thing it does not do, which is the most cumbersome and tedious part of the migration, is to auto-generate or build application code. So when you go from a relational app to an app built on MongoDB, you still have to essentially rewrite the application code. And for many customers, that was the inhibitor for them to migrate more apps, because that takes a lot of time and a lot of labor resources. So our app modernization effort using AI is all about solving the third leg of that stool, which is being able to reduce the time and cost and effort of rewriting the app code, all the way from analyzing existing code, converting that code to new code and then also building the test suites, both unit tests and functional tests, to be able to make sure the new app is obviously operating and functioning the way it should be…

…That’s why customers are getting more excited because the lower you reduce the cost for that migration or the switching costs, the more apps you can then, by definition, migrate. And so that is something that we are very excited about. I will caution you that it’s early days. You should not expect some inflection in the business because of this. 
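The migration pattern management describes, collapsing rows spread across related relational tables into a single document per entity, can be sketched in a few lines of Python. The tables and field names below are invented for illustration; a real migration would read from the source RDBMS and write to MongoDB:

```python
# Sketch: collapsing two hypothetical relational tables (orders and
# order_items, plus a customers lookup) into one MongoDB-style document
# per order. All names here are illustrative, not from MongoDB's tooling.

customers = [{"id": 1, "name": "Acme Corp"}]
orders = [{"id": 10, "customer_id": 1, "total": 99.5}]
order_items = [
    {"order_id": 10, "sku": "A-1", "qty": 2},
    {"order_id": 10, "sku": "B-7", "qty": 1},
]

def to_documents(customers, orders, order_items):
    """Embed each order's line items and customer name in one document."""
    by_customer = {c["id"]: c for c in customers}
    docs = []
    for o in orders:
        docs.append({
            "_id": o["id"],
            "customer": by_customer[o["customer_id"]]["name"],
            "total": o["total"],
            # The relational one-to-many join becomes an embedded array.
            "items": [
                {"sku": i["sku"], "qty": i["qty"]}
                for i in order_items
                if i["order_id"] == o["id"]
            ],
        })
    return docs

docs = to_documents(customers, orders, order_items)
print(docs[0]["customer"], len(docs[0]["items"]))  # Acme Corp 2
```

The schema mapping above is the part MongoDB's tooling already handles; the "third leg of the stool" management refers to is rewriting the application code that reads and writes these documents, which is where the generative AI effort comes in.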

MongoDB’s management wants to prioritise investments in building an ecosystem for customers to build AI-powered applications because management recognises that there are other critical elements in the AI tech stack beyond MongoDB’s document-based database; management has launched the MongoDB AI Application Program, or MAP, that combines cloud computing providers, model providers, and more; Accenture is the first global systems integrator to join MAP

Third, although still early in terms of customers building production-ready AI apps, we want to capitalize on our inherent technical advantages to become a key component of the emerging AI tech stack…

Recognizing there are other critical elements of the AI tech stack, we are leveraging partners to build an ecosystem that will make it easier for customers to build AI-powered applications. Earlier this month, we launched the MongoDB AI Application Program, or MAP, a first-of-its-kind collaboration that brings together all 3 hyperscalers, foundation model providers, generative AI frameworks, orchestration tools and industry-leading consultancies. With MAP, MongoDB offers customers reference architectures for different AI use cases, prebuilt integrations and expert professional services to help customers get started quickly. Today, we are announcing that Accenture is the first global systems integrator to join MAP and that it will establish a center of excellence focused on MongoDB projects. We will continue to expand the program through additional partnerships and deeper technical integrations.

MongoDB’s document-based database architecture is a meaningful differentiator in AI because AI use cases involve various types of data, which are incompatible with legacy databases; there was a customer who told management that if he were to design a database specifically for AI purposes, it would be exactly like MongoDB

Customers tell us that our document-based architecture is a powerful differentiator in an AI world. The most powerful use cases rely on data of different types and structures such as text, image, audio and video. The flexibility required to handle a variety of different data structures is fundamentally at odds with legacy databases that rely on rigid schemas, which is what makes MongoDB's document model such a good fit for these AI workloads…

…One customer told us that if he had to build a database for this new AI era, it would be designed exactly like MongoDB. And so we feel really good about our position.
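The schema flexibility described above can be shown concretely: documents describing text, image and audio assets can sit side by side in one collection, each carrying only the fields that make sense for its media type. The field names below are illustrative, not a MongoDB-prescribed schema:

```python
# Sketch of the document model's flexibility for mixed AI data types.
# A rigid relational schema would force all rows into one set of columns;
# here each document simply omits fields that don't apply to it.

collection = [
    {"_id": 1, "type": "text", "body": "quarterly report", "lang": "en"},
    {"_id": 2, "type": "image", "url": "s3://bucket/cat.png",
     "width": 640, "height": 480},
    {"_id": 3, "type": "audio", "url": "s3://bucket/call.wav",
     "duration_sec": 312.4, "transcript": "hello, thanks for calling"},
]

# Each document's shape differs, yet all live in the same collection.
fields_per_doc = {doc["_id"]: sorted(doc.keys()) for doc in collection}
for doc_id, fields in fields_per_doc.items():
    print(doc_id, fields)
```

In a relational design, the same data would typically need either separate tables per media type or a wide table full of NULL columns; the document model sidesteps both.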

A Toyota unit focused on AI and data science migrated to MongoDB Atlas after experiencing reliability issues with its original legacy database system; the Toyota unit now uses MongoDB Atlas for over 150 microservices and will use MongoDB Atlas as its database of choice for future AI needs

Toyota Connected, an independent Toyota company focused on innovation, AI, data science, and connected intelligence services, migrated to MongoDB Atlas after experiencing reliability issues with the original legacy database system. The team selected MongoDB Atlas for its ease of deployment, reliability and multi-cloud and multi-region capabilities. Toyota Connected now uses Atlas for over 150 microservices. Their solution benefits from 99.99% uptime, with Atlas as the platform for all data, including mission-critical vehicle telematics and location data needed for emergency response services. MongoDB is Toyota Connected's database of choice for all future services as they explore vector and AI capabilities, knowing they'll get the reliability and scalability they need to meet customer needs.

Novo Nordisk is using MongoDB Atlas Vector Search to power its generative AI efforts in producing drug development reports; Novo Nordisk switched from its original relational database when it wasn’t capable of handling complex data and lacked flexibility to keep up with rapid feature development; reports that Novo Nordisk used to take 12 weeks to prepare can now be completed with MongoDB Atlas Vector Search in 10 minutes

By harnessing GenAI with MongoDB Atlas Vector Search, Novo Nordisk, one of the world's leading health care companies, is dramatically accelerating how quickly it can get new medicines approved and delivered to patients. The team responsible for producing clinical study reports turned to Atlas when the original relational database wasn't capable of handling complex data and lacked the flexibility needed to keep up with rapid feature development. Now with GenAI and the MongoDB Atlas platform, Novo Nordisk gets the mission-critical assurances it needs to run highly regulated applications, enabling it to generate complete reports in 10 minutes rather than 12 weeks.
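For readers unfamiliar with Atlas Vector Search, a retrieval query is expressed as a `$vectorSearch` stage in an aggregation pipeline. The sketch below builds such a pipeline as a plain dict rather than running it against a live cluster; the index name, field path and embedding values are hypothetical, and in production the query vector would come from an embedding model and the pipeline would be passed to `collection.aggregate(...)` via pymongo:

```python
# Sketch of an Atlas $vectorSearch aggregation pipeline, the kind of
# semantic retrieval that underpins use cases like the one above.
# "report_embeddings", "embedding" and the vector values are invented.

query_embedding = [0.12, -0.03, 0.88]  # stand-in for real model output

pipeline = [
    {
        "$vectorSearch": {
            "index": "report_embeddings",   # hypothetical vector index
            "path": "embedding",            # field holding stored vectors
            "queryVector": query_embedding,
            "numCandidates": 100,           # ANN candidates to consider
            "limit": 5,                     # top results to return
        }
    },
    # Keep just the text and the similarity score for each hit.
    {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
]

print(pipeline[0]["$vectorSearch"]["limit"])  # 5
```

The stored documents' embeddings and the query embedding would both come from the same embedding model, so that nearest-neighbour distance corresponds to semantic similarity.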

MongoDB’s management still sees MongoDB as well-positioned to be a key beneficiary when organisations embed AI into next-gen software applications

Our customers recognize that modernizing legacy applications is no longer optional in the age of AI, and they are preparing for a multiyear journey to accomplish that goal. They see MongoDB as a key partner in that journey. We are well positioned to be a key beneficiary as organizations embed AI into the next generation of software applications that transform their business.

MongoDB's management believes that MongoDB's performance in 2024 Q1 was less upbeat than that of the cloud computing hyperscalers because the hyperscalers' growth came primarily from reselling GPU (graphics processing unit) capacity for AI training, and there's a lot of demand for AI training at the moment, whereas MongoDB is exposed to AI apps in production at scale, which it is not yet seeing

In contrast to the hyperscalers, we believe the bulk of their growth across all 3 hyperscalers really came from reselling GPU capacity, because there's a lot of demand for training models. We don't see, at least today, a lot of AI apps in production. We see a lot of experimentation, but we're not seeing AI apps in production at scale. And so I think that's the delta between the results that the hyperscalers produce versus what we are seeing in our business.

MongoDB's management thinks that AI is going to drive a step-function increase in the number of apps and the amount of software created, but it's going to take time, although the process is happening

I think with AI, you're going to see a step-function increase in the number of apps and the amount of software that's being built to run businesses, et cetera. But that's going to take some time. As with any new adoption cycle, the adoption happens in what people commonly refer to as S-curves. And I think we're going through one of those S-curves.

MongoDB’s management sees the possibility of customers’ desire to spend on AI crowding out other software spending, but does not think it is an excuse for MongoDB not meeting new business targets

Is AI essentially crowding out new business? We definitely think that that's plausible. We definitely see development teams experimenting on AI projects. The technology is changing very, very quickly. But that being said, we don't see that as a reason for us to not hit our new business targets. And as I said, even though we started slow, we almost caught up at the end of this quarter, and we feel really good about our new business opportunity for the rest of this year. So I don't want to use that as an excuse for us not meeting our new business targets.

Okta (NASDAQ: OKTA)

A new product, Identity Threat Protection with Okta AI, is expected to become generally available soon

We’re also excited about the launch of Identity Threat Protection with Okta AI, which includes powerful features like Universal Logout, which makes it possible to automatically log users out of all of their critical apps when there is a security issue. Think of this as identity threat detection and response for Okta. We expect Identity Threat Protection to become generally available this summer.

Okta's management does not expect the company's new products – which include governance, PAM, Identity Threat Protection with Okta AI, and Identity Security Posture Management – to have material impacts on the company's financials in FY2025; of the new products, management thinks Identity Threat Protection with Okta AI and Identity Security Posture Management will make impacts first before PAM does

I wouldn’t expect for these newer things that are coming out like posture management or threat protection, I wouldn’t expect it in FY ’25 at all. I probably wouldn’t even think it would impact it in FY ’26 because we’re talking about a $2.5 billion business at this point. It takes a lot of money in any of these products to make a material difference to the overall numbers. So we’re setting these up for the long term…

…How we’re thinking about this internally is that the — I think it will mirror the order of broad enablement. So we’re broadly enabling people in the following order: governance is first, followed by a combination of posture management and identity threat protection, followed by privileged access. So we think that Identity Threat Protection with Okta AI and Identity Security Posture Management, that bundle could pretty quickly have as much of an impact as governance. And then we think the next sequential enablement in the next order of impact will probably be Privileged Access.

Okta’s management is currently not seeing companies changing their software spending plans because they want to invest in AI, although that might change in the future

[Question] There is a shift in the marketplace among the C-suite from fear about the economy to, gee, I need to focus on how I’m going to implement AI. And in that context, there’s uncertainty around the mechanics of what they need to do to secure AI within their organizations. And I guess my question to you is we’re hearing the pipelines of the VAR channels, particularly in security, are extremely robust into the back half of the year. But the uncertainty around AI decision is keeping people from implementing it. So how robust is the pipeline that you’re looking at? And are you, in fact, hearing that from your C-suite customers when you talk to them?

[Answer] What I've heard is everyone is figuring out how they can deploy this new wave of technology to their products and services and business and how they can use it for security and how they can use it for innovation. But they're not at the stage where it's broadly impacting other plans. It's more of a planning exercise at this point. I think that might change in the future.

Okta’s management thinks that more companies will invest in AI in the future, and this will be a tailwind for the company because more identity features will be needed; the current AI wave is not impacting spending on Okta at the moment, but might be a boon in the future

My bet is that they're going to be building new apps. They're going to be deploying more technology from vendors that are building apps with AI built in, and all of that's going to lead to more identity. They're going to have to log people into the new apps they build. They're going to have to secure the privileged accounts that are running the infrastructure behind the new apps. They're going to have to make sure that people in their workforce can get to the apps that are the latest, greatest AI-driven experiences for support or for other parts of the business. So I think that identity is one of these foundational things that's going to be required, whether it's the AI wave, which is going to be really real and impactful, or whether it's whatever comes after that.

[Question] So not impacting spending today but might impact to help it in the future.

[Answer] Yes, yes. That’s how I see it.

Okta’s management sees 2 ways of monetising Okta AI: Through new products, and through making existing products better

Okta AI will be monetized in 2 ways. One will be new products like Identity Threat Protection with Okta AI; the other way is that it will just make existing products better. For example, Identity Security Posture Management has a new capability being added that will make it smarter about how it detects service accounts. Identity Security Posture Management scans a customer's entire SaaS estate and says, here are all the things you should look at: this account needs MFA; this other account probably has overly permissive permissions. The challenge there is how the customer knows which of those accounts are service accounts, since those can't have human biometrics. We used some AI capability to add that to the scan. So that's an example of a product just getting better, versus Identity Threat Protection, which is a whole new product enabled by Okta AI.

Salesforce (NYSE: CRM)

Salesforce is managing 250 petabytes of customer data and management thinks this is going to be critical and positions Salesforce for success when Salesforce’s customers move into AI; management thinks that customer data is the critical success factor in AI, not AI models and UIs (user interfaces); management thinks most of the AI models that are being built today, both large and small, are just commodities and will not survive

We’re now managing more than 250 petabytes of data for our customers. This is going to be absolutely critical as they move into artificial intelligence…

…When you look at the power of AI, you realize the models and the UI are not the critical success factors. They're not where the enterprise will transform. There are thousands of these models, some open source and some closed source, some built with billions of dollars, some with just a few dollars, and most of these will not survive. They're just commodities now, and it's not where the intelligence lies. And they don't know anything about a company's customer relationships. Each day, hundreds of petabytes of data are created that AI models can use for training and generating output. But the one thing that every enterprise needs to make AI work is their customer data, as well as the metadata that describes the data, which provides the attributes and context the AI models need to generate accurate, relevant output. And customer data and metadata are the new gold for these enterprises…

…Not every company is as well positioned for this artificial intelligence capability as Salesforce is, because they just don't have the data. They may say they have this capability or that capability, this user interface, that model, whatever; all of these things are quite fungible and are expiring quickly as the technology rapidly moves forward. But the piece that will not expire is the data. The data is the permanent, key aspect that, as we've said even in our core marketing, is the gold for our customers and their ability to deliver the next capability in their own enterprises.

Salesforce’s management is seeing incredible momentum in Data Cloud, which is Salesforce’s fastest-growing organic and next billion-dollar cloud; Data Cloud’s momentum is powered by the need for customers to free their data from being trapped in thousands of apps and silos; the need to free their data is important if Salesforce’s customers want to embrace AI; Data Cloud was in 25% of Salesforce’s >$1 million deals in 2024 Q1; 2024 Q1 was the second quarter in a row when >1,000 Data Cloud customers were added; in 2024 Q1, 8 trillion records were ingested in Data Cloud, up 42% year-on-year, 2 quadrillion records were processed, up 217% year-on-year, and there were 1 trillion activations, up 33% year-on-year

Many of these customers have central business and customer data that exists outside of Salesforce, trapped in thousands of apps and silos. It's disconnected. That's why we're seeing this incredible momentum with our Data Cloud, our fastest-growing organic cloud and our next billion-dollar cloud. It's the first step to becoming an AI enterprise. Data Cloud gives every company a single source of truth, and you can securely power AI insights and actions across the entire Customer 360.

Now let me tell you why I'm excited about Data Cloud, why it's transforming our customers and how it's preparing them for this next generation of artificial intelligence. Data Cloud was included in 25% of our $1 million-plus deals in the quarter. We added more than 1,000 Data Cloud customers for the second quarter in a row. 8 trillion records were ingested in Data Cloud in the quarter, up 42% year-over-year, and we processed 2 quadrillion records. That's a 217% increase compared to last year. Over 1 trillion activations drove customer engagement, which is a 33% increase year-over-year. This incredible growth of data in our system and the level of transactions that we're able to deliver, not just in the core system but especially in Data Cloud, is preparing our customers for this next generation of AI.

Salesforce's predictive AI, Einstein, is generating hundreds of billions of predictions daily; Salesforce is working with thousands of customers on generative AI use cases through the 2024 Q1 launches of Einstein Copilot, Prompt Builder, and Einstein Studio; Salesforce has closed hundreds of Einstein Copilot deals since the product reached general availability (GA)

Einstein is generating hundreds of billions of predictions per day, trillions per week. Now we’re working with thousands of customers to power generative AI use cases with our Einstein Copilot, our Prompt Builder, our Einstein Studio, all of which went live in the first quarter, and we’ve closed hundreds of Copilot deals since this incredible technology has gone GA. And in just the last few months, we’re seeing Einstein Copilot develop higher levels of capability. We are absolutely delighted and could not be more excited about the success that we’re seeing with our customers with this great new capability.

Luxury fashion company Saks is using Salesforce’s Einstein 1 Platform in Data Cloud to create AI-powered personal experiences for customers

Saks, a leader in the luxury fashion market, part of Hudson’s Bay, went all-in on Salesforce in the quarter. CEO Marc Metrick is using AI to create more personal experiences for every customer touch point across their company. And with our Einstein 1 Platform in Data Cloud, Saks can unify and activate all its customer data to power trusted AI.

Salesforce is helping FedEx generate savings and accelerate its top line, partly with the help of Salesforce's AI solutions

The Salesforce data, app and AI capabilities generate expense savings, which is the efficiency, while growing and accelerating top-line revenue, which is the effectiveness that we're delivering for FedEx. This efficiency includes next best action for sellers, automated lead nurturing, Slack for workflow management, opportunity scoring, a virtual assistant, and AI on unstructured data for delivering content to sales and customer service. And when we think about effectiveness, we see our Journey Builder delivering hyper-personalization, integrating customer experiences across service, sales and marketing, and the ability to tailor and deliver customer experiences based on a Customer 360 view. When we look at this incredible next generation of capability we've delivered at FedEx, gone now are the days of static business rules that leave customers dissatisfied, asking, "Do they not know that I'm a valued customer of FedEx?" Now FedEx has not only the power of the Customer 360 but the power of AI to unlock so much more commercial potential by conducting an orchestra of commercial functions that never played well together before.

Air India is using Data Cloud and Einstein across 550,000 service cases each month to improve its customer experience and deliver more personalised customer service

And with Data Cloud, Air India is unifying data across loyalty, reservations, flight systems and data warehouses. They have a single source of truth to handle more than 550,000 service cases each month. And now with Einstein, we're automatically classifying and summarizing cases, sending them to the right agent, recommending next steps and upgrading high-value passenger experiences. Even when things happen like a flight delay, the system is able to immediately intervene and provide the right capability to the right customer at the right time. All of that frees up agents to deliver more personal service and create more personal relationships: a more profitable, more productive, more efficient Air India, a company that's using AI to completely transform its capability.

Salesforce’s management is seeing good demand, driven by customers recognising the value of transforming their front-office operations with AI, but buying behaviour among customers is measured (similar to the past 2 years) with the exception of 2023 Q4

We’re seeing good demand as AI technology rapidly evolves and customers recognize the value of transforming into AI enterprises. CEOs and CIOs are excited about the opportunity with data and AI and how it can impact their front-office operations…

…We continue to see the measured buying behavior similar to what we experienced over the past 2 years and with the exception of Q4 where we saw stronger bookings. The momentum we saw in Q4 moderated in Q1 and we saw elongated deal cycles, deal compression and high levels of budget scrutiny.

Siemens used Einstein 1 Commerce to build and launch its AI-powered digital marketplace, named Accelerator Marketplace, in just 6 months

Siemens lacked a centralized destination for customers to easily choose the right products and buy on demand. To simplify the buying experience for customers, Siemens worked with Salesforce to develop and launch its Accelerator Marketplace, an AI-powered digital marketplace built on Einstein 1 Commerce, providing AI-generated product pages, smart recommendations and self-service ordering. And they did it all in just 6 months.

Salesforce is using AI internally with great results; Salesforce has integrated Einstein into Slack and Einstein has already answered 370,000 employee queries in a single quarter; Salesforce’s developers have saved 20,000 hours of coding through the use of AI tools

AI is not just for our customers. As part of our own transformation, we continue to adopt AI inside Salesforce. Under the leadership of our Chief People Officer Nathalie Scardino and our Chief Information Officer Juan Perez, we’ve integrated Einstein right into Slack, helping our employees schedule, plan and summarize meetings and answer employee questions. Einstein has already answered nearly 370,000 employee queries in a single quarter. In our engineering organization, our developers now save more than 20,000 hours of coding each month through the use of our AI tools.

Slack AI was launched in February and it provides recap, summaries and personalized search within Slack; >28 million Slack messages have been summarised by Salesforce’s customers since the launch of Slack AI

We also launched Slack AI in February, an amazing innovation that provides recap, summaries and personalized search right within Slack. I personally have been using it every day to get caught up on the conversations happening in every channel. And we’ve seen great traction with our customers with this product, and our customers have summarized over 28 million Slack messages since its launch in February.

Los Angeles city will use Salesforce’s Government Cloud and other solutions to integrate AI into its software system

And in the public sector, the city of Los Angeles chose Salesforce to modernize how the city's 4 million residents request city services using its MyLA311 system. The city will use Government Cloud and other Salesforce solutions to integrate AI assistance into MyLA311 and modernize its own constituent-facing services, giving residents more self-service options and improving service reliability and responsiveness.

Salesforce's products for SMBs (small and medium businesses), Starter and Pro Suite, which both have AI built in, are building momentum; Salesforce added 2,300 new logos to the products in 2024 Q1

Our new offerings for small and medium businesses, Starter and Pro Suite, which are ready-to-use, simplified solutions, with AI built in, are building momentum. In Q1, we added another 2,300 new logos to these products. Since Starter’s launch last year, we’ve seen customers upgrade to our recently launched Pro Suite and even to our Enterprise and Unlimited editions.

A McKinsey report found that 75% of the value of generative AI use cases is in the front office of companies; Salesforce is the leader in front-office software, so management thinks this is why, with Data Cloud at the heart, the company is in a good position for growth going forward

We all saw the report from McKinsey, 75% of the value of Gen AI use cases is in the front office. And everybody knows Salesforce is the leader in front-office software. That’s our fundamental premise for our growth going forward. We’re already managing 250 petabytes of data and metadata that’s going to be used to generate this incredible level of intelligence and artificial intelligence capability to deliver for our customers a level of productivity and profitability they’ve just never been able to see before. And at the heart of that is going to be our Data Cloud. 

Salesforce’s management is focused on 2 things at the company: The ongoing financial transformation at Salesforce, and the use of AI

Look, we really are focused on two things in our company. One is this incredible financial transformation that we’ve all gone through with you in the last year. The second one is this incredible transformation to artificial intelligence, which is going to be based on data. 

Salesforce’s management thinks that the relative weakness seen in the software world currently is because of pull-forward in demand from COVID, and not because of crowding out by AI; management thinks AI is a growth driver for software companies

[Question] When we think about this measured buying environment, is there any sort of crowding effect around AI that’s impacting software in your view, meaning when you think about all these companies starting to gear up for this next platform shift, was it just the uncertainty of what they’re going to spend on over the next 6 to 12 months, holding them back perhaps on what their normal sort of pace of spending might be with you all or other enterprise software companies?

[Answer] As we entered the post-pandemic reality, we saw companies who had acquired so much software in that time look to actually rationalize it, ingest it, integrate it, install it and update it. I mean, it's just a massive amount of software that was put in. And so every enterprise software company has had to adjust during this post-pandemic environment. So when you look at all of these companies, especially as you saw them report in the last 30 days, they're all basically saying that same thing in different ways. When you take AI, that has to be the growth driver for future capabilities for these companies.

Salesforce’s management sees the consumer AI world and the enterprise AI world as having very different needs for the kind of data they use for AI implementations, even though the model architectures are very similar; enterprise AI requires internal datasets from companies

It’s been pretty magical to use OpenAI over the last year, especially in the last release, when I’m really talking to it. And when I think about the incredible engineering effort that OpenAI has done, it’s pretty awesome. They’ve built a great UI. I love talking to the software. They have really strong algorithms or what we call Models, especially their new one, which is their 4o Model. And then they stole data from lots of companies like Time, Dow Jones, New York Times, Reddit. Now they’re all making good, doing agreements with all of us, saying, “We’re sorry,” and paying for it. And they took that data, they normalized it, they delivered a comprehensive data set that they train their model on…

…And then we’ve seen a lot of fast followers with the models. It could be open source models like Llama 3. It could be some proprietary models like Gemini from Google and others. Now there’s thousands and thousands of these models. And if you look on Hugging Face, everybody is a fast follower. And 6 months later, everybody is where everybody else was 6 months ago. And the data, well, a lot of these companies are all thinking they can rip off all this data, too, and they’re all having to pay that price. Okay, that’s the consumer world.

The enterprise world is a little different, right? We have great user interfaces, great apps, all kinds of great technology that our users are using, the millions and millions of users. Then we have the same models, in many cases, or maybe we've written some of our own models with our engineers. But then the third piece is the data. And that data is a little bit different. Because in the enterprise, how do you put together these large, fully normalized data sets to deliver this incredible capability? That is where the magic is going to be. Because for all companies, including ours and others, who want to deploy generative AI internally, it's not going to be Times Magazine that gives you the intelligence; it's going to be your customer data, your transaction history, how your company operates, your workflow and your metadata. And that idea that we can deliver another level of productivity for companies using that architecture is absolutely in front of us. But that idea that we have to do it with the right architecture, that also is in front of us. And I think that while we can say it's a different kind of architecture, it's still the same idea: we need a great UI, we need models, but we're going to need very highly normalized and federated data. And that data needs to be stored somewhere, and it needs to come from somewhere. And that is going to be something that continues in perpetuity over time, as these models and UIs are quite fungible. We'll be using different models and different UIs over the years, but we'll be using the same deep data sources. And I think that is why, when I look at what Salesforce is doing, this is going to be critical for our customers.

Salesforce’s management has seen many instances where software vendors promise customers they can deliver AI magic, only for the customers to come up empty-handed because (1) the vendors did not put in the work – and are unable – to make the customers’ data AI-ready, and (2) there’s no proper UI that’s commonly accessed within the customer

Don’t think that there aren’t a lot of people walking into these companies saying, “Hey, you can do this. You can do that. You can do these other things”. We’ve seen a lot of that in the last 6 to 12 months, and then it turns out that you can’t. “Hey, I can make this happen. I can make that happen. I can pull a rabbit out of the hat in the enterprise for you by doing this, that and the other thing,” and then it doesn’t actually happen. And then what it turns out is you got to do a lot of the hard work to make this AI happen, and that starts with building highly normalized, large-scale, federated, highly available data sources. And then building on top of that the kind of capabilities to deliver it to our customers. I think a common story is, “Hey, oh, yes, I am a provider of a data lake or a data capability. And just by going to that, I’m going to be able to provide all your AI.” But then it turns out that no one in the enterprise actually uses that product. There is no UI that’s commonly accessed. That’s why I’m so excited that Salesforce has Sales Cloud and Service Cloud and Tableau and Slack and all of our amazing products that have these huge numbers of users that use these products every single day in a trusted, scalable way and then connecting that into this new capability.

Veeva Systems (NYSE: VEEV)

Veeva’s management’s strategy with generative AI is to enable customers and partners to develop generative AI solutions that work well with Veeva’s applications; generative AI applications require access to data, and Veeva provides that access through solutions such as Direct Data API; Direct Data API provides data access up to 100 times faster than traditional APIs; management is seeing customers being appreciative of Veeva’s efforts to allow generative AI applications to work well with its applications; management thinks that the generative AI applications its customers and partners will develop will be very specific; Veeva’s work on Direct Data API started more than 2 years ago

In these early days as GenAI matures, our strategy is to enable our customers and partners to develop GenAI solutions that work well with Veeva applications through our AI Partner Program and powerful Vault Platform capabilities like the Vault Direct Data API. GenAI applications need access to accurate, secure, and timely data from Vault and our data applications. Released in April, our Direct Data API provides data access up to 100 times faster than traditional APIs…

…In general, customers are appreciative of our strategy to enable a broad range of GenAI use cases and experimentation through their own resources and our partner network…

…In terms of the AI strategy, our strategy is to really enable customers and their partners to develop AI applications because they’re going to be very specific AI applications, GenAI applications for very specific use cases whether it’s field information, pre-call planning, next best action, what have you. They’re going to be very specific applications. That innovation has to come from everywhere. And one of the things it needs is clean data. All of these AI applications need clean, concurrent, fast data. So one of the things we did — started about 2 years ago actually is put in a new API on the Vault platform called the Direct Data API, and that was just released this April. 

Veeva’s management has no plans to develop or acquire generative AI solutions currently, but are open to the idea as they observe how the technology evolves; Veeva’s applications do use AI technology, but not specifically generative AI; customers really trust Veeva, so management wants to move carefully when it comes to Veeva developing generative AI applications

We don’t have plans to develop or acquire GenAI solutions today, but that may change in the coming years as we see how GenAI technology evolves, and we determine which use cases can provide consistent value for the industry. In the meantime, we will continue to add advanced automation to our applications. Some, like TMF Bot and RIM Bot, use AI technology, but generally not GenAI…

… We have that trust. We have to continue to earn that trust. So we don’t really get into things that are too speculative. We definitely don’t overpromise. The trust is the most valuable thing we have. So we’ll be really targeted when we get into an AI application if we do. It will be an area where, hey, that’s a use case that we’re pretty sure that can be solved by GenAI, and there’s not a great partner to do it. Okay. Then we might step in because we do have that trusted position.

Veeva’s management lowered the company’s FY2025 revenue guidance slightly (it was previously $2.725 billion – $2.74 billion) because of macro challenges and crowding-out from companies reallocating resources to AI; management is seeing some deferment of spending on core systems because customers are busy investing in AI, but the deferment creates pent-up demand rather than lost spending

For fiscal year 2025, we now expect total revenue between $2.700 and $2.710 billion. This is a roughly $30 million reduction compared to our prior guidance, mostly in the services area. As we have said, the macro environment remains challenging as the industry continues to navigate inflation, higher interest rates, global conflicts, political instability, and the Inflation Reduction Act. There is also some disruption in large enterprises as they work through their plans for AI…

…A little more than a year ago, AI really burst upon the scene with GenAI…

…That caused a lot of pressure in our larger enterprises, on the IT department: “Hey, what are we going to do about GenAI? What’s our strategy as a large pharmaceutical company, biotech, about AI?” And that would land in the IT department of these companies. Now for our smaller SMB customers, it doesn’t land so much. They have other things to think about, other more pertinent, very stressful things. But in the large companies, with tens of thousands of people, they’re looking for these operational efficiencies that they could potentially get through AI, and they have a budget to kind of get ahead of that game. So by the word disruption, I meant that AI introduced a competing priority into our customers: hey, we had some existing plans, now this AI, we have to plan for what we’re going to do on that. Where are we going to spend on innovation, on experimentation? Who’s going to do that? What budget would we use, that type of thing. So some of that would take an impact onto us, which is core systems. Now those core systems, when we get that type of impact, it will delay a project, but it won’t stop it, because these core systems are things you need. You can delay them, but all that does is create somewhat of a pent-up demand.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, DocuSign, MongoDB, Okta, Salesforce, and Veeva Systems. Holdings are subject to change at any time.

What We’re Reading (Week Ending 16 June 2024)


Here are the articles for the week ending 16 June 2024:

1. Saying Goodbye: 30 Investing Lessons After 19% CAGR Over 7 Years – Eugene Ng

I had a near-death/paralysis accident over 10 years ago where I broke my neck. Thankfully, I survived it, but my neck still remains broken to this very day. Life is extremely precious, and I want to live my remaining life to the fullest, and positively impact as many people as I can…

…With a degree in economics and finance, and despite working in banking for over 11 years, I was ill-equipped from the outset to invest well. I decided to start from first principles, asking basic questions: What are stocks? They are part-ownership stakes in businesses. Why do stock prices rise, and eventually by how much?

Eventually I came to realise that growth of revenues, profits and free cash flows matters the most over 5-10 years and beyond, not changes in valuation multiples. That’s why my favourite investing saying is: where revenues, profits and free cash flows flow, the stock price eventually goes.

Could investing in this stock generate sufficient returns? Once you take the red pill, once the eyes see what truly matters, you can no longer un-see…

…Most investors are focused on not making errors of commission, or Type I errors: making a bad investment when you think it is a good one.

Instead, I am focused on making fewer errors of omission, or Type II errors: rejecting a good investment when I think it is a bad one. Because the maximum a loser can lose is theoretically capped at 100%, but the upside a missed winner can deliver is theoretically unlimited…
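The asymmetry described above can be made concrete with a toy calculation (all numbers here are hypothetical, chosen only for illustration, not from the article):

```python
# Toy illustration of the asymmetry between errors of commission
# (holding a loser) and errors of omission (missing a winner).
# All numbers are hypothetical.

def portfolio_multiple(returns):
    """Ending value per dollar invested, equal-weighted across positions."""
    return sum(1 + r for r in returns) / len(returns)

# Error of commission: one of five equal positions goes to zero (-100%),
# the other four are flat.
with_loser = portfolio_multiple([-1.0, 0.0, 0.0, 0.0, 0.0])

# Error of omission: the same portfolio, except the "loser" slot
# had instead been a 10x winner (+900%).
with_winner = portfolio_multiple([9.0, 0.0, 0.0, 0.0, 0.0])

print(with_loser)   # 0.8 -> the total loss cost 20% of the portfolio
print(with_winner)  # 2.8 -> the missed winner would have added 180%
```

The loser's damage is capped at its weight in the portfolio, while a single multi-bagger can dominate the whole result.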

…Ultimately, your investing strategy and style is unique to you. It must be comfortable to you, it must suit your personality and your strengths. Everyone’s investment portfolio is going to look different.

Most importantly, you must be able to sleep well at night. After some time, you will come to realise if your strategy is truly repeatable and scalable over the long-term…

…Investing in stocks is investing in businesses, and having some of the best CEOs running some of the best companies in the world with their employees working for you 24/7. When you view it that way, it changes your perspective in life…

…I wanted to share a personal story: we recently had a pair of olive-backed sunbirds building their hanging nest on the olive tree at our balcony, in our home in Singapore. We were delighted to welcome them to our home. It was an untidy nest, and our balcony floor was littered with fallen nest materials, but we didn’t mind.

Eggs have been laid, and the female sunbird has been incubating on and off during the day and full-time at night over the last week. We are looking forward to seeing the eggs hatch in the coming week, hearing the chicks chirp for the first time, watching them get older and fledge, and then get ready to take flight and leave the nest.

It was amazing to see how timely and beautiful this was, as it reminded me deeply of the journey that I am going to embark on with a new beginning. 

2. A Revolution in Biology – Kasra

Our conventional picture of biology is that everything happens in a bottom-up manner: molecular mechanisms dictate the functions of cells, which dictate the functions of your organs and which ultimately control your body. What is the thing at the very bottom of this hierarchy—the foundation for everything else in life? The genome. Genes are considered the fundamental code of life, so when it comes to figuring out questions of how the body develops, or how to cure diseases or change specific biological traits, we tend to look there…

…That is, until Michael Levin (and many others) entered the scene. They came in and said: genes are great, and they do contain much of the necessary information for building our bodies. But they don’t contain all of it, and they are not always a useful level of abstraction for understanding how the body develops, and consequently they are not always the best way to intervene in biology (e.g. to regenerate damaged organs, or to cure diseases like cancer). If you’ve ever done any programming, you know that there are many levels of abstraction—higher-level and lower-level programming languages, higher-level and lower-level APIs—at which you can try to understand or manipulate the software that runs in your computer. Levin’s point is that genes are like machine code, and modern-day programmers never think about machine code—they think about higher-level software constructs like objects, modules, and applications. The bold claim embedded in his work—the real revolution here—is that higher levels of abstraction and control meaningfully exist in biology. And one of the ways in which this higher level of abstraction manifests is in something called the bioelectric network of the organism.

We usually think of neurons as the only cells in our body that produce intelligent behavior by communicating in large networks. Neurons are constantly communicating with each other in the form of electrical patterns on their membrane and neurotransmitters, which are chemicals that transfer messages between cells. But it turns out that cells throughout the body have the exact same building blocks for such communication. They do the same communication, but slower. Levin and company call this the bioelectric network, as distinguished from a neural network.

In the past few decades we’ve discovered all the ways in which bioelectric networks distributed through the body do the same kinds of things that brains do: store memories, solve problems, and guide development. To get a sense of the bioelectric network in action, we have to talk about a mind-blowing creature called the planarian. This little critter (about 2cm in length) is a developmental “genius” of sorts: it doesn’t age, it doesn’t get cancer, and it is extremely regenerative, capable of regenerating any part of its body that gets cut off, even if it’s cut up into more than 250 pieces…

…Imagine taking one of these worms and splitting it into two. You now have two half-worms, and each of those half-worms is tasked with rebuilding the rest of its body. There’s a crucial decision here that the cells have to make: what part of the body do we already have, and what part do we need to build? One of the half-worms needs to produce a tail, and the other half-worm needs to produce a head. But the cells are at the very middle of the body, extremely far (from a cell’s perspective) from both the head and the tail. How do the cells have any idea what they should generate?

The answer, at least in part, is that all along the body the cells of the worm have a gradient of “resting membrane potentials”, which is effectively a stable electrical state. The cells keep track of their “position” in the body in this way, and experiments have demonstrated that the cell’s electrical state relative to the rest of the body is what determines whether it will proliferate into a head or a tail…

…Levin’s team was able to induce the worm to generate two heads instead of one head, by putting it into a solution of drugs that blocked specific ion channels (which in turn altered the electrical state of the cells). They’ve also induced the worm to generate no heads at all, or to generate the head of a different worm species. All of these are living, functional worms, just with a very different body structure…

…Keep in mind a crucial point: in all these experiments, the genes of the worms are never edited. You get a wildly different functional worm with the same genes. And what’s even wilder is that some of these changes are enduring: without any further drugs or modifications, the two-headed worm produces offspring that are also two-headed, indefinitely…

…Levin’s lab and others have already demonstrated an astonishing level of control over development by modulating bioelectric networks. They’ve done things like getting frogs to develop extra limbs, and getting them to develop an eye in their gut, or an eye in their tail that they can actually see out of. The end goal that Levin dreams of is an “anatomical compiler” – a program which takes as input a specification for an arbitrary organ or body plan, and outputs the specific set of chemical and electrical signals needed to generate that organ. Imagine 3D-printing entire synthetic organs and organisms, except instead of having to specify all the micro-level details, you can just give a high-level description like “an extra eye at the tail.” This is DALL-E but for biology. And in the very long run, it could be the answer to virtually all of biomedicine, including traumatic injury, birth defects, degenerative disease, cancer, and aging.

3. The Investing Boom That’s Squeezing Some People Dry – Jason Zweig

The idea is that when you lock your money up for months or years, you’re less likely to panic in a downturn, enabling the managers to amass a portfolio that will pay off in the long run…

…That bumps up against a basic law of financial physics: Eliminating one risk creates another.

An investment that doesn’t trade may have some advantages, but once you buy it, how do you sell it? How deep a haircut, or discount from the reported price, will you take?

Many funds have so far been able to cash out investors at what seems like a fair price. Many haven’t…

…Highlands REIT, a private Chicago-based real-estate fund, is a more-extreme case. The company bought back about 19% of its stock in December at 14 cents a share. For the sellers, that was like getting a haircut with a lawn mower: Highlands’ annual report estimates net asset value at 32 cents per share as of Dec. 15, 2023.

Outsiders are offering an even harsher haircut. On May 20, MacKenzie Capital Management, an investment firm in Orinda, Calif., opened a mini-tender for Highlands’ stock at 4 cents a share, minus a $25 transfer fee. On Lodas, the latest sale was at 10 cents…
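The "haircut" in these examples is simply the discount of the transaction price to reported net asset value. A quick sketch using the Highlands figures quoted above (ignoring the $25 transfer fee on the mini-tender):

```python
# Discount-to-NAV ("haircut") for the Highlands REIT prices cited above.

def discount_to_nav(price, nav):
    """Fraction of reported NAV the seller gives up at this price."""
    return 1 - price / nav

nav = 0.32  # Highlands' estimated NAV per share, in dollars

buyback = discount_to_nav(0.14, nav)  # December buyback at 14 cents
tender  = discount_to_nav(0.04, nav)  # MacKenzie mini-tender at 4 cents
lodas   = discount_to_nav(0.10, nav)  # latest Lodas sale at 10 cents

print(f"Buyback haircut:     {buyback:.1%}")  # 56.2%
print(f"Mini-tender haircut: {tender:.1%}")   # 87.5%
print(f"Lodas haircut:       {lodas:.1%}")    # 68.8%
```

Even the "best" of these exits gave up more than half of the reported per-share value, which is the lawn-mower haircut the article describes.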

…Institutions can sell big blocks of their alternatives, like hedge funds or private equity, to what are called secondary funds at discounts that might run 10% to 30% below net asset value.

In many cases, you should be so lucky.

Often, if you can find a broker willing to buy your alternative investment, the commission can run up to or even exceed 5%. Your haircut could be as deep as 30% to 50%. Depending on the buyer, weeks may go by before you get paid.

Other electronic marketplaces besides Lodas, including Central Trade & Transfer and 1st Trade, also match buyers and sellers of alternatives—typically at gaping discounts to net asset value.

4. Book Summary Part 2: “Our Investing Strategy, who does the market smile upon” – Made In Japan

Right before he launched his fund, Hokkaido Takushoku Bank went bankrupt and was being liquidated. He immediately decided to seize that opportunity. He went to Sapporo to buy from the bank the shares of a specific company: Nitori, a company with almost zero liquidity at the time. Some readers may recognize the name today as the largest furniture retail chain in Japan, oft compared to Ikea. They’re known for their value-for-money proposition, providing quality products at an affordable price point, and the company has been a huge success.

You might not believe this if you look at Nitori’s stock price today, but it was an unpopular company back then. According to Kiyohara-san, it was trading at 750 yen per share at the time. One of the main reasons, it seems, was that the furniture market was in decline, making it an unattractive industry to invest in. His thesis was that the market was extremely fragmented: the largest furniture retailer, Ootsuka, had only a 5% market share. Nitori was the only vertically integrated manufacturer (the others were distributors), and he believed this could help Nitori gain share as a cost-effective producer of home furnishings. Nitori was listed on the Sapporo Exchange, so no institutional investor would touch it (since it would be impossible to sell). However, when he spoke to IR, he picked up on a key insight: while the Hokkaido economy, which was their main market, was not doing well and they saw a decline in same-store sales in the region, the 3 stores open in Kanto were doing very well, providing a hint of Nitori’s true competitiveness.

And it’s funny because you can immediately tell he was built differently. After the research was done and the fund launched, he bought as much as he could from the failing bank, and at launch Nitori became 25% of his NAV. The stock tripled in a year, and in 5 years it was a six-bagger. A year later it was a ten-bagger, at which point he sold out. If he had held it till now, the stock would have been a hundred-bagger. But by 2003 Nitori was starting to get more institutional coverage and attention, and he believed it was time to exit. He says, “When the party starts, that’s when we go home.”

So here was the first lesson, which is that investing in an unpopular, shrinking market can still make you a lot of money. In fact, during the time he owned Nitori the market size halved. He also understood the opportunity to buy shares from distressed sellers, especially for stocks that are listed on some regional exchange that no one looks at…

…2007 Dec – 2009 Feb: “A sick patient getting hit by a truck, 3 times”

Just as the fund narrowly escaped its “matasaki”, the 2008 crisis followed.

While the K1J Fund had generated incredible returns from its bets in REITs and real estate and successfully exited from these, he still owned a lot of cheap real estate stocks in the fund. 3 holdings filed for bankruptcy and 1 went through an Alternative Dispute Resolution (ADR). The worst part? He owned 45%, 35%, 10% and 20% of their shares outstanding.

Needless to say, it was distressing and he lost weight.

The goal was no longer for him to generate returns in this period. It was simply to survive.

He never said this himself, but what follows is what you call an absolute shitshow. Or as he would put it, “like a sick patient getting hit by a truck 3 times”.

The fund’s top priority was to reduce its leveraged long and short positions to avoid a margin call.

But to add insult to injury, their prime broker, Goldman, decided to change its margin policy (from 50% to 30%) to save itself, which could have been fatal for the fund. Fortunately, Goldman eventually agreed to implement this only in steps, which helped the fund buy some time.

The issue is that in a crisis like this, it’s not just one kind of risk that materializes; there are second-order and third-order effects which, in isolation, might have a low probability. I believe, however, that the odds of these secondary and tertiary events, no matter how unlikely, increase once the first ‘highly improbable event’ occurs. (You can also apply this to the Livedoor example.)

Although not a surprise, the clients that had entered in excitement when the fund was killing it in 2005 (mainly pensions) started redeeming, and the fund lost half of its clients.

This created a new risk, which forced him to reduce his longs, which were mainly in small, illiquid companies. Forced selling driven by client redemptions would, in effect, make you dig your own grave.

So how does he try to solve this problem? He asks these companies to buy back their shares.

From its peak in October 2005 to its trough in February 2009 the fund’s NAV was -72% and its AUM -89%.
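Since AUM equals NAV per unit times units outstanding, the gap between the two drawdowns implies how much of the fund was redeemed. A back-of-envelope sketch using the figures above (treating the fund as a single unit class, which is a simplifying assumption):

```python
# Decompose the fund's AUM decline into performance and redemptions.
# AUM = NAV per unit * units outstanding, so:
#   (1 + aum_change) = (1 + nav_change) * units_remaining

nav_change = -0.72   # NAV per unit, peak (Oct 2005) to trough (Feb 2009)
aum_change = -0.89   # assets under management over the same period

units_remaining = (1 + aum_change) / (1 + nav_change)
units_redeemed = 1 - units_remaining

print(f"Units remaining: {units_remaining:.0%}")  # ~39%
print(f"Units redeemed:  {units_redeemed:.0%}")   # ~61%
```

Roughly 61% of the fund's units were redeemed, broadly consistent with the fund losing half of its clients (with larger clients presumably redeeming more).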

This is when you realize most people won’t be able to replicate what he did. As I wrote in part 1, he decided to put almost all of his net worth into the fund to try and save it. He adds, “Because that is the responsibility of the manager.” Like a captain being the last to leave a sinking ship, it was an honorable and brave decision.

I want to reflect here, because this is not something most of us could do. It’s really easy to read this as a brave story and just say “wow, awesome”, but never really understand how hard it was. (This is called the empathy gap in psychology, where we underestimate how our psychological state will affect our decisions in a given situation.) If your fund is already down heavily, your clients are threatening to leave or have left, your prime broker is changing the rules, and you’re being forced to exit your positions at ridiculous valuations, are you ready to risk going broke to save it? Remember, your morale at this point is probably at an all-time low. In a world where limited liability corporations are the norm (i.e. the damage to your personal wealth can be legally limited, an escape hatch most of us would use at the very worst moment), he decided to go all in.

Also don’t forget, he had to tell his wife he did just that (which might’ve been the scariest part!). Apparently, her response was, “Didn’t you also say that last week?” lol.

But the question remains: why did he do that? His confidence was far from crushed, and he was convinced that if he closed his shorts and got as long as possible, he would make a lot of money. Why? Because he knew a sudden decline almost always results in a V-shaped recovery. His game was to just survive until then. That is SOME confidence he had.

What’s amazing is that he went to clients telling them it would be foolish to leave now, “the fund can probably double from here”.

In the end, from its trough through Feb 2018, his fund returned 12x…

…Shorts are the most dangerous in a bear market, and in this scenario, his game was to maximize his long positions. Maintaining a short book means your prime broker will usually give you a hard time in these moments and possibly reduce your margin, which also limits your long exposure. The other issue is that when the market turns and your shorts also move up, this might force you to reduce your long positions (to cover). Understanding this helped him avoid a forced error of omission. Imagine having no choice but to sell longs which could have multiplied, because you were forced to sell them after a small move up to cover your shorts…

…Lasertec (Circa 2020)

  • This was not a fundamental idea, though it did fit the typical target for his shorts: expensive-looking large-cap.
  • He simply saw an opportunity through the lens of Japan’s inherent tax rules.
  • The fourth largest shareholder was the widow of the founder who owned 4.24%.
  • So he thought, what happens if she passes away too?
  • Japan’s inheritance tax is the highest in the world, and her children would have to pay it by selling shares.
  • In the end, this is really what happened.

This is an important theme for owner-operated businesses, where inheritance can have an outsized impact on the stock price.

5. An Interview with AMD CEO Lisa Su About Solving Hard Problems – Ben Thompson and Lisa Su

What was your response in November 2022 when ChatGPT shows up?

LS: Well, it was really the crystallization of what AI is all about.

Obviously you’ve been in the graphics game for a long time, you’ve been thinking about high-performance computing, so the idea that GPUs would be important was not foreign to you. But were you surprised the extent to which it changed the perception of everyone else around you and what happened after that?

LS: We were very much on this path of GPUs for high-performance computing and AI. Actually, it was probably a very significant arc that we started, let’s call it back in the 2017 plus timeframe. We’ve always been in GPUs, but really focusing on-

What was it in 2017 that made you realize that, “Wait, we have these, we thought we bought ATI for gaming, suddenly, there’s this completely different application”?

LS: It was the next big opportunity, we knew it was the next big opportunity. It was something that Mark and I discussed, which was, by putting CPUs and GPUs together in systems and designing them together, we’re going to get a better answer, and the first near-term applications were around supercomputing. We were very focused on these large machines that would reside at national laboratories and deep research facilities, and we knew that we could build these massively parallel GPU machines to do that. The AI portion, we always also thought about it as clearly an HPC plus AI play.

You said before that AI is the killer application for HPC.

LS: Yes.

But you will talk to people in HPC, they’re like, “Well, it’s a little bit different”, to what extent is that the same category versus adjacent categories?

LS: It’s adjacent but highly-related categories, and it all depends on the accuracy that you want in your calculations, whether you’re using the full accuracy or you want to use some of these other data formats. But I think the real key though, and the thing that really we had good foresight on is, because of our chiplet strategy, we could build a highly modular system that could be, let’s call it, an integrated CPU and GPU, or it could be just incredible GPU capability that people needed.

And so, the ChatGPT moment for me was the clarity around, now everybody knew what AI was for. Before, it was only the scientists and the engineers who thought about AI, now everybody could use AI. These models are not perfect, but they’re amazingly good, and with that, I think the clarity around how do we get more AI compute in people’s hands as soon as possible was clear. Because of the way we had built our design system, we could really have two flavors. We had HPC-only flavor, which is what we would call our MI300A and we had AI only flavor, which was the MI300X…

One of the things that does strike me about the contrast is, and one of Nvidia’s really brilliant moves was the acquisition of Mellanox and their portfolio in networking, and to the extent it matters to tie all these chips together, particularly for training.

In your Computex keynote, you talked about the new Ultra Accelerator Link and Ultra Ethernet Link standards, and this idea of bringing lots of companies together, kind of calling back to the Open Compute Project back in the day as far as data centers. Makes perfect sense, particularly given Nvidia’s proprietary solutions have the same high margins we all know and love as the rest of their products.

But I guess this is my question about your long-term run — do you think it’s fair to say that, from a theoretical Clayton Christensen perspective, because we’re early in AI, maybe it’s not a surprise, the more proprietary integrated solution is the belle of the ball in many respects? There’s a bit where, yes, being open and modular all makes sense, but maybe that’s not going to be good enough for a while.

LS: I would say it this way. When you look at what the market will look like five years from now, what I see is a world where you have multiple solutions. I’m not a believer in one-size-fits-all, and from that standpoint, the beauty of open and modular is that you are able to, I don’t want to use the word customize here because they may not all be custom, but you are able to tailor.

Customize in the broad sense.

LS: That’s right.

Tailor is a good word.

LS: Tailor is the right word — you are able to tailor the solutions for different workloads, and my belief is that there’s no one company who’s going to come up with every possible solution for every possible workload. So, I think we’re going to get there in different ways.

By the way, I am a big believer that these big GPUs that we’re going to build are going to continue to be the center of the universe for a while, and yes, you’re going to need the entire network system and reference system together. The point of what we’re doing is, all of those pieces are going to be in reference architectures going forward, so I think architecturally that’s going to be very important.

My only point is, there is no one size that’s going to fit all and so the modularity and the openness will allow the ecosystem to innovate in the places that they want to innovate. The solution that you want for hyperscaler 1 may not be the same as a solution you want for hyperscaler 2, or 3.

Where do you think the balance is going to be then, between there being a standard approach versus, “This is the Microsoft approach”, “This is the Meta approach”? There’s some commonality there, but it is actually fairly customized to their use cases and needs. Again, not next year, but in the long run.

LS: I think as you get out three, four or five years, I think you’re going to see more tailoring for different workloads, and what happens is, the algorithms are going to — right now, we’re going through a period of time where the algorithms are just changing so, so quickly. At some point, you’re going to get to the place where, “Hey, it’s a bit more stable, it’s a little bit more clear”, and at the types of volumes that we’re talking about, there is significant benefit you can get not just from a cost standpoint, but from a power standpoint. People talk about chip efficiency, system efficiency now being as important if not more important than performance, and for all of those reasons, I think you’re going to see multiple solutions…

How much inference do you see actually going back to the CPU?

LS: I think a good amount of inference will be done on the CPU. Even as you think about it, the very large models we’re talking about obviously need to be on GPUs, but how many companies can really afford to be on the largest of models? And so, you can see already that for smaller models, and for fine-tuning and those kinds of things, the CPU is quite capable, especially if you go to the edge.

Right. You noted on the last earnings call that the MI300, it’s been supply-constrained, your fastest ramp ever, but is maybe from the expectations of some investors, a little disappointing in the projections for the end of the year. How much do you feel that shift to being demand-constrained is about the 325 coming along, which you talked about this week, versus the fact that just generally Nvidia supply has gone up, as everyone’s trying to figure this stuff out? Yes, your long-term opportunity is being this sort of customized supplier — tailored supplier, sorry, is the word that we’re going for — versus, “Look, I don’t want to say picking up but just we need GPUs, we’ll buy them from anyone”. Where do you feel your demand curves are relative to the competition and the rapid progression of the space?

LS: Again, let me take a step back and make sure we frame the conversation. The demand for AI compute has been off the charts, I think nobody would have predicted this type of demand, and so when I say that there is tightness in the supply chain, that’s to be expected, because nobody expected that you would need this many GPUs in this timeframe. The fact is the semiconductor industry is really good at building capacity, and so that is really what we’ve seen. As we’ve started to forecast-

And so you feel it’s more a function of there’s just so much supply coming online?

LS: Absolutely, and that’s our job. Our job is to make it to a place where you’re not constrained by manufacturing capacity.

Really, for us, it is about ensuring that customers are really ramping their workloads and that is a lot of deep work, deep partnerships that we’re doing with our customers. So honestly, I feel really good about the opportunities here. We’ve been through this before where it’s very similar to what we saw when we did the initial data center server CPU ramps, which is our customers work very closely with us, they get their software optimized, and then they add new workloads, and add more volumes, and that’s what I would expect to happen here, too.

The difference in AI is that I think customers are willing to take more risk, because there’s a desire to get as much, as fast as possible.

Is there a challenge for you, because that desire to take more risks means they’re more accepting of, say, high margins to get the leading GPUs or whatever it might be, or the GPU with the largest developer ecosystem?

LS: What I will say is I’m super happy with the progress we’ve made on software.

Fair enough.

LS: What we’re seeing is excellent out-of-box performance. The fact is things just run, the fact is that much of the developer ecosystem wants to move up the abstraction layer, because everybody wants choice.

And you feel you’re going to get to a stage where that move up the abstraction layer is a common layer across companies, as opposed to one company internally moving up the abstraction layer so that they can buy any GPU, but that doesn’t necessarily benefit you going into another company, or do you feel that’s going to be-

LS: I absolutely believe that it’ll be across the industry. Things like PyTorch, I think PyTorch is extremely widely adopted, OpenAI Triton, similar. These are larger industry things where frankly, part of the desire is it takes a long time to program down to the hardware. Everyone wants to innovate quickly, and so the abstraction layer is good from the standpoint of just rapid innovation.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple, Meta Platforms, Microsoft, and Tencent. Holdings are subject to change at any time.

Identifying Value Traps

Value traps are stocks that may look cheap but are actually expensive.

Investors often use valuation metrics to screen for “cheap” or “undervalued” companies. Identified correctly, undervalued companies will provide better returns than the broader market.

However, we can often fall into the trap of believing a company is cheap if we are overly reliant on using common valuation metrics that may be misinterpreted.

Since I started investing, I have come across numerous value traps and thought it would be useful to put together a short list of common value traps to be avoided.

Low earnings multiple but unstable earnings

This is probably the most obvious value trap. A company that trades at a low earnings multiple – i.e. a company with a market capitalisation that is relatively low compared to its past earnings – may look cheap. But if its earnings are not sustainable, it may become a really bad investment.

Moderna and BioNTech both had massive earnings boosts from selling COVID vaccines in 2021 and 2022. But both have also seen their earnings plummet since. Investors who looked at their historical earnings multiples during the COVID boom may have gotten the idea that the stocks were cheap, but the COVID-induced profits were unlikely to be repeated.

Looking at a company’s historical earnings can give investors some idea of its profitability. But the past does not always mirror the future and investors need to think about a company’s future earnings too.
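To make the point concrete, here is a minimal Python sketch with invented figures (they are not Moderna’s or BioNTech’s actual numbers): the same market capitalisation looks cheap against a one-off boom-year profit and expensive against a sustainable level of earnings.

```python
# Trailing P/E vs normalised P/E, using invented figures for illustration.

market_cap = 60e9            # $60b market capitalisation (assumed)
boom_earnings = 12e9         # one-off boom-year profit (e.g. a COVID windfall)
normalised_earnings = 1.5e9  # rough estimate of sustainable, repeatable profit

trailing_pe = market_cap / boom_earnings          # looks "cheap"
normalised_pe = market_cap / normalised_earnings  # looks expensive

print(f"Trailing P/E:   {trailing_pe:.1f}")   # 5.0
print(f"Normalised P/E: {normalised_pe:.1f}") # 40.0
```

A trailing multiple of 5 screens as a bargain, but against normalised earnings the same price is a multiple of 40, which is the trap the paragraph above describes.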

High but unsustainable dividend yield

A company with a high trailing-twelve-month dividend yield can seem enticing. But it could still end up as a value trap if its dividend is not sustainable. What makes a dividend sustainable? Some good questions to ask include:

  • Is the dividend supported by a regular profit stream?
  • Is the dividend a one-off special dividend that will not recur?
  • Is the dividend payout ratio above 100%?
  • Does the company have a predictable and recurring revenue stream?

All of these questions help us to identify if a company is a sustainable dividend payer or simply a dividend value trap.
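The questions above can be turned into a rough screen. A minimal Python sketch, with illustrative sample figures (the 100% payout-ratio cutoff comes from the checklist; everything else is an assumption):

```python
# Rough dividend-sustainability screen based on the checklist above.
# The 100% payout-ratio cutoff mirrors the checklist; sample figures are invented.

def payout_ratio(dividends_paid: float, net_profit: float) -> float:
    """Dividends paid as a fraction of net profit."""
    return dividends_paid / net_profit

def looks_sustainable(dividends_paid: float, net_profit: float,
                      is_one_off_special: bool) -> bool:
    """Apply the checklist questions mechanically."""
    if net_profit <= 0:     # no regular profit stream behind the dividend
        return False
    if is_one_off_special:  # a special dividend should not be extrapolated
        return False
    # A payout ratio above 100% means the dividend exceeds profit.
    return payout_ratio(dividends_paid, net_profit) <= 1.0

# $120m paid out of $100m of profit: a 120% payout ratio, a red flag.
print(payout_ratio(120e6, 100e6))              # 1.2
print(looks_sustainable(120e6, 100e6, False))  # False
```

This is only a first filter, of course; a payout ratio below 100% says nothing by itself about whether the underlying revenue stream is predictable and recurring.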

Lots of cash, but with a cash-burning business

Investors may get attracted to a company that has lots of cash on the balance sheet. The company is even more enticing if its net cash position is a large percentage of its market capitalisation. But the balance sheet is not an indicator of the quality of a business.

A company with a lot of cash on the balance sheet can still end up as a value trap if it has a weak business that constantly burns cash. For instance, there are numerous biotech companies in the US stock market that look promising with high net cash balances but that are burning lots of cash to research potential drugs. Although some of these biotechs may end up getting FDA approval for their drugs and become winners, the vast majority of them will end up with unfeasible drugs and a cash balance that has been wiped out after years of unfruitful research.

A management that won’t return cash to shareholders

What do you call a company that has lots of cash on the balance sheet and a stable, profitable business, and trades at a low earnings multiple, but whose management refuses to return cash to shareholders? A potential value trap.

There is no point in a stable and recurring business generating lots of cash if shareholders will never see that cash. This is a common situation among companies listed in Singapore and Japan, where corporations retain too much cash on the balance sheet.

This phenomenon may happen because a company is majority family-owned and the family does not need the cash as dividends. Or the management team may be ultra-conservative and retain cash unnecessarily. Either way, shareholders are left with nothing or have to wait decades to see their cash.

Final word

It does not make sense to invest in a company at a price that’s significantly higher than its intrinsic value. But just searching for companies with low valuation metrics does not mean you will end up with bargains. It pays to recognise the existence of value traps.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently do not have a vested interest in any stocks mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 09 June 2024)


Here are the articles for the week ending 09 June 2024:

1. Google CEO Sundar Pichai on AI-powered search and the future of the web – Nilay Patel and Sundar Pichai

Yesterday, you announced AI Overviews are coming to Search. That’s an extension of what was called the Search Generative Experience, which was announced as a rollout to everyone in the United States. I would describe the reactions to that news from the people who make websites as fundamentally apocalyptic. The CEO of the News/Media Alliance said to CNN, “This will be catastrophic to our traffic.” Another media CEO forwarded me a newsletter and the headline was, “This is a death blow to publishers.” Were you expecting that kind of response to rolling out AI Overviews in Search?

I recall, in 2010, there were headlines that the web was dead. I’ve long worked on the web, obviously. I care deeply about it. When the transition from desktop to mobile happened, there was a lot of concern because people were like, “Oh, it’s a small screen. How will people read content? Why would they look at content?” We had started introducing what we internally called “Web Answers” in 2014, which are featured snippets outside [the list of links]. So you had questions like that.

I remain optimistic. Empirically, what we are seeing throughout the years, I think human curiosity is boundless. It’s something we have deeply understood in Search. More than any other company, we will differentiate ourselves in our approach even through this transition. As a company, we realize the value of this ecosystem, and it’s symbiotic. If there isn’t a rich ecosystem making unique and useful content, what are you putting together and organizing? So we feel it.

I would say, through all of these transitions, things have played out a bit differently. I think users are looking for high-quality content. The counterintuitive part, which I think almost always plays out, is [that] it’s not a zero-sum game. People are responding very positively to AI Overviews. It’s one of the most positive changes I’ve seen in Search based on metrics. But people do jump off on it. And when you give context around it, they actually jump off it. It actually helps them understand, and so they engage with content underneath, too. In fact, if you put content and links within AI Overviews, they get higher clickthrough rates than if you put it outside of AI Overviews.

But I understand the sentiment. It’s a big change. These are disruptive moments. AI is a big platform shift. People are projecting out, and people are putting a lot into creating content. It’s their businesses. So I understand the perspective [and] I’m not surprised. We are engaging with a lot of players, both directly and indirectly, but I remain optimistic about how it’ll actually play out. But it’s a good question. I’m happy to talk about it more…

You mentioned that you think more people will click through links in AI Overviews. Liz [Reid] who runs Search had a blog post making the same claim. There’s no public data that says that is true yet. Are you going to release that data? Are you going to show people that this is actually happening?

On an aggregate, I think people rely on this value of the ecosystem. If people over time don’t see value, website owners don’t see value coming back from Google, I think we’ll pay a price. We have the right incentive structure. But obviously, look, we are careful about… there are a lot of individual variations, and some of it is users choosing which way to go. That part is hard to sort out. But I do think we are committed at an aggregate level to do the right thing…

This brings me back to the first question I asked: language versus intelligence. To make these products, I think you need a core level of intelligence. Do you have in your head a measure of “This is when it’s going to be good enough. I can trust this”?

On all of your demo slides and all of OpenAI’s demo slides, there’s a disclaimer that says “Check this info,” and to me, it’s ready when you don’t need that anymore. You didn’t have “Check this info” at the bottom of the 10 blue links. You didn’t have “Check this info” at the bottom of featured snippets.

You’re getting at a deeper point where hallucination is still an unsolved problem. In some ways, it’s an inherent feature. It’s what makes these models very creative. It’s why it can immediately write a poem about Thomas Jefferson in the style of Nilay. It can do that. It’s incredibly creative. But LLMs aren’t necessarily the best approach to always get at factuality, which is part of why I feel excited about Search.

Because in Search we are bringing LLMs in a way, but we are grounding it with all the work we do in Search and layering it with enough context that we can deliver a better experience from that perspective. But I think the reason you’re seeing those is because of the inherent nature. There are still times it’s going to get it wrong, but I don’t think I would look at that and underestimate how useful it can be at the same time. I think that would be the wrong way to think about it.

Google Lens is a good example. When we first put Google Lens out, it didn’t recognize all objects well. But the curve year on year has been pretty dramatic, and users are using it more and more. We’ve had billions of queries now with Google Lens. It’s because the underlying image recognition, paired with our knowledge entity understanding, has dramatically expanded over time.

I would view it as a continuum, and I think, again, I go back to this saying that users vote with their feet. Fewer people used Lens in the first year. We also didn’t put it everywhere because we realized the limitations of the product.

When you talk to the DeepMind Google Brain team, is there a solution to the hallucination problem on the roadmap?

It’s Google DeepMind. [Laughs]

Are we making progress? Yes, we are. We have definitely made progress when we look at metrics on factuality year on year. We are all making it better, but it’s not solved. Are there interesting ideas and approaches that they’re working on? Yes, but time will tell. I would view it as LLMs are an aspect of AI. We are working on AI in a much broader way, but it’s an area where we are all definitely working to drive more progress.

Five years from now, this technology, the paradigm shift, it feels like we’ll be through it. What does the best version of the web look like for you five years from now?

I hope the web is much richer in terms of modality. Today, I feel like the way humans consume information is still not fully encapsulated in the web. Today, things exist in very different ways — you have webpages, you have YouTube, etc. But over time, I hope the web is much more multimodal, it’s much richer, much more interactive. It’s a lot more stateful, which it’s not today.

I view it as, while fully acknowledging the point that people may use AI to generate a lot of spam, I also feel every time there’s a new wave of technology, people don’t quite know how to use it. When mobile came, everyone took webpages and shoved them into mobile applications. Then, later, people evolved [into making] really native mobile applications.

The way people use AI to actually solve new things, new use cases, etc. is yet to come. When that happens, I think the web will be much, much richer, too. So: dynamically composing a UI in a way that makes sense for you. Different people have different needs, but today you’re not dynamically composing that UI. AI can help you do that over time. You can also do it badly and in the wrong way and people can use it shallowly, but there will be entrepreneurs who figure out an extraordinarily good way to do it, and out of it, there’ll be great new things to come.

2. Five Moat Myths (transcript here) – Robert Vinall

So we’re now on to Moat Myth number three, which is that execution doesn’t matter. So there’s this idea that, like I mentioned in the quote earlier, “when a management with a reputation for brilliance tackles a business with a reputation for bad economics, it is the reputation of the business that remains intact.” So this is a bit of a callback to my presentation on management, and it implies that as long as the moat is there, nothing can go wrong and vice versa – if the moat isn’t there, then nothing is basically going to go right. I really strongly disagree with that. Some of the best businesses, some of the best investments I’ve seen, are in companies which have really great execution, and that execution tends over time to lead to a moat. So I think people get it backwards a little bit. It’s not that the moat trumps execution, it’s that the moat is the output of execution…

…So this one won’t be a surprise to you. I kind of talked about it in the summary on the management presentation, but there’s this idea that management doesn’t matter. And I have two examples. So one is a crook, and this is the easiest argument to make. Anyone who says management doesn’t matter, all that counts is the business and the financials: well, clearly a crook can destroy a business. There are thousands of examples of that. One that springs to mind is the Indian brewer Kingfisher, where the guy effectively sells the business and buys an airline with it, which goes bust. His family went from being very wealthy to zero. So clearly management can destroy a business. I don’t think that’s a hard argument to make.

But on the positive side, clearly management can also be the difference between a great business and a failing business. And of course the most famous example of that ever is Berkshire Hathaway, the company we’re all here to see tomorrow. As many of you will know, Berkshire Hathaway was a failing textile mill and would have almost certainly gone bankrupt and is today I think one of the top 10 largest companies in the US, if not in the world. And that’s thanks to the investment decisions and the investing acumen of Warren Buffett. So clearly management does matter.

3. Getting materials out of the lab – Benjamin Reinhardt

Inventing a new material is the beginning of a long process.

Take carbon fiber composites. You’re almost certainly familiar with these, particularly if you’ve ridden a surprisingly light bike or seen its distinctive crosshatched weave pattern on a car dashboard or phone case.

Looking at carbon fiber composites through an electron microscope, you observe strands of carbon atoms arranged in a hexagonal pattern, woven into mats and layered with a resin such as epoxy. Carbon fiber’s tensile strength (the amount of load it can bear under tension before it breaks) is similar to steel, but the material is much less dense. So if you care about both weight and strength – as you do when you’re designing vehicles from a supercar to a Boeing 787 – carbon fiber is the material for you.

Modern materials like these carbon fiber composites are born in laboratories. Researchers at universities or industrial research labs do test tube–scale experiments, which can produce mind-blowing results. Carbon fiber first showed great promise in 1960 when Richard Millington patented a process to create fibers made of 99 percent carbon.

However, at lab scale, materials don’t do anything. Most people wouldn’t want a rope that is a centimeter long, or a battery that lasts three minutes. Leaving the lab requires bridging many orders of magnitude: from producing less than 0.001 kilograms (one gram) per day in a lab to more than 1,000 kilograms (one tonne) per day in a factory.

You can think of lab-scale materials as the most artisanal products in the world, painstakingly handcrafted by people with advanced degrees. Like any artisanal product, lab-scale materials are expensive. Trying to mass-produce these materials by simply increasing the number of fume hoods, test tubes, and pipette wielders would make them cost billions of dollars per kilogram. After a material is invented, we need to discover cheaper ways to produce it, since price per quantity has a dramatic effect on how much it can be used.

We call this process ‘scaling’, but to me that word is frustratingly vague. It bundles together many different problems that need to be solved to decrease cost and increase yield. The three key ones are:

Consistency. A lab can declare success if a small fraction of their material has an impressive property, but a factory needs that fraction to be much higher. A more consistent yield means less waste, and a lower price.

Standardization. Figuring out how to produce a material using conventional, industry-standard equipment avoids the cost of custom tools and enables you to make more material in an easily replicable way.

Streamlining. Moving a product through a continuous manufacturing process, as opposed to applying each of the manufacturing steps to a small, static batch, drastically reduces costs. Henry Ford did this with his moving assembly line, passing cars from worker to worker rather than moving workers from car to car…

…Building an industrial-scale factory requires money – a lot of it. To justify the expense to investors, you need to answer the questions, ‘What is your material good for?’, and more importantly, ‘Who will buy it?’

The answer is far from obvious, even for great materials: carbon fiber went through a decades-long journey before it became the star it is today. At first, manufacturers sold it as low-margin home insulation material because of its low thermal conductivity. It was key to several failed products, from turbine blades to a replacement for fiberglass. It eventually found its first iconic use case when Gay Brewer won the first annual Taiheiyo Club Masters using a golf club with a carbon fiber shaft.

The search for a cost-effective use case leaves many new materials in a chicken-and-egg situation: entrepreneurs and companies can’t justify the expense of scaling because there isn’t an obviously valuable application – but that application can’t emerge without a cost-effective material that can be experimented with.

Even applications that do seem obvious can take a long time to realize. In 1968, Rolls-Royce attempted to use carbon fiber in airplane propellers, which failed spectacularly. The propellers were extremely vulnerable to impacts – the whole project became such a boondoggle that it was a significant factor in the company’s collapse into receivership in 1971. Another 40 years would pass before the first majority–carbon fiber airplane, the Boeing 787, took flight…

…Scientists, mostly working in universities, have strong incentives to focus on novelty and one-off demonstrations because these can lead to publications and positive media attention. That work can be valuable, but the search for novelty alone creates mismatches with efforts to produce useful materials at scale. Essentially, the system of discovery sets up scaling for failure by creating materials without any consideration of their ability to scale.

The drive to focus on new discoveries over improving old ones’ capacity to scale, combined with the difficulty of mimicking real-world conditions in a lab, creates initial experiments that bear little resemblance to how people use a material in the real world.

Take the development of lithium-ion battery anodes. Researchers can demonstrate exciting leaps in power density from a new anode material using a half-cell reaction that provides functionally infinite lithium. But in a real battery with finite lithium, these anodes would reduce battery lifetimes to the point of unusability.

Similarly, carbon nanotubes have incredible tensile strength for their weight, but it’s hard to make them longer than a few centimeters. This length limit comes from carbon nanotubes’ tendency to tangle and become more susceptible to impurities as they get longer. Cable makers in the real world don’t just care about strength-to-weight ratios, but also the length over which the material maintains that strength. Yet scientists can take their headline of ‘superstrong carbon nanotubes’ and move on to the next project…

…Materials start-ups often struggle to raise venture capital financing. Venture isn’t a good fit for the capital costs and timescales of the material industry: the size, scale, and expectations of venture capital funds are well-suited to invest in software and pharmaceuticals whose revenues can skyrocket once they hit the market. Venture capital also prefers high-margin businesses that can get to market quickly, but materials often face a trade-off between margins and speed: while it’s faster and cheaper to innovate on one component of a larger production line or one material in an existing product, most of the margins come from new products…

…The long road from the lab to the material world might make the future of new materials seem bleak.

One reason for optimism is that new materials might already be on the horizon. There is a shockingly consistent timescale for materials to become useful beyond their initial niches. It took roughly 50 years between Roger Bacon’s discovery in 1958 and the flight of the first majority–carbon fiber airplane in 2009. The first lithium-ion battery was created by NASA in 1965, but most people didn’t start interacting with them until the mid 2000s. The properties of pure carbon nanotubes weren’t isolated until 1991. If there is indeed a 40- to 50-year timescale for lab-based materials to be useful in high-impact applications, we don’t need to despair about a carbon nanotube space elevator being overdue until somewhere around 2040.

4. High-Yield Was Oxy. Private Credit Is Fentanyl – Greg Obenshain and Daniel Rasmussen

Private equity assets have increased sevenfold since 2002, with annual deal activity now averaging well over $500 billion per year. The average leveraged buyout is 65 percent debt-financed, creating a massive increase in demand for corporate debt financing.

Yet just as private equity fueled a massive increase in demand for corporate debt, banks sharply limited their exposure to the riskier parts of the corporate credit market. Not only had the banks found this type of lending to be unprofitable, but government regulators were warning that it posed a systemic risk to the economy.

The rise of private equity and limits to bank lending created a gaping hole in the market. Private credit funds have stepped in to fill the gap. This hot asset class grew from $37 billion in dry powder in 2004 to $109 billion in 2010, then to a whopping $261 billion in 2019, according to data from Preqin. There are currently 436 private credit funds raising money, up from 261 only five years ago. The majority of this capital is allocated to private credit funds specializing in direct lending and mezzanine debt, which focus almost exclusively on lending to private equity buyouts.

Institutional investors love this new asset class. In an era when investment-grade corporate bonds yield just over 3 percent — well below most institutions’ target rate of return — private credit funds are offering targeted high-single-digit to low-double-digit net returns. And not only are the current yields much higher, but the loans are going to fund private equity deals, which are the apple of investors’ eyes…

…Banks and government regulators have expressed concerns that this type of lending is a bad idea. Banks found the delinquency rates and deterioration in credit quality, especially of sub-investment-grade corporate debt, to have been unexpectedly high in both the 2000 and 2008 recessions and have reduced their share of corporate lending from about 40 percent in the 1990s to about 20 percent today. Regulators, too, learned from this experience, and have warned lenders that a leverage level in excess of 6x debt/EBITDA “raises concerns for most industries” and should be avoided. According to Pitchbook data, the majority of private equity deals exceed this dangerous threshold…

…Empirical research into lending markets has typically found that, beyond a certain point, higher-yielding loans tend not to lead to higher returns — in fact, the further lenders step out on the risk spectrum, the less they make as losses increase more than yields…

…The historical experience does not make a compelling case for private credit. Public business development companies are the original direct lenders, specializing in mezzanine and middle-market lending. BDCs are Securities and Exchange Commission–regulated and publicly traded companies that provide retail investors access to private market platforms. Many of the largest private credit firms have public BDCs that directly fund their lending. BDCs have offered 8 to 11 percent yield, or more, on their vehicles since 2004 — yet returned an average of 6.2 percent, according to the S&P BDC index. BDCs underperformed high-yield over the same 15 years, with significant drawdowns that came at the worst possible times…

…Central to every private credit marketing pitch is the idea that these high-yield loans have historically experienced about 30 percent fewer defaults than high-yield bonds, specifically highlighting the seemingly strong performance during the financial crisis…

…But Cambridge Associates has raised some pointed questions about whether default rates are really lower for private credit funds. The firm points out that comparing default rates on private credit to those on high-yield bonds isn’t an apples-to-apples comparison. A large percentage of private credit loans are renegotiated before maturity, meaning that private credit firms that advertise lower default rates are obfuscating the true risks of the asset class — material renegotiations that essentially “extend and pretend” loans that would otherwise default. Including these material renegotiations, private credit default rates look virtually identical to publicly rated single-B issuers…

… If this analysis is correct and private credit deals perform roughly in line with single-B-rated debt, then historical experience would suggest significant loss ratios in the next recession. According to Moody’s Investors Service, about 30 percent of B-rated issuers default in a typical recession (versus fewer than 5 percent of investment-grade issuers and only 12 percent of BB-rated issuers)…

…Private equity firms discovered that private credit funds represented an understanding, permissive set of lenders willing to offer debt packages so large and on such terrible terms that no bank would keep them on its balance sheet. If high-yield bonds were the OxyContin of private equity’s debt binge, private credit is its fentanyl. Rising deal prices, dividend recaps, and roll-up strategies are all bad behaviors fueled by private credit…

…Lender protections have been getting progressively weaker. After analyzing just how weak these covenants have become since the financial crisis, Moody’s recently adjusted its estimate of average recovery in the event of default from the historical average of 77 cents on the dollar to 61 cents…

…Today private equity deals represent the riskiest and worst-quality loans in the market. Banks and regulators are growing increasingly worried. Yet massive investor interest in private credit has sent yields on this type of loan lower, rather than higher, as the deteriorating quality might predict. As yields have fallen, direct lenders have cooked up leveraged structures to bring their funds back to the magical return targets that investors demand. Currently, we suspect that a significant number of private equity deals are so leveraged that they can’t pay interest out of cash flow without increasing borrowing. Yet defaults have been limited because private credit funds are so desperate to deploy capital (and not acknowledge defaults). Massive inflows of capital have enabled private lenders to paper over problems with more debt and easier terms.

But that game can’t go on forever.

5. How Does the Stock Market Perform in an Election Year? – Nick Maggiulli

With the U.S. Presidential election set for a rematch in November, many investors are wondering how the U.S. stock market might perform in the months that follow. While predicting the future is never easy, using history as a guide can be useful for understanding how markets might react to a Biden or Trump victory…

…In the seven or so weeks following an election there can be lots of uncertainty around how the future might unfold. But, if we look at how markets actually perform after an election, they are typically pretty average. To start, let’s consider how U.S. stocks (i.e. the S&P 500) have performed from “election day” until the end of the year for each year since 1950. Note that when I say “election day” I mean from the Tuesday after the first Monday in November to year end, regardless of whether there was an actual election…

…while stock performance has varied quite a bit since 1950, U.S. stocks tend to rise slightly following an election (or in the same time period during a non-election year). The biggest exceptions to this were in 2008, when markets declined by nearly 11% from election day to year end, and in 1998, when they increased by almost 10% as the DotCom bubble continued to inflate.

However, if we look at the average performance in election years versus non-election years, all these differences wash out. Plotting the average performance of the 18 election years and 56 non-election years in the data, we see basically no long-term difference in performance:

While the S&P 500 tends to perform worse (on average) in the first few days following the election, there seems to be no lasting impact on stocks through year end. In fact, the average return following election day through December 31 is 2.3% in an Election Year compared to 2.4% in a Non-election Year. In other words, their returns on average are basically the same. The median (50th percentile) return is similar as well, with a 2.9% return in an Election Year compared to 2.4% during a Non-election Year…
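For readers who want to replicate this kind of measurement, here is a minimal sketch of how the "election day to year end" window can be computed. The date rule follows the article's definition (the Tuesday after the first Monday in November, whether or not an election is held); the sample prices are made-up placeholders, not real S&P 500 data:

```python
from datetime import date, timedelta

def election_day(year: int) -> date:
    """Tuesday after the first Monday in November (the U.S. statutory
    election date), regardless of whether an election is held that year."""
    d = date(year, 11, 1)
    while d.weekday() != 0:  # Monday == 0, so advance to the first Monday
        d += timedelta(days=1)
    return d + timedelta(days=1)  # the following Tuesday

def post_election_return(price_on_election_day: float, price_at_year_end: float) -> float:
    """Simple return from 'election day' to December 31."""
    return price_at_year_end / price_on_election_day - 1

def is_election_year(year: int) -> bool:
    """U.S. presidential election years since 1950 are those divisible by 4."""
    return year % 4 == 0

# Hypothetical placeholder prices (election-day close, year-end close)
sample = {
    2008: (1005.75, 903.25),
    2009: (1066.11, 1115.10),
}
for year, (p0, p1) in sample.items():
    print(year, is_election_year(year), round(post_election_return(p0, p1), 4))
```

With real daily closes, averaging `post_election_return` separately over the election-year and non-election-year groups reproduces the comparison in the article.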

…When Trump won the 2016 election to almost everyone’s surprise, many believed that U.S. stocks would crash as a result. Jane Street, a prominent quantitative trading firm, was one of them. After finding a way to get the 2016 election results minutes before the rest of the mainstream media, Jane Street still ended up losing money because they got the market’s reaction wrong. As Michael Lewis recalls in Going Infinite:

What had been a three-hundred-million-dollar profit for Jane Street was now a three-hundred-million-dollar loss. It went from single most profitable to single worst trade in Jane Street history.

This illustrates how difficult it can be to predict the reaction of markets, even for the smartest people in the room…

…Overall, U.S. stocks performed better than average after both Trump and Biden’s election victories. However, with the market increasing by 4% in 2016 and 7% in 2020, Biden is the clear post-election winner.

However, if we look at how U.S. stocks performed throughout the rest of their presidency, it seems like Trump will be the clear winner when all is said and done…

…One of the reasons I love this chart is because it illustrates that U.S. stocks tend to rise regardless of which political party is in office. This suggests that the factors that impact stock prices have less to do with who’s in office than we might initially believe.

Some of you will see the chart above and point out how the only two negative periods occurred when Republican presidents were in office. That is technically correct. However, it is also true that these negative periods occurred immediately after Democratic presidencies. So who’s to blame? The Republicans? The Democrats? Neither? No one knows…

…While the outcome of the 2024 U.S. Presidential election remains uncertain, history suggests that the stock market is likely to perform similarly regardless of who wins. In the short term, markets may react positively or negatively to the election results, but those effects tend to even out over time…

…Ultimately, the key to navigating the uncertainty of an election year is to stay informed and avoid making emotional decisions based on short-term political events. The U.S. economy and stock market have made it through countless political cycles before and will make it through this one as well. So no matter who wins in November, history suggests that staying the course is often the best course of action. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google). Holdings are subject to change at any time.

The Expensive Weighing Machine

Stocks and business fundamentals can diverge wildly in the short run, only to then converge in the long run.

In Pain Before Gain, I shared Walmart’s past business growth and corresponding stock price movement (emphases are new):

From 1971 to 1980, Walmart produced breath-taking business growth. The table below shows the near 30x increase in Walmart’s revenue and the 1,600% jump in earnings per share in that period. Unfortunately, this exceptional growth did not help with Walmart’s short-term return… Walmart’s stock price fell by three-quarters from less than US$0.04 in late-August 1972 to around US$0.01 by December 1974 – in comparison, the S&P 500 was down by ‘only’ 40%. But by the end of 1979 (when inflation in the USA peaked during the 1970s), Walmart’s stock price was above US$0.08, more than double what it was in late-August 1972 (when inflation was at a low in the 1970s)…

…At the end of 1989, Walmart’s stock price was around US$3.70, representing an annualised growth rate in the region of 32% from August 1972; from 1971 to 1989, Walmart’s revenue and earnings per share grew by 41% and 38% per year…

It turns out that in late-August 1972, when its stock price was less than US$0.04, Walmart’s price-to-earnings (P/E) ratio was between 42 and 68… This is a high valuation… Given that Walmart’s stock price in December 1974, after it had sunk by 75% to a low of around US$0.01, carried a P/E ratio of between 6 and 7, the easy conclusion is that it was a mistake to invest in Walmart in August 1972 because of its high valuation. But as can be seen above, Walmart’s business continued to grow and its stock price eventually soared to around US$3.70 near the end of 1989. Even by the end of 1982, Walmart’s stock price was already US$0.48, up more than 10 times from where it was in late-August 1972.
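A simple compound-annual-growth-rate calculation is enough to sanity-check the annualised figures quoted above. The sketch below uses the rounded endpoint prices from the passage (roughly US$0.04 in late-August 1972 and roughly US$3.70 at the end of 1989, about 17.3 years apart); since the passage says the starting price was actually "less than US$0.04", the true figure lands a little higher, toward the ~32% cited:

```python
def cagr(begin_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate between two values."""
    return (end_value / begin_value) ** (1 / years) - 1

# Walmart's split-adjusted price, using the rounded endpoints in the passage
print(f"{cagr(0.04, 3.70, 17.3):.1%}")  # about 30% a year
```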

In When Genius Failed (temporarily)*, I explored a little-discussed aspect of Teledyne’s history (emphasis is from the original passage):

Warren Buffett once said that Singleton “has the best operating and capital deployment record in American business… if one took the 100 top business school graduates and made a composite of their triumphs, their record would not be as good.”

Singleton co-founded Teledyne in 1960 and stepped down as chairman in 1990… According to The Outsiders, a book on eight idiosyncratic CEOs who generated tremendous long-term returns for their shareholders, Teledyne produced a 20.4% annual return from 1963 to 1990, far ahead of the S&P 500’s 8.0% return. Distant Force, a hard-to-obtain memoir on Singleton, mentioned that a Teledyne shareholder who invested in 1966 “was rewarded with an annual return of 17.9 percent over 25 years, or a return of 53 times his invested capital.” In contrast, the S&P 500’s return was just 6.7 times in the same time frame… 

based on what I could gather from Distant Force, Teledyne’s stock price sank by more than 80% from 1967 to 1974. That’s a huge and demoralising decline for shareholders after holding on for seven years, and was significantly worse than the 11% fall in the S&P 500 in that period. But even an investor who bought Teledyne shares in 1967 would still have earned an annualised return of 12% by 1990, outstripping the S&P 500’s comparable annualised gain of 10%. And of course, an investor who bought Teledyne in 1963 or 1966 would have earned an even better return… 

But for the 1963-1989 time frame, based on data from Distant Force, it appears that the compound annual growth rates (CAGRs) for the conglomerate’s revenue, net income, and earnings per share were 19.8%, 25.3%, and 20.5%, respectively; the self-same CAGRs for the 1966-1989 time frame were 12.1%, 14.3%, and 16.0%. These numbers roughly match Teledyne’s returns cited by The Outsiders and Distant Force.

My article The Need For Patience contained one of my favourite investing stories and it involves Warren Buffett and his investment in The Washington Post Company (emphasis is from the original passage):

Through Berkshire Hathaway, he invested US$11 million in WPC [The Washington Post Company] in 1973. By the end of 2007, Berkshire’s stake in WPC had swelled to nearly US$1.4 billion, which is a gain of over 10,000%. But the percentage gain is not the most interesting part of the story. What’s interesting is that, first, WPC’s share price fell by more than 20% shortly after Buffett invested, and then stayed in the red for three years

Buffett first invested in WPC in mid-1973, after which he never bought more after promising Katherine Graham (the then-leader of the company and whose family was a major shareholder) that he would not do so without her permission. The paragraph above showed that Berkshire’s investment in WPC had gains of over 10,000% by 2007. But by 1983, Berkshire’s WPC stake had already increased in value by nearly 1,200%, or 28% annually. From 1973 to 1983, WPC delivered CAGRs in revenue, net income, and EPS of 10%, 15%, and 20%, respectively (EPS grew faster than net income because of buybacks). 

The experiences of Walmart, Teledyne, and WPC are all cases of an important phenomenon in the stock market: their stock price movements were initially detached from their underlying business fundamentals in the short run, before eventually aligning with the passage of time, even when some of them began with very high valuations. They are also not idiosyncratic instances.

Renowned Wharton finance professor Jeremy Siegel – of Stocks for the Long Run fame – penned an article in late-1998 titled Valuing Growth Stocks: Revisiting The Nifty-Fifty. In his piece, Siegel explored the business and stock price performances from December 1972 to August 1998 for a group of US-listed stocks called the Nifty-Fifty. The group was perceived to have bright business-growth prospects in the early 1970s and thus carried high valuations. As Siegel explained, these stocks “had proven growth records” and “many investors did not seem to find 50, 80 or even 100 times earnings at all an unreasonable price to pay for the world’s preeminent growth companies [in the early 1970s].” But in the brutal 1973-1974 bear market for US stocks, when the S&P 500 fell by 45%, the Nifty-Fifty did even worse. For perspective, here’s Howard Marks’ description of the episode in his book The Most Important Thing (emphasis is mine):

In the early 1970s, the stock market cooled off, exogenous factors like the oil embargo and rising inflation clouded the picture and the Nifty Fifty stocks collapsed. Within a few years, those price/earnings ratios of 80 or 90 had fallen to 8 or 9, meaning investors in America’s best companies had lost 90 percent of their money.

Not every member of the Nifty-Fifty saw their businesses prosper in the decades that followed the 1970s. But of those that did, Siegel showed in Valuing Growth Stocks that their stock prices eventually tracked their business growth, and had also beaten the performance of the S&P 500. These are displayed in the table below. There are a few important things to note about the table’s information:

  • It shows the stock price returns from December 1972 to August 1998 for the S&P 500 and five of the Nifty-Fifty identified by Siegel as having the highest annualised stock price returns; December 1972 was the peak for US stocks before the 1973-1974 bear market
  • It shows the annualised earnings per share (EPS) growth for the S&P 500 and the five aforementioned members of the Nifty-Fifty
  • Despite suffering a major decline in their stock prices in the 1973-1974 bear market, members of the Nifty-Fifty whose businesses continued to thrive saw their stock prices beat the S&P 500 and effectively match their underlying business growth in the long run, even when using the market peak in December 1972 as the starting point.
Source: Jeremy Siegel

You may have noticed that all of the examples of stock prices first collapsing then eventually reflecting their underlying business growth that were shared above – Walmart, Teledyne, WPC, and members of the Nifty-Fifty – were from the 1970s. What if this relationship between stock prices and business fundamentals no longer holds now? It’s a legitimate concern. Economies change over time. Financial markets do too.

But I believe the underlying driver for the initial divergence and eventual convergence in the paths that these companies’ businesses and stock prices took in the past is alive and well today. This is because the driver was, in my opinion, the simple but important nature of the stock market: it is a place to buy and sell pieces of a business. This understanding leads to a logical conclusion that a stock’s price movement over the long run depends on the performance of its underlying business. The stock market, today, is still a place to buy and sell pieces of a business, which means the market is still a weighing machine in the long run. This also means that if you had invested a few years ago in a stock with an expensive valuation and have seen its stock price fall, it will likely still be appropriately appraised by the weighing machine in the fullness of time, if its fundamentals remain strong in the years ahead.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 02 June 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 02 June 2024:

1. Pierre Andurand on a Shortage of Cocoa, Surging Copper and the Outlook for Oil – Tracy Alloway, Joe Weisenthal, and Pierre Andurand

Tracy (03:29):

So maybe to begin with, talk to us about how you got interested in cocoa because my understanding is you put a big trade on a long position earlier this year, it paid off massively. But this isn’t a sort of normal type of trade for you. This is something that was a little bit different.

Pierre (03:48):

Yes. Well, generally my background is more in energy trading, but I’ve traded quite a bit of metals as well, a little bit of agricultural products.

But I have one analyst who was very good and told me in January, ‘Pierre, you should look at cocoa.’ So I’m like ‘Okay, I don’t know anything about it, tell me.’

And he gave me a really good presentation that was really interesting. So then we really dug in deep together to really understand the fundamental market. And basically we have a massive supply shortage this year.

I mean, we see production down 17% relative to last year. Most analysts out there have it down 11%, but that’s because they tend to be very conservative. They have lots of clients and they don’t want to worry the world. So they come with relatively conservative estimates.

But really tracking the export from the main exporters, mainly Ivory Coast and Ghana, that represent together about 60% of [the] world’s production. We see basically Ivory Coast exports down 30% year to date, I mean season to date and Ghana down 41%.

So just those two countries together since the start of the season, which is the 1st of October, are down 800,000 tons. And now we have what we call the mid-crop that is starting, but that represents only 20% of the balance of the season for West Africa.

And that’s not going to be enough to really change the deficit that we have this year. So we have a deficit of 800,000 tons from those two countries. And then looking at all the other countries we have, I think there some slightly positive, some are slightly negative, but basically we get to a deficit of 800,000 tons this year. And so that’s the first time we have such, you know, a decline in supply and that’s very hard to make it fit.

So at first you eat into current inventories until you run out of inventories and then the price can go anywhere.

So when we look at, okay, what makes the price of cocoa, right? It’s always about supply versus demand. But what has been capping the price between $2,500 a ton and $3,000 a ton, it was not demand because demand is extremely inelastic. I mean you can study that historically when you have a recession or not, when prices go up a lot or not. I mean demand generally goes up.

And that’s because the amount, in dollar terms, that people consume in cocoa is very small. I mean, I did a back of the envelope calculation the other day. At basically $10,000 a ton, even though that’s four times the recent historical price, out of a market of 5 million tons of demand per year, you have like 8 billion people on the planet, so on average it means that people consume 1.7 grams of cocoa per day, which at $10,000 a ton represents 1.7 cents per day. Okay, that’s the average person. Many people eat nothing and a few eat 10 times that amount…
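Andurand's back-of-envelope numbers are easy to verify. Using the round figures he quotes in the interview (5 million tons of annual demand, 8 billion people, $10,000 a ton), a few lines of arithmetic reproduce the ~1.7 grams and ~1.7 cents per person per day:

```python
# Back-of-envelope check of the per-person cocoa numbers in the interview.
# All inputs are the round figures Andurand quotes, not precise data.
world_demand_tonnes = 5_000_000      # annual cocoa demand, tonnes
population = 8_000_000_000           # people on the planet
price_per_tonne = 10_000             # USD, the elevated price discussed

grams_per_person_per_day = world_demand_tonnes * 1_000_000 / population / 365
cents_per_person_per_day = grams_per_person_per_day * price_per_tonne / 1_000_000 * 100

print(round(grams_per_person_per_day, 2))   # about 1.71 grams
print(round(cents_per_person_per_day, 2))   # about 1.71 US cents
```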

…Pierre (06:56):

But let’s say you eat even one full tablet, so 125 grams a day every single day for the whole year, which is quite a lot, of high-cocoa-content chocolate (with milk chocolate, you have less than 10% cocoa in it).

So the price can go up 10 times, your tablet is only going to double in price. It’s not going to react very much to the cocoa price. But if you take a high content, high chocolate content bar, like a tablet, 125 grams, that means that you probably have [a] maximum of 50 grams of cocoa beans equivalent in it. I mean it’s probably a lot less.

Then you get to an expense of $14 per month at current prices, which is an increase of $10 per month relative to when we had a more normal price. So it means that demand, like for more reasonable chocolate lovers, that increase in [the] price in cocoa just corresponds to $2 to $5 per month.

So people are not going to eat less chocolate because of that. So it means that prices really are capped by the amount of supply you get. So if you can’t get enough supply, the price can go up a lot until we get more supply.
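The bar arithmetic can be checked the same way. The 50 grams of cocoa-bean equivalent per 125-gram bar is Andurand's own stated upper bound, and the ~$2,500/ton baseline is the old price cap he mentions earlier; the result lands close to the $14-per-month and roughly $10-per-month-increase figures he cites:

```python
# Checking the chocolate-bar arithmetic: a 125 g dark bar with, at most,
# roughly 50 g of cocoa-bean equivalent, eaten every day for a month.
beans_per_bar_g = 50
bars_per_month = 30

def monthly_cocoa_cost(price_per_tonne: float) -> float:
    """USD spent per month on the cocoa content alone."""
    return beans_per_bar_g * bars_per_month * price_per_tonne / 1_000_000

high = monthly_cocoa_cost(10_000)   # at the elevated price
low = monthly_cocoa_cost(2_500)     # at the old ~$2,500/ton cap
print(high, low, high - low)        # 15.0 3.75 11.25
```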

And when do we get more supply? Well, that in part [is] due to the weather, if you have much better weather then you might get more supply of cocoa beans the next year. But we have some issues that are also structural.

So when we look at the reasons for this large decline in production this year, I mean a lot of the reasons are actually structural. I mean we can look at four reasons why cocoa bean production has gone down a lot this year.

First I should give a little bit of background of why cocoa is so concentrated in West Africa. I mean it’s mainly because it requires very specific temperature, rainfall and humidity conditions. And that’s why most of the production is concentrated around a certain latitude — so 70% in West Africa and then you have 21% in mainly Latin America and 5% in Asia and Oceania.

So the main reasons why we lost a lot of production this year is number one weather. So some of it [is] due to El Nino, we had basically a period of time when it was too hot and a period of time when we had way too much rain.

Second is climate change. So climate change is every year shifting the weather patterns generally unfavorably for cocoa productions. Then you have two diseases, you have one called the Black Spot disease that comes from the fungus and it occurs mainly during the rainy season. It’s spread by rain splash, so basically it can’t grow when it’s dry.

And then you have a virus called the Swollen Shoot disease. It’s not a new disease. It was discovered in 1936. It’s transmitted by mealybugs, but it decreases cocoa yields a lot. So basically a tree that has that Swollen Shoot disease loses 25% yield within the first year and 50% within two years, and the tree dies within three to four years. And we’ve had actually a spread of that disease over the last year.

And then also we had less usage of fertilizers, mainly in the Ivory Coast, due to high fertilizer prices and also shortages due to the Russian invasion of Ukraine. So everything is linked. So some of it might be solved if we get better weather. I mean for next year we should have La Nina and not El Nino, so that should help at the margin.

But we still have issues with climate change. We still have issues with Black Spot disease and Swollen Shoot disease and there’s no indication that we get more usage of fertilizers in [the] Ivory Coast because actually the farmers are still getting relatively low prices and they’re still struggling to make ends meet. So a lot of those supply issues are actually structural…

…Joe (12:09):

So what did your analyst see? Or how was your analyst able to see something in the supply and demand situation that he felt, and you felt, was not being identified by the analysts who cover this closely?

Pierre (12:23):

I think it’s mainly an understanding of how much prices have to move to balance the market. You know, sometimes people can trade that market for like 20 years. They’ve been used to a range of prices and they believe, okay, the top of the range is the high price for example.

But they don’t really ask themselves what makes that price, right? And sometimes taking a step back can help. I mean what makes the price is mainly the fact that in the past you would have the supply response if prices were going up. But if now you don’t get the supply response, or the supply response takes four or five years, then you need to have a demand response.

And a lot of people look at prices in nominal terms. So you hear people saying ‘Oh, we are at all-time high prices in cocoa,’ but that’s because they look at prices in nominal terms. [The] previous high in 1977 was $5,500-something a ton in 1977 dollars, which is equivalent to $28,000 a ton in today’s dollars.

So we are still very far from previous highs. And so you have to look at a bit more history and understand in the past how prices reacted to a shortage, how long it took for a shortage to actually resolve itself, and what’s different today.

So there’s a ratio that we look at that most people look at, it’s actually the inventory to grindings ratio. So it’s a measure of inventory to demand, what we call grinding is basically industrial companies that take the cocoa beans and they want to make chocolate with it. So it’s a process and some of them make the end product chocolate directly. Some of them sell back the product to other chocolate makers.

And so basically a typical grinder would take cocoa beans and make cocoa butter and powder with it. And the prices of both those elements also went up even more than cocoa beans, which means that actually we probably had some destocking everywhere in the chain.

So it looks like demand, when we look at the chocolate makers, the end demand for chocolate didn’t go down at all, it looks to be flat on the year. Grindings look to be down three, three and half percent this year, despite the fact that the end demand is the same in volume, which means that they’ve been destocking cocoa beans actually.

And so we had destocking everywhere — at the end chocolate level, at the cocoa beans, at the cocoa butter and cocoa powder level. So we had this destocking everywhere on the chain and now we have the largest deficit ever on top of two previous years of deficit. And it looks like next year we will have a deficit.

So we’re in a situation where we might actually run out of inventories completely. I mean this year we think we will end up with an inventory to grinding ratio — so inventory at the end of the season — of 21%. For the last 10 years we’ve been between 35% and 40% roughly. At the previous peak in 1977 we were at 19% and that’s what drove us to $28,000 a ton in today’s dollars.

If we have another deficit next year, then we might go down to 13%. So I don’t think it’s actually possible. That’s when you really have real shortage of cocoa beans, you can’t get it and that’s when the price can really explode. And so understanding that you have to slow down demand and we know that demand can’t really be slowed.
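To see how another deficit would push the stocks-to-grindings ratio from 21% towards 13%, here is an illustrative calculation. The grindings base and the size of next year's deficit are assumed round numbers chosen to reproduce Andurand's figures, not published data:

```python
# Illustrative sketch of how a further deficit moves the stocks-to-grindings
# ratio Andurand tracks. The grindings base and next-year deficit below are
# assumed round numbers, not published figures.
grindings_tonnes = 5_000_000                 # annual demand proxy from the interview
end_inventory = 0.21 * grindings_tonnes      # ~21% ratio at the end of this season
assumed_next_deficit = 400_000               # hypothetical further shortfall, tonnes

next_ratio = (end_inventory - assumed_next_deficit) / grindings_tonnes
print(round(next_ratio, 2))  # 0.13, the level he calls a real shortage
```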

So that’s when you can have an explosion [in price]. And remember that these commodity futures are actually physically settled. So if somebody wants to take delivery, the futures have to converge with the price of the physical. If you have no physical and somebody wants to take delivery, the price can go anywhere.

So it’s a dangerous commodity to short, right? If you have no physical against it. And actually sometimes we read news that the funds have been pushing cocoa prices. It’s actually completely untrue, because the funds have been selling since February. They actually went from a length of 175,000 lots, so that’s 1.75 million tons of cocoa length, I think it was around September last year on average, or a bit earlier, to 28,000 lots, or 280,000 tons, at the moment.

So they sold more than 80% of their length actually. And the people who’ve been buying the futures from the funds, it’s producers because they’re producing a lot less than they expected.

So what has been happening in the cocoa market is that you had a reduction of what we call the open interest, where the longs would reduce their length and the shorts would reduce their shorts. And then we get into a market where you have less liquidity because you have less exposure, you have less longs and less shorts, and then the volatility increases.

So in the past, when people were comfortable having, let’s say, a 100-lot position, now, because it moves more than 10 times more than in the past, they’re going to have like a 10-lot position, right? So everybody’s reducing their positions because we had a massive move and we have a massive deficit, and because of the increased volatility, we have less activity. And that’s what makes the market more volatile as well.

2. The World’s Most Secret Book About Making a Fortune in Mining – Swen Lorenz

For years, I have been trying to find a copy of an ultra-rare book published in 2008.

It told the inside story of a few mining entrepreneurs who built a company from zero and sold it for a staggering USD 2,500,000,000 (2.5 BILLION) in cash just 674 days later. That’s a quick fortune earned, if ever there was one!

The company was listed on the stock market, and public investors were taken along for some of the ride. In 2006/07, this was the single best British company to own stock of.

Somehow, though, the company’s insiders seem to have regretted publishing their detailed account. The book strangely disappeared from the market shortly after it was published. Curiously, there is ample evidence that an active effort was made to erase the book from history…

…The book in question is “UraMin – A team enriched. How to build a junior uranium mining company”.

Junior miners are companies that are still looking for resources, rather than producing them. As most of my readers will know, they are among the most speculative ventures one can invest in. About 99% of them cause most investors to lose their entire investment. The remaining 1% regularly end up making investors 10, 20, or even 100 times their money.

UraMin was primarily the brainchild of Stephen Dattels, a Canadian mining entrepreneur with a decade-long track record. The book describes the genesis of UraMin from his own perspective and that of his two partners, Ian Stalker and Neil Herbert. It was, for all intents and purposes, a real insiders’ account of the incredible success story.

UraMin produced oodles of capital gains. It was a lucrative investment not just for its pre-IPO investors, but also for those who bought into it through shares acquired on the open market post-IPO…

…It’s no surprise that the book starts by describing just how “Down and Out” the market for uranium-related investments was at the time.

At the turn of 2004/05, you would have been hard-pressed to find any investors interested in uranium. The price for the metal had been in a 26-year (!) bear market. From its 1977 peak, it had been downhill ever since. There were barely any publicly listed companies operating in the uranium industry.

You would have struggled to find anyone who even understood what the metal was used for, and how it was used.

Or as Ian Stalker, the CEO of UraMin, is quoted in the book:

“A meeting with potential investors could literally take hours. … First, it required a full explanation of what uranium is used for (it isn’t used for ‘bombs’), a run-through of the fuel cycle (enrichment and so on), the safety record of nuclear reactors, long-term disposal issues and the balance of supply and demand. We were lucky if we managed to talk for 10 minutes about the company.”

It was not an opportunity that the mass of investors would have jumped at when it was first presented.

However, all the clues were there. At the end of 2004/05, three crucial developments had already taken place, all pointing towards an imminent reversal of fortunes:

  • The price of uranium had started to creep up. It went from USD 10/lb in early 2003 to USD 20/lb by the end of the following year (which was still far below the late-1970s high of USD 115/lb).
  • Existing stockpiles of the metal, which had soared during the 1990s because of a decommissioning of Soviet nuclear missiles, had dwindled to virtually zero. The oversupply that had depressed the price for so long was gone.
  • A soaring oil price, which at the time was up more than 10 times compared to its early-2000s low, provided increasing demand for cheap nuclear energy. It was only a matter of time before investment would flow towards the much cheaper source of energy.

Subsequently, the uranium price went through the roof…

…Put more bluntly, there are occasions when a management team has to concede that everyone is better off if it puts the company up for sale – which is difficult because it usually leads to the entire management team and board losing their jobs!

Also, who wants to leave a party when things are the most fun? Making the decision to call it quits and focus on maximising a buyout price for a company is an extraordinarily hard decision to take. However, it is quite regularly the one decision that a board really should have the guts and the sense of realism to take.

I wasn’t surprised to read that Dattels and his colleagues had that rare quality of knowing when to quit:

“The trend towards a smaller group of larger uranium companies had significant repercussions for UraMin, something that its management realised early on. “The sector was not a large one – it had already seen several significant mergers and more were rumoured,” notes Neil Herbert. “Despite the rapid progress we had made, we were in danger of becoming a relatively small operator.

On 19 February 2007, Reuters reported that UraMin was planning a strategic review of its assets in light of the recent consolidation of the sector.

In effect, analysts believed, the company had just put itself up for sale.”

Companies can put themselves up for sale by hiring an investment bank and making a public announcement, or they can de facto put themselves up for sale by feeding information into their industry’s rumour mill.

Steve Dattels decided that “we should take the initiative and evaluate the merger possibilities rather than wait for the telephone call.”

UraMin hired advisors and went through an official process of allowing prospective acquirers access to its internal information.

Following the process of inviting bids, the company came to an agreement with the French nuclear power company AREVA. In June 2007, UraMin’s management team agreed to a takeover offer that valued the company at USD 2.5bn. The entire purchase price was payable in cash.

Investors who had bought in at the bottom of GBp 50 per share made 8 times their money within just 12 months.

One of the earliest institutional backers of the venture reportedly made 22 times their money in just 24 months.

3. Why Utilities Are Lighting Up the Stock Market – Jason Zweig

As Bespoke Investment Group, a research firm, pointed out this week, three of this year’s five best-performing stocks in the S&P 500 are utilities: Vistra, Constellation Energy and NRG Energy. Vistra, up 143%, has even outperformed the king of AI itself, Nvidia; Constellation, up 85%, is barely behind it…

…The business of providing electricity hasn’t grown in the past couple of decades as conservation and more-efficient technology have reduced consumption. The U.S. generated slightly less electricity in 2021 than it had in 2007, according to the federal Energy Information Administration—even though the economy grew more than 3% annually over that period.

Now, however, the need for energy is finally expanding. On their April 23 earnings-announcement call, executives at NextEra estimated that electricity demand from data centers alone would grow 15% a year through the end of the decade.

AI isn’t the only reason utilities have heated up so fast. The rapid increase in demand for electricity nationwide comes from three main sources, says Maria Pope, CEO of Portland General Electric, Oregon’s biggest utility.

One is the revival of domestic manufacturing after decades of moving offshore. Another is the boom in semiconductor production, boosted by government support. But the expansion of data centers, “driven by the insatiable appetite of AI,” is the fastest-growing source of industrial demand, says Pope.

Jay Rhame, chief executive of Reaves Asset Management, which manages about $3 billion in utility stocks, thinks the only historical parallel is the boom in electricity generation that followed the widespread adoption of air conditioning in the 1960s and 1970s.

4. Adobe CEO Shantanu Narayen is confident we’ll all adapt to AI – Nilay Patel and Shantanu Narayen

If you are Microsoft or Google or someone else, one of the reasons this paradigm shift excites you is because it lets you get past some gatekeepers in mobile, it lets you create some new business models, it lets you invent some new products maybe that shift some usage in another way. I look at that for them and I say: Okay, I understand it. I don’t quite see that paradigm shift for Adobe. Do you see that we’re going to have to invent a new business model for Adobe the way that some of the other companies see it?

I think any technology shift has the same profound impact in terms of being a tailwind. If you think about what Microsoft does with productivity, and if you think about what Adobe does with creativity, one can argue that creativity is actually going to be more relevant to every skill moving forward. So I do think it has the same amount of profound implication for Adobe. And we’ve innovated in a dramatic way. We like to break up what we are doing with AI in terms of what we do at the interface layer, which is what people use to accomplish something; what we’re doing with foundation models; and what models are we creating for ourselves that are the underlying brain of the things that we are attempting to do, and what’s the data? I think Adobe has innovated across all three. And in our different clouds — we can touch on this later — Creative Cloud, Document Cloud, and Experience Cloud, we’re actually monetizing in different ways, too. So I am really proud of both the innovation on the product side and the experimentation on the business model side.

The reason I asked that question that way, and right at the top, is generative AI. So much of the excitement around it is letting people who maybe don’t have an affinity for creative tools or an artistic ability make art. It further democratizes the ability to generate culture, however you wish to define culture. For one set of companies, that’s not their business, and you can see that expands their market in some way. The tools can do more things. Their users have more capabilities. The features get added.

For Adobe, that first step has always been serving the creative professional, and that set of customers actually feels under threat. They don’t feel more empowered. I’m just wondering how you see that, in the broadest possible sense. I am the world’s foremost, “What is a photo?” philosophical handwringer, and then I use AI Denoise in Lightroom without a second’s hesitation, and I think it’s magic. There’s something there that is very big, and I’m wondering if you see that as just a moment we’re all going to go through or something that fundamentally changes your business.

Whether you’re a student, whether you’re a business professional, or whether you’re a creative, we like to say at Adobe that you have a story to tell. The reality is that there are way more stories that people want to tell than skills that exist to be able to tell that story with the soul that they want and the emotion that they want. I think generative AI is going to attract a whole new set of people who previously perhaps didn’t invest the time and energy into using the tools to be able to tell that story. So, I think it’s going to be tremendously additive in terms of the number of people who now say, “Wow, it has further democratized the ability for us to tell that story,” and so, on the creative side, whether you’re ideating, whether you’re trying to take some picture and fix it but you don’t quite know how to do it.

When people have looked at things like Generative Fill, their jaws drop. What’s amazing to us is when, despite decades of innovation in Photoshop, something like Generative Fill captures the imagination of the community — and the adoption of that feature has been dramatically higher than any other feature that we’ve introduced in Photoshop. When layers first came out, people looked at it, and their jaws dropped. It just speaks to how much more we can do for our customers to be able to get them to tell their story. I think it’s going to be dramatically expansive…

I want you to talk about the distribution side. This is the part that I think is under the most pressure. Content creation is getting easier and more democratic. However you feel about AI, it is easier to make a picture or a video than it’s ever been before. On the distribution side, the web is being choked by a flood of AI content. The social platforms, which are closed distribution, are also being flooded with AI content. How do you think about Adobe living in that world? How do you think about the distribution problem? Because it seems like the problem we all have to solve.

You’re absolutely right in that, as the internet has evolved, there’s what you might consider open platforms and closed platforms. But we produce content for all of that. You pointed out that, whether it’s YouTube, TikTok, or just the open internet, we can help you create content for all of that. I don’t know that I’d use the word “choked.” I used the word “explosion” of content certainly, and “flooded” also is a word that you used. It’s a consequence. It’s a consequence of the access. And I do think that for all the companies that are in that business, even for companies that are doing commerce, I think there are a couple of key hypotheses that when they do, they become lasting platforms. The first is transparency of optics of what they are doing with that data and how they’re using that data. What’s the monetization model, and how are they sharing whatever content is being distributed through their sites with the people who are making those platforms incredibly successful?

I don’t know that I worry about that a lot, honestly. I think most of the creators I’ve spoken to like a proliferation of channels because they fundamentally believe that their content will be differentiated on those channels, and getting exposure to the broadest set of eyeballs is what they aspire to. So I haven’t had a lot of conversations with creators where they are telling us, as Adobe, that they don’t like the fact that there are more platforms on which they have the ability to create content. They do recognize that it’s harder, then, for them to differentiate themselves and stand out. Ironically, that’s an opportunity for Adobe because the question is, for that piece of content, how do you differentiate yourself in the era of AI if there’s going to be more and more lookalikes, and how do you have that piece of content have soul? And that’s the challenge for a creative.

How do you think about the other tension embedded in that, which is that you can go to a number of image generators, and if someone is distinctive enough, you can say, “Make me an image in the style of X,” and that can be trained upon and immediately lifted, and that distinction goes to zero pretty fast. Is that a tension that you’re thinking about?

Given the role that Adobe plays in the content creation business, I think we take both the innovation angle and the responsibility angle very seriously. And I know you’ve had conversations with Dana [Rao, Adobe counsel] and others about what we are doing with content credentials and what we are doing with the FAIR Act. If you look at Photoshop, we’re also taking a very thoughtful approach about saying when you upload a picture for which you want to do a structure match or style match, you bear the responsibility of saying you have access to that IP and license to that IP in order to do that.

So I can interpret your questions in one of two ways. One is: how do we look at all of the different image generators that have happened? In that case, we are both creating our own image generator, but at the NAB Show, we showed how we can support other third parties. It was really critical for us to sequence this by first creating our own image model. Both because we had one that was designed to be commercially safe. It respected the rights of the creative community because we have to champion it. But if others have decided that they are going to use a different model but want to use our interfaces, then with the appropriate permissions and policies, we will support that as well.

And so I interpret your questions in those two ways, which is we’re taking responsibility in terms of when we provide something ourselves, how are we making sure that we recognize IP because it is important, and it’s people’s IP. I think at some point, the courts will opine on this, but we’ve taken a very designed-to-be commercially safe approach where we recognize the creator’s IP. Others have not. And the question might be, well, why are you supporting them in some of our products? And a lot of our customers are saying, “Well, we will take the responsibility, but please integrate this in our interfaces,” and that’s something that we are pushing as third-party models.

It bears mentioning that literally today, as we’re speaking, an additional set of newspapers has sued OpenAI for copyright infringement. And that seems like the thing that is burbling along underneath this entire revolution is, yeah, the courts are going to have to help us figure this out. That seems like the very real answer. I did have a long conversation with Dana [Rao] about that. I don’t want to sit in the weeds of that. I’m just wondering for you as the CEO of Adobe, where is your level of risk? How risky do you think this is right now for your company?

I think the approach that we’ve taken has shown just tremendous leadership by saying … Look at our own content. We have a stock business where we have rights to train the models based on our stock business. We have Behance, and Behance is the creative professional social site for people sharing their images. While that’s owned by Adobe, we did not train our Firefly image models based on that because that was not the agreement that we had with people who do it.

I think we’ve taken a very responsible way, so I feel really good about what we are doing. I feel really good about how we are indemnifying customers. I feel really good about how we are doing custom models where we allow a person in the media business or the CPG business to say, “We will upload our content to you Adobe, and we will create a custom model for us that only we can use, what we have rights for.” So, we have done a great job. I think other companies, to your point, are not completely transparent yet about what data they use and [if] they scrape the internet, and that will play out in the industry. But I like the approach that we’ve taken, and I like the way in which we’ve engaged with our community on this.

It’s an election year. There are a lot of concerns about misinformation and disinformation with AI. The AI systems hallucinate a lot. It’s just real. It’s the reality of the products that exist today. As the CEO of Adobe, is there a red line of capability that you won’t let your AI tools cross right now?

To your point, I think it’s something like 50 percent of the world’s population over a 12-month period is going to the polls, including the US and other major democracies in the world. And so, we’ve been actively working with all these governments. For any piece of content that’s being created, how does somebody put their digital signature on what the provenance of that content was? Where did it get created? Where did it get consumed? We’ve done an amazing job of partnering with so many companies in the camera space, in the distribution of content space, in the PC space to all say we need to do it. We’ve also now, I think, made the switch associated with, how do you visually identify that there is this watermark or this digital signature about where the content came from?

I think the unsolved problem to some degree is how do you, as a society, get consumers to say, “I’m not going to trust any piece of content until I see that content credential”? We’ve had nutrition labels on food for a long time — this is the nutrition label on a piece of content. Not everybody reads the nutrition label before they eat whatever they’re eating, so I think it’s a similar thing, but I think we’ve done a good job of acting responsibly. We’ve done a great job of partnering with other people. The infrastructure is there. Now it’s the change management with society and people saying, “If I’m going to go see a piece of video, I want to know the provenance of that.” The technology exists. Will people want to do that? And I think that’s—

The thing everyone says about this idea is, well, Photoshop existed. You could have done this in Photoshop. What’s the difference? That’s you. You’ve been here through all these debates. I’m going to tell you what you are describing to me sounds a little bit naive. No one’s going to look at the picture of Mark Zuckerberg with the beard and say, “Where’s the nutrition label on that?” They’re going to say, “Look at this cool picture.” And then Zuck is going to lean into the meme and post a picture of his razor. That’s what’s happening. And that’s innocent. A bunch of extremely polarized voters in a superheated election cycle is not going to look at a nutrition label. It just doesn’t seem realistic. Are you saying that because it’s convenient to say, or do you just hope that we can get there?

I actually acknowledge that the last step in this process is getting the consumer to care and getting the consumer to care [about] pieces of information that are important. To your point again, you had a couple of examples where some of them are in fun and in jest and everybody knows they’re in fun and jest and it doesn’t matter. Whereas others are pieces of information. But there is precedence to this. When we all transacted business on the internet, we said we want to see that HTTPS. We want to know that my credit card information is being kept securely. And I agree with you. I think it’s an unsolved problem in terms of when consumers will care and what percentage of consumers will care. So, I think our job is the infrastructure, which we’ve done. Our job is educating, which we are doing. But there is a missing step in all of this. We are going into this with our eyes open, and if there are ideas that you have on what else we can do, we’re all ears…

Let’s talk about PDF. PDF is an open standard. You can make a PDF pretty much anywhere all the time. You’ve built a huge business around managing these documents. And the next turn of it is, as you described, “Let an AI summarize a bunch of documents, have an archive of documents that you can treat almost like a wiki, and pull a bunch of intelligence out of it.” The challenge is that the AI is hallucinating. The future of the PDF seems like training data for an AI. And the thing that makes that really happen is the AIs have to be rock-solid reliable. Do you think we’re there yet?

It’s getting better, but no. Even the fact that we use the word hallucinate. The incredible thing about technology right now is we use these really creative words that become part of the lexicon in terms of what happens. But I think we’ve been thoughtful in Acrobat about how we get customer value, and it’s different because when you’re doing a summary of it and you can point back to the links in that document from which that information was gleaned, I think there are ways in which you provide the right checks and balances. So, this is not about creation when you’re summarizing and you’re trying to provide insight and you’re correlating it with other documents. It will get better, and it’ll get better through customer usage. But it’s a subset of the problem of all hallucinations that we have in images. And so I think in PDF, while we’re doing research fundamentally in all of that, I think the problems that we’re trying to solve immediately are summarization — being able to use that content and then create a presentation or use it in an email or use it in a campaign. And so I think for those use cases, the technology is fairly advanced.

There’s a thing I think about all the time. An AI researcher told me this a few years ago. If you just pull the average document off the average website, the document is useless. It’s machine-generated. It’s a status update for an IoT sensor on top of a light pole. Statistically, that is the vast majority of all the documents on the internet. When you think about how much machine-generated documentation any business makes, the AI problem amps it up. Now I’m having an AI write an email to you; you’re having an AI summarize the email for you. We might need to do a transaction or get a signature. My lawyer will auto-generate some AI-written form or contract. Your AI will read it and say it’s fine. Is there a part where the PDF just drops out of that because it really is just machines talking to each other to complete a transaction and the document isn’t important anymore?

Well, I think this is so nascent that we’ll have different kinds of experiences. I’ll push back first a little — the world’s information is in PDF. And so if we think about knowledge management of the universe as we know it today, I think the job that Adobe and our partners did to capture the world’s information and archive it [has] been a huge societal benefit that exists. So you’re right in that there are a lot of documents that are transient that perhaps don’t have that fundamental value. But I did want to say that societies and cultures are also represented in PDF documents. And that part is important. I think — to your other question associated with “where do you eliminate people even being part of a process and let your computer talk to my computer to figure out this deal” — you are going to see that for things that don’t matter, and judgment will always be about which ones of those matter. If I’m making a big financial investment, does that matter? If I’m just getting an NDA signed, does that matter? But you are going to see more automation I think in that particular respect. I think you’re right.

The PDF to me represents a classic paradigm of computing. We’re generating documents. We’re signing documents. There are documents. There are files and folders. You move into the mobile era, and the entire concept of a file system gets abstracted. And maybe kids, they don’t even know what file systems are, but they still know what PDFs are. You make the next turn. And this is just to bring things back to where we started. You say AI is a paradigm shift, and now you’re just going to talk to a chatbot and that is the interface for your computer, and we’ve abstracted one whole other set of things away. You don’t even know how the computer is getting the task done. It’s just happening. The computer might be using other computers on your behalf. Does that represent a new application model for you? I’ll give you the example: I think most desktop applications have moved to the web. That’s how we distribute many new applications. Photoshop and Premiere are the big stalwarts of big, heavy desktop applications at this point in time. Does the chatbot represent, “Okay, we need yet another new application model”?

I think you are going to see some fundamental innovation. And the way I would answer that question is first abstracting the entire world’s information. It doesn’t matter whether it was in a file on your machine, whether it was somewhere on the internet, and being able to have access to it and through search, find the information that you want. You’re absolutely right that the power of AI will allow all of this world’s information to come together in one massive repository that you can get insight from. I think there’s always going to be a role though for permanence in that. And I think the role of PDF in that permanence aspect of what you’re trying to share or store or do some action with or conduct business with, I think that role of permanence will also play an important role. And so I think we’re going to innovate in both those spaces, which is how do you allow the world’s information to appear as one big blob on which you can perform queries or do something interesting? But then how do you make it permanent, and what does that permanence look like, and what’s the application of that permanence? Whether it’s for me alone or for a conversation that you and I had, which records that for posterity?

I think both of these will evolve. And it’s areas that — how does that document become intelligent? Instead of just having data, it has process and workflow associated with it. And I think there’s a power associated with that as well. I think we’ll push in both of these areas right now.

Do you think that happens on people’s desktops? Do you think it happens in cloud computing centers? Where does that happen?

Both, and on mobile devices. Look at a product like Lightroom. You talked about Denoise in Lightroom earlier. When Lightroom works exactly the same across all these surfaces, that power in terms of people saying, oh my God, it’s exactly the same. So I think the boundaries of what’s on your personal computer and what’s on a mobile device and what’s in the cloud will certainly blur because you don’t want to be tethered to a device or a computer to get access to whatever you want. And we’ve already started to see that power, and I think it’ll increase because you can just describe it. It may not have that permanent structure that we talked about, but it’ll get created for you on the fly, which is, I think, really powerful.

Do you see any limits to desktop chip architectures where you’re saying, “Okay, we want to do inference at scale. We’re going to end up relying on a cloud more because inference at scale on a mobile device will make people’s phones explode”? Do you see any technical limitations?

It’s actually just the opposite. We had a great meeting with Qualcomm the other day, and we talked to Nvidia and AMD and Qualcomm. I think a lot of the training, that’s the focus that’s happening on the cloud. That’s the infrastructure. I think the inference is going to increasingly get offloaded. If you want a model for yourself based on your information, I think even today with a billion parameters, there’s no reason why that just doesn’t get downloaded to your phone or downloaded to your PC. Because otherwise, all that compute power that we have in our hands or on our desktop is really not being used. I think the models are more nascent in terms of how you can download it and offload that processing. But that’s definitely going to happen without a doubt. In fact, it’s already happening, and we’re partnering with the companies that I talked about to figure out how that power of Photoshop can actually then be on your mobile device and on your desktop. But we’re a little early in that because we’re still trying to learn, and the model’s getting on the server.

5. The S&P 500 vs. the U.S. Economy – Ben Carlson

The S&P 500 is a big part of the U.S. economy but there are plenty of differences between the stock market and the economy.

For instance, the technology sector has an outsized impact on S&P 500 earnings growth over time:…

…Depending on the time frame, the tech sector can make up the majority of both earnings gains and losses. The same is true of sales:…

…The BEA estimates tech’s contribution to GDP to be 10%. That’s still close to $3 trillion, but the economy is far more diversified and spread out than the stock market.

A decent chunk of sales for S&P 500 companies also comes from outside our borders:…

…The S&P 500 is a U.S. index, but it comprises global corporations…

…S&P 500 companies are enormous but the majority of firms with $100 million or more in sales are private companies:…

…S&P 500 companies account for roughly 1 in 5 jobs in the United States:…

…But these corporations are insanely efficient and profitable, accounting for half of the profits in America:


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life.  We currently have a vested interest in Adobe, Apple, and Tencent. Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI (2024 Q1)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2024 Q1 earnings season.

The way I see it, artificial intelligence (or AI), really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

With the latest earnings season for the US stock market – for the first quarter of 2024 – coming to its tail-end, I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series. For the older commentary:

With that, here is the latest commentary, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb has been using AI for a long time and has made a lot of progress in the last 12 months, including (1) a computer vision AI model trained with 100 million photos that allows hosts to organise all their photos by room, which leads to higher conversion rates, (2) an AI-powered feature for hosts to reply to guests quickly, and (3) a reservation screening technology

We’ve been using AI for a long time. In the last 12 months, we’ve made a lot of progress. I’ll just give you 3 examples of things we’ve done with AI. We made it easier to host. We have a computer vision model that we trained with 100 million photos, and that allows hosts to — like the AI model to organize all their photos by room. Why would you want to do this? Because this increases conversion rate when you do this. We launched last week AI-powered quick replies for hosts. So basically, it predicts the right kind of question or answer for a host to pre-generate to provide to guests. And this has been really helpful. And then we’ve made a really big impact on reducing parties in Airbnb with a reservation screening technology.

Airbnb’s management is going much bigger on generative AI; management thinks the biggest near-term impact generative AI can have on Airbnb’s business is in customer service; management thinks that generative AI in the realm of customer service can benefit Airbnb a lot more than hotels and online travel agents (OTAs); AI can solve difficult customer service challenges for Airbnb

So now we’re going much bigger on generative AI. I think we’re going to see the biggest impact is going to be on customer service in the near term. I think more than hotels, probably even more than OTAs, Airbnb will benefit from generative AI. And the reason why, it’s just a simple structural reason. We have the most, like, varied inventory. We don’t have any SKUs, and we’re an incredibly global platform. So it’s a very difficult customer service challenge. But imagine an AI agent that can actually, like, read a corpus of 1,000 pages of policies and be able to help adjudicate and help a customer service agent help a guest from Germany staying with a host in Japan. It’s a very difficult problem that AI can really supplement.

Airbnb’s management wants to bring AI capabilities from customer service to search and to the broader experience; the end game is to provide an AI-powered concierge

Over time, we’re going to bring the AI capabilities from customer service to search and to the broader experience. And the end game is to provide basically an AI-powered concierge. 

Alphabet (NASDAQ: GOOG)

Alphabet’s management gave a reminder that Alphabet has been an AI-first company since 2016; Alphabet started building TPUs (tensor processing units) in 2016

We’ve been an AI-first company since 2016, pioneering many of the modern breakthroughs that power AI progress for us and for the industry…

… You can imagine we started building TPUs in 2016, so we’ve definitely been gearing up for a long time.

Alphabet’s management rolled out Gemini 1.5 Pro in February, a foundational AI model which has a breakthrough in long context understanding and multimodal capabilities; Gemini 1.5 Pro has been embraced by developers and enterprise customers in a wide range of use cases

In February, we rolled out Gemini 1.5 Pro, which shows dramatic performance enhancements across a number of dimensions. It includes a breakthrough in long context understanding, achieving the longest context window of any large-scale foundation model yet. Combining this with Gemini’s native multimodal understanding across audio, video, text, code and more, it’s highly capable. We are already seeing developers and enterprise customers enthusiastically embrace Gemini 1.5 and use it for a wide range of things.

Alphabet’s management thinks that the company has the best infrastructure for AI; Gemini’s training and inference is done with Alphabet’s custom TPU (tensor processing unit) chips; Google Cloud offers the latest generation of Nvidia GPUs (graphics processing units) and Alphabet’s own TPUs

We have the best infrastructure for the AI era… Our data centers are some of the most high-performing, secure, reliable, and efficient in the world. They’ve been purpose-built for training cutting-edge AI models and designed to achieve unprecedented improvements in efficiency. We have developed new AI models and algorithms that are more than 100x more efficient than they were 18 months ago. Our custom TPUs, now in their fifth generation, are powering the next generation of ambitious AI projects. Gemini was trained on and is served using TPUs…

…We offer an industry-leading portfolio of NVIDIA GPUs along with our TPUs. This includes TPU v5p, which is now generally available and NVIDIA’s latest generation of Blackwell GPUs. 

Alphabet’s management is seeing generative AI cause a shift in what people can do with Search, and they think this will lead to a new stage of growth, similar to the outcomes of prior shifts in Search; Alphabet has been experimenting with SGE (Search Generative Experience) for over a year and the company is now bringing AI overviews to the main Search results page; Alphabet has served billions of queries with its generative AI features; people who use the AI overviews in Google Search increase their search usage and report higher satisfaction with search results; ads that are above or below SGE results were found by users to be helpful; management is confident that SGE with ads will remain relevant; management thinks that the use of generative AI can help Google answer more complex questions and expand the type of queries it can serve

We have been through technology shifts before, to the web, to mobile, and even to voice technology. Each shift expanded what people can do with Search and led to new growth. We are seeing a similar shift happening now with generative AI. For nearly a year, we’ve been experimenting with SGE in search labs across a wide range of queries. And now we are starting to bring AI overviews to the main Search results page. We are being measured in how we do this, focusing on areas where gen AI can improve the search experience while also prioritizing traffic to websites and merchants. We have already served billions of queries with our generative AI features. It’s enabling people to access new information, to ask questions in new ways and to ask more complex questions. Most notably, based on our testing, we are encouraged that we are seeing an increase in search usage among people who use the new AI overviews as well as increased user satisfaction with the results…

…We shared in March how folks are finding ads either above or below the SGE results helpful. We’re excited to have a solid baseline to keep innovating on and confident in the role SGE, including Ads, will play in delighting users and expanding opportunities to meet user needs…

… I think with generative AI in Search, with our AI overviews, I think we will expand the type of queries we can serve our users. We can answer more complex questions as well as, in general, that all seems to carry over across query categories. Obviously, it’s still early, and we are going to be measured and put user experience at the front, but we are positive about what this transition means…

…On SGE in Search, we are seeing early confirmation of our thesis that this will expand the universe of queries where we are able to really provide people with a mix of actual answers linked to sources across the Web and bring a variety of perspectives, all in an innovative way. 

The cost of producing SGE responses has decreased by 80% from when SGE was first introduced a year ago because of work Alphabet has done on its Gemini models and TPUs

A number of technical breakthroughs are enhancing machine speed and efficiency, including the new family of Gemini models and a new generation of TPUs. For example, since introducing SGE about a year ago, machine costs associated with SGE responses have decreased 80% from when first introduced in Labs driven by hardware, engineering, and technical breakthroughs.

Alphabet’s immense reach – 6 products with >2 billion monthly users each, and 15 products with 0.5 billion users – is helpful in distributing AI to users; management has brought AI features to many Alphabet products

We have 6 products with more than 2 billion monthly users, including 3 billion Android devices. 15 products have 0.5 billion users, and we operate across 100-plus countries. This gives us a lot of opportunities to bring helpful gen AI features and multimodal capabilities to people everywhere and improve their experiences. We have brought many new AI features to Pixel, Photos, Chrome, Messages and more. We are also pleased with the progress we are seeing with Gemini and Gemini Advanced through the Gemini app on Android and the Google app on iOS.

Alphabet’s management thinks the company has a clear path to monetisation of AI services through ads, cloud, and subscriptions; Gemini Advanced, a subscription service for access to the most advanced Gemini model, was introduced in 2024 Q1

We have clear paths to AI monetization through Ads and Cloud as well as subscriptions…

… Our Cloud business continues to grow as we bring the best of Google AI to enterprise customers and organizations around the world. And Google One now has crossed 100 million paid subscribers, and in Q1, we introduced a new AI premium plan with Gemini Advanced.

Established enterprises are using Google Cloud for their AI needs (For example: (1) Discover Financial has begun deploying generative AI tools to its 10,000 call center agents, (2) McDonald’s is using gen AI to enhance its customer and employee experiences, and (3) WPP is integrating with Gemini models); more than 60% of funded generative AI (gen AI) start-ups and nearly 90% of gen AI unicorns are also using Google Cloud; more than 1 million developers are now using Alphabet’s generative AI tools; customers can now also ground their generative AI with Google Search and their own data 

At Google Cloud Next, more than 300 customers and partners spoke about their generative AI successes with Google Cloud, including global brands like Bayer, Cintas, Mercedes-Benz, Walmart and many more…

Today, more than 60% of funded gen AI start-ups and nearly 90% of gen AI unicorns are Google Cloud customers. And customers like PayPal and Kakao Brain are choosing our infrastructure… 

……On top of our infrastructure, we offer more than 130 models, including our own models, open source models and third-party models. We made Gemini 1.5 Pro available to customers as well as Imagen 2.0 at Cloud Next. And we shared that more than 1 million developers are now using our generative AI across tools, including AI Studio and Vertex AI. We spoke about how customers like Bristol-Myers Squibb and Etsy can quickly and easily build agents and connect them to their existing systems. For example, Discover Financial has begun deploying gen AI-driven tools to its nearly 10,000 call center agents to achieve faster resolution times for customers. Customers can also now ground their gen AI with Google Search and their own data from their enterprise databases and applications. In Workspace, we announced that organizations like Uber, Pepperdine University and PennyMac are using Gemini in Google Workspace, our AI-powered agent that’s built right into Gmail, Docs, Sheets and more…

…To help McDonald’s build the restaurant of the future, we’re deepening our partnership across cloud and ads. Part of this includes them connecting Google Cloud’s latest hardware and data technologies across restaurants globally and starting to apply Gen AI to enhance its customer and employee experiences. Number two, WPP. At Google Cloud Next, we announced a new collaboration that will redefine marketing through the integration of our Gemini models with WPP Open, WPP’s AI-powered marketing operating system, already used by more than 35,000 of its people and adopted by key clients, including The Coca-Cola Company, L’Oreal and Nestle. We’re just getting started here and excited about the innovation this partnership will unlock. 

Alphabet’s management has AI solutions to help advertisers with predicting ad conversions and to match ads with relevant searches; management thinks Alphabet’s ability to help advertisers find customers and grow their advertising ROI (return on investment) is getting better as the company’s AI models improve

We’ve talked about whole solutions like Smart Bidding that use AI to predict future ad conversions and their value in helping businesses stay agile and responsive to rapid shifts in demand, and how products like broad match leverage LLMs to match ads to relevant searches and help advertisers respond to what millions of people are searching for…

…As advances accelerate in our underlying AI models, our ability to help businesses find users at speed and scale and drive ROI just keeps getting better.
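The broad match idea described above comes down to semantic similarity scoring: represent the search query and each ad's keyword theme as vectors, then rank ads by how close they are. The sketch below is purely illustrative, using crude word-overlap vectors as a stand-in for the learned LLM embeddings the quote refers to; none of the function names or logic reflect Google's actual systems.

```python
from collections import Counter
from math import sqrt

def vectorise(text):
    """Bag-of-words vector for a piece of text (toy stand-in for an LLM embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_ads(query, ad_themes):
    """Rank ad keyword themes by semantic closeness to the search query."""
    qv = vectorise(query)
    return sorted(ad_themes, key=lambda ad: cosine(qv, vectorise(ad)), reverse=True)

ads = ["running shoes sale", "winter coats", "trail running shoes"]
ranked = match_ads("best shoes for trail running", ads)
print(ranked[0])  # → "trail running shoes"
```

The point of the sketch is only that matching on meaning (vector closeness) surfaces relevant ads even when the query and the keyword are not identical strings, which is what exact-match keyword targeting cannot do.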

Alphabet’s management introduced Gemini into Performance Max (PMax) in February and early results show PMax users are 63% more likely to publish a campaign with good or excellent ad strength and those who improve their ad strength on PMax to excellent see a 6% increase in conversions; PMax is available to all US advertisers and is starting to be rolled out internationally

In February, we rolled Gemini into PMax. It’s helping curate and generate text and image assets so businesses can meet PMax asset requirements instantly. This is available to all U.S. advertisers and starting to roll out internationally in English, and early results are encouraging. Advertisers using PMax asset generation are 63% more likely to publish a campaign with good or excellent ad strength. And those who improve their PMax ad strength to excellent see 6% more conversions on average.

Advertisers who use Alphabet’s ACA (automatically created assets) feature that is powered by generative AI see conversions increase by 5%

We’re also driving improved results for businesses opting into automatically created assets, which are supercharged with gen AI. Those adopting ACA see, on average, 5% more conversions at a similar cost per conversion in Search and Performance Max campaigns.

Alphabet’s Demand Gen, an AI-powered service, helps advertisers engage with new and existing customers across YouTube, Shorts, Gmail, and Discover; movie studio Lionsgate tested Demand Gen for a movie’s promotion and saw that it delivered an 85% more efficient CPC (cost per click) and 96% more efficient cost per page view compared to social benchmarks; Lionsgate has since used Demand Gen for two more films; Alphabet recently introduced new creative tools in Demand Gen

And then there’s Demand Gen. Advertisers are loving its ability to engage new and existing customers and drive purchase consideration across our most immersive and visual touch points like YouTube, Shorts, Gmail and Discover. Hollywood film and TV studio, Lionsgate, partnered with Horizon Media to test what campaign type will deliver the most ticketing page views for its The Hunger Games: Ballad of Songbirds and Snakes film. Over a 3-week test, demand gen was significantly more efficient versus social benchmarks with an 85% more efficient CPC and 96% more efficient cost per page view. Lionsgate has since rolled out Demand Gen for 2 new titles. We’re also bringing new creative features to demand gen. Earlier this month, we announced new generative image tools to help advertisers create high-quality assets in a few steps with a few simple prompts. This will be a win for up-leveling visual storytelling and testing creative concepts more efficiently.

Google Cloud had 28% revenue growth in 2024 Q1 (was 26% in 2023 Q4), driven by an increasing contribution from AI; management sees the growth of Google Cloud being underpinned by the benefits AI provides for customers, and management wants to invest aggressively in cloud while remaining focused on profitable growth; Alphabet’s big jump in capex in 2024 Q1 to $12 billion (from $6.3 billion in 2023 Q1) was mostly for technical infrastructure and reflects management’s confidence in the opportunities offered by AI; management expects Alphabet’s quarterly capex for the rest of 2024 to be roughly at or above the 2024 Q1 level; management has no view on 2025 capex at the moment; management sees Google Cloud hitting an inflection point because of AI

Turning to the Google Cloud segment. Revenues were $9.6 billion for the quarter, up 28%, reflecting significant growth in GCP with an increasing contribution from AI and strong Google Workspace growth, primarily driven by increases in average revenue per seat. Google Cloud delivered operating income of $900 million and an operating margin of 9%…

…With respect to Google Cloud, performance in Q1 reflects strong demand for our GCP infrastructure and solutions as well as the contribution from our Workspace productivity tools. The growth we are seeing across Cloud is underpinned by the benefit AI provides for our customers. We continue to invest aggressively while remaining focused on profitable growth…

…With respect to CapEx, our reported CapEx in the first quarter was $12 billion, once again driven overwhelmingly by investment in our technical infrastructure, with the largest component for servers followed by data centers. The significant year-on-year growth in CapEx in recent quarters reflects our confidence in the opportunities offered by AI across our business. Looking ahead, we expect quarterly CapEx throughout the year to be roughly at or above the Q1 level, keeping in mind that the timing of cash payments can cause variability in quarterly reported CapEx…

…And then with respect to 2025, as you said, it’s premature to comment so nothing to add on that…

…On the Cloud side, obviously, it’s definitely a point of inflection overall. I think the AI transformation is making everyone think about their whole stack, and we are engaged in a number of conversations. I think paid AI infrastructure, people are really looking to Vertex AI, given our depth and breadth of model choice, or using Workspace to transform productivity in your workplace, et cetera. So I think the opportunities there are all related to that, both all the work we’ve built up and AI being a point of inflection in terms of driving conversations. I think you’ll see us do it both organically and with a strong partner program as well. So we’ll do it with a combination.

Alphabet’s management thinks the AI transition is a once-in-a-generation opportunity; it’s the first time they think Alphabet can work on AI in a horizontal way

I think the AI transition, I think it’s a once-in-a-generation kind of an opportunity. We’ve definitely been gearing up for this for a long time. You can imagine we started building TPUs in 2016, so we’ve definitely been gearing up for a long time…

… The real opportunities we see is the scale of research and innovation, which we have built up and are going to continue to deliver. I think for the first time, we can work on AI in a horizontal way and it impacts the entire breadth of the company, be it Search, be it YouTube, be it Cloud, be it Waymo and so on. And we see a rapid pace of innovation in that underlying.

Alphabet’s management thinks that, with regards to monetising the opportunity of smartphone-based AI searches, there will be search use-cases that can be fulfilled on-device, but there will be many, many search use-cases that will require the internet

[Question] As users start searching on smartphones and those searches are basically rendered on the model, on the phone, without accessing the web, how do you guys anticipate monetizing some of these smartphone-based behaviors that are kind of run on the edge?

[Answer] If you look at what users are looking for, people are looking for information and an ability to connect with things outside. So I think there will be a set of use cases which you will be able to do on device. But for a lot of what people are looking to do, I think you will need the richness of the cloud, the Web, and you have to deliver it to users. So again, to my earlier comments, I think through all these moments, you saw what we have done with Samsung with Circle to Search. I think it gives a new way for people to access Search conveniently wherever they are. And so we view this as a positive way to bring our services to users in a more seamless manner. So I think it’s positive from that perspective. In terms of on-device versus cloud, there will be needs which can be done on-device, and we should do that to help from a privacy standpoint. But there are many, many things for which people will need to reach out to the cloud. And so I don’t see that as being a big driver in the on-cloud versus off-cloud in any way.

Amazon (NASDAQ: AMZN)

Amazon’s management recently launched a new generative AI tool for third-party sellers to quickly create product detail pages on Amazon using just the sellers’ URL to their websites; more than 100,000 third-party sellers on Amazon are already using at least one of Amazon’s generative AI tools

We’ve recently launched a new generative AI tool that enables sellers to simply provide a URL to their own website, and we automatically create high-quality product detail pages on Amazon. Already, over 100,000 of our selling partners have used one or more of our gen AI tools. 

Amazon’s management is seeing AWS customers being excited about leveraging generative AI to change their customer experiences and businesses; AWS’s AI business is already at a multibillion-dollar revenue run rate; the AI business is driven by a few factors, including the fact that many companies are still building their models; management expects more models to be built on AWS over time because of the depth of AI offerings AWS has

Our AWS customers are also quite excited about leveraging gen AI to change the customer experiences and businesses. We see considerable momentum on the AI front where we’ve accumulated a multibillion-dollar revenue run rate already…

… I mentioned we have a multibillion-dollar revenue run rate that we see in AI already, and it’s still relatively early days. And I think that there’s — at a high level, there’s a few things that we’re seeing that’s driving that growth. I think first of all, there are so many companies that are still building their models. And these range from the largest foundational model builders like Anthropic, you mentioned, to companies that every 12 to 18 months are building new models. And those models consume an incredible amount of data with a lot of tokens, and they’re significant to actually go train. And a lot of those are being built on top of AWS, and I expect an increasing amount of those to be built on AWS over time because of our operational performance and security, as well as our chips, both our custom silicon and what we offer from NVIDIA. But if you take Anthropic, as an example, they’re training their future models on our custom silicon on Trainium. And so I think we’ll have a real opportunity for a lot of those models to run on top of AWS.

Amazon’s management’s framework for thinking about generative AI consists of 3 layers –  the first is the compute layer, the second is LLMs as a service, the third is the applications that run on top of LLMs – and Amazon continues to add capabilities in all 3

You heard me talk about our approach before, and we continue to add capabilities at all 3 layers of the gen AI stack. At the bottom layer, which is for developers and companies building models themselves, we see excitement about our offerings…

…The middle layer of the stack is for developers and companies who prefer not to build models from scratch but rather seek to leverage an existing large language model, or LLM, customize it with their own data and have the easiest and best features available to deploy secure high-quality, low-latency, cost-effective production gen AI apps…

…The top of the stack are the gen AI applications being built. 

Amazon’s management thinks AWS has the broadest selection of Nvidia compute instances but also sees high demand for Amazon’s custom silicon, Trainium and Inferentia, as they provide favourable price performance benefits; larger quantities of Amazon’s latest Trainium chip, Trainium 2, will arrive in 2024 H2 and early 2025; Anthropic’s future models will be trained on Trainium

We have the broadest selection of NVIDIA compute instances around, but demand for our custom silicon, Trainium and Inferentia, is quite high given its favorable price performance benefits relative to available alternatives. Larger quantities of our latest generation Trainium2 are coming in the second half of 2024 and early 2025…

…But if you take Anthropic, as an example, they’re training their future models on our custom silicon on Trainium. 

SageMaker, AWS’s fully-managed machine learning service, has helped (1) Perplexity AI train models 40% faster, (2) Workday reduce inference latency by 80%, and (3) NatWest reduce time to value for AI from 12-18 months to less than 7 months; management is seeing an increasing number of AI model builders standardising on SageMaker

Companies are also starting to talk about the eye-opening results they’re getting using SageMaker. Our managed end-to-end service has been a game changer for developers in preparing their data for AI, managing experiments, training models faster, lowering inference latency, and improving developer productivity. Perplexity.ai trains models 40% faster with SageMaker. Workday reduces inference latency by 80% with SageMaker, and NatWest reduces its time to value for AI from 12 to 18 months to under 7 months using SageMaker. This changes how challenging it is to build your own models, and we see an increasing number of model builders standardizing on SageMaker.

Amazon’s management thinks Amazon Bedrock, a LLM-as-a-service offering, has the broadest selection of LLMs (large language models) for customers in addition to retrieval augmented generation (RAG) and other features; Bedrock offers high-profile LLMs – such as Anthropic’s Claude 3 and Meta’s Llama 3 – in addition to Amazon’s own Titan models; Custom Model Import is a new feature from Bedrock that satisfies a customer request (the ability to import models from SageMaker or elsewhere into Bedrock in a simple manner) that nobody has yet met; management is seeing customers being excited about Custom Model Import; Bedrock has tens of thousands of customers

This is why we built Amazon Bedrock, which not only has the broadest selection of LLMs available to customers but also unusually compelling model evaluation, retrieval augmented generation, or RAG, to expand model’s knowledge base, guardrails to safeguard what questions applications will answer, agents to complete multistep tasks, and fine-tuning to keep teaching and refining models. Bedrock already has tens of thousands of customers, including adidas, New York Stock Exchange, Pfizer, Ryanair and Toyota. In the last few months, Bedrock’s added Anthropic’s Claude 3 models, the best-performing models on the planet right now; Meta’s Llama 3 models; Mistral’s various models, Cohere’s new models and new first-party Amazon Titan models.

A week ago, Bedrock launched a series of other features, but perhaps most importantly, Custom Model Import. Custom Model Import is a sneaky big launch as it satisfies a customer request we’ve heard frequently and that nobody has yet met. As increasingly more customers are using SageMaker to build their models, they’re wanting to take advantage of all the Bedrock features I mentioned earlier that make it so much easier to build high-quality production-grade gen AI apps. Bedrock Custom Model Import makes it simple to import models from SageMaker or elsewhere into Bedrock before deploying their applications. Customers are excited about this, and as more companies find they’re employing a mix of custom-built models along with leveraging existing LLMs, the prospect of these 2 linchpin services in SageMaker and Bedrock working well together is quite appealing…

…And the primary example we see there is how many companies, tens of thousands of companies, already are building on top of Amazon Bedrock.
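The retrieval augmented generation (RAG) feature mentioned in the Bedrock passage above works by fetching documents relevant to a user's question and placing them in the prompt before the model generates an answer, so the response is grounded in the company's own data rather than the model's training set alone. The following is a minimal sketch of the pattern only: keyword-overlap retrieval stands in for the vector search a production system would use, and the function names are illustrative, not Bedrock's API.

```python
def retrieve(query, documents, k=2):
    """Return the k documents most relevant to the query.
    Scores by word overlap; a real RAG system would use embedding similarity."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Hosts must confirm bookings within 24 hours.",
    "Our office is closed on public holidays.",
]
prompt = build_prompt("How long do refunds take?", docs)
print(prompt)
```

In a full pipeline, `prompt` would then be sent to an LLM; the retrieval step is what lets the same base model answer questions about private policy documents it was never trained on, which is exactly the knowledge-base expansion the quote describes.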

Amazon’s management has announced the general availability of Amazon Q, a highly-capable generative AI-powered assistant; Amazon Q helps developers generate code, test code, debug code, and can save developers months of work when moving from older versions of Java to newer ones; Amazon Q has an Agents capability which can autonomously perform a range of tasks, including (1) implementing application features, and (2) parsing a company’s entire trove of internal data to create summaries and surface insights; Amazon Q also has Q Apps, which lets employees describe in natural language what app they want to build on top of internal data; management believes that Q is the most functionally-capable AI-powered assistant for software development and data, as Q outperforms competitors; many companies are already using Amazon Q

And today, we announced the general availability of Amazon Q, the most capable generative AI-powered assistant for software development and leveraging companies’ internal data.

On the software development side, Q doesn’t just generate code. It also tests code, debugs coding conflicts, and transforms code from one form to another. Today, developers can save months using Q to move from older versions of Java to newer, more secure and capable ones. In the near future, Q will help developers transform their .NET code as well, helping them move from Windows to Linux.

Q also has a unique capability called Agents, which can autonomously perform a range of tasks, everything from implementing features, documenting, and refactoring code to performing software upgrades. Developers can simply ask Amazon Q to implement an application feature such as asking it to create an add to favorites feature in a social sharing app, and the agent will analyze their existing application code and generate a step-by-step implementation plan, including code changes across multiple files and suggested new functions. Developers can collaborate with the agent to review and iterate on the plan, and then the agent implements it, connecting multiple steps together and applying updates across multiple files, code blocks and test suites. It’s quite handy. On the internal data side, most companies have large troves of internally relevant data that resides in wikis, intranet pages, Salesforce, storage repositories like Amazon S3 and a bevy of other data stores and SaaS apps that are hard to access. It makes answering straightforward questions about company policies, products, business results, code, people, and many other topics hard and frustrating. Q makes this much simpler. You can point Q at all of your enterprise data repositories and it will search all this data, summarize logically, analyze trends, engage in dialogue with customers about this data.

We also introduced today a powerful new capability called Q Apps, which lets employees describe in natural language what apps they want to build on top of this internal data and Q Apps will quickly generate that app. This is going to make it so much easier for internal teams to build useful apps from their own data.

Q is not only the most functionally capable AI-powered assistant for software development and data but also setting the standard for performance. Q has the highest-known score and acceptance rate for code suggestions, outperforms all other publicly benchmarkable competitors in catching security vulnerabilities, and leads all software development assistants on connecting multiple steps together and applying automatic actions. Customers are gravitating to Q, and we already see companies like Brightcove, British Telecom, Datadog, GitLab, GoDaddy, National Australia Bank, NCS, Netsmart, Slalom, Smartsheet, Sun Life, Tata Consultancy Services, Toyota, and Wiz using Q, and we’ve only been in beta until today.

Amazon’s management believes that AWS has a meaningful edge in security elements when it comes to generative AI, and this has led to companies moving their AI focus to AWS

I’d also caution folks not to overlook the security and operational performance elements of these gen AI services. It’s less sexy but critically important. Most companies care deeply about the privacy of the data in their AI applications and the reliability of their training and production apps. If you’ve been paying attention to what’s been happening in the last year or so, you can see there are big differences between providers on these dimensions. AWS has a meaningful edge, which is adding to the number of companies moving their AI focus to AWS.

Amazon’s management sees Amazon’s capex increasing meaningfully in 2024 compared to 2023 ($48.4 billion in 2023) because of AWS’s accelerating growth and high demand for generative AI; the capex in 2024 will go mostly towards technology infrastructure; the capex of $14 billion in 2024 Q1 will be the low quarter for the year

We expect the combination of AWS’ reaccelerating growth and high demand for gen AI to meaningfully increase year-over-year capital expenditures in 2024, which given the way the AWS business model works is a positive sign of the future growth…

…As a reminder, we define these as the combination of CapEx plus equipment finance leases. In 2023, overall capital investments were $48.4 billion…

…We do see, though, on the CapEx side that we will be meaningfully stepping up our CapEx and the majority of that will be in our — to support AWS infrastructure and specifically generative AI efforts…

…We’re talking about CapEx. Right now, in Q1, we had $14 billion of CapEx. We expect that to be the low quarter for the year.

Amazon’s management is very bullish on AWS, as 85% or more of global IT spend remains on-premise, even though AWS is already at a $100 billion-plus revenue run rate; in addition, there’s demand for generative AI, most of which will be created in the next few decades from scratch and on the cloud

We remain very bullish on AWS. We’re at $100 billion-plus annualized revenue run rate, yet 85% or more of the global IT spend remains on-premises. And this is before you even calculate gen AI, most of which will be created over the next 10 to 20 years from scratch and on the cloud. There is a very large opportunity in front of us. 

Amazon’s management thinks the generative AI opportunity is something they have not seen since the cloud or internet

We have a lot of growth in front of us, and that’s before the generative AI opportunity, which I don’t know if any of us have seen a possibility like this in technology in a really long time, for sure, since the cloud, perhaps since the Internet. 

Amazon’s management thinks much more money will be spent on AI inference than on model training; management sees quite a few companies that are building their generative AI applications to do inference on AWS

I think the thing that people sometimes don’t realize is that while we’re in the stage that so many companies are spending money training models, once you get those models into production, which not that many companies have, but when you think about how many generative AI applications will be out there over time, most will end up being in production when you see the significant run rates. You spend much more in inference than you do in training because you train only periodically, but you’re spinning out predictions and inferences all the time. And so we also see quite a few companies that are building their generative AI applications to do inference on top of AWS.

Amazon’s management sees both training and inference being really big drivers for AWS; this is helped by the fact that these AI models will work with companies’ data and the security surrounding the data is important for companies, and AWS has a meaningful edge in security

We see both training and inference being really big drivers on top of AWS. And then you layer on top of that the fact that so many companies, their models and these generative AI applications are going to have their most sensitive assets and data. And it’s going to matter a lot to them what kind of security they get around those applications. And yes, if you just pay attention to what’s been happening over the last year or 2, not all the providers have the same track record. And we have a meaningful edge on the AWS side so that as companies are now getting into the phase of seriously experimenting and then actually deploying these applications to production, people want to run their generative AI on top of AWS.

Apple (NASDAQ: AAPL)

Apple’s management continues to feel bullish about Apple’s opportunity in generative AI; Apple is making significant investments in the area and will be sharing details soon; management thinks Apple has advantages with AI given its unique combination of hardware, software, services, custom silicon (with industry-leading neural engines), and privacy

We continue to feel very bullish about our opportunity in generative AI. We are making significant investments, and we’re looking forward to sharing some very exciting things with our customers soon. We believe in the transformative power and promise of AI, and we believe we have advantages that will differentiate us in this new era, including Apple’s unique combination of seamless hardware, software, and services integration, groundbreaking Apple silicon with our industry-leading neural engines, and our unwavering focus on privacy, which underpins everything we create. 

Apple’s management does not expect Apple’s capex to inflect higher, nor the composition of the capex to change much, even as the company leans into AI

[Question] As Apple leans more into AI and generative AI, should we expect any changes to the historical CapEx cadence that we’ve seen in the last few years of about $10 billion to $11 billion per year? Or any changes to how we may have historically thought about the split between tooling, data center, and facilities?

[Answer]  We are obviously very excited about the opportunity with GenAI. We obviously are pushing very hard on innovation on every front, and we’ve been doing that for many, many years. Just during the last 5 years, we spent more than $100 billion in research and development. As you know, on the CapEx front, we have a bit of a hybrid model where we make some of the investments ourselves. In other cases, we share them with our suppliers and partners. On the manufacturing side, we purchase some of the tools and manufacturing equipment. In some of the cases, our suppliers make the investment. And we do something similar on the data center side. We have our own data center capacity and then we use capacity from third parties. It’s a model that has worked well for us historically, and we plan to continue along the same lines going forward.

Apple’s management will soon share their thoughts on how Apple intends to monetise AI on its devices – but not today

[Question] You’ve obviously mentioned your excitement around generative AI multiple times. I’m just curious how Apple is thinking about the different ways in which you can monetize this technology because, historically, software upgrades haven’t been a big factor in driving product cycles. And so could AI be potentially different?

[Answer] I don’t want to get in front of our announcements obviously. I would just say that we see generative AI as a very key opportunity across our products, and we believe that we have advantages that set us apart there. And we’ll be talking more about it in — as we go through the weeks ahead.

Arista Networks (NYSE: ANET)

Arista Networks’ management sees an addressable market of US$60 billion in client-to-cloud AI networking

Amidst all the network consolidation, Arista is looking to establish ourselves as the pure-play networking innovator, for the next era, addressing at least a $60 billion TAM in data-driven client-to-cloud AI networking.

Arista Networks’ management is pleased with the momentum they are seeing in the company’s customer segments, including the Cloud and AI Titans segment; management is becoming increasingly constructive about hitting their 2025 target of US$750 million in AI revenue; the 2025 target of US$750 million is not a hockey-stick target, but a glide path

We are quite pleased with the momentum across all our 3 sectors: Cloud and AI Titans, Enterprise and Providers. Customer activity is high as Arista continues to impress our customers and prospects with our undeniable focus on quality and innovation…

… A good AI network needs a good data strategy, delivered by our highly differentiated EOS and network data lake architecture. We are, therefore, becoming increasingly constructive about achieving our AI target of $750 million in 2025…

…When you think about the $750 million target that has become more constructive per Jayshree’s prepared remarks, that’s a glide path. So it’s not 0 in ’24; it’s a glide path to ’25.

Traditional networking discards data as the network changes state, but recent developments in AI show how important it is to gather and store large data sets – this is a problem Arista Networks’ management is solving through the company’s NetDL (Network Data Lake) platform, which streams every piece of network data in real time and archives the full data history

From the inception of networking decades ago, networking has involved rapidly changing data. Data about how the network is operating, which paths through the network are best, and how the network is being used. But historically, most of this data was simply discarded as the network changed state, and that which was collected can be difficult to interpret because it lacks context. Network addresses and port numbers by themselves provide little insight into what users are doing or experiencing.

Recent developments in AI have proved the value of data. But to take advantage of these breakthroughs, you need to gather and store large data sets, labeled suitably for machine learning. Arista is solving this problem with NetDL: we continually monitor every device, not simply taking snapshots, but rather streaming every network event, every counter, every piece of data in real time, archiving a full history in NetDL. Alongside this device data, we also collect flow data and inbound network telemetry data gathered by our switches. Then we enrich this performance data further with user, service and application layer data from external sources outside the network, enabling us to understand not just how each part of the network is performing, but also which users are using the network for what purposes, and how the network behavior is influencing their experience. NetDL is a foundational part of the EOS stack, enabling advanced functionality across all of our use cases. For example, in AI fabrics, NetDL enables fabric-wide visibility, integrating network data and NIC data to enable operators to identify misconfigurations or misbehaving hosts and pinpoint performance bottlenecks.
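The pipeline Arista describes (stream every event, enrich it with application-layer context, archive the full history rather than snapshots) can be sketched in a few lines. This is purely an illustrative sketch of the general technique; the class and field names below are hypothetical and not drawn from the real NetDL implementation.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class TelemetryEvent:
    # A single streamed network event: a counter update, state change, or flow record.
    device: str
    metric: str
    value: Any
    timestamp: float

@dataclass
class NetDataLake:
    # Append-only archive: every event is kept, not just periodic snapshots.
    events: list = field(default_factory=list)
    # External user/application context used for enrichment (hypothetical shape).
    user_context: dict = field(default_factory=dict)

    def stream(self, event: TelemetryEvent) -> dict:
        # Enrich the raw device event with application-layer context before archiving,
        # so later analysis can tie network behavior back to user experience.
        enriched = {
            "device": event.device,
            "metric": event.metric,
            "value": event.value,
            "timestamp": event.timestamp,
            "user": self.user_context.get(event.device, "unknown"),
        }
        self.events.append(enriched)
        return enriched

lake = NetDataLake(user_context={"leaf-1": "training-job-42"})
lake.stream(TelemetryEvent("leaf-1", "rx_drops", 17, 1718000000.0))
print(len(lake.events))  # full history retained, one entry per streamed event
```

The key design choice mirrored here is append-only storage with enrichment at ingest time, which is what lets a data lake answer "which user saw this slowdown" questions later.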

Any slowdown in the network when running generative AI training tasks can reduce processor performance by 30% or more

As generative AI training tasks evolve, they are made up of many thousands of individual iterations. Any slowdown due to the network can critically impact application performance, creating inefficient wait stages and idling away processor performance by 30% or more. The time taken to reach coherence, known as job completion time, is an important benchmark achieved by building proper scale-out AI networking to improve the utilization of these precious and expensive GPUs.
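To make the 30% figure concrete, a quick back-of-the-envelope calculation shows what network-induced idling costs on a large cluster. The cluster size and hourly rate below are assumptions for illustration, not figures from the call:

```python
# Hypothetical numbers for illustration only.
gpus = 24_000              # assumed cluster size
hourly_cost_per_gpu = 2.0  # assumed $/GPU-hour
idle_fraction = 0.30       # processors idling while waiting on the network

# GPU-hours effectively wasted per hour of training when the network stalls compute,
# and the corresponding dollar cost at the assumed rate.
wasted_gpu_hours_per_hour = gpus * idle_fraction
wasted_dollars_per_hour = wasted_gpu_hours_per_hour * hourly_cost_per_gpu

print(wasted_gpu_hours_per_hour)  # 7200.0
print(wasted_dollars_per_hour)    # 14400.0
```

Even at these rough numbers, a 30% network-induced stall burns thousands of GPU-hours every hour, which is why job completion time is the benchmark operators optimize for.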

A Cloud and AI Titan customer of Arista Networks used the company’s product to build a 24,000 node GPU cluster for complex AI training tasks; Arista Networks’ product offered an improvement of at least 10% on job completion performance across all packet sizes versus InfiniBand; in Arista Networks’ four recent AI Ethernet clusters that were won versus InfiniBand, management is seeing all four projects migrate from trials to pilots; Arista Networks will be connecting thousands of GPUs in the four projects this year and management expects to connect 10,000 to 100,000 GPUs in 2025; ethernet was traditionally considered to have loss properties while InfiniBand was traditionally considered to be lossless, but when ethernet is used in actual GPU clusters, ethernet is 10% faster than InfiniBand; management expects improvement in ethernet’s performance relative to InfiniBand in the future, driven partly by the Ultra Ethernet Consortium

In a recent blog from one of our large Cloud and AI Titan customers, Arista was highlighted for building a 24,000 node GPU cluster based on our flagship 7800 AI Spine. This cluster tackles complex AI training tasks that involve a mix of model and data parallelization across thousands of processors and ethernet is proving to offer at least 10% improvement of job completion performance across all packet sizes versus InfiniBand…

…If you recall, in February, I shared with you that we are progressing well in 4 major AI Ethernet clusters, that we won versus InfiniBand recently. In all 4 cases, we are now migrating from trials to pilots, connecting thousands of GPUs this year, and we expect production in the range of 10,000 to 100,000 GPUs in 2025…

…Historically, as you know, when you look at InfiniBand and Ethernet in isolation, there are a lot of advantages of each technology. Traditionally, InfiniBand has been considered lossless and Ethernet is considered to have some loss properties. However, when you actually put a full GPU cluster together along with the optics and everything, and you look at the coherence of the job completion time across all packet sizes, data has shown (and this is data that we have gotten from third parties, including Broadcom) that in just about every packet size in a real-world environment, comparing those technologies, the job completion time of Ethernet was approximately 10% faster. So you can look at these things in silos, or you can look at it in a practical cluster, and in a practical cluster we are already seeing improvements on Ethernet. Now don’t forget, this is just Ethernet as we know it today. Once we have the Ultra Ethernet Consortium and some of the improvements you’re going to see on packet spraying and dynamic load balancing and congestion control, I believe those numbers will get even better.

Arista Networks’ management is witnessing an inflection of AI networking and expects the trend to continue both in the short and long run; management is seeing ethernet emerging as critical infrastructure for both front-end and back-end AI data centers; AI applications require seamless communication between the front-end (includes CPUs, or central processing units) and back-end (includes GPUs and AI accelerators); management is seeing ethernet at scale becoming the de facto network and premium choice for scaled-out AI training workloads

We are witnessing an inflection of AI networking and expect this to continue throughout the year and decade. Ethernet is emerging as a critical infrastructure across both front-end and back-end AI data centers. AI applications simply cannot work in isolation and demand seamless communication among the compute nodes, consisting of back-end GPUs and AI accelerators, as well as the front-end nodes like the CPUs, alongside storage and IP/WAN systems as well…

…Ethernet at scale is becoming the de facto network and premier choice for scale-out AI training workloads.

Arista Networks’ management thinks that visibility on new AI and cloud projects is getting better and has now improved to at least 6 months

In summary, as we continue to set the direction of Arista 2.0 networking, our visibility to new AI and cloud projects is improving and our enterprise and provider activity continues to progress well…

…In the Cloud and AI Titans in November, we were really searching for even 3 months of visibility; 6 would have been amazing. Today, I think, after a year of tough situations for us where the Cloud Titans were pivoting rather rapidly to AI and not thinking about the Cloud as much, we’re now seeing a more balanced approach where they’re still doing AI, which is exciting, but they’re also expanding their regions on the Cloud. So I would say our visibility has now improved to at least 6 months, and maybe it gets longer as time goes by.

Arista Networks’ management still sees InfiniBand as the de facto network of choice for AI workloads, but ethernet is gaining ground; management sees ethernet as being the eventual winner against InfiniBand because ethernet has a long history of 50 years that gives it an advantage (Metcalfe’s law)

And then sometimes we see them, obviously, when they’re pushing InfiniBand, which has been, for the most part, the de facto network of choice. You might have heard me say, last year or the year before, that I was outside looking into this AI networking. But today, we feel very pleased that we are able to be the scale-out network for NVIDIA’s GPUs and NICs based on Ethernet…

…This InfiniBand topic keeps coming up. And I’d just like to point out that Ethernet is about 50 years old. And over those 50 years, Ethernet has come head-to-head with a bunch of technologies like Token ring, SONET, ATM, FDDI, HIPPI, Scalable Coherent Interconnect, [ Mirrornet ]. And all of these battles have one thing in common. Ethernet won. And the reason why is because of Metcalfe’s law, the value of a network is quadratic in the number of nodes of the interconnect. And so anybody who tries to build something which is not Ethernet, is starting off with a very large quadratic disadvantage. And any temporary advantage they have because of the — some detail of the tech cycle is going to be quickly overwhelmed by the connectivity advantage you have with Ethernet.
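The quadratic disadvantage invoked here is easy to quantify. Under Metcalfe’s law, a network’s value scales with the number of possible pairwise links; here is a tiny sketch with made-up node counts:

```python
def metcalfe_value(n: int) -> int:
    # Value of a network taken as proportional to the number of
    # possible pairwise connections among its n nodes, i.e. ~n^2.
    return n * (n - 1) // 2

# A niche interconnect with 1,000 nodes vs. an Ethernet ecosystem
# with 100,000 nodes (both counts invented for illustration):
niche = metcalfe_value(1_000)
ethernet = metcalfe_value(100_000)

# The node ratio is 100x, but the value ratio is roughly 10,000x,
# which is the "quadratic disadvantage" a non-Ethernet entrant starts with.
print(ethernet // niche)
```

A 100x gap in adoption thus translates to a roughly 10,000x gap in interconnect value, which is the argument being made against any temporary per-cycle advantage.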

Arista Networks’ management does not see Nvidia as a direct competitor for ethernet; management also believes that Arista Networks’ focus and experience are advantages

We don’t see NVIDIA as a direct competitor yet on the Ethernet side. I think it’s 1% of their business. It’s 100% of our business. So we don’t worry about that overlap at all. And we think we’ve got 20 years of experience, from founding to now, to make our Ethernet switching better and better on both the front end and back end. So we’re very confident that Arista can build the scale-out network and work with NVIDIA’s scale-up GPUs.

Within AI networking, Arista Networks’ management is seeing the first use case emerging to be the build-out of the fastest training workloads and clusters

The first use case that’s emerging for AI networking is, let’s just build the fastest training workloads and clusters. And they’re looking at performance. Power is a huge consideration, the cooling of the GPUs is a huge part of it. You would be surprised to hear a lot of times, it’s just waiting on the facilities and waiting for the infrastructure to be set up, right?

Arista Networks’ management is seeing Tier 2 cloud providers starting to pick up AI initiatives, although the Tier 2 providers are not close to the level of activity of the Cloud Titans

The Tier 2 cloud providers, I want to speak to them for a moment, because not only are they strong for us right now, but they are starting to pick up some AI initiatives as well. So they’re not as large as the Cloud Titans, but the combination of the Service Providers and the Tier 2 Specialty Providers is also seeing some momentum.

Arista Networks is seeing GPU lead times improve significantly 

The number of GPUs, the location of the GPUs, the scale of the GPUs, the locality of these GPUs, whether they should go with Blackwell, whether they should build with scale-up inside the server or scale-out to the network. Across this whole center of gravity, what’s nice to watch, which is why we’re more constructive on the 2025 numbers, is that the GPU lead times have significantly improved, which means more and more of our customers will get more GPUs, which in turn means they can build out their scale-out network.

Arista Networks’ management is not seeing any pause in their customers’ investments in GPU clusters and networking just to wait for the delivery of Nvidia’s latest Blackwell AI chips; Arista Networks’ networking products can perform the required networking tasks well regardless of what GPU is used

[Question] I want to go back to AI, the road map and the deployment schedule for Blackwell. So it sounds like it’s a bit slower than maybe initially expected with initial customer delivery late this year. How are you thinking about that in terms of your road map specifically and how that plays into what you’re thinking about ’25 in a little bit more detail. And does that late delivery maybe put a little bit of a pause on maybe some of the cloud spend in the fall of this year as there seems to be somewhat of a technology transition going on towards Blackwell away from the Legacy product?

[Answer] We’re not seeing a pause yet. I don’t think anybody is going to wait for Blackwell necessarily in 2024 because they’re still bringing up their GPU clusters. And how a cluster is divided across multiple tenants, the choice of host, memory, storage architectures, optimizations on the GPU for collective communication, libraries, specific workloads, resilience, visibility: all of that has to be taken into consideration. All this to say, a good scale-out network has to be built, no matter whether you’re connecting to today’s GPUs or future Blackwells. And so they’re not going to pause the network because they’re waiting for Blackwell. They’re going to get ready for the network, whether it connects to a Blackwell or a current H100. So as we see it, the training workloads and the urgency of getting the best job completion time is so important that they’re not going to spare any investments on the network side, and the network side can be ready no matter what the GPU is.

ASML (NASDAQ: ASML)

ASML’s management sees no change to the company’s outlook for 2024 from what was mentioned in the 2023 Q4 earnings call, with AI-related applications still driving demand, Memory demand being driven by DRAM technology node transitions to support DDR5 and HBM, and Logic customers digesting capacity additions made in 2023

Looking at the market segments, we see a similar environment as communicated last quarter, with demand momentum from AI-related applications. Memory demand is primarily driven by DRAM technology node transitions in support of advanced memories such as DDR5 and HBM. Logic customers continue to digest the significant capacity additions made over the past year.

ASML’s management sees some macro uncertainties as still being present, but the long-term trends in the company’s business (AI, electrification, energy transition) are intact

There are still some uncertainties. I would say primarily macro uncertainties. That’s still clearly there…

…If you look at the trends in the industry, if you look at, and I’m talking about the cyclicality trends in the industry, so like the utilization going up, inventory downstream being managed to more normal levels. I think it’s pretty clear that the industry is in its upturn and therefore we do believe that by 2024 we’re going to see a recovery. Clearly a recovery of the industry. So then fast forward to 2025. Then what do we find ourselves in? First off, I think we will find ourselves in 2025 in the midst of the upturn. So that’s a positive. Second – and we’ve talked about that many times – the secular trends are really strong. If you look at AI, if you look at electrification, if you look at the energy transition. It’s all very strong, very positive momentum behind it. So the secular trends are very, very strong. That is also something that I think will yield in 2025. Finally, if you just look at all the fab openings that have been indicated by our customers. The recent news on positive outcomes of CHIPS Act money allocation. All of that is very strong, very supportive for new fab openings across the globe. I think by 2025 you will see all three of those coming together. New fab openings, strong secular trends and the industry in the midst of its upturn. So that’s why we’re doing what we’re doing. Which is really preparing for that ramp, for that momentum that we see being built up.

ASML’s management thinks that AI will be driving demand for leading-edge and high-performance compute; AI is itself driven by massive amounts of data and the overlay of smart software over the data; management also thinks that IoT (Internet of Things) will be an area with plenty of AI applications

You’re basically asking what will drive leading-edge, high-performance compute. But you’re absolutely right. I mean, when you think about high-performance compute, and especially in the context of AI, and I’ve said this many, many times before, AI is driven by massive amounts of data, by understanding the correlation between those data elements, and then overlaying that with smart software. And I also believe, and it’s actually what I’m seeing and what I’m hearing, that IoT in the industrial space will actually be an area where we will see a lot of AI applications. Well, in order to collect all that data, you need sensors, because you’ve got all kinds of examples, whether it’s the car or whether it’s life science, medical equipment; it’s about sensing, and that is really the domain of mainstream semiconductors.

ASML’s management is seeing the software world enjoying 30% to 50% increases in productivity because of the use of AI

And when you think about AI, I mean, some of these examples, and especially in the software space where you see productivity, just the calculated productivity advantages of 30% to 50%, then the value of the next-generation transistor will be huge.

Coupang (NYSE: CPNG)

Coupang’s management is exploring both the company’s own foundational AI models as well as those from third-parties; AI has been a core part of Coupang’s strategy and management has been deploying the technology in many areas of the company’s business; management is excited about the potential of AI, but will be testing its ability to generate returns for the business

On AI, we are exploring, both for us, as you mentioned, foundational models as well as our own. Machine learning and AI continues to be — have been a core part of our strategy. We’ve deployed them in many facets of our business, from supply chain management to same-day logistics. We’re also seeing tremendous potential with large language models in a number of areas from search and ads, to catalog and operations, among others. There’s exciting potential for AI that we see and we see opportunities for it to contribute even more significantly to our business. But like any investment we make, we’ll test and iterate and then invest further only in the cases where we see the greatest potential for return.

Datadog (NASDAQ: DDOG)

Datadog’s management has announced general availability of Bits AI for incident management, where Bits AI can produce auto-generated incident summaries for incident responders

In the GenAI space, we announced general availability of Bits AI for incident management. By using Bits AI for incident management, incident responders get auto-generated incident summaries to quickly understand the context and scope of a complex incident. And users can also query Bits AI to ask about related incidents and to perform tasks on the fly from incident creation to resolution.

There’s growing interest in AI from Datadog’s customers, and the company’s next-gen AI customers accounted for 3.5% of ARR (was 3% in 2023 Q4); the percentage of ARR from next-gen AI customers is a metric that management thinks will become less relevant over time as AI adoption broadens

We’re also continuing to see more interest in AI from our customers. As a data point, ARR for next-gen AI customers was about 3.5% of our total, a strong sign of the growing ecosystem of companies in this area…

…I’m not sure this is a metric we’ll keep bringing up. It was interesting for us to look at this small group of early AI-native companies to get a sense of what might come next in the world of AI. But I think as we — as time goes by and as AI adoption broadens, I think it becomes less and less relevant. 

Datadog has AI integrations that allow customers to pull their AI data into the Datadog platform; around 2,000 customers are already using 1 or more of the AI integrations

To help customers understand AI technologies and bring them into production applications, our AI integrations allow customers to pull their AI data into the Datadog platform. And today, about 2,000 of our customers are using 1 or more of these AI integrations. And we’ve continued to keep up with the rapid innovation in this space. For example, adding a new integration in Q1 with the NVIDIA Triton [indiscernible] server. 

Datadog’s management has announced general availability for Event Management in the cloud service management area; Event Management reduces the volume of alerts and events Datadog’s customers have to deal with; with Event Management, Datadog now has a full AI solution that helps teams automate remediation, proactively prevent outages and reduce the impact of incidents.

In the cloud service management area, we released Event Management into general availability. Our customers face increasing complexity at scale, causing the volume of alerts and events to explode, which makes it difficult for teams to identify, prioritize, summarize and route issues to the right responders. Event Management addresses this challenge by automatically reducing a massive volume of events and alerts into actionable insights. These are then used to generate tickets, call an incident or trigger an automated remediation. By combining Event Management with Watchdog, Bits AI and workflow automations, Datadog now provides a full AI solution that helps teams automate remediation, proactively prevent outages and reduce the impact of incidents…

…We just announced in GA, the Event Management product, which is the main missing building block we had for AIOps platform.
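The alert-reduction step described in the quote, collapsing a flood of raw events into a few actionable incidents, can be sketched as a simple grouping by service and time window. This is a hypothetical illustration of the general AIOps technique, not Datadog’s actual Event Management logic:

```python
from collections import defaultdict

def reduce_alerts(alerts: list, window_s: float = 300.0) -> list:
    """Collapse a stream of raw alerts into one actionable incident per
    (service, time-bucket) group, roughly how an AIOps event-management
    layer might deduplicate before routing to responders."""
    groups = defaultdict(list)
    for a in alerts:
        bucket = int(a["timestamp"] // window_s)  # correlate alerts in the same window
        groups[(a["service"], bucket)].append(a)
    return [
        {
            "service": svc,
            "alert_count": len(g),
            "first_seen": min(a["timestamp"] for a in g),
        }
        for (svc, _), g in groups.items()
    ]

raw = [
    {"service": "checkout", "timestamp": 10.0},
    {"service": "checkout", "timestamp": 42.0},
    {"service": "checkout", "timestamp": 120.0},
    {"service": "search", "timestamp": 15.0},
]
incidents = reduce_alerts(raw)
print(len(incidents))  # 4 raw alerts reduced to 2 incidents
```

Each incident can then feed the downstream actions the quote lists: generating a ticket, calling an incident, or triggering an automated remediation.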

Datadog’s Azure business is growing faster than Azure itself, and the AI-driven part of Datadog’s Azure business is growing faster than the AI-driven part of Azure itself

The hyperscaler that is the most open, or transparent, in terms of numbers is Microsoft, as they disclose how much of their growth comes from AI specifically. And I will say that if you compare our business to theirs, the Azure part of our business is growing faster than Azure itself. And the AI-driven part of our Azure business itself is also growing faster than what you see in the overall Azure number. So we think we have similar exposure, and we track to the same trends broadly.

Datadog’s AI exposure leans toward inferencing and applications in production a lot more than the training of models

I will say also on AI adoption that some of the revenue jumps you might see from the cloud providers might relate to supply of GPUs coming online and a lot of training clusters being provisioned. And those typically won’t generate a lot of new usage for us. We tend to be more correlated with the live applications, production applications and inference workloads that tend to follow after that, and that are more tied to all of these applications going into production.

Datadog has products for monitoring what AI models are doing, but those products are not in general availability yet; management expects to have more announcements on these monitoring products in the near future; Datadog’s customers that are the most scaled on AI workloads are model providers, and they tend to have their own monitoring infrastructure for the quality of the models; the needs of the model providers for monitoring infrastructure are not representative of the needs of the bulk of the market, but there may still be overlaps in the future if the situation with the cloud hyperscalers is used as a guide

We have products for monitoring, not just the infrastructure, but what the LLMs are doing. Those products are still not in GA, so we’re working with a smaller number of design partners for that. As I think not only these products are maturing, but also the industry around us is maturing and more of these applications are getting into production. You should expect to hear more from us on that topic in the near future. The customers we have that are the most scaled on AI workloads are the model providers themselves, and they tend to have their own infrastructure for monitoring the quality of the models…

…On the tooling, I would say there’s a handful of players that have been building that tooling for a few years in a way that’s very specialized to what they do internally. They are not necessarily representative of the bulk of the market. So in those situations, we’re always careful about overfitting products to a group that might not be the right target customer group in the end, in the same way that building infrastructure monitoring for the cloud providers to use internally might not be an exact fit for what the rest of the world needs. That being said, I mean, look, we work a lot with those companies, and they have a number of needs, some of which they can meet internally and some of which they don’t. And if I go back to the example of hyperscalers, we actually have teams at the hyperscalers that use us for application or infrastructure or logs internally, even though they’ve built a lot of that tooling themselves. So I think everything is possible in the long run. But our focus is really on the vast majority of the customer base that’s going to either use those API-based products or tune and run their own models.

Datadog’s management is seeing a trend of AI-adopters starting with an API-accessible AI model to build applications, before offloading some of the workload to open-sourced AI models

We think there are good bellwethers in terms of what the adoption of AI is going to be from all the other companies, and we definitely see a trend where customers start with an API-driven or API-accessible model, build applications and then offload some of that application to other models that typically come from the open source and that they might train or fine-tune themselves to get to a lower cost and lower time to respond.
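The migration pattern described in the quote above can be sketched as a simple request router: prototype on a hosted, API-accessible model, then send routine traffic to a cheaper fine-tuned open-source model. Everything here (function names, the routing rule) is a hypothetical illustration, not anything Datadog or its customers actually run:

```python
def call_api_model(prompt: str) -> str:
    # Stand-in for a hosted, API-accessible frontier model (higher cost, broad capability).
    return f"[api-model] {prompt}"

def call_local_model(prompt: str) -> str:
    # Stand-in for a fine-tuned open-source model served in-house (lower cost, lower latency).
    return f"[local-model] {prompt}"

def route(prompt: str, routine_tasks: set[str], task: str) -> str:
    """Send routine, well-understood tasks to the cheap self-hosted model;
    everything else stays on the API-accessible model."""
    if task in routine_tasks:
        return call_local_model(prompt)
    return call_api_model(prompt)

routine = {"summarize", "classify"}
print(route("Summarize this ticket", routine, "summarize"))  # handled locally
print(route("Draft a legal brief", routine, "draft"))        # falls back to the API model
```

In practice the routing decision is usually driven by measured cost, latency and quality per task, but the structure is the same: the API model remains the fallback for anything the tuned model has not been validated on.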

Management is seeing a lot of interest in Datadog’s new AI-related products; management thinks its AI-related products are a joy to use

We see a lot of interest in the new products. These are new products: we just announced in GA the Event Management product, which is the main missing building block we had for our AIOps platform. And we also just released into GA Bits for incident management. So there’s a lot of demand for it. And I will say it: Bits for incident management is a joy to use.

Etsy (NASDAQ: ETSY)

Etsy’s management believes the company’s product team is getting more efficient with machine learning

We had double-digit growth in the number of experiments per product engineer that utilize machine learning as well as in our annualized gross GMS from experiments. And the total number of experiments run per engineer increased 20%. Some of this progress can be directly tied to work we told you about last year to democratize ML. These metrics give me confidence that the bold moves to improve customer experience can build over time and play a key role to get Etsy growing again.

Etsy’s management thinks the application of AI is very useful for the company’s Gift Mode initiative

Large language models were really helpful for Gift Mode. So for example, there are 200 different personas in Gift Mode. And then within each persona, there are 3 to 5 different gift ideas, and the ability to ask large language models, what are 200 examples of personas (it wasn’t quite this simple, but it does give you a head start on that). If I’m a foodie who also loves to travel, what are 3 things I might buy on Etsy, 3 different ideas for gifts on Etsy? It does help to come up with a lot of ideas more quickly. On productivity gains, large language models are starting to help us with coding productivity as well.

Etsy’s management finds the use of machine learning (ML) to be particularly useful in removing products that violate Etsy’s policies

We’re doing more than ever to suppress and remove listings that violate our policies. And advances in ML have been particularly powerful as enablers here. In the first quarter, we removed about 115% more listings for violating our handmade policy than in the prior year…

…For example, does this same item exist also on AliExpress. And we assume right now, if that item exists on AliExpress, we assume it’s mass produced and we take it down. You as a seller can appeal that, you can tell us how you made it yourself, and it still ended up on AliExpress. And by the way, that’s true sometimes. You can appeal that, but our default now is we take that down. And that’s just one example. Gen AI is actually going to be, I think, more and more helpful at understanding how much value this particular seller truly added to the product.

Etsy’s management has used machine learning to improve the estimation of delivery time for products

In terms of shipping timeliness, I’m pleased to report that our initiative to tighten estimated delivery dates, which we believe are an important effort to improve buyer perceptions of our reliability as well as to grow GMS, are already paying off. Our fulfillment team recently launched a new machine learning model, which reduced our estimate of USPS transit times by greater than 1 day, resulting in a nearly tripling of the percentage of eligible orders for which Etsy is now able to show an estimated delivery date of 7 days or less.
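As a rough illustration of how a tighter transit-time model translates into the result described above (this is a hypothetical sketch, not Etsy’s actual model): a delivery-date promise can be built by quoting a conservative quantile of observed transit days, so a model that shaves a day off estimated transit time directly shrinks the promise shown to buyers.

```python
# Hypothetical sketch: promised delivery = seller processing time plus a
# conservative quantile of observed carrier transit days. A better transit
# model justifies a lower quantile, tightening the promised date.
def promised_days(processing_days: int, transit_samples: list[int],
                  quantile: float = 0.9) -> int:
    samples = sorted(transit_samples)
    idx = min(len(samples) - 1, int(quantile * len(samples)))
    return processing_days + samples[idx]

transit = [2, 3, 3, 3, 4, 4, 5, 5, 6, 8]  # made-up observed transit days
print(promised_days(1, transit))           # conservative 90th-percentile promise
print(promised_days(1, transit, 0.5))      # tighter promise a better model allows
```

With the made-up samples above, moving from the 90th to the 50th percentile cuts the promised window from 9 days to 5, which is the mechanism behind more orders qualifying for a “7 days or less” estimate.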

Fiverr (NYSE: FVRR)

Fiverr’s management continues to see AI having a net positive impact on the company’s business; AI-related services saw 95% year-on-year growth in GMV on Fiverr’s platform, with chatbot development being especially popular; a hospitality company and an online learning platform are two examples of companies that have used Fiverr for AI chatbot development

AI continued to have a net positive impact on our business, as complex services continue to grow faster and represent a bigger portion of our business. Demand for AI-related services remained strong, as evidenced by 95% year-over-year growth in GMV from AI service categories. Chatbot development was especially popular this quarter as businesses look for ways to lean into GenAI technology to better engage with customers. For example, we have seen a hospitality company building a conversational tool for customers to manage bookings or an online learning platform creating a personalized learning menu and tutoring sessions for children. 

Fiverr has a pool of 10,000 AI experts and it is growing

With a pool of over 10,000 AI experts, and growing, Fiverr has become the destination for businesses to get help implementing GenAI and take their business to the next level.

Fiverr’s management is seeing very promising signals on Fiverr Neo, the company’s AI assistant for matching buyers with sellers; one-third of buyers who received seller recommendations from Neo sent a project brief to a seller; overall order conversion with Neo is nearly 3x that of the Fiverr marketplace average; management is excited about the potential of AI matching technology 

We have also seen very promising signals on Fiverr Neo, the AI matching assistant that we launched last year. Neo enables our buyers to have a more natural purchasing path by creating a conversational experience that leverages the catalog data and search algo. Answers and steps are provided based on buyers’ questions and the stage of the search. As a result, we saw that nearly one-third of the buyers who received seller recommendations from Neo ended up sending a project brief to the seller, and the overall order conversion is nearly 3 times that of the marketplace average. This really gives us confidence and excitement in the potential we could unlock by investing in AI matching technology.

Fiverr’s product innovation pace had picked up in recent years; the latest set of product innovations will be focused on deepening trust and leveraging AI

Our product innovation pace picked up even more in recent years as the scale of our marketplace significantly expanded. This includes monetization products, such as Promoted Gigs and Seller Plus; AI innovations such as Logo Maker, AI Audition, to the latest ground-breaking Fiverr Neo; Business Solutions offerings, such as Project Partner and Fiverr Certified; and numerous products and features such as Fiverr Discover, Milestones and Subscriptions that empower our community to work better and smarter. We are always leading the curve of innovation that powers growth not only for us, but for the industry.

As our teams work towards our July product release, we are focusing on deepening trust and leveraging AI to reimagine every aspect of the customer journey. This includes improving our catalog and building new experiences to enable high-stakes, high-trust work to happen on Fiverr. We are strengthening our muscle in knowing our customers better in order to provide them with better matching, better recommendations and better customer care, all of which leads to more trust for Fiverr as a platform. We are already seeing some of the benefits in unlocking wallet share and driving a mix shift towards complex services on Fiverr, and we are going to see more impact down the road.

All the work that Fiverr facilitates happens on Fiverr, so management believes that the company has a lot of data (for perspective, in 2023, 38 million files were exchanged on Fiverr, and 2.5 million messages were sent daily between buyers and sellers) to leverage with generative AI to take the matching experience for buyers and sellers to a new level

Second, data and AI matching. Fiverr is unique in the sense that we are not just a platform that connects businesses with freelancers, the entire work actually happens on Fiverr. And that is really the secret sauce that enables us to do matching in such a simple, accurate and seamless way. With Generative AI, there’s incredible potential to take that experience to a whole new level. Just to give you some idea of the scale we operate. In 2023, over 38 million files were exchanged on our platform, and on average, 2.5 million messages were sent between buyers and sellers on a daily basis. We are experimenting with GenAI technology on how to unlock the potential of that massive data on Fiverr in order to enable buyers and sellers to have more information, search and browse in new ways, ask more complex questions, and ultimately, make better, more informed choices on Fiverr.

Fiverr’s management is seeing the presence of AI having a negative impact on the simple, low-value services on the company’s marketplace, but AI is overall a net-positive for Fiverr; management gave an example of how only simple language translation services are being impacted by AI, but the complex translation services are not

We mentioned in the previous earnings call that the negative impact we’re seeing from AI is mostly around the very simple types of services. Those are normally services that would sell for $10, $15, and the majority of our contribution is coming from more complex services anyway. And as I said, we continue to see AI as a net positive. So it’s contributing more than the offsetting factors of simple products. It happens across several categories in several verticals, but there’s nothing specific to call out, even if you look at the areas that you might think AI would influence significantly, like translation. What you’re seeing is actually that the very simple services around translation are being affected; the more complex types of services are not. I mean, if you would publish a book and then want to translate it into a language you don’t command, I doubt you would let AI translate it and publish the outcome without actually verifying it.

Fiverr’s management is sure that many experts use AI as part of their workflow, but they do not rely on the AI blindly

I’m sure many experts actually use AI tools in their process of work, but they don’t rely on blindly letting AI run the work for them, but it is more of the modern tech that they use in order to amplify their creative process.

Mastercard (NYSE: MA)

Scam Protect is a new service launched by Mastercard’s management to protect users against cybercrime; Scam Protect combines Mastercard’s identity biometric AI and open banking capabilities

Cybercrime is a growing concern; last year alone, people in the United States lost over $12 billion to Internet scams. Scam Protect builds on the cybersecurity protections we have delivered for years, combining our identity, biometric, AI and open banking capabilities to identify and prevent scams before they occur.

Mastercard is partnering with Verizon to design new AI tools to identify and block scammers

By combining Mastercard’s Identity Insights with Verizon’s robust network technologies, new AI-powered tools will be designed to more accurately identify and block scammers.

Mastercard’s management has continued to enhance the company’s solutions with generative AI; Decision Intelligence Pro is a real-time transaction fraud solution for banks that is powered by generative AI to improve scoring and fraud detection by 20%; management sees tremendous opportunity with generative AI and has created a central role for AI

We continue to enhance our solutions with generative AI to deliver even more value, a world-leading real-time fraud solution, Decision Intelligence, has been helping banks score and safely approve billions of transactions, ensuring the safety of consumers and the entire payments networks for years. The next-generation technology, Decision Intelligence Pro is supercharged by generative AI to improve the overall score and boost fraud detection rates on average by 20%…

…We see tremendous opportunity on the AI side, particularly on the generative AI side, and we’ve created a central role for that. 

Meta Platforms (NASDAQ: META)

Meta is building a number of different AI services, including Meta AI (an AI assistant), creator AIs, business AIs, internal coding and development AIs, and hardware for AI interactions

We are building a number of different AI services from Meta AI, our AI assistant that you can ask any question across our apps and glasses, to creator AIs that help creators engage their communities and that fans can interact with, to business AIs that we think every business eventually on our platform will use to help customers buy things and get customer support, to internal coding and development AIs, to hardware like glasses for people to interact with AIs and a lot more.

Meta’s management released the company’s new version of Meta AI recently and it is powered by the company’s latest foundational model, Llama 3; management’s goal is for Meta AI to be the world’s leading AI service; tens of millions of people have tried Meta AI and the user feedback has been very positive; Meta AI is currently in English-speaking countries, but will be rolled out in more languages and countries in the coming months; management believes that the Llama3 version of Meta AI is the most intelligent AI assistant; Meta AI can be used within all of Meta’s major apps; besides being able to answer queries, Meta AI can also create animations as well as generate images while users are typing, which is a magical experience; Meta AI can also be used in Search within Meta’s apps, and Feed and Groups on Facebook

Last week, we had the major release of our new version of Meta AI that is now powered by our latest model, Llama 3. And our goal with Meta AI is to build the world’s leading AI service, both in quality and usage. The initial rollout of Meta AI is going well. Tens of millions of people have already tried it. The feedback is very positive. And when I first checked in with our teams, the majority of feedback we were getting was people asking us to release Meta AI for them wherever they are. So we’ve started launching Meta AI in some English speaking countries, and we’ll roll out in more languages and countries over the coming months…

…We believe that Meta AI with Llama 3 is now the most intelligent AI assistant that you can freely use. And now that we have the superior quality product, we’re making it easier for lots of people to use it within WhatsApp, Messenger, Instagram, and Facebook…

…In addition to answering more complex queries, a few other notable and unique features from this release: Meta AI now creates animations from still images, and now generates high quality images so fast that it can create and update them as you’re typing, which is pretty awesome. I’ve seen a lot of people commenting about this experience online and how they’ve never seen or experienced anything like it before…

…Along with using Meta AI within our chat surfaces, people will now be able to use Meta AI in Search within our apps, as well as Feed and Groups on Facebook. We expect these integrations will complement our social discovery strategy as our recommendation systems help people to discover and explore their interests while Meta AI enables them to dive deeper on topics they’re interested in. 

Meta’s foundational AI model, Llama3, has three versions with different number of parameters; management thinks the two smaller versions are both best-in-class for their scale; the 400+ billion parameter version of Llama3 is still undergoing training and is on track to be industry-leading; management thinks the Llama3 models will improve from further open source contributions

I’m very pleased with how Llama 3 has come together so far. The 8B and 70B parameter models that we released are best-in-class for their scale. The 400+B parameter model that we’re still training seems on track to be industry-leading on several benchmarks. And I expect that our models are just going to improve further from open source contributions. 

Meta’s management wants the company to invest significantly more in the coming years to build more advanced AI models and the largest scale AI services in the world, but the AI investments will come ahead of any meaningful revenue-generation from these new AI products

This leads me to believe that we should invest significantly more over the coming years to build even more advanced models and the largest scale AI services in the world. As we’re scaling capex and energy expenses for AI, we’ll continue focusing on operating the rest of our company efficiently. But realistically, even with shifting many of our existing resources to focus on AI, we’ll still grow our investment envelope meaningfully before we make much revenue from some of these new products…

…We anticipate our full-year 2024 capital expenditures will be in the range of $35-40 billion, increased from our prior range of $30-37 billion as we continue to accelerate our infrastructure investments to support our AI roadmap. While we are not providing guidance for years beyond 2024, we expect capex will continue to increase next year as we invest aggressively to support our ambitious AI research and product development efforts.

Meta’s management thinks there are a few ways to build a massive AI business for Meta – these include business messaging, introducing ads and paid content in AI interactions, and selling access to powerful AI models and AI compute – in addition to the benefits to Meta’s current digital advertising business through the use of AI; management thinks business messaging is one of Meta’s nearer-term opportunities; management’s long-term vision for business messaging is to have AI agents that can accomplish goals rather than merely be a chatbot that replies to messages; management thinks that the capabilities of Meta’s business messaging AI technology will see massive improvements in as short as a year’s time

There are several ways to build a massive business here, including scaling business messaging, introducing ads or paid content into AI interactions, and enabling people to pay to use bigger AI models and access more compute. And on top of those, AI is already helping us improve app engagement which naturally leads to seeing more ads, and improving ads directly to deliver more value…

… The cost of engaging with people in messaging is still very high. But AI should bring that down dramatically for businesses and creators, and I think that has real potential. Beyond just increasing engagement and increasing the quality of the ads, that’s probably one of the nearer-term opportunities; it’s not something that scales next quarter or the quarter after, but it’s not a 5-year opportunity either…

…I think that the next phase for a lot of these things are handling more complex tasks and becoming more like agents rather than just chat bots, right? So when I say chatbot, what I mean is if you send a message and it replies to your message, right? So it’s almost like almost a 1:1 correspondence. Whereas what an agent is going to do is you give it an intent or a goal, then it goes off and probably actually performs many queries on its own in the background in order to help accomplish your goal, whether that goal is researching something online or eventually finding the right thing that you’re looking to buy…  I think basically, the larger models and then the more advanced future versions that will be smaller as well are just going to enable much more interesting interactions like that. So I mean if you think about this, I mean, even some of the business use cases that we talked about, you don’t really just want like sales or customer support chatbot that can just respond to what you say. If you’re a business, you have a goal, right? You’re trying to support your customers well and you’re trying to position your products in a certain way and encourage people to buy certain things that map to their interests and would they be interested in? And that’s more of like a multiturn interaction, right?

So the type of business agent that you’re going to be able to enable with just a chatbot is going to be very naive compared to what we’re going to have in a year even, but beyond that, too, is just the reasoning and planning abilities if these things grow to be able to just help guide people through the business process of engaging with whatever your goals are as a creator of a business. So I think that that’s going to be extremely powerful. 

Meta’s AI recommendation system is currently delivering 30% of posts on the Facebook feed (up 2x over the last few years) and more than 50% of the content people see on Instagram (the first time this threshold is reached)

Right now, about 30% of the posts on Facebook feed are delivered by our AI recommendation system. That’s up 2x over the last couple of years. And for the first time ever, more than 50% of the content people see on Instagram is now AI recommended.

Revenue from two of Meta’s end-to-end AI-powered advertising tools, Advantage+ Shopping and Advantage+ App Campaigns, have more than doubled since last year; test results for the single-step automation feature of Advantage+ has resulted in a 28% decrease in cost per click or per objective for advertisers; Meta has significant runway to broaden adoption of the end-to-end automation features of Advantage+ and the company has enabled more conversion types

If you look at our two end-to-end AI-powered tools, Advantage+ Shopping and Advantage+ App Campaigns, revenue flowing through those has more than doubled since last year…

…So on the single-step automation, Advantage Plus audience, for example, has seen significant growth in adoption since we made it the default audience creation experience for most advertisers in Q4, and that enables advertisers to increase campaign performance by just using audience inputs as a suggestion rather than a hard constraint. And based on tests that we ran, campaigns using Advantage Plus audience targeting saw on average, a 28% decrease in cost per click or per objective compared to using our regular targeting.

On the end-to-end automation products like Advantage Plus shopping and Advantage Plus app campaigns, we’re also seeing very strong growth…  We think there’s still significant runway to broaden adoption, so we’re trying to enable more conversion types for Advantage Plus shopping. In Q1, we began expanding the list of conversions that businesses could optimize for. So previously, it only supported purchase events, and now we’ve added 10 additional conversion types. And we’re continuing to see strong adoption now across verticals.

Meta’s management continues to develop Meta’s own AI chips; Meta’s Training and Inference Accelerator chip is less expensive for Meta and has already been running some of Meta’s recommendation workloads

We’ll also keep making progress on building more of our own silicon. Our Meta Training and Inference Accelerator chip has successfully enabled us to run some of our recommendations-related workloads on this less expensive stack, and as this program matures over the coming years we plan to expand this to more of our workloads as well.

Meta’s management sees a market for a fashionable pair of AI glasses without holographic displays; management thinks that glasses are the ideal device for an AI assistant because the glasses can see what you see and hear what you hear; management recently launched Meta AI with Vision on its AI glasses; Meta’s AI glasses continue to do well and are sold out in many styles and colours

I used to think that AR glasses wouldn’t really be a mainstream product until we had full holographic displays — and I still think that will be awesome and is the mature state of the product. But now it seems pretty clear that there’s also a meaningful market for fashionable AI glasses without a display. Glasses are the ideal device for an AI assistant because you can let them see what you see and hear what you hear, so they have full context on what’s going on around you as they help you with whatever you’re trying to do. Our launch this week of Meta AI with Vision on the glasses is a good example where you can now ask questions about things you’re looking at…

…The Ray-Ban Meta glasses that we built with Essilor Luxottica continue to do well and are sold out in many styles and colors, so we’re working to make more and release additional styles as quickly as we can.

Meta’s management is improving the monetisation efficiency of the company’s products partly by using larger AI models in its new ads ranking architecture, Meta Lattice (which was rolled out last year) in place of smaller models, as well as using AI to provide more automation – ranging from point-automation to end-to-end automation – for advertisers through its Advantage+ portfolio; Meta Lattice drove improved ad performance over the course of 2023 when it was deployed across Facebook and Instagram

The second part of improving monetization efficiency is enhancing marketing performance. Similar to our work with organic recommendations, AI is playing an increasing role in these efforts. First, we are making ongoing ads modeling improvements that are delivering better performance for advertisers. One example is our new ads ranking architecture, Meta Lattice, which we began rolling out more broadly last year. This new architecture allows us to run significantly larger models that generalize learnings across objectives and surfaces in place of numerous, smaller ads models that have historically been optimized for individual objectives and surfaces. This is not only leading to increased efficiency as we operate fewer models, but also improving ad performance. Another way we’re leveraging AI is to provide increased automation for advertisers. Through our Advantage+ portfolio, advertisers can automate one step of the campaign set up process – such as selecting which ad creative to show – or automate their campaign completely using our end-to-end automation tools, Advantage+ Shopping and Advantage+ App ads. We’re seeing growing use of these solutions, and we expect to drive further adoption over the course of the year while applying what we learn to our broader ads investments…

…We’ve talked a little bit about the new model architecture at Meta Lattice that we deployed last year that consolidates smaller and more specialized models into larger models that can better learn what characteristics improve ad performance across multiple services, like Feed and Reels and multiple types of ads and objectives at the same time. And that’s driven improved ad performance over the course of 2023 as we deployed it across Facebook and Instagram to support multiple objectives.

Meta’s recommendation products historically each had their own AI models, and a new model architecture to power multiple recommendation products was being developed recently; the new model architecture was tested last year on Facebook Reels and generated 8%-10% increases in watch time; the new model architecture has been extended beyond Reels and management is hopeful that the new architecture will unlock better video recommendations over time

Historically, each of our recommendation products, including Reels, in-feed recommendations, et cetera, has had their own AI model. And recently, we’ve been developing a new model architecture with the aim for it to power multiple recommendations products. We started partially validating this model last year by using it to power Facebook Reels. And we saw meaningful performance gains, 8% to 10% increases in watch time as a result of deploying this. This year, we’re actually planning to extend the singular model architecture to recommend content across not just Facebook Reels, but also Facebook’s video tab as well. So while it’s still too early to share specific results, we’re optimistic that the new model architecture will unlock increasingly relevant video recommendations over time. And if it’s successful, we’ll explore using it to power other recommendations.
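The shift described above, from one model per product to a single architecture powering several surfaces, can be illustrated with a shared representation feeding per-surface scoring heads. The code below is a toy stand-in with made-up weights, not Meta’s actual system:

```python
# Toy sketch of a shared-backbone recommendation architecture: one learned
# representation is reused by every surface, with lightweight per-surface
# scoring heads on top. Real systems use large neural networks; the feature
# names and weights here are invented for illustration.
def shared_features(item: dict) -> list[float]:
    # One representation computed once and reused by every product.
    return [item["watch_rate"], item["like_rate"]]

HEADS = {
    # Per-surface weights over the shared features (hypothetical values).
    "reels":     [0.8, 0.2],
    "video_tab": [0.5, 0.5],
}

def score(item: dict, surface: str) -> float:
    feats = shared_features(item)
    return sum(f * w for f, w in zip(feats, HEADS[surface]))

item = {"watch_rate": 0.6, "like_rate": 0.1}
print(score(item, "reels"))      # same shared features, Reels-specific score
print(score(item, "video_tab"))  # same shared features, video-tab score
```

The design upside this sketch illustrates is that improvements to the shared representation benefit every surface at once, which is consistent with the watch-time gains described when the singular architecture was extended beyond Reels.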

Meta’s management is seeing adoption of Meta’s generative AI (GenAI) ad creative features across verticals and different advertiser sizes; some of these features are enjoying outsized adoption; Meta expects improvements to its underlying foundational AI models to improve the output quality of its GenAI ad creative features

The more near-term version is around the GenAI ad creative features that we have put into our ads creation tools. And it’s early, but we’re seeing adoption of these features across verticals and different advertiser sizes. In particular, we’ve seen outsized adoption of image expansion with small businesses, and this will remain a big area of focus for us in 2024, and I expect that improvements to our underlying foundation models will enhance the quality of the outputs that are generated and support new features on the road map. But right now, we have features supporting text variations, image expansion and background generation, and we’re continuing to work to make those more performant for advertisers to create more personalized ads at scale.

In early tests of using business AIs for business messaging, Meta’s management is receiving positive feedback from users

The longer-term piece here is around business AIs. We have been testing the ability for businesses to set up AIs for business messaging that represent them in chats with customers, starting by supporting shopping use cases such as responding to people asking for more information on a product or its availability. So this is very, very early. We’ve been testing this with a handful of businesses on Messenger and WhatsApp, and we’re hearing good feedback, with businesses saying that the AIs have saved them significant time while consumers noted more timely response times. And we’re also learning a lot from these tests to make these AIs more performant over time as well.

Meta’s management has gotten more optimistic and ambitious on AI compared to just 3 months ago because of the company’s work with Llama3 and Meta AI

[Question] Can you just talk about what’s changed most in your view in the business and the opportunity now versus 3 months ago? 

[Answer] I think we’ve gotten more optimistic and ambitious on AI. So previously, I think that our work in this — I mean when you were looking at last year, when we released Llama 2, we were very excited about the model and thought that that was going to be the basis to be able to build a number of things that were valuable that integrated into our social products. But now I think we’re in a pretty different place. So with the latest models, we’re not just building good AI models that are going to be capable of building some new good social and commerce products. I actually think we’re in a place where we’ve shown that we can build leading models and be the leading AI company in the world. And that opens up a lot of additional opportunities beyond just ones that are the most obvious ones for us. So this is what I was trying to refer to in my opening remarks, where I just view the success that we’ve seen with the way that Llama 3 and Meta AI have come together as a real validation technically that we have the talent, the data and the ability to scale infrastructure to do leading work here.

Meta’s AI capex can be categorised into 2 buckets, with one being core AI work that has a very ROI-driven (return on investment driven) approach and which still generates very strong returns, and the other being generative AI and other advanced research work that has tremendous potential but has yet to produce returns; Meta’s AI capex for the 2 buckets are in capacity that is fungible

We’ve broadly categorized our AI investments into 2 buckets. I think of them as sort of core AI work and then strategic bets, which would include Gen AI and the advanced research efforts to support that. And those are just really at different stages as it relates to being able to measure the return and drive revenue for our business.

So with our core AI work, we continue to have a very ROI-driven approach to investment, and we’re still seeing strong returns as improvements to both engagement and ad performance have translated into revenue gains.

Now the second area, strategic bets, is where we are much earlier. Mark has talked about the potential that we believe we have to create significant value for our business in a number of areas, including opportunities to build businesses that don’t exist for us today. But we’ll need to invest ahead of that opportunity to develop more advanced models and to grow the usage of our products before they drive meaningful revenue. So while there is tremendous long-term potential, we’re just much earlier on the return curve than with our core AI work.

What I’ll say though is we’re also building our systems in a way that gives us fungibility in how we use our capacity, so we can flex it across different use cases as we identify what are the best opportunities to put that infrastructure toward.

Meta is already shifting a lot of resources from other parts of the company into its AI efforts

I would say broadly, we actually are doing that in a lot of places in terms of shifting resources from other areas, whether it’s compute resources or different things in order to advance the AI efforts. 

Meta has partnered with Google and Bing for Meta AI’s search citations, but management has no intention to build a search ads business

[Question] You partnered with Google and Bing for Meta AI organic search citations. So I guess stepping back, do you think that Meta AI longer term could bring in search advertising dollars at some point?

[Answer] On the Google and Microsoft partnerships, yes, I mean we work with them to have real-time information in Meta AI. It’s useful. I think it’s pretty different from search. We’re not working on search ads or anything like that. I think this will end up being a pretty different business.

Microsoft (NASDAQ: MSFT)

Azure took market share again in 2024 Q1; Microsoft’s management thinks that (1) Azure offers the most diverse selection of AI accelerators, including those from Nvidia, AMD, and Microsoft’s own custom chips, (2) Azure offers the best selection of foundational AI models, including LLMs and SLMs (small language models), and (3) Azure’s Models as a Service offering makes it easy for developers to work with LLMs and SLMs without having to worry about technical infrastructure; >65% of Fortune 500 use Azure OpenAI service; hundreds of paid customers are using Azure’s Models as a Service to access third-party AI models including those from Cohere, Meta, and Mistral; Azure grew revenue by 31% in 2024 Q1 (was 30% in 2023 Q4), with 7 points of growth from AI services (was 6 points in 2023 Q4); Azure’s non-AI consumption business also saw broad greater-than-expected demand 

Azure again took share as customers use our platforms and tools to build their own AI solutions. We offer the most diverse selection of AI accelerators, including the latest from NVIDIA, AMD as well as our own first-party silicon…

…More than 65% of the Fortune 500 now use Azure OpenAI service. We also continue to innovate and partner broadly to bring customers the best selection of frontier models and open source models, LLMs and SLMs…

…Our Models as a Service offering makes it easy for developers to use LLMs and SLMs without having to manage any underlying infrastructure. Hundreds of paid customers from Accenture and EY to Schneider Electric are using it to take advantage of API access to third-party models including, as of this quarter, the latest from Cohere, Meta and Mistral…

… Azure and other cloud services revenue grew 31%, ahead of expectations, while our AI services contributed 7 points of growth as expected. In the non-AI portion of our consumption business, we saw greater-than-expected demand broadly across industries and customer segments as well as some benefit from a greater-than-expected mix of contracts with higher in-period recognition. 

Microsoft’s management continues to build on the company’s partnership with OpenAI for AI work

Our AI innovation continues to build on our strategic partnership with OpenAI. 

Microsoft’s management thinks that Phi-3, announced by Microsoft recently, is the most capable and cost-effective SLM and it’s being trialed by a number of companies

With Phi-3, which we announced earlier this week, we offer the most capable and cost-effective SLM available. It’s already being trialed by companies like CallMiner, LTIMindtree, PwC and TCS.

Azure AI customers are growing and spending more with Microsoft; over half of Azure AI customers use Microsoft’s data and analytics tools and they are building applications with deep integration between these tools and Azure AI

All up, the number of Azure AI customers continues to grow and average spend continues to increase…

… Over half of our Azure AI customers also use our data and analytics tools. Customers are building intelligent applications running on Azure, PostgreSQL and Cosmos DB with deep integrations with Azure AI. TomTom is a great example. They’ve used Cosmos DB along with Azure OpenAI service to build their own immersive in-car infotainment system. 

GitHub Copilot now has 1.8 million paid subscribers, up 35% sequentially; even established enterprises are using GitHub Copilot; >90% of Fortune 100 companies are GitHub customers; GitHub’s revenue was up 45% year-on-year

GitHub Copilot is bending the productivity curve for developers. We now have 1.8 million paid subscribers, with growth accelerating to over 35% quarter-over-quarter, and we continue to see increased adoption from businesses in every industry, including Itau, Lufthansa Systems, Nokia, Pinterest and Volvo Cars. Copilot is driving growth across the broader GitHub platform, too. AT&T, Citigroup and Honeywell all increased their overall GitHub usage after seeing productivity and code quality increases with Copilot. All up, more than 90% of the Fortune 100 are now GitHub customers, and revenue accelerated over 45% year-over-year.

Microsoft has new AI-powered features within its low-code and no-code tools for building applications; 30,000 organisations – up 175% sequentially – across all industries have used Copilot Studio to customise or build their own copilot; Cineplex used Copilot Studio to build a copilot for customer service agents to significantly reduce the time needed to handle queries; Copilot Studio can be really useful for enterprises to ground their AIs with enterprise data, and people are really excited about it

Anyone can be a developer with new AI-powered features across our low-code, no-code tools, which make it easier to build an app, automate a workflow or create a Copilot using natural language. 30,000 organizations across every industry have used Copilot Studio to customize Copilot for Microsoft 365 or build their own, up 175% quarter-over-quarter. Cineplex, for example, built a Copilot for customer service agents, reducing query handling time from as much as 15 minutes to 30 seconds…

…Copilot Studio is really off to the races in terms of the product that most people are excited about, because one of the things in the enterprise is you want to ground your copilot with the enterprise data, which is in all of these SaaS applications, and Copilot Studio is the tool to use there to make that happen.

More than 330,000 organisations, including over half of the Fortune 100, have used AI features within Microsoft’s Power Platform

All up, over 330,000 organizations, including over half of Fortune 100, have used AI-powered capabilities in Power Platform, and Power Apps now has over 25 million monthly active users, up over 40% year-over-year.

In 2024 Q1, Microsoft’s management made Copilot available to all organisations; nearly 60% of Fortune 500 are using Copilot; many large companies have purchased more than 10,000 Copilot seats each; management is seeing higher usage of Copilot from early adopters, including a nearly 50% jump in Copilot-assisted interactions per user in Teams; Microsoft has added more than 150 Copilot capabilities since the start of the year, including Copilot for Service, Copilot for Sales, Copilot for Finance, and Copilot for Security

This quarter, we made Copilot available to organizations of all types and sizes from enterprises to small businesses. Nearly 60% of the Fortune 500 now use Copilot, and we have seen accelerated adoption across industries and geographies with companies like Amgen, BP, Cognizant, Koch Industries, Moody’s, Novo Nordisk, NVIDIA and Tech Mahindra purchasing over 10,000 seats. We’re also seeing increased usage intensity from early adopters, including a nearly 50% increase in the number of Copilot-assisted interactions per user in Teams, bridging group activity with business process workflows and enterprise knowledge…

…We’re accelerating our innovation, adding over 150 Copilot capabilities since the start of the year…

… This quarter, we made our Copilot for Service and Copilot for Sales broadly available, helping customer service agents and sellers at companies like Land O’Lakes, Northern Trust, Rockwell Automation and Toyota Group generate role-specific insights and recommendations from across Dynamics 365 and Microsoft 365 as well as third-party platforms like Salesforce, ServiceNow and Zendesk. And with our Copilot for Finance, we are drawing context from Dynamics as well as ERP systems like SAP to reduce labor-intensive processes like collections and contract and invoice capture for companies like Dentsu and IDC…

…A great example is Copilot for Security, which we made generally available earlier this month, bringing together LLMs with domain-specific skills informed by our threat intelligence and 78 trillion daily security signals to provide security teams with actionable insights.

Microsoft’s management is seeing ISVs (independent software vendors) build their own Copilot integrations, with Adobe being an example

ISVs are also building their own Copilot integrations. For example, new integrations between Adobe Experience Cloud and Copilot will help marketeers access campaign insights in the flow of their work. 

Copilot in Windows is now available on 225 million PCs, up 2x sequentially; Microsoft’s largest PC partners have announced AI PCs in recent months; management recently introduced new Surface devices that come with NPUs (neural processing units) that can power on-device AI experiences; management thinks that the presence of Copilot can help Microsoft create a new device-category for AI

When it comes to devices, Copilot in Windows is now available on nearly 225 million Windows 10 and Windows 11 PCs, up 2x quarter-over-quarter. With Copilot, we have an opportunity to create an entirely new category of devices purpose built for this new generation of AI. All of our largest OEM partners have announced AI PCs in recent months. And this quarter, we introduced new Surface devices, which include integrated NPUs to power on-device AI experiences like auto framing and live captions. And there’s much more to come. In just a few weeks, we’ll hold a special event to talk about our AI vision across Windows and devices.

More than 200 healthcare organisations are using Microsoft’s DAX Copilot

In health care, DAX Copilot is being used by more than 200 health care organizations, including Providence, Stanford Health Care and WellSpan Health. 

Established auto manufacturers are using Microsoft’s AI solutions to improve their factory operations

And in manufacturing, this week at Hannover Messe, customers like BMW, Siemens and Volvo Penta shared how they’re using our cloud and AI solutions to transform factory operations.

LinkedIn AI-assisted messages have a 40% higher acceptance rate and are accepted >10% faster by job seekers; LinkedIn’s AI-powered collaborative articles now have more than 12 million contributions and helped engagement on LinkedIn reach a new record in 2024 Q1; LinkedIn Premium’s revenue was up 29% year-on-year in 2024 Q1, with AI features helping to produce the growth

Features like LinkedIn AI-assisted messages are seeing a 40% higher acceptance rate and are accepted over 10% faster by job seekers, saving hirers time and making it easier to connect them to candidates. Our AI-powered collaborative articles, which have reached over 12 million contributions, are helping increase engagement on the platform, which reached a new record this quarter. New AI features are also helping accelerate LinkedIn Premium growth, with revenue up 29% year-over-year. 

Microsoft’s management expects capex to increase materially sequentially in 2024 Q2 (FY2024 Q4) because of cloud and AI infrastructure investments; management sees near-term AI demand as being higher than available capacity; capex in FY2025 is expected to be higher than in FY2024, but this will be driven ultimately by the amount of AI inference demand; operating margin in FY2025 is expected to be down by only 1 point compared to FY2024

We expect capital expenditures to increase materially on a sequential basis driven by cloud and AI infrastructure investments. As a reminder, there can be normal quarterly spend variability in the timing of our cloud infrastructure build-outs and the timing of finance leases. We continue to bring capacity online as we scale our AI investments with growing demand. Currently, near-term AI demand is a bit higher than our available capacity…

…In FY ’25, that focus on execution should again lead to double-digit revenue and operating income growth. To scale to meet the growing demand signal for our cloud and AI products, we expect FY ’25 capital expenditures to be higher than FY ’24. These expenditures over the course of the next year are dependent on demand signals and adoption of our services, so we will manage that signal through the year. We will also continue to prioritize operating leverage. And therefore, we expect FY ’25 operating margins to be down only about 1 point year-over-year, even with our significant cloud and AI investments as well as a full year of impact from the Activision acquisition…

… Then, Amy referenced what we also do on the inference side, which is, one, we first innovate and build products. And of course, we have an infrastructure business that’s also dependent on a lot of ISVs building products that run on our infrastructure. And it’s all going to be demand driven. In other words, we track very closely what’s happening with inference demand, and that’s something that we will manage, as Amy said in her remarks, very, very closely.

Microsoft’s management expects Azure to grow revenue by 30%-31% in constant currency, similar to stronger-than-expected 2024 Q1 results, driven by AI

For Intelligent Cloud, we expect revenue to grow between 19% and 20% in constant currency or USD 28.4 billion to USD 28.7 billion. Revenue will continue to be driven by Azure, which, as a reminder, can have quarterly variability primarily from our per user business and in-period revenue recognition depending on the mix of contracts. In Azure, we expect Q4 revenue growth to be 30% to 31% in constant currency or similar to our stronger-than-expected Q3 results. Growth will be driven by our Azure consumption business and continued contribution from AI with some impact from the AI capacity availability noted earlier.

Management’s AI-related capital expenditure plans for Microsoft has two layers to it, namely, training and inference; for training, management wants Microsoft to have capacity to train large foundation models and stay a leader in that area; for inference, management is watching inference demand

[Question] It looks like Microsoft is on track to ramp CapEx over 50% year-on-year this year to over $50 billion. And there’s media speculation of more spending ahead with some reports talking about like $100 billion data center. So obviously, investments are coming well ahead of the revenue contribution, but what I was hoping for is that you could give us some color on how you as the management team try to quantify the potential opportunities that underlie these investments because they are getting very big. 

[Answer]  At a high level, the way we, as a management team, talk about it is there are 2 sides to this, right? There is training and there’s inference. What — given that we want to be a leader in this big generational shift and paradigm shift in technology, that’s on the training side. We want to be able to allocate the capital required to essentially be training these large foundation models and stay on the leadership position there. And we’ve done that successfully all the way today, and you’ve seen it flow through our P&L, and you can continue to see that going forward. Then, Amy referenced what we also do on the inference side, which is, one, we first innovate and build products. And of course, we have an infrastructure business that’s also dependent on a lot of ISVs building products that run on our infrastructure. And it’s all going to be demand driven. In other words, we track very closely what’s happening with inference demand, and that’s something that we will manage, as Amy said in her remarks, very, very closely.

Microsoft’s management feels good about demand for Azure, because (1) they think Azure is a market-share taker since it has become the go-to choice for anybody who is working on an AI project, (2) they are seeing that AI projects on Azure do not stop with just calling AI models and there are many other cloud computing services in Azure that are required, (3), there’s migration to Azure, and (4) the optimisation cycle from the recent past has given more budget for people to start new workloads

[Question] How would you characterize the demand environment? On one hand, you have bookings in Azure both accelerating year-over-year in the quarter, but we’re seeing a lot of future concern, hesitation from other vendors we all cover. So I think everyone would love to get your sense of budget health for customers this year.

[Answer] On the Azure side, which I think is what you specifically asked, we feel very, very good about the — we’re fundamentally a share taker there because if you look at it from our perspective, at this point, Azure has become a port of call for pretty much anybody who is doing an AI project. And so that’s sort of been a significant help for us in terms of acquiring even new customers…

…The second thing that we’re also seeing is AI just doesn’t sit on its own. So AI projects obviously start with calls to AI models, but they also use a vector database. In fact, Azure Search, which is really used by even ChatGPT, is one of the fastest growing services for us. We have Fabric integration with Azure AI, and Cosmos DB integration. So the data tier and the dev tools are another place where we are seeing great traction. So we are seeing adjacent services in Azure that get attached to AI…

… lastly, I would say, migration to Azure as well. So this is not just all an AI story. 

We are also looking at customers — I mean, this is something that we have talked about in the past, which is there’s always an optimization cycle. But there’s also — as people optimize, they spend money on new project starts, which will grow and then they’ll optimize. So it’s a continuous cycle. So these are the 3 trends that are playing out on Azure in terms of what at least we see on the demand side.

Microsoft’s management thinks that a good place to watch for the level of maturation for AI will be what’s happening in terms of standard issues for software teams; they are seeing Copilots increasingly becoming “standard issue” for software teams; they think companies will need to undergo a cultural shift to fully embrace AI tools and it will take some time, but the rate of adoption of Copilot is also faster than anything they have seen in the past

[Question] We’re seeing companies shifting their IT spending to invest in and learn about AI rather than receiving additional budgets for AI. At some point for AI to be transformative, as everyone expects, it needs to be accretive to spending. Satya, when do you believe AI will hit the maturity level?

[Answer] A good place to start is to watch what’s happening in terms of standard issues for software teams, right? I mean if you think about it, they bought tools in the past. Now you basically buy tools plus Copilot, right? So you could even say that this is characterized as perhaps shift of what is OpEx dollars into effectively tool spend because it gives operating leverage to all of the OpEx dollars you’re spending today, right? That’s really a good example of, I think, what’s going to happen across the board. We see that in customer service. We see that in sales. We see that in marketing, anywhere there’s operations…

…one of the interesting rate limiters is culture change inside of organizations. When I say culture change, that means process change…  That requires not just technology but in fact, companies to go do the hard work of culturally changing how they adopt technology to drive that operating leverage. And this is where we’re going to see firm-level performance differences…

…And so yes, it will take time to — for it to percolate through the economy. But this is faster diffusion, faster rate of adoption than anything we have seen in the past. As evidenced even by Copilot, right, it’s faster than any suite we have sold in the past.

Netflix (NASDAQ: NFLX)

Netflix has been working with machine learning (ML) for almost two decades, with ML being foundational for the company’s recommendation systems; management thinks that generative AI can be used to help creators improve their story-telling, and there will always be a place for creators

[Question]  What is the opportunity for Netflix to leverage generative AI technology in the near and long term? What do you think great storytellers should be focused on as this technology continues to emerge quickly? 

[Answer] Worth noting, I think, that we’ve been leveraging advanced technologies like ML for almost 2 decades. These technologies are the foundation for our recommendation systems that help us find the largest audiences for our titles and deliver the most satisfaction for members. So we’re excited to continue to evolve and improve those systems as new technologies emerge and are developed.

And we also think we’re well positioned to be in the vanguard of adoption and application of those new approaches, just from the general capabilities that we’ve developed and the systems we’ve already built that do all these things.

We also think that we have the opportunity to develop and deliver new tools to creators to allow them to tell their stories in even more compelling ways. That’s great for them, it’s great for the stories, and it’s great for our members. 

And what should storytellers be focused on? I think storytellers should be focused on great storytelling. It is incredibly hard and incredibly complex to deliver thrilling stories through film, through series, through games. And storytellers have a unique and critical role in making that happen, and we don’t see that changing.

Nvidia (NASDAQ: NVDA)

Nvidia’s Data Center revenue had incredibly strong growth in 2024 Q1, driven by demand for the Hopper GPU computing platform; compute revenue was up by 5x while networking revenue was up by 3x

Data Center revenue of $22.6 billion was a record, up 23% sequentially and up 427% year-on-year, driven by continued strong demand for the NVIDIA Hopper GPU computing platform. Compute revenue grew more than 5x and networking revenue more than 3x from last year.

Nvidia’s management thinks that cloud providers are getting a 5x return on spending on Nvidia’s AI products over 4 years; management also thinks that cloud providers serving LLMs (large language models) via APIs (application programming interfaces) can earn $7 in revenue for every $1 spent on Nvidia’s H200 servers through running inference 

Training and inferencing AI on NVIDIA CUDA is driving meaningful acceleration in cloud rental revenue growth, delivering an immediate and strong return on cloud providers’ investment. For every $1 spent on NVIDIA AI infrastructure, cloud providers have an opportunity to earn $5 in GPU instance hosting revenue over 4 years…

…H200 nearly doubles the inference performance of H100, delivering significant value for production deployments. For example, using Llama 3 with 70 billion parameters, a single NVIDIA HGX H200 server can deliver 24,000 tokens per second, supporting more than 2,400 users at the same time. That means for every $1 spent on NVIDIA HGX H200 servers at current prices per token, an API provider serving Llama 3 tokens can generate $7 in revenue over 4 years.
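The $7-per-$1 claim can be sanity-checked with back-of-envelope arithmetic. In the sketch below, only the 24,000 tokens-per-second throughput and the 4-year window come from the transcript; the server cost and per-million-token price are illustrative assumptions, not figures from the call.

```python
# Back-of-envelope check of the "$7 in revenue per $1 of H200 spend" claim.
# From the transcript: 24,000 tokens/sec per HGX H200 server, 4-year window.
# Assumed (illustrative only): server cost and API price per million tokens.

TOKENS_PER_SECOND = 24_000           # from the transcript (Llama 3 serving)
YEARS = 4                            # revenue window cited on the call
SERVER_COST_USD = 250_000            # assumption: all-in HGX H200 server cost
PRICE_PER_MILLION_TOKENS = 0.60      # assumption: price charged per 1M tokens

seconds = YEARS * 365 * 24 * 3600
total_tokens = TOKENS_PER_SECOND * seconds              # ~3.0 trillion tokens
revenue = total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS
ratio = revenue / SERVER_COST_USD

print(f"Tokens served over {YEARS} years: {total_tokens:.2e}")
print(f"Revenue: ${revenue:,.0f} -> ${ratio:.1f} per $1 of server spend")
```

With these assumed prices the ratio lands in the neighbourhood of the $7 figure; note the sketch assumes 100% utilisation, so hitting it in practice requires correspondingly higher token prices or cheaper hardware.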

Nvidia’s management sees Nvidia GPUs as offering the best time-to-train AI models, the lowest cost to train AI models, and the lowest cost to run inference on AI models

For cloud rental customers, NVIDIA GPUs offer the best time-to-train models, the lowest cost to train models and the lowest cost to inference large language models.

Leading LLM (large language model) providers are building on Nvidia’s AI infrastructure in the cloud

Leading LLM companies such as OpenAI, Adept, Anthropic, Character.ai, Cohere, Databricks, DeepMind, Meta, Mistral, xAI, and many others are building on NVIDIA AI in the cloud.

Tesla is using Nvidia’s GPUs for its FSD (Full Self Driving) version 12 software for AI-powered autonomous driving; Nvidia’s management sees automotive as the largest enterprise vertical within its Data Center business this year

We supported Tesla’s expansion of their training AI cluster to 35,000 H100 GPUs. Their use of NVIDIA AI infrastructure paved the way for the breakthrough performance of FSD version 12, their latest autonomous driving software based on vision. Video Transformers, while consuming significantly more computing, are enabling dramatically better autonomous driving capabilities and propelling significant growth for NVIDIA AI infrastructure across the automotive industry. We expect automotive to be our largest enterprise vertical within Data Center this year, driving a multibillion revenue opportunity across on-prem and cloud consumption.

Meta Platform’s Llama3 LLM was trained on a large cluster of Nvidia GPUs

A big highlight this quarter was Meta’s announcement of Llama 3, their latest large language model, which was trained on a cluster of 24,000 H100 GPUs. Llama 3 powers Meta AI, a new AI assistant available on Facebook, Instagram, WhatsApp, and Messenger. Llama 3 is openly available and has kickstarted a wave of AI development across industries.

Nvidia’s management sees inferencing of AI models growing as generative AI makes its way into more consumer internet applications

As generative AI makes its way into more consumer Internet applications, we expect to see continued growth opportunities as inference scales both with model complexity as well as with the number of users and number of queries per user, driving much more demand for AI compute.
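The scaling management describes, with inference demand growing with model complexity, user count, and queries per user, can be sketched as a simple product. All of the inputs below are hypothetical placeholders, not figures from the call; the FLOPs-per-token line uses the standard rough estimate of ~2 × parameters for one forward pass.

```python
# Minimal sketch of how inference compute demand scales, per the transcript:
# demand grows with model complexity AND users AND queries per user.
# All inputs are hypothetical; flops-per-token uses the common ~2*params estimate.

def daily_inference_flops(params: float, users: float,
                          queries_per_user: float, tokens_per_query: float) -> float:
    flops_per_token = 2 * params                      # rough forward-pass estimate
    daily_tokens = users * queries_per_user * tokens_per_query
    return daily_tokens * flops_per_token

base = daily_inference_flops(params=70e9, users=1e6,
                             queries_per_user=10, tokens_per_query=500)
# Doubling model size, users, and queries per user multiplies demand by 8:
scaled = daily_inference_flops(params=140e9, users=2e6,
                               queries_per_user=20, tokens_per_query=500)
print(f"Base:   {base:.2e} FLOPs/day")
print(f"Scaled: {scaled:.2e} FLOPs/day ({scaled / base:.0f}x)")
```

The point of the sketch is that the three factors multiply: modest growth in each compounds into a much larger compute bill, which is the demand dynamic Nvidia is describing.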

Nvidia’s management sees inferencing accounting for 40% of Data Center revenue over the last 4 quarters

In our trailing 4 quarters, we estimate that inference drove about 40% of our Data Center revenue. Both training and inference are growing significantly.

Nvidia’s management is seeing companies build AI factories (large clusters of AI chips); Nvidia worked with more than 100 customers in 2024 Q1 to build AI factories that range in size from hundreds to tens of thousands of GPUs

Large clusters like the ones built by Meta and Tesla are examples of the essential infrastructure for AI production, what we refer to as AI factories. These next-generation data centers host advanced full-stack accelerated computing platforms where the data comes in and intelligence comes out.  In Q1, we worked with over 100 customers building AI factories ranging in size from hundreds to tens of thousands of GPUs, with some reaching 100,000 GPUs.

Nvidia’s management is seeing growing demand from nations for AI infrastructure and they see revenue from sovereign AI reaching high single-digit billions in 2024

From a geographic perspective, Data Center revenue continues to diversify as countries around the world invest in sovereign AI. Sovereign AI refers to a nation’s capabilities to produce artificial intelligence using its own infrastructure, data, workforce, and business networks. Nations are building up domestic computing capacity through various models. Some are procuring and operating sovereign AI clouds in collaboration with state-owned telecommunication providers or utilities. Others are sponsoring local cloud partners to provide a shared AI computing platform for public and private sector use. For example, Japan plans to invest more than $740 million in key digital infrastructure providers, including KDDI, Sakura Internet, and SoftBank to build out the nation’s sovereign AI infrastructure. France-based Scaleway, a subsidiary of the Iliad Group, is building Europe’s most powerful cloud native AI supercomputer. In Italy, Swisscom Group will build the nation’s first and most powerful NVIDIA DGX-powered supercomputer to develop the first LLM natively trained in the Italian language. And in Singapore, the National Supercomputer Centre is getting upgraded with NVIDIA Hopper GPUs, while Singtel is building NVIDIA’s accelerated AI factories across Southeast Asia…

…From nothing the previous year, we believe sovereign AI revenue can approach the high single-digit billions this year.

Nvidia’s revenue in China is down significantly in 2024 Q1 because of export restrictions for leading AI chips; management expects to see strong competitive forces in China going forward

We ramped new products designed specifically for China that don’t require export control license. Our Data Center revenue in China is down significantly from the level prior to the imposition of the new export control restrictions in October. We expect the market in China to remain very competitive going forward.

Because of improvements in CUDA algorithms, Nvidia’s management has been able to drive a 3x improvement in LLM inference speed on the H100 chips, which translates to a 3x cost reduction when serving AI models

Thanks to CUDA algorithm innovations, we’ve been able to accelerate LLM inference on H100 by up to 3x, which can translate to a 3x cost reduction for serving popular models like Llama 3.
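The speed-to-cost relationship in the quote is simple arithmetic: if the same hardware serves 3x the tokens per second, the cost per token falls by the same factor. A minimal sketch with hypothetical numbers (the hourly GPU rate and baseline throughput below are illustrative assumptions, not figures from the call):

```python
# Cost per million tokens falls in proportion to throughput gains,
# holding the hourly cost of the hardware fixed.
def cost_per_million_tokens(gpu_cost_per_hour: float, tokens_per_second: float) -> float:
    tokens_per_hour = tokens_per_second * 3600
    return gpu_cost_per_hour / tokens_per_hour * 1_000_000

baseline = cost_per_million_tokens(gpu_cost_per_hour=4.0, tokens_per_second=1000)   # hypothetical pre-optimisation figures
optimized = cost_per_million_tokens(gpu_cost_per_hour=4.0, tokens_per_second=3000)  # 3x software speedup

print(f"baseline:  ${baseline:.2f} per 1M tokens")
print(f"optimized: ${optimized:.2f} per 1M tokens")
print(f"cost reduction: {baseline / optimized:.1f}x")
```

The hardware is unchanged in both cases; the entire cost reduction comes from the software-driven throughput gain, which is the point management is making.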

Nvidia’s management sees the demand for the company’s latest AI chips to well exceed supply into 2025

We are working to bring up our system and cloud partners for global availability later this year. Demand for H200 and Blackwell is well ahead of supply, and we expect demand may exceed supply well into next year.

Nvidia’s strong networking growth in 2024 Q1 was driven by Infiniband

Strong networking year-on-year growth was driven by InfiniBand. We experienced a modest sequential decline, which was largely due to the timing of supply, with demand well ahead of what we were able to ship. We expect networking to return to sequential growth in Q2.

Nvidia’s management has started shipping its own Ethernet solution for AI networking called Spectrum-X Ethernet; management believes that Spectrum-X is optimised for AI from the ground up, and delivers 1.6x higher networking performance for AI workloads compared with traditional Ethernet; Spectrum-X is already ramping with multiple customers, including in a GPU cluster with 100,000 GPUs; Spectrum-X opens a new AI networking market for Nvidia and management thinks it can be a multi-billion dollar product within a year; management is going all-in on Ethernet for AI networking, but they still see Infiniband as the superior solution; Infiniband started as a computing fabric and became a network, whereas Ethernet was a network that is becoming a computing fabric

In the first quarter, we started shipping our new Spectrum-X Ethernet networking solution optimized for AI from the ground up. It includes our Spectrum-4 switch, BlueField-3 DPU, and new software technologies to overcome the challenges of AI on Ethernet to deliver 1.6x higher networking performance for AI processing compared with traditional Ethernet. Spectrum-X is ramping in volume with multiple customers, including a massive 100,000 GPU cluster. Spectrum-X opens a brand-new market to NVIDIA networking and enables Ethernet-only data centers to accommodate large-scale AI. We expect Spectrum-X to jump to a multibillion-dollar product line within a year…

…But we’re all in on Ethernet, and we have a really exciting road map coming for Ethernet. We have a rich ecosystem of partners. Dell announced that they’re taking Spectrum-X to market. We have a rich ecosystem of customers and partners who are going to announce taking our entire AI factory architecture to market.

And so for companies that want the ultimate performance, we have InfiniBand computing fabric. InfiniBand is a computing fabric, Ethernet is a network. And InfiniBand, over the years, started out as a computing fabric, became a better and better network. Ethernet is a network and with Spectrum-X, we’re going to make it a much better computing fabric. And we’re committed, fully committed, to all 3 links, NVLink computing fabric for single computing domain, to InfiniBand computing fabric, to Ethernet networking computing fabric. And so we’re going to take all 3 of them forward at a very fast clip.

Nvidia’s latest AI chip-platform, Blackwell, delivers 4x faster training speeds, 30x faster inference speeds, and 25x lower total cost of ownership, compared to the H100 chip and enables real-time generative AI on trillion-parameter LLMs; the Blackwell platform includes Nvidia’s Infiniband and Ethernet switches; management has built Blackwell to be compatible with all kinds of data centers; the earliest deployers of Blackwell include Amazon, Google, Meta, and Microsoft; Nvidia’s management is on a 1-year development rhythm with the Blackwell platform-family, so there will be a new version of Blackwell in the next 12 months

At GTC in March, we launched our next-generation AI factory platform, Blackwell. The Blackwell GPU architecture delivers up to 4x faster training and 30x faster inference than the H100 and enables real-time generative AI on trillion-parameter large language models. Blackwell is a giant leap with up to 25x lower TCO and energy consumption than Hopper. The Blackwell platform includes the fifth-generation NVLink with a multi-GPU spine and new InfiniBand and Ethernet switches, the X800 series designed for a trillion-parameter scale AI. Blackwell is designed to support data centers universally, from hyperscale to enterprise, training to inference, x86 to Grace CPUs, Ethernet to InfiniBand networking, and air cooling to liquid cooling. Blackwell will be available in over 100 OEM and ODM systems at launch, more than double the number of Hoppers launched and representing every major computer maker in the world… 

…Blackwell time-to-market customers include Amazon, Google, Meta, Microsoft, OpenAI, Oracle, Tesla, and xAI…

…I can announce that after Blackwell, there’s another chip. And we are on a 1-year rhythm.

Nvidia’s management has introduced AI software called Nvidia Inference Microservices that allow developers to quickly build and deploy generative AI applications across a broad range of use cases including text, speech, imaging, vision, robotics, genomics, and digital biology

We announced a new software product with the introduction of NVIDIA Inference Microservices, or NIM. NIM provides secure and performance-optimized containers powered by NVIDIA CUDA acceleration in network computing and inference software, including Triton Inference Server and TensorRT-LLM with industry-standard APIs for a broad range of use cases, including large language models for text, speech, imaging, vision, robotics, genomics, and digital biology. They enable developers to quickly build and deploy generative AI applications using leading models from NVIDIA, AI21, Adept, Cohere, Getty Images, and Shutterstock, and open models from Google, Hugging Face, Meta, Microsoft, Mistral AI, Snowflake and Stability AI. NIMs will be offered as part of our NVIDIA AI enterprise software platform for production deployment in the cloud or on-prem.

Nvidia’s GPUs that are meant for gaming on personal computers (PCs) can also be used for running generative AI applications on PCs; Nvidia and Microsoft have a partnership that helps Windows run LLMs up to 3x faster on PCs equipped with Nvidia’s GeForce RTX GPU

From the very start of our AI journey, we equipped GeForce RTX GPUs with CUDA Tensor cores. Now with over 100 million of an installed base, GeForce RTX GPUs are perfect for gamers, creators, AI enthusiasts, and offer unmatched performance for running generative AI applications on PCs. NVIDIA has full technology stack for deploying and running fast and efficient generative AI inference on GeForce RTX PCs…

…Yesterday, NVIDIA and Microsoft announced AI performance optimizations for Windows to help run LLMs up to 3x faster on NVIDIA GeForce RTX AI PCs.

Nvidia’s management is seeing game developers using the company’s AI services to create non-playable life-like characters in games

Top game developers, including NetEase Games, Tencent and Ubisoft are embracing NVIDIA Avatar Character Engine (sic) [ Avatar Cloud Engine ] to create lifelike avatars to transform interactions between gamers and non-playable characters.

Nvidia’s management thinks that the combination of generative AI and the Omniverse can drive the next wave of professional visualisation growth; the Omniverse has helped Wistron to reduce production cycle times by 50% and defect rates by 40%

We believe generative AI and Omniverse industrial digitalization will drive the next wave of professional visualization growth…

…Companies are using Omniverse to digitalize their workflows. Omniverse-powered digital twins enabled Wistron, one of our manufacturing partners, to reduce end-to-end production cycle times by 50% and defect rates by 40%.

Nvidia’s management sees generative AI driving a platform shift in the full computing stack

With generative AI, inference, which is now about fast token generation at massive scale, has become incredibly complex. Generative AI is driving a from-foundation-up full stack computing platform shift that will transform every computer interaction. From today’s information retrieval model, we are shifting to an answers and skills generation model of computing. AI will understand context and our intentions, be knowledgeable, reason, plan and perform tasks. We are fundamentally changing how computing works and what computers can do, from general purpose CPU to GPU accelerated computing, from instruction-driven software to intention-understanding models, from retrieving information to performing skills and, at the industrial level, from producing software to generating tokens, manufacturing digital intelligence.

Nvidia’s management sees token generation from LLMs driving multi-year build out of AI factories

Token generation will drive a multiyear build-out of AI factories…

… Large clusters like the ones built by Meta and Tesla are examples of the essential infrastructure for AI production, what we refer to as AI factories. These next-generation data centers host advanced full-stack accelerated computing platforms where the data comes in and intelligence comes out.

Nvidia’s management does not think that the demand they are seeing for the company’s AI chips is a pull-ahead of demand, because the chips are being consumed

[Question] How are you ensuring that there is enough utilization of your products and that there isn’t a pull-ahead or a holding behavior because of tight supply, competition or other factors? 

[Answer] The demand for GPUs in all the data centers is incredible. We’re racing every single day. And the reason for that is because applications like ChatGPT and GPT-4o, and now it’s going to be multi-modality, Gemini and its ramp and Anthropic, and all of the work that’s being done at all the CSPs are consuming every GPU that’s out there. There’s also a long line of generative AI startups, some 15,000, 20,000 startups that are in all different fields, from multimedia to digital characters, of course, all kinds of design tool application, productivity applications, digital biology, the moving of the AV industry to video so that they can train end-to-end models to expand the operating domain of self-driving cars, the list is just quite extraordinary. We’re racing actually. Customers are putting a lot of pressure on us to deliver the systems and stand those up as quickly as possible. And of course, I haven’t even mentioned all of the sovereign AIs who would like to train all of their regional natural resource of their country, which is their data, to train their regional models. And there’s a lot of pressure to stand those systems up. So anyhow, the demand, I think, is really, really high and it outstrips our supply.

Nvidia’s management thinks that AI is not merely a chips problem – it is a system problem

The third reason has to do with the fact that we build AI factories. And this is becoming more apparent to people that AI is not a chip problem only. It starts, of course, with very good chips and we build a whole bunch of chips for our AI factories, but it’s a systems problem. In fact, even AI is now a systems problem. It’s not just one large language model. It’s a complex system of a whole bunch of large language models that are working together. And so the fact that NVIDIA builds this system causes us to optimize all of our chips to work together as a system, to be able to have software that operates as a system, and to be able to optimize across the system.

Nvidia’s management sees the highest performing AI chip as having the lowest total cost of ownership (TCO)

Today, performance matters in everything. This is at a time when the highest performance is also the lowest cost because the infrastructure cost of carrying all of these chips cost a lot of money. And it takes a lot of money to fund the data center, to operate the data center, the people that goes along with it, the power that goes along with it, the real estate that goes along with it, and all of it adds up. And so the highest performance is also the lowest TCO.
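The TCO argument above can be made concrete with back-of-envelope arithmetic. The sketch below uses entirely hypothetical prices, power draws, and overheads (none of these figures come from the call); the point it illustrates is that fixed per-chip costs such as power, real estate, and staff are amortised over the work delivered, so a pricier but faster chip can still have the lowest cost per unit of work:

```python
# TCO per unit of compute work for a hypothetical AI data centre.
# All figures are illustrative assumptions, not Nvidia's numbers.
def cost_per_unit_of_work(chip_price, perf, power_kw, years=4,
                          electricity_per_kwh=0.10, overhead_per_chip_per_year=2000):
    """Total cost of ownership divided by total work delivered over the chip's life."""
    energy_cost = power_kw * 24 * 365 * years * electricity_per_kwh
    overhead = overhead_per_chip_per_year * years          # staff, real estate, cooling, etc.
    tco = chip_price + energy_cost + overhead
    total_work = perf * years                              # arbitrary work units per year
    return tco / total_work

cheap_slow = cost_per_unit_of_work(chip_price=10_000, perf=1.0, power_kw=0.4)
pricey_fast = cost_per_unit_of_work(chip_price=30_000, perf=4.0, power_kw=0.7)

print(f"cheaper, slower chip: {cheap_slow:,.0f} per work unit")
print(f"pricier, faster chip: {pricey_fast:,.0f} per work unit")
```

Under these assumed numbers, the chip that costs 3x more delivers work at roughly half the cost per unit, because its 4x performance spreads the fixed data-centre costs over much more output.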

From the point of view of Nvidia’s management, customers do not mind buying Nvidia’s AI chips today even though better ones are going to come out tomorrow because they are still very early in their build-out of their AI infrastructure, and they want to ship AI advancements fast

[Question]  I’ve never seen the velocity that you guys are introducing new platforms at the same combination of the performance jumps that you’re getting…  it’s an amazing thing to watch but it also creates an interesting juxtaposition where the current generation of product that your customers are spending billions of dollars on is going to be not as competitive with your new stuff very, very much more quickly than the depreciation cycle of that product. So I’d like you to, if you wouldn’t mind, speak a little bit about how you’re seeing that situation evolve itself with customers. 

[Answer]  If you’re 5% into the build-out versus if you’re 95% into the build-out, you’re going to feel very differently. And because you’re only 5% into the build-out anyhow, you build as fast as you can… there’s going to be a whole bunch of chips coming at them, and they just got to keep on building and just, if you will, performance-average your way into it. So that’s the smart thing to do. They need to make money today. They want to save money today. And time is really, really valuable to them. Let me give you an example of time being really valuable, why this idea of standing up a data center instantaneously is so valuable and getting this thing called time-to-train is so valuable. The reason for that is because the next company who reaches the next major plateau gets to announce a groundbreaking AI. And the second one after that gets to announce something that’s 0.3% better. And so the question is, do you want to be repeatedly the company delivering groundbreaking AI or the company delivering 0.3% better?

All of Nvidia’s AI-related hardware products runs on its CUDA software; management thinks that AI performance for Nvidia AI-hardware users can improve over time simply from improvements that the company will be making to CUDA in the future

And all of it — the beautiful thing is all of it runs CUDA. And all of it runs our entire software stack. So if you invest today on our software stack, without doing anything at all, it’s just going to get faster and faster and faster. And if you invest in our architecture today, without doing anything, it will go to more and more clouds and more and more data centers and everything just runs. 

Shopify (NASDAQ: SHOP)

Shopify Magic is Shopify’s suite of AI products and management’s focus is on providing AI tools for merchants to simplify business operations and enhance productivity

Touching briefly on AI. Our unique position enables us to tap into the immense potential of AI for entrepreneurship and our merchants. Currently, the most practical applications of AI are found in tools that simplify business operations and enhance productivity, areas in which we’ve been developing deeper capabilities through our AI product suite, Shopify Magic.

Shopify’s management is using AI tools for precision marketing, and drove a 130% increase in merchant ads within its primary marketing channel from 2023 Q4 to 2024 Q1 while still being within payback guardrails

Our goal is to always get the most out of every existing channel up to our guardrail limits and continuingly find and experiment with new channels. That is what we build our tools and our AI models to do, and we’re using them to create some incredibly compelling opportunities. Let me give you a very recent example. At the end of last year and early into January, we drove significant efficiency improvements in one of our primary channels in performance marketing, where teams have created and leveraged advanced models using AI and machine learning, which now allows us to target our audiences with unprecedented precision. Using these models and strategies, we drove nearly 130% increase in merchant ads within our primary marketing channel from Q4 to Q1, while still remaining squarely within our payback guardrails.

Shopify has produced good revenue growth despite its headcount remaining flat for 3 quarters; management thinks Shopify can keep headcount growth low while the business continues to grow; the use of AI internally is an important element of how Shopify can continue to drive growth while keeping headcount growth low; an example of an internal use-case of AI is merchant support, where Shopify has (1) seen more than half of support interactions being assisted, and often fully-resolved, by AI, (2) been able to provide 24/7 live support in 8 additional languages that previously were offered only for certain hours, (3) decreased the duration of support interactions, (4) reduced the reluctance of merchants to ask questions, and (5) reduced the amount of toil on support staff

We know our team is one of our most valuable assets. And given that it makes up over half of our cost base, we believe we’ve architected ourselves to be faster and more agile, which has enabled us to consistently deliver 25% revenue growth, excluding logistics, all while keeping our headcount flat for 3 straight quarters. More importantly, because of the structure and the automation we have worked to put in place, we think we can continue to operate against very limited headcount growth while achieving a continued combination of consistent top line growth and profitability…

…We continue to remain disciplined on headcount with total headcount remaining essentially flat for the past 3 quarters, all while maintaining and, in fact, accelerating our product innovation capabilities and continuing the top line momentum of our business. How we leverage AI internally is an important element of how we are able to do that…

During Q1, over half of our merchant support interactions were assisted with AI and often fully resolved with the help of AI. AI has enabled 24/7 live support in 8 additional languages that previously were offered only certain hours of the day. We have significantly enhanced the merchant experience. The average duration of support interactions has decreased. And the introduction of AI has helped reduce the reluctance that some merchants previously had towards asking questions that they might perceive as trivial or naive. Additionally, our support staff has experienced a significant reduction in the amount of toil that is part of their jobs. We are improving the merchant support process and achieving much greater efficiency than ever before.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s management confirmed that there are no major damages to the company’s fabs and major operations from the recent earthquake in Taiwan – the largest in the region in 25 years – so there are no major disruptions to the supply of AI chips

On April 3, an earthquake of 7.2 magnitude struck Taiwan, and the maximum magnitude at our fabs was 5. Safety systems and protocols at our fabs were initiated immediately and all TSMC personnel are safe. Based on TSMC’s deep experience and capabilities in earthquake response and damage prevention as well as regular disaster drills, the overall tool recovery in our fabs reached more than 70% within the first 10 hours and were fully recovered by the end of the third day. There were no power outages, no structural damage to our fabs, and there’s no damage to our critical tools, including all our EUV lithography tools. That being said, a certain number of wafers in process were impacted and had to be scrapped, but we expect most of the lost production to be recovered in the second quarter and thus, minimal impact to our second quarter revenue. We expect the total impact from the earthquake to reduce our second quarter gross margin by about 50 basis points, mainly due to the losses associated with wafer scraps and material loss…

…Although it was the largest earthquake in Taiwan in the last 25 years, we worked together tirelessly and were able to resume full operation at all our fabs within 3 days with minimal disruptions, demonstrating the resilience of our operations in Taiwan.

TSMC’s management is seeing a strong surge in AI-related demand, and thinks that this supports their view of a structural acceleration in demand for energy-efficient computing

The continued surge in AI-related demand supports our already strong conviction that structural demand for energy-efficient computing is accelerating in an intelligent and connected world. 

TSMC’s management sees the company as a key enabler of AI; the increase in complexity of AI models, regardless of the approaches taken, requires increasingly powerful semiconductors, and this is where TSMC’s value increases, because the company excels at manufacturing the most advanced semiconductors

TSMC is a key enabler of AI applications. AI technology is evolving to use increasingly complex AI models, which need to be supported by more powerful semiconductor hardware. No matter what approach is taken, it requires use of the most advanced semiconductor process technologies. Thus, the value of our technology position is increasing as customers rely on TSMC to provide the most advanced process and packaging technology at scale, with a dependable and predictable cadence of technology offering. In summary, our technology leadership enables TSMC to win business and enables our customers to win business in the AI market.

TSMC’s management is seeing nearly every AI innovator working with the company

Almost all the AI innovators are working with TSMC to address the insatiable AI-related demand for energy-efficient computing power. 

TSMC’s management is forecasting the company’s revenue from AI processors to more than double in 2024 and account for a low-teens percentage of total revenue; management expects AI processor revenue to grow at 50% annually over the next 5 years and account for more than 20% of TSMC’s total revenue by 2028; management has a narrow definition of AI processors and expects them to be the strongest growth driver for TSMC’s overall HPC (high performance computing) platform and overall revenue over the next few years

We forecast the revenue contribution from server AI processors to more than double this year and account for low teens percent of our total revenue in 2024. For the next 5 years, we forecast it to grow at 50% CAGR and increase to higher than 20% of our revenue by 2028. Server AI processors are narrowly defined as GPUs, AI accelerators, and CPUs performing training and inference functions, and do not include networking, edge or on-device AI. We expect server AI processors to be the strongest driver of our HPC platform growth and the largest contributor in terms of our overall incremental revenue growth in the next several years.
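The forecast can be sanity-checked with back-of-envelope arithmetic. Taking 13% as an assumed midpoint of "low teens" in 2024 (TSMC did not give an exact figure) and compounding AI revenue at 50% for four years to 2028, we can ask how fast total revenue could grow while the AI share still exceeds 20%:

```python
# If AI processor revenue is ~13% of total in 2024 and compounds at 50% a year,
# how fast can total revenue grow while the AI share still exceeds 20% by 2028?
ai_share_2024 = 0.13      # assumed midpoint of "low teens"
ai_cagr = 0.50
years = 4                 # 2024 -> 2028

ai_multiple = (1 + ai_cagr) ** years          # AI revenue grows ~5.06x over 4 years
# share_2028 = ai_share_2024 * ai_multiple / total_multiple > 0.20
max_total_multiple = ai_share_2024 * ai_multiple / 0.20
max_total_cagr = max_total_multiple ** (1 / years) - 1

print(f"AI revenue multiple over 4 years: {ai_multiple:.2f}x")
print(f"Share stays above 20% as long as total revenue CAGR is below {max_total_cagr:.0%}")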

TSMC’s management thinks that strong HPC and AI demand means that it is strategically important for the company to expand its global manufacturing footprint

Given the strong HPC and AI-related demand, it is strategically important for TSMC to expand our global manufacturing footprint to continue to support our U.S. customers, increased customer trust, and expand our future growth potential.

TSMC has received strong support from the US government for its Arizona fabs and one of them has been upgraded to be a fab for 2nm process technology to support AI-demand, and it is scheduled for volume production in 2028; management is confident that the Arizona fabs will have the same quality as TSMC’s Taiwan fabs

In Arizona, we have received a strong commitment and support from our U.S. customers and plan to build 3 fabs, which help to create greater economies of scale…

…Our second fab has been upgraded to utilize 2-nanometer technologies to support a strong AI-related demand in addition to the previously announced 3-nanometer. We recently completed the topping ceremony in which the last construction beam was raised into place and volume production is scheduled to begin in 2028…

…We are confident that once we begin volume production, we will be able to deliver the same level of manufacturing quality and reliability in each of our fabs in Arizona as from our fabs in Taiwan.

TSMC’s management believes the company’s 2nm technology is industry-leading and nearly every AI innovator is working with the company on its 2nm technology; management thinks 2nm will enable TSMC to capture AI-related growth opportunities in the years ahead

Finally, I will talk about our N2 status. Our N2 technology leads the industry in addressing the insatiable need for energy-efficient computing, and almost all AI innovators are working with TSMC…

… With our strategy of continuous enhancement, N2 and its derivatives will further extend our technology leadership position and enable TSMC to capture the AI-related growth opportunities well into the future.

TSMC’s management is seeing very, very strong AI-related data center demand, while traditional server demand is slow; there is a shift in wallet-share from hyperscalers from traditional servers to AI servers and that is favourable for TSMC because TSMC has a lower presence in the traditional CPU-centric server space; TSMC is doubling its production capacity for AI-related data centre chips, but it’s still not enough to meet its customers’ demand

However, AI-related data center demand is very, very strong. And traditional server demand is slow, lukewarm…

…The budget for hyperscale players, their wallet-share shift from traditional server to AI server is favorable for TSMC. And we are able to capture most of the semiconductor content in an AI server area as we defined GPU, AI accelerator, networking processor, et cetera. Well, we have a lower presence in those CPU-only, CPU-centric traditional servers. So we expect our growth will be very healthy…

…Let me say it again, the demand is very, very strong, and we have done our best and put in all the effort to increase the capacity. It probably more than doubles this year as compared with last year. However, that is not enough to meet the customers’ demand, and we leverage our OSAT partners to complement TSMC’s capacity to fulfill our customers’ need. Still not enough, of course.

TSMC’s management is working on selling TSMC’s value in the manufacture of AI chips

[Question] I think it’s clear that AI is producing a large profit pool at your owners. And the HBM is also driving super normal returns for memory players. So my question is, does TSMC believe they’re getting their fair share of the returns in the AI value chain today? And is there a scope for TSMC to raise pricing for AI chips in the future?

[Answer] We always say that we want to sell our value, but it is a continuous process for TSMC. And let me tell you that we are working on it. We are happy that our customers are doing well. And if customers do well, TSMC does well.

TSMC’s management still expects the company’s capex intensity (capex as a percentage of revenue) to level off somewhere around the mid-30s range in the next several years even with the AI-boom, but they are ready to increase capex if necessary

[Question] My second question is just relating to the upward expectations you gave for the AI accelerators. Curious how you’re looking at the CapEx, if you say that we’re entering either a higher growth or investment cycle, where capital intensity could need to rise above that mid-30s range that you set?

[Answer] We work with our customers closely and our CapEx and capacity planning are always based on the long-term structural market demand profile that is underpinned by the multiyear megatrends….  The capital intensity, in the past few years, it was high as we invested heavily to meet the strong customer demand. Now the increase — the rate of increase for the capex is leveling off, so this year and the next several years, we are expecting that the capital intensity is somewhere at the mid-30s level. But as I just said, if there are opportunities in the future years, then we will invest accordingly.

TSMC’s management wants to support all of TSMC’s AI customers’ needs, and not just the needs of its major AI customer (presumably Nvidia)

 We want to make sure that all our customers get supported, probably not enough this year. But for next year, we try. We try very hard. And you mentioned about giving up some market share, that’s not my consideration. My consideration is to help our customers to be successful in their market…

…[Question] So since your major customers said there’s no room for other type of AI computing chips, but it seems like TSMC is happy to assist some similar customers, right? So is that right interpretation about your comments.

[Answer] Yes.

Most of TSMC’s AI customers are using the 5nm or 4nm technologies, but they are working with TSMC on even more advanced nodes – such as 3nm and 2nm – because the advanced nodes are more energy-efficient, and energy efficiency in AI data centres is really important; in the past, TSMC’s then-leading edge chips only see smartphone demand, but with 2nm, TSMC will see demand from smartphones and HPC, so the early-revenue from 2nm is expected to be even larger than 3nm’s early-revenue 

[Question] I think currently, most of the AI accelerator, mostly in 5-nanometers, which is N minus 1 comparing to a smartphone for now. So when do we expect them to catch up or surpass in terms of technology node? Do we see them to be the technology driver in 2 nanometers or above?

[Answer] Today, all the AI accelerators, most of them are in the 5- or 4-nanometer technology. My customers are working with TSMC for the next node, even for the next, next node, they have to move fast because, as I said, the power consumption has to be considered in the AI data center. So energy efficiency is very important. So our 3-nanometer is much better than the 5-nanometer. And again, it will be improved in the 2-nanometer. So all I can say is all my customers are working on this kind of trend from 4-nanometer to 3 to 2…

…[Question] Do we see a bigger revenue in the first 2 years of the 2 nanometers because in the past, it’s only smartphone, but in 2-nanometer, it would be both smartphone and HPC customers.

[Answer] With the demand that we’re seeing, we do expect N2 revenue contribution to be even larger than N3, just like 3 is a larger contribution or larger node than 5, et cetera, et cetera.

TSMC’s management is seeing die sizes increase with edge-AI or on-device AI; management thinks that the replacement cycle for smartphones and PCs will be a little accelerated in the future and the edge-AI trend will be very positive for TSMC

Let me mention the edge-AI or the on-device AI, the first order of magnitude is the die size. We saw with AI for neuro processor inside, the die size will be increased, okay? That’s the first we observed. And it’s happening. And then for the future, I would think that replacement cycle for smartphone and kind of a PC will be accelerated a little bit in the future, at least. It’s not happening yet, but we do expect that will happen soon. And all in all, I would say that on-device AI will be very positive for TSMC because we kept the larger share of the market. 

Tencent (NASDAQ: TCEHY)

Weixin users are increasingly supplementing their stable consumption of content in chat and Moments with algorithmically recommended content in official accounts and video accounts, and with mini programs; this trend was driven by AI recommendations

For Weixin, users are increasingly supplementing their stable consumption of social graph-supplied content in chat and Moments with consumption of algorithmically recommended content in official accounts and video accounts, and engagement with Mini Programs’ diverse range of services. This trend benefits from our heavy investment in AI, which makes the recommendations better and better over time.

Official accounts achieved healthy year-on-year pageview growth, driven by AI-powered recommendation algorithms

For official accounts, which enable creators to share text and images on chosen topics with interested followers, it achieved healthy year-on-year pageview growth, as AI-powered recommendation algorithms allow us to provide targeted, high-quality content more effectively.

Tencent’s online advertising revenue was up 26% in 2024 Q1 because of increased engagements from AI-powered ad targeting; ad spend from all major categories increased in 2024 Q1 except for automotives; during the quarter, Tencent upgraded its ad tech platform and made generative AI-powered ad creation tools available to boost ad creation efficiency and better targeting

For online advertising, our revenue was RMB 26.5 billion in the quarter up 26% year-on-year, benefiting from increased engagements in AI-powered ad targeting. Ad spend from all major categories except automotive increased year-on-year, particularly from games, internet services and consumer goods sectors. During the quarter, we upgraded our ad tech platform to help advertisers manage ad campaigns more effectively, and we made generative AI-powered ad creation tools available to all advertisers. These initiatives enable advertisers to create ads more efficiently and to deliver better targeting.

Hunyuan (Tencent’s foundational LLM) was scaled up using the mixture of experts approach; management is deploying Hunyuan in more Tencent services; management is open-sourcing a version of Hunyuan that provides text-image generative AI

And for Hunyuan, the main model achieved significant progress as we’ve scaled up using the mixture of experts approach, and we’re deploying Hunyuan in more of our services. Today, we announced that we’re making a version of Hunyuan providing text image generative AI available on an open source basis.
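Tencent doesn't disclose Hunyuan's internals, but the "mixture of experts" approach it names is a well-known scaling technique: a router sends each token to a small subset of specialist sub-networks instead of running the whole model. Below is a minimal sketch of that routing idea; every dimension, weight, and name is invented for illustration and says nothing about Hunyuan's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes, chosen for illustration: 8 experts, 16-dim tokens, top-2 routing.
NUM_EXPERTS, DIM, TOP_K = 8, 16, 2

# Each "expert" here is just a small linear map; real experts are full MLPs.
expert_weights = rng.standard_normal((NUM_EXPERTS, DIM, DIM)) / np.sqrt(DIM)
router_weights = rng.standard_normal((DIM, NUM_EXPERTS)) / np.sqrt(DIM)

def moe_layer(x):
    """Route each token to its top-k experts and gate-mix their outputs."""
    logits = x @ router_weights                      # (tokens, experts)
    top_k = np.argsort(logits, axis=-1)[:, -TOP_K:]  # indices of chosen experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = top_k[t]
        gate = np.exp(logits[t, chosen])
        gate /= gate.sum()                           # softmax over the chosen experts only
        for g, e in zip(gate, chosen):
            out[t] += g * (x[t] @ expert_weights[e])
    return out

tokens = rng.standard_normal((4, DIM))
y = moe_layer(tokens)
print(y.shape)  # (4, 16)
```

The attraction for a company scaling up a foundation model is that total parameters grow with the number of experts while per-token compute only grows with the top-k that actually run.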

Tencent’s operating capex was RMB6.6b in 2024 Q1, up massively from a low base in 2023 Q1 but down slightly sequentially, because of spending on GPUs and servers to support Hunyuan and the AI ad recommendation algo

Operating CapEx was RMB 6.6 billion, up 557% year-on-year from a low base quarter last year, mainly driven by investment in GPUs and servers to support our Hunyuan and AI ad recommendation algorithms.

Tencent’s management expects advertising revenue growth to decelerate from 2024 Q1’s level, but still expects to outpace the broader industry because (1) Tencent’s ad load is still small relative to the advertising real estate available, and (2) AI will help the advertising business and can easily double or even triple Tencent’s currently low click-through rates; management thinks Tencent’s advertising business will benefit from AI disproportionately vis-a-vis competitors who also use AI because Tencent has been under-monetising and has lower click-through rates, so any AI-driven improvements will have a bigger impact; Hunyuan is part of the AI technologies that management has deployed for the advertising business

Around advertising, I’d say that, as you would expect, given the economy is mixed, advertiser sentiment is also quite mixed and it’s certainly a challenging environment in which to sell advertising. The first quarter for us is a slightly unusual quarter because it’s a small quarter for advertising due to the Chinese New Year effect. And so sometimes the accelerations or the decelerations get magnified as a result. So we would expect our advertising growth to be less rapid in subsequent quarters of the year than it was in the first quarter and more similar to consensus expectations for our advertising revenue growth for the rest of the year. But that said, we think that we are in a good position to continue taking share of the market at a rapid rate, given we’re very early in increasing our ad load on video accounts, which is currently around 1/4 of the ad loads of our major competitors with short video products. 

And also given we’re early in capturing the benefits of deploying AI to our ad tech stack. And we think that we will — we are benefiting and will continue to benefit disproportionately from applying AI to our ad tech because historically, as a social media platform, our click-through rates were low. And so starting from that lower base, we can — we have seen we can double or triple click-through rates in a way that’s not possible for ad services that are starting from much higher click-through rates…

… [Question] In the future, under these AI developments, competitors such as ByteDance or Alibaba also apply AI to their ad businesses, so how do you think AI will change ad market share over the longer term?

[Answer] Your question is around the fact that a number of competitors are obviously applying AI as well. And we believe that all of them will benefit from AI, too. But we think that the biggest beneficiaries will be those companies, of which we are one, that have very substantial under-monetized time spent and are now able to monetize that time spent more effectively by deploying AI, because the deployment of AI enables an upward structural shift in click-through rates, and that shift is most pronounced for those inventories where the click-through rates were lower to begin with, such as the social media inventory. Those tools also allow advertisers who previously were able to create advertisements for search, which are text in nature, but not to create advertisements for social media, which are image and video in nature, to now use generative AI to create advertisements for social media. So in general, we think there’ll be a reallocation of advertising spend toward those services which have high time spent, high engagement and are now able to deliver increasing click-through rates and increasing transaction volume more commensurate with their time spent and engagement superiority…

…  So on ad tech, we’re innovating around the process of targeting the ads using artificial intelligence. We’re innovating around helping advertisers manage their advertising campaigns. And then most recently, we’ve been — we are now deploying Hunyuan to facilitate advertisers, creating the advertising content.
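The click-through-rate (CTR) argument in the answer above is easy to see with a little arithmetic. The numbers below are invented for illustration only, not Tencent's disclosed figures: tripling a low social-media CTR triples clicks per impression, a structural move that inventory already running at a high CTR cannot match.

```python
# Illustrative only: hypothetical CTRs, not actual Tencent or competitor figures.
def clicks_per_10k_impressions(ctr):
    """Clicks generated per 10,000 ad impressions at a given click-through rate."""
    return 10_000 * ctr

social_before = clicks_per_10k_impressions(0.005)  # hypothetical 0.5% social CTR
social_after = clicks_per_10k_impressions(0.015)   # AI lifts it to a hypothetical 1.5%
search_before = clicks_per_10k_impressions(0.05)   # hypothetical 5% search CTR

# The low-base inventory gets a 3x uplift in clicks (and hence monetizable
# ad value per impression); a 5% CTR product cannot triple the same way.
print(round(social_after / social_before, 6))  # 3.0
```

This is the whole of management's "disproportionate beneficiary" claim in miniature: the same AI-driven shift in CTR is worth far more, in relative terms, to under-monetised inventory.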

Tencent’s management thinks that WeChat will be a great distribution channel for AI products, but they are still figuring out the best use case for AI (including Tencent’s own Hunyuan LLM); management is actively testing, and they will roll out the products they think are the best over time

I think we do believe that, with the right products, our WeChat platform and our other products which have a lot of user engagement would be great distribution channels for these AI products. But I think at this point in time, everybody is actually trying out different products that may work. No one has really come up with a killer application yet, with the exception of probably OpenAI’s question-and-answer format. So I think you should be confident that we have been developing the technology, and we have best-in-class technology in Hunyuan, and at the same time, we are actively creating and testing out different products to see what would make sense for our existing products, and as the time comes, these products will be rolled out on our platform.

Tencent’s management thinks that Hunyuan is currently best being deployed in Tencent’s gaming business for customer service purposes; management has been deploying AI in Tencent’s games, but not necessarily generative AI; Hunyuan will be useful for developing games when it gains multi-modal capabilities, especially in creating high-quality videos, but it will be some time before Hunyuan reaches that level

I think Hunyuan can be assisting the game business in multiple ways. Right now, the best contributor is actually on the customer service front. Hunyuan is actually deployed to answer questions, and the customer service bot for a lot of our games is actually achieving a very high customer satisfaction level. And AI, in general, has already been deployed in our games, but not necessarily the generative AI technology yet. In terms of Hunyuan, I think, over time, when we actually can move Hunyuan into multi-modal, and especially if we can start creating really high-quality, high-fidelity videos, then that would actually be helpful. Before that happens, Hunyuan can actually be used in NPCs and create certain interactive experiences, but it’s not going to be able to take over the very heavy lifting of content creation in gaming yet. I think it’ll probably be a couple more generations before it can be used for game production.

Tesla (NASDAQ: TSLA)

Tesla’s FSD v12 is a pure AI-based self driving technology; FSD v12 is now turned on for all North American Tesla vehicles – around 1.8 million vehicles – that are running on Hardware 3 or later and it is used on around half of the vehicles, with the percentage of users increasing each week; more than 300 billion miles have been driven with FSD v12; management thinks that it’s only a matter of time before Tesla’s autonomous driving capabilities exceeds human-reliability

Regarding FSD V12, which is the pure AI-based self-driving, if you haven’t experienced this, I strongly urge you to try it out. It’s profound and the rate of improvement is rapid. And we’ve now turned that on for all cars, with the cameras and inference computer, everything from Hardware 3 on, in North America. So it’s been pushed out to, I think, around 1.8 million vehicles, and we’re seeing about half of people use it so far and that percentage is increasing with each passing week. So we now have over 300 billion miles that have been driven with FSD V12…

…I think it should be obvious to anyone who’s driving V12 in a Tesla that it is only a matter of time before we exceed the reliability of humans and we’ve not much time with that. 

Tesla’s management believes that the company’s vision-based approach with end-to-end neural networks for full self driving is better than other approaches, because it mimics the way humans drive, and the global road networks are designed for biological neural nets and eyes

Since the launch of Full Self-Driving — Supervised Full Self-Driving, it’s become very clear that the vision-based approach with end-to-end neural networks is the right solution for scalable autonomy. And it’s really how humans drive. Our entire road network is designed for biological neural nets and eyes. So naturally, cameras and digital neural nets are the solution to our current road system…

… I think we just need to — it just needs to be obvious that our approach is the right approach. And I think it is. I think now with 12.3, if you just have the car drive you around, it is obvious that our solution with a relatively low-cost inference computer and standard cameras can achieve self-driving. No LiDARs, no radars, no ultrasonic, nothing.

Tesla has reduced the subscription price of FSD to US$99 a month; management is talking to one major auto manufacturer on licensing Tesla’s FSD software; it will take time for third-party automakers to use Tesla’s autonomous driving technology as a massive design change is needed for the vehicles even though all that is needed is for cameras and an inference computer to be installed

To make it more accessible, we’ve reduced the subscription price to $99 a month, so it’s easy to try out…

…We’re in conversations with one major automaker regarding licensing FSD…

…I think we just need to — it just needs to be obvious that our approach is the right approach. And I think it is. I think now with 12.3, if you just have the car drive you around, it is obvious that our solution with a relatively low-cost inference computer and standard cameras can achieve self-driving. No LiDARs, no radars, no ultrasonic, nothing… No heavy integration work for vehicle manufacturers…

… So I wouldn’t be surprised if we do sign a deal. I think we have a good chance we do sign a deal this year, maybe more than one. But yes, it would be probably 3 years before it’s integrated with a car, even though all you need is cameras and our inference computer. So we’re just talking about a massive design change.

Tesla’s management has been expanding the company’s core AI infrastructure and the company is no longer training-constrained; Tesla has 35,000 H100 GPUs that are currently working, and management expects to have 85,000 H100 GPUs by end-2024 for AI training

Over the past few months, we’ve been actively working on expanding Tesla’s core AI infrastructure. For a while there, we were training-constrained in our progress. We are, at this point, no longer training-constrained, and so we’re making rapid progress. We’ve installed and commissioned, meaning they’re actually working, 35,000 H100 computers or GPUs. GPU is the wrong word, they need a new word. I always feel like a [ wentz ] when I say GPU because it’s not. GPU stands — G stands for graphics. Roughly 35,000 H100s are active, and we expect that to be probably 85,000 or thereabouts by the end of this year in training, just for training. 

Tesla’s AI robot, Optimus, is able to do simple factory tasks and management thinks it can do useful tasks by the end of this year; management thinks Tesla can sell Optimus by the end of next year; management still thinks that Optimus will be an incredibly valuable product if it comes to fruition; management thinks that Tesla is the best-positioned manufacturer of humanoid robots with efficient AI inference to be able to reach production at scale

[Question] What is the current status of Optimus? Are they currently performing any factory tasks? When do you expect to start mass production?

[Answer] We are able to do simple factory tasks or at least, I should say, factory tasks in the lab. In terms of actually — we do think we will have Optimus in limited production in the factory — in an actual factory itself, doing useful tasks before the end of this year. And then I think we may be able to sell it externally by the end of next year. These are just guesses. As I’ve said before, I think Optimus will be more valuable than everything else combined. Because if you’ve got a sentient humanoid robot that is able to navigate reality and do tasks at request, there is no meaningful limit to the size of the economy. So that’s what’s going to happen. And I think Tesla is best positioned of any humanoid robot maker to be able to reach volume production with efficient inference on the robot itself.

The vision of Tesla’s management for autonomous vehicles is for the company to own and operate some autonomous vehicles within a Tesla fleet, and for the company to be an Airbnb- or Uber-like platform for other third-party owners to put their vehicles into the fleet; management thinks Tesla’s fleet can be tens of millions of cars worldwide – even more than 100 million – and as the fleet grows, it will act as a positive flywheel for Tesla in terms of producing data for training

And something I should clarify is that Tesla will be operating the fleet. So you can think of like how Tesla — you think of Tesla like some combination of Airbnb and Uber, meaning that there will be some number of cars that Tesla owns itself and operates in the fleet. There will be some number of cars — and then there’ll be a bunch of cars where they’re owned by the end user. That end user can add or subtract their car to the fleet whenever they want, and they can decide if they want to only let the car be used by friends and family or only by 5-star users or by anyone. At any time, they could have the car come back to them and be exclusively theirs, like an Airbnb. You could rent out your guestroom or not any time you want. 

So as our fleet grows, we have 7 million cars — 9 million cars, going to eventually tens of millions of cars worldwide. With a constant feedback loop, every time something goes wrong, that gets added to the training data and you get this training flywheel happening in the same way that Google Search has the sort of flywheel. It’s very difficult to compete with Google because people are constantly doing searches and clicking and Google is getting that feedback loop. So same with Tesla, but at a scale that is maybe difficult to comprehend. But ultimately, it will be tens of millions…

… And then I mean if you get like to the 100 million vehicle level, which I think we will, at some point, get to, then — and you’ve got a kilowatt of useable compute and maybe your own Hardware 6 or 7 by that time, then you really — I think you could have on the order of 100 gigawatts of useful compute, which might be more than anyone, probably more than any company.

Tesla’s management thinks that the company can sell AI inference compute capacity that’s sitting in Tesla vehicles when they are not in use; Tesla cars are running Hardware 3 and Hardware 4 now, while Hardware 5 is coming; unlike smartphones or computers, the computing capacity of Tesla vehicles is entirely within Tesla’s control, and the company has skills on deploying compute workloads to each individual vehicle

I think there’s also some potential here for an AWS element down the road where if we’ve got very powerful inference because we’ve got a Hardware 3 in the cars, but now all cars are being made with Hardware 4. Hardware 5 is pretty much designed and should be in cars hopefully towards the end of next year. And there’s a potential to run — when the car is not moving, to actually run distributed inference. So kind of like AWS, but distributed inference. Like it takes a lot of computers to train an AI model, but many orders of magnitude less compute to run it. So if you can imagine a future [ path ] where there’s a fleet of 100 million Teslas, and on average, they’ve got like maybe a kilowatt of inference compute, that’s 100 gigawatts of inference compute distributed all around the world. It’s pretty hard to put together 100 gigawatts of AI compute. And even in an autonomous future where the car is used, instead of 10 hours a week, 50 hours a week, that still leaves over 100 hours a week where the car’s inference computer could be doing something else. And it seems like it would be a waste not to use it…

…And then I mean if you get like to the 100 million vehicle level, which I think we will, at some point, get to, then — and you’ve got a kilowatt of useable compute and maybe your own Hardware 6 or 7 by that time, then you really — I think you could have on the order of 100 gigawatts of useful compute, which might be more than anyone, probably more than any company…

…Yes, probably because it takes a lot of intelligence to drive the car anyway. And when it’s not driving the car, you just put this intelligence to other uses, solving scientific problems or answer in terms of [ this horse ] or something else… We’ve already learned about deploying workloads to these nodes… And unlike laptops and our cell phones, it is totally under Tesla’s control. So it’s easier to deploy workloads to different nodes, as opposed to asking users for permission on their own cell phones, which would be very tedious… 

… So like technically, yes, I suppose Apple would have the most amount of distributed compute, but you can’t use it because you can’t get the — you can’t just run the phone at full power and drain the battery. So whereas for the car, even if you have a kilowatt-level inference computer, which is crazy power compared to a phone, if you’ve got a 50 or 60 kilowatt-hour pack, it’s still not a big deal. Whether you plug it in or not, you could run for 10 hours and use 10 kilowatt-hours with your kilowatt of compute power.
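The quote above is really just back-of-envelope arithmetic, so it's worth checking the figures (all of which are Musk's stated assumptions, not confirmed specifications):

```python
# Musk's assumed figures: 100 million vehicles, ~1 kW of inference compute each.
fleet_size = 100_000_000        # vehicles
compute_per_car_kw = 1          # kW of inference compute per car (assumed)

total_gw = fleet_size * compute_per_car_kw / 1_000_000  # kW -> GW
print(total_gw)  # 100.0 -- the "100 gigawatts of inference compute" claim

# Energy check: running the 1 kW computer for 10 hours draws 10 kWh,
# a modest fraction of an assumed 50-60 kWh battery pack.
energy_used_kwh = compute_per_car_kw * 10  # kWh over 10 hours
pack_kwh = 55                              # mid-point of the quoted 50-60 kWh range
print(round(energy_used_kwh / pack_kwh, 2))  # ~0.18 of the pack
```

The numbers do check out internally: 100 million cars at a kilowatt each is 100 GW, and a 10-hour inference session uses under a fifth of a typical pack.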

Safety is very important for Tesla; management has been conducting safety-training for Tesla’s AI-powered self driving technology through the use of millions of clips of critical safety events collected from Tesla vehicles; the company runs simulations for safety purposes before pushing out a new software version to early users and before it gets pushed to external users; once the new software is with external users, it’s constantly monitored by Tesla; FSD v12’s feedback loop of issues, fixes, and evaluations happens automatically because the AI model learns on its own based on data it is getting

Yes, we have multiple years of validating the safety. In any given week, we train hundreds of neural networks that can produce different trajectories for how to drive the car, replay them through the millions of clips that we have already collected from our users and our own QA. Those are critical events, like someone jumping out in front, or other critical events that we have gathered in a database over many, many years, and we replay through all of them to make sure that we are net improving safety. 

And then we have simulation systems. We also try to recreate this and test it in closed-loop fashion. And once this is validated, we give it to our QA drivers. We have hundreds of them in different cities, in San Francisco, Los Angeles, Austin, New York, a lot of different locations. They are also driving this and collecting real-world miles, and we have an estimate of what the critical events are, and whether they are a net improvement compared to the previous week’s builds. And once we have confidence that the build is a net improvement, then we start shipping to early users, like 2,000 employees initially who get the build. They will give feedback on whether it’s an improvement or they’re noting some new issues that we did not capture in our own QA process. And only after all of this is validated do we go to external customers.

And even when we go external, we have like live dashboards of monitoring every critical event that’s happening in the fleet sorted by the criticality of it. So we are having a constant pulse on the build quality and the safety improvement along the way. And then any failures like Elon alluded to, we’ll get the data back, add it to the training and that improves the model in the next cycle. So we have this like constant feedback loop of issues, fixes, evaluations and then rinse and repeat.

And especially with the new V12 architecture, all of this is automatically improving without requiring much engineering intervention, in the sense that engineers don’t have to be creative in how they code the algorithms. It’s mostly learning on its own based on data. So you see that, okay, for every failure, or for how a person chooses to drive this intersection or something like that, we get the data back. We add it to the neural network, and it learns from that training data automatically, instead of some engineers saying that, oh, here, you must rotate the steering wheel by this much or something like that. There are no hard inference conditions. Everything is neural network, so it’s pretty soft, it’s probabilistic, a probability distribution based on the new data that it’s getting.

Tesla’s management has good insight into the level of improvement Tesla’s AI-powered self-driving technology can achieve over a 3-4 month time frame, based on a combination of model size scaling, data scaling, training compute scaling, and architecture scaling

And we do have some insight into how good the things will be in, let’s say, 3 or 4 months, because we have advanced models that are far more capable than what is in the car, but have some issues with them that we need to fix. So there will be a step-change improvement in the capabilities of the car, but it will have some quirks that are — that need to be addressed in order to release it. As Ashok was saying, we have to be very careful in what we release to the fleet or to customers in general. So like — if we look at, say, 12.4 and 12.5, which really could arguably even be V13, V14, because it’s pretty close to a total retrain of the neural nets and, in each case, they are substantially different. So we have good insight into where the model is, how well the car will perform, in, say, 3 or 4 months…

… In terms of scaling, people generally talk about model scaling, where they increase the model size a lot and get corresponding gains in performance, but we have also figured out scaling laws on other axes in addition to model size scaling, including data scaling. You can increase the amount of data you use to train the neural network, and that also gives similar gains, and you can also scale up training compute. You can train it for much longer and on more GPUs or more Dojo nodes, and that also gives better performance. And you can also have architecture scaling, where you come up with better architectures that, for the same amount of compute, produce better results. So with a combination of model size scaling, data scaling, training compute scaling and architecture scaling, we can basically extrapolate: okay, if we continue scaling at this rate, we can predict future performance. 
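The "extrapolation" the speaker describes typically means fitting a power law (a straight line in log-log space) to observed performance at increasing scale and reading off the prediction at the next point. Here is a minimal sketch of that procedure; the compute and error-rate numbers are entirely invented for illustration and are not Tesla's data.

```python
import numpy as np

# Hypothetical data points: eval error rate observed at increasing training
# compute, roughly following a power law (the pattern scaling laws describe).
compute = np.array([1e20, 1e21, 1e22, 1e23])    # training FLOPs (invented)
error = np.array([0.20, 0.13, 0.085, 0.055])    # eval error rate (invented)

# Fit log(error) = log(a) + b*log(compute): a straight line in log-log space.
b, log_a = np.polyfit(np.log(compute), np.log(error), 1)

# Extrapolate one order of magnitude further, as teams do when predicting
# how a larger, not-yet-finished training run should perform.
predicted = np.exp(log_a + b * np.log(1e24))
print(round(float(predicted), 4))
```

With these made-up numbers the fitted exponent is negative (error falls as compute rises) and the extrapolated error lands a bit below the last observed point, which is exactly the kind of 3-4 month-ahead insight the quote claims.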

The Trade Desk (NASDAQ: TTD)

Trade Desk’s management will soon roll out a game-changing AI-fueled forecasting tool on the company’s Kokai platform

We are quickly approaching some of the biggest UX and product rollouts of Kokai that nearly all of our customers will begin to use and see benefits from over the next few quarters, including a game-changing AI-fueled forecasting tool.

Trade Desk’s management has been using AI since 2016; management has always thought about AI as a copilot for humans even before Trade Desk was founded

We’ve been deploying AI in our platform since we launched Koa in 2016…

… To that end, we’ve known since before our company existed that the complexity of assessing millions of ad opportunities every second, along with hundreds of variables for each impression, is beyond the scope of any individual human. We have always thought about AI as a copilot for our hands-on keyboard traders.

Through Kokai, Trade Desk is bringing AI to many decision-points in the digital advertising process; Trade Desk is also incorporating AI into new relevance indices in Kokai for advertisers to better understand the relevance of different ad impressions in reaching their target audience; US Cellular used Trade Desk’s TV Quality Index to improve its conversion rate by 71%, reach 66% more households, and decrease cost per acquisition by 24%

And with Kokai, we are bringing the power of AI to a broader range of key decision points than ever, whether it’s in relevance scoring, forecasting, budget optimization, frequency management or upgraded measurement. AI is also incorporated into a series of new indices that score relevance, which advertisers can use to better understand the relevance of different ad impressions in reaching their target audience. For example, U.S. Cellular worked with their agency, Harmelin Media, to leverage our TV Quality Index to better reach new customers. Their conversion rates improved 71%. They reached 66% more households by optimizing frequency management, and their cost per acquisition decreased 24%. I think it’s important to understand how we’re putting AI to work in Kokai because this kind of tech dislocation will bring new innovators. 

Visa (NYSE: V)

Visa’s management is using AI to improve the company’s risk offerings; the company’s Visa Protect for account-to-account payments feature is powered by AI-based fraud detection models; another of the features, Visa Deep Authorization, is powered by a deep-learning recurrent neural network model for risk scoring of e-commerce payments specifically in the USA

Across our risk offerings, we continue to bolster them through our technology, innovation, and AI expertise and are expanding their utility beyond the Visa network. Recently, we announced 3 such capabilities in our Visa Protect offering. The first is the expansion of our signature solutions, Visa Advanced Authorization and Visa Risk Manager for non-Visa card payments, making them network-agnostic. This allows issuers to simplify their fraud operations into a single fraud detection solution. The second is the release of Visa Protect for account-to-account payments, our first fraud prevention solution built specifically for real-time payments, including P2P digital wallets, account-to-account transactions and Central Bank’s instant payment systems. Powered by AI-based fraud detection models, this new service provides a real-time risk score that can be used to identify fraud on account-to-account payments. We’ve been piloting both of these in a number of countries, and our strong results thus far have informed our decision to roll these out globally. The third solution is Visa Deep Authorization. It is a new transaction risk scoring solution tailored specifically to the U.S. market to better manage e-commerce payments powered by a world-class deep-learning recurrent neural network model and petabytes of contextual data…

…What we found in the U.S. e-commerce market is that, on the one hand, it’s the most developed e-commerce market on the planet. On the other hand, it’s become the place of the most sophisticated fraud and attack vectors that we see anywhere in the world. And so what we are bringing to market with Visa Deep Authorization is an e-commerce transaction risk scoring platform and capability that is specifically tailored and built for the unique sets of attack vectors that we’re seeing in the U.S. So as I was mentioning in my prepared remarks, it’s built on deep learning technology that’s specifically tuned to some of the sequential and contextual view of accounts that we’ve had in the U.S. market. 
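Visa's model is proprietary, but the "recurrent neural network" it names has a simple shape worth illustrating: a hidden state carries context from earlier transactions in a sequence forward into the score for the latest one. The sketch below is purely illustrative; all dimensions, weights, and the scoring scheme are invented and say nothing about Visa Deep Authorization's actual design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy recurrent scorer over a sequence of transaction feature vectors.
DIM_IN, DIM_H = 6, 8  # illustrative feature and hidden-state sizes
W_in = rng.standard_normal((DIM_IN, DIM_H)) * 0.3
W_rec = rng.standard_normal((DIM_H, DIM_H)) * 0.3
w_out = rng.standard_normal(DIM_H) * 0.3

def risk_score(transactions):
    """Return a 0-1 risk score for the latest transaction given its history."""
    h = np.zeros(DIM_H)
    for x in transactions:                  # oldest to newest
        h = np.tanh(x @ W_in + h @ W_rec)   # recurrent update carries context
    logit = h @ w_out
    return 1 / (1 + np.exp(-logit))         # sigmoid -> probability-like score

history = rng.standard_normal((5, DIM_IN))  # 5 past transactions (random stand-ins)
score = risk_score(history)
print(0.0 < score < 1.0)  # True
```

The point of the recurrence is the "sequential and contextual view of accounts" the executive mentions: the same transaction can score differently depending on what preceded it.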

Wix (NASDAQ: WIX)

Wix’s management released its AI website builder in 2024 Q1, which is the company’s cornerstone AI product; the AI website builder utilises a conversational AI chat experience where users describe their intent and goals, and it is based on Wix’s decade-plus of knowledge in website creation and user behaviour; the AI-generated sites include all relevant pages, business solutions (such as scheduling and e-commerce), and functions; management thinks the AI website builder is a unique product in the market; management is seeing strong utilisation of the AI website builder, with hundreds of thousands of sites already created in the few months since launch by both Self Creators and Partners

Notably, this quarter, we released the highly anticipated AI website builder. This is our cornerstone AI product. It leverages our 10-plus years of web creation expertise and unparalleled knowledge of users’ behavior through a conversational AI chat experience. Users describe their intent and goals. Our AI technology then creates a professional, unique, and fully built-out website that meets the users’ needs. Importantly, the AI-generated site includes all relevant pages with personalized layout themes, text, images and business solutions such as scheduling, e-commerce and more. Best of all, these websites are fully optimized with Wix’s reliable infrastructure, including security and performance, as well as built-in marketing, SEO, CRM and analytics tools. There is truly nothing like this on the market. Excitingly, feedback on the AI website builder has been incredible. In just a few short months since its launch, hundreds of thousands of sites have already been created using this tool by both Self Creators and Partners. This strong response and utilization is a testament to the depth of our AI expertise and strength of our product. 

Wix released AI-powered image enhancement tools within Wix Product Studio in April which allow users to edit images in a high-quality manner through prompts

In April, we released a suite of AI-powered image enhancement tools that provide users with the capability to create professional images on their own. High-quality images are an essential part of a professional website but are often hard to achieve without the help of a professional photographer. Now users will be able to easily erase objects, generate images, and edit them to add or replace objects with a simple prompt, all without ever leaving the Wix Product Studio. 

Wix will be releasing more AI products in 2024; the upcoming products include AI business assistants; the AI business assistants are in beta testing and management is seeing great feedback

These new capabilities are just the start of a robust pipeline of AI-enabled products still to come this year, including a variety of vertical AI business assistants that will be released throughout the year. A couple of these assistants are currently in beta testing and seeing great results and feedback. 

Wix is seeing that its AI products are resulting in better conversion of users into premium subscribers; management believes that Wix’s AI products will be a significant driver of Self Creators growth in the years ahead

We are seeing a tangible benefit from our entire AI offering, particularly better conversion of users into premium subscriptions. I strongly believe that our AI capability will be a significant driver of Self Creators growth in 2024 and beyond.

Wix’s AI tools will be exposed very frequently to both existing and new users of the Wix platform

[Question] I wanted to kind of follow on to that and just kind of understand with respect to the AI tools. Do you see this primarily impacting the new customers? 

[Answer] When users are building their websites, all the website creation tools are visible to them and are helping them. Most of our users will stay a few years or more with the same website, and sometimes they’ll update it, but they’re not going to recreate it. So, in that term, of course, the exposure is limited. But the integration of the vertical assistants means that every time you go to the website, you’re going to have a recommendation, and ideas and things you can do with AI. So, the exposure will be pretty much every time you go into the website. And that is significantly higher. And if you think about the fact that we have a lot of people that run their business on top of Wix, it means that all of those guys will be daily or almost daily exposed to new products with AI…

…You’re going to find AI tools, but they are not going to replace what you already know how to do. Sometimes, if you want to change an image, for example, it’s easier to click on change image instead of writing to the prompt, hey, please change the third image from the top, right? So, it’s always about the combination of how you do things in a balanced way, while allowing users to feel comfortable with the changes, not move beyond that. 

Wix’s management believes that AI will be a boon for new technologies and innovation and will lead to more growth for Wix

I believe that there’s so much potential for new things coming with AI, so much potential with new things coming with market trends and new technologies introduced into the market that I believe that we’re going to continue to see significant innovation, growing innovation coming from small businesses and bigger businesses in the world, which will probably result in the formation of additional growth for us. 

Zoom Video Communications (NASDAQ: ZM)

Zoom is now far beyond just video conferencing, and AI is infused across its platform

Our rapid innovation over the years has taken us far beyond video conferencing. Every step of the way has been guided by our mission to solve customer problems and enable greater productivity. In the process, we have very deliberately created a communication and collaboration powerhouse with AI infused natively across the platform.

Zoom’s management announced Zoom Workplace, an AI-powered collaboration platform in March; Zoom Workplace already has AI-powered features but will soon have Ask AI Companion; Zoom Workplace also improves other Zoom products through AI Companion capabilities; the AI features in Zoom Workplace are provided at no additional cost

In March we announced Zoom Workplace, our AI-powered collaboration platform designed to help our customers streamline communications, improve productivity, increase employee engagement, and optimize in-person time. Within the launch of Zoom Workplace are new enhancements and capabilities like multi-speaker view, document collaboration, AI-powered portrait lighting, along with upcoming features and products like Ask AI Companion, which will work across the platform to help employees make the most of their time. The Workplace launch also boosts Zoom Phone, Team Chat, Events and Whiteboard with many more AI Companion capabilities to help make customers more productive…

…When you look at our Workplace customers, guess what, AI is not only a part of that but also at no additional cost, right? So that is our vision.

Expedia has signed a quadruple-digit seat deal for Zoom Revenue Accelerator, which includes AI products that can help Expedia to drive revenue

Let me thank Expedia, who needs no introduction, for becoming a Lighthouse Zoom Revenue Accelerator customer in the quarter, leaning heavily into our AI products to drive revenue. A power user of Zoom Phone for years, they wanted to better automate workflows, coach sellers and drive efficiencies. We partnered with them on an initial quadruple-digit seat Zoom Revenue Accelerator deal, which includes working directly with their team to improve and tailor the product based on their business model and industry-specific use case.

Centerstone, a nonprofit organisation, expanded Zoom Phone and Zoom Contact Center in 2024 Q1 to leverage AI to provide better care for its beneficiaries

Let me also thank Centerstone, a nonprofit health system specializing in mental health and substance use disorder treatments for individuals, families, and veterans, for doubling down on Zoom. Seeing strong value from their existing Zoom Meetings, Phone and Rooms deployment, in Q1, they expanded Zoom Phone and added Zoom Contact Center in order to leverage AI to provide better care, and Zoom Team Chat in order to streamline communications all from a single platform.

Zoom AI Companion is now enabled in >700,000 customer accounts just 8 months after launch; AI Companion improves the value proposition of all of Zoom’s products and is provided at no additional cost; AI Companion also helps Zoom improve monetisation because its presence in Zoom’s Business Services is a key differentiator that lets Zoom charge a premium price; management will leverage AI Companion to build many new products

Zoom AI Companion has grown significantly in just eight months with over 700,000 customer accounts enabled as of today. These customers range all the way from solopreneurs up to enterprises with over 100,000 users…

… I think AI Companion not only help our Meetings, Phone, or Team Chat, it’s across the entire Zoom Workplace platform plus all the Business Services, right? Our approach, if you look at our Workplace, the deployment, right, for the entire collaboration platform not only makes all those services better but also customers appreciate it, right, without charging the customers more, right? We do add more value to customers at no additional cost, right? That’s kind of the power part of the Zoom company. At the same time, in terms of monetization, as I mentioned earlier, if you look at our Business Services, AI is a key differentiation, right, AI and we charge a premium price as well, and that’s the value. At the same time, we also are going to leverage AI Companion to build a lot of new things, new services like Ask AI that will be introduced later this year and also some other new services that we’re working on as well.

One of Zoom’s management’s key priorities is to embed AI across all of Zoom Workplace and Business Services

Embedding AI across all aspects of Zoom Workplace and Business Services is a key priority as we continue to drive productivity and engagement for our customers.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet, Amazon, Apple, Coupang, Datadog, Etsy, Fiverr, Mastercard, Meta Platforms, Microsoft, Netflix, Shopify, TSMC, Tesla, The Trade Desk, Visa, Wix, and Zoom. Holdings are subject to change at any time.

What We’re Reading (Week Ending 26 May 2024)

Here are the articles for the week ending 26 May 2024:

1. How I Think About Debt – Morgan Housel

Japan has 140 businesses that are at least 500 years old. A few claim to have been operating continuously for more than 1,000 years…

…These ultra-durable businesses are called “shinise,” and studies of them show they tend to share a common characteristic: they hold tons of cash, and no debt. That’s part of how they endure centuries of constant calamities…

…I think this is the most practical way to think about debt: As debt increases, you narrow the range of outcomes you can endure in life…

…I hope to be around for another 50 years. What are the odds that during those 50 years I will experience one or more of the following: Wars, recessions, terrorist attacks, pandemics, bad political decisions, family emergencies, unforeseen health crises, career transitions, wayward children, and other mishaps?

One-hundred percent. The odds are 100%.

When you think of it like that, you take debt’s narrowing of survivable outcomes seriously…

…I’m not an anti-debt zealot. There’s a time and place, and used responsibly it’s a wonderful tool.

But once you view debt as narrowing what you can endure in a volatile world, you start to see it as a constraint on the asset that matters most: having options and flexibility.

2. Economists Aren’t the Best at Predicting the Economy – Tyler Cowen

Out of curiosity, I recently cracked open The American Economy in Transition, published in 1980, edited by Martin Feldstein and including contributions from other Nobel-winning economists, successful business leaders and notable public servants. Though most of the essays get it wrong, I found the book oddly reassuring…

…For instance, many authors in the book are focused on capital outflow as a potential problem for the US economy. Today, of course, the more common concern is a possible excess inflow of foreign capital, combined with a trade deficit in goods and services. Another concern cited in the book is European economies catching up to the US. Again, that did not happen: The US has opened up its economic lead. Energy is also a major concern in the book, not surprisingly, given the price shocks of the 1970s. No one anticipated that the US would end up the major energy exporter that it is today.

Then there is the rise of China as a major economic rival, which is not foreseen — in fact, China is not even in the book’s index. Neither climate change nor global warming is mentioned. Financial crises are also given short shrift, as the US had not had a major one since the Great Depression. In 1980 the US financial sector simply was not that large, and the general consensus was that income inequality was holding constant. Nor do the economics of pandemics receive any attention.

So you may see why the book stoked my fears that today’s economists and analysts do not have a good handle on America’s imminent problems.

As for opportunities, as opposed to risks: The book contains no speculation about the pending collapse of the Soviet Union. Nor are the internet, crypto or artificial intelligence topics of discussion…

…Then there are the things that haven’t changed much over the decades. Peter G. Peterson, who helped to found the fiscally conservative Peterson Institute, has an essay in the book worrying about the federal deficit.

The piece that most resonated with me, contrary to expectation, is by Paul Samuelson. Samuelson is the one contributor who realizes he doesn’t understand what is going on in the world. He starts by mentioning how forecasts in 1929 and 1945 failed to see the future very clearly. He hopes that the 1980 contributions will be luckier. “The facts tell their own story,” he writes, “but it is not the simple story that so many want to hear.”

Perhaps true reassurance comes from knowing that, all things considered, the US economy has done quite well since 1980.

3. The Cazique of Poyais: a Real Estate illusion in the new world – Javier Pérez Álvarez

After fighting in the South American wars of independence, Gregor MacGregor returned home declaring himself Cazique (kind of a tribal prince) of an imaginary Central American country called “Poyais.” His utopian paradise promised unparalleled wealth and opportunities, attracting hundreds of investors who, unfortunately, not only ended up losing their fortunes but also their lives…

…Gregor MacGregor, known as the Prince of Poyais, Cazique, and His Serene Highness, was a Scottish soldier who became one of the most notorious conmen of his time. He was born on December 24, 1786, into the MacGregor Clan, a family with a strong military tradition…

…At sixteen, Gregor joined the British Army just as the Napoleonic Wars were breaking out. Serving in the 57th Foot Regiment, he quickly rose to the rank of lieutenant within a year.

In June 1805, at the age of nineteen, he married Maria Bowater, a wealthy and well-connected woman, the daughter of a Royal Navy admiral. This marriage secured his social position, and he bought the rank of captain, avoiding the traditional path of promotion that would have required seven years of hard work…

…After his wife’s death, he faced financial difficulties, and his social aspirations crumbled. It was then that his interests turned to Latin America, inspired by the Venezuelan revolutionary general Francisco de Miranda.

Selling his property in Scotland, MacGregor sailed to Venezuela in 1812, presenting himself as “Sir Gregor” and offering his services to Miranda, who appointed him colonel and commander of a cavalry battalion. Despite some initial successes, his ambition drove him to rapidly ascend the ranks, achieving the position of General of Division in the armies of Venezuela and New Granada by the age of thirty…

…Then in 1820, MacGregor came across the swampy, inhospitable coast of Nicaragua, known as the Mosquito Coast. Here he persuaded the leader of the indigenous people to give him land to create a colony. A dream of empire began to take shape.

The self-appointed Prince of Poyais reappeared in London in 1820. He was seeking investors and colonists looking for a new opportunity across the Atlantic in a new world full of possibilities…

…He commissioned a book, illustrated with engravings, describing the country with “total” accuracy…

…Taking advantage of his past as a British Army officer, he managed to gain the sympathy of high society. Nothing has ever been more important than good marketing and PR. The Crown recognized him as a foreign dignitary and, to foster relations between the two countries, honored him with the title of Sir (finally). At that time, just as it happens now, brokers didn’t care what kind of securities they sold as long as they made money from them. Thus, in 1822, Sir Gregor managed to place “Poyais State bonds for stabilization” worth £200,000. These bonds were traded alongside securities from other already recognized states, such as Colombia, which had gained its independence in 1810.

After this, MacGregor took it a step further. He opened offices throughout Great Britain that sold land to colonists who wanted to start a new life in Poyais…

…Many were convinced. Hundreds of enthusiastic colonists spent their savings buying land in Poyais and the corresponding passage overseas…

…In 1822, the first emigrants arrived on the country’s shores in two ships. At the location where the capital should have been, described in detail in the book by the “Black River,” there was nothing. The place the colonists had arrived at was known as the “Mosquito Coast.” The natives themselves avoided that place due to its terrible climate…

…Nevertheless, typical of human psychology, the colonists’ discontent turned against the ship’s captain who had brought them, for it was he who was there. Somehow, he had made a mistake, disembarking them in that godforsaken place and immediately setting sail. No one thought to doubt Sir Gregor. The few natives there could not care for the colonists. Many fell ill and died.

The survivors returned to Great Britain in the autumn of 1823. Surprisingly, no scandal occurred. The emigrants continued to believe in the word of the Prince of Poyais…

…Naturally, all those who invested their money in Poyais bonds lost it. However, it must be said that the returns on these bonds were in line with other investments made in Latin America during those years. On many occasions, the solvency of real states was no different from that of fictional countries like Poyais.

4. 4 Economic Charts That Might Surprise You – Ben Carlson

Large corporations aren’t feeling inflation’s impact. Consumers hate inflation. Small businesses aren’t a fan. Politicians don’t like it much either.

But large corporations?

They seem just fine when it comes to profit margins…

…And the explanation:

Corporations are paying higher wages and input costs but they simply raised prices to combat those higher costs.

Corporate America puts profit first, second, and third, which is one of the reasons the stock market is so resilient.

If it seems like corporations always win, it’s basically true. They know how to adapt regardless of the macro environment…

…When Russia invaded Ukraine in the spring of 2022, the price of oil quickly shot up from around $90/barrel to $120/barrel.

Energy experts and macro tourists alike came out with $200/barrel predictions. It made sense at the time!

That war still rages on, along with an additional conflict in the Middle East. In the past, this would have sent oil prices skyrocketing. The oil crisis was a big reason we had stagflation in the 1970s.

Not this time around. Oil prices are back down to $80/barrel. On an inflation-adjusted basis, oil prices are essentially flat since 2019 just before the pandemic…

…The U.S. becoming the biggest oil producer in the world is one of the most important macro developments of the past 20-30 years, yet you rarely hear about it.

This is a huge deal!

5. What It’s Like to Be a Regional Fed President On the Road – Tracy Alloway, Joe Weisenthal, Tom Barkin, and many others

Tracy (11:02):

What’s the biggest constraint on your growth right now? Is it getting the materials? Is it availability of contractors? What’s stopping you from selling even more?

Albert (11:14):

I guess for us it’s going to be more financial institutions understanding our business more. I think the supply chain issue for us, it’s okay, as we have access to different supplies, but it’s more of having a backing of a financial institution, for us.

Tracy (11:37):

So credit?

Carport Central employee (11:38):

So credit. But our turnaround time in our industry, luckily it is pretty quick, but because of the fabrication time and their time schedule for commercial projects, they are not able to pay us, let’s say within maybe 90 days.

And our credit terms are, say, net 30, net 45. So basically we have to have a reserve of cash. You know, it’ll come in, but it’s just a delayed situation. So the growth that we’re seeing, we’re actually being restrained because of not having access to the capital that we need to actually move forward.

Tom (12:14):

And what are the banks telling you when you go talk to them and say ‘I got a business and I got a lot of demand and I just need a little more capital?’

Carport Central employee (12:19):

Well, I think right now it’s mostly because of the way the economy’s going. They’re really, they’re not as free telling you ‘Hey, come on in, let’s help you.’ It’s more like ‘Eh, let me see if I can, I don’t know if I can,’ that kind of situation, not like it was before.

Tom (12:34):

But it’s access rather than rate because you could say ‘Oh, they’ll give it to me. It’s just costing me too much.’

Tom Williams (12:39):

Yeah, I think it’s more access. I think people are more reserved with that…

…Joe (16:20)

So you mentioned when we talked about the sort of anecdotal learnings, the examples you gave were sort of either confirmatory or maybe inform something at the margins like, okay, maybe there’s still more juice on the public sector for [the] labor side. How often does it come up where people will start consistently saying something that, oh, this is really not showing up in the data yet, and it’s sort of an early signal of something that later on you say ‘Yep, there it is, playing out in the numbers.’

Tom (16:48):

I’d say every quarter there’s something like that. So in the fourth quarter last year, in October, you may remember the numbers were really, really frothy. And I wasn’t hearing any of that in the market, and I actually came out and said ‘It’s just not consistent with what I’m hearing.’

Joe (17:02):

The inflation numbers?

Tom (17:03):

No, the demand numbers, the consumer spending numbers, the retail sales numbers were very frothy. That’s not consistent. I’d say today we just got a retail sales report recently that was quite strong and I’m hearing decent consumer spending. I’m not hearing that strong. And maybe I’ll be proven wrong by the time this airs, but that’s what I’m hearing.

So I do hear things that are different and then I hear some number of things that are in advance. In May of 2020, Bristol, Tennessee was open while Virginia wasn’t. It was right at the end of the first part of Covid and I talked to a developer who said ‘Oh my God, the malls are packed.’

And that was before any of us knew that the opening of the economy would lead to that kind of spending. You know, that’s a good example. I’ll also get a reasonable amount of, I’ll call it, segment-specific information. You know, how are higher income consumers thinking versus lower income consumers? Or what’s the job market for professionals versus skilled trades? And so the overall number may be the same, but you’ll get some insight into what’s really driving it…

…Winston-Salem Rotary Club member (20:37):

To the extent you can, can you give us any flavor of what you all discussed in your interest rate meetings? And secondly, do you have favorite economic benchmarks you find very useful?

Tom (20:49):

You know, what I’m mostly interested in is real-time information. You’re trying to figure out what’s actually happening in the marketplace. So I get credit card spending every week, year-over-year, and during Covid, I got pretty calibrated on what that means in terms of retail sales.

But that’s something I look at closely to try to get a sense of demand. Consumer spending’s 70% of the economy. On the labor market, the jobs report that comes out every month is clearly the best, most secure thing. But I take some comfort from the weekly jobless claims, because it’s at least a real time measure of whether layoffs are accelerating, which is what you’d see if the economy turned south.

And I think you kind of get the point. I’m trying to figure out is there any risk of the economy turning? That’s really what I focus on.

In terms of the meeting, maybe I’ll give you a 10-day look at it rather than just the meeting itself, because on the weekend 10 days before the meeting, we’ll get from the staff a 200-page document, the greatest analysis of the economy you’ve ever seen. And it’ll include domestic and international and financial markets and lending markets and different scenarios for where the economy might go and different monetary policy operations. And so it’s a brilliantly done piece of work.

Tom (22:15):

At the same time, Jay Powell sends around his first draft of what the statement might be. And so we work all weekend and into the week, debating how we want to talk about the economy and whether we like that statement.

We’ll offer Jay — I’m giving you this background so you understand me — we’ll offer Jay our perspective on the statement. He always likes mine best. That’s not actually true. I’m making the point that the statement we issue the Wednesday of the meeting has largely, not always, but largely, been pretty well-vetted by the time you get to the meeting.

So we don’t go to the meeting and try to line edit a statement. For the most part, every time that the chair has a bad press conference, that’s because we’ve line edited the statement in the meeting and we send him out there two hours after the meeting to go defend it, which is, in my judgment, a little bit of malpractice. But we do it sometimes in the meeting itself.

There’s often a special topic and so the staff will present some papers on the special topic and we’ll have a debate about it. Then we all go around and talk about economic conditions. So I’ll say ‘I’ve been in the district for the last seven weeks and here’s what I think I’ve learned, and here’s what I take solace from in the recent data and here’s what I think are some interesting conclusions you might not have otherwise thought about.’

Then we all talk about the statement. It’s a pretty productive meeting. It’s a reasonably formal meeting. It’s not really flippant. There’s not tons of humor in there. You know, it’s a pretty serious meeting, but it’s also, every word is transcribed. So if you’re having trouble sleeping, you can go get them from five years ago…

…Tracy (45:55):

This is another theme that comes up regularly in Tom’s meetings. Big and small companies seem to have experienced a lot of the economy of recent years in very different ways. We asked Tom about this.

Tracy (46:08)

Do you notice a big difference between what larger companies are saying versus smaller companies?

Tom (46:12):

I do. Smaller companies are still struggling to fill jobs. And that’s in part because there was more capacity to raise wages in the larger companies than in the smaller companies.

And we were with one earlier today, but when you go to a smaller company, you do hear that kind of constraint being much bigger. During the supply chain shortage era, you absolutely heard that the big companies had a lot more benefit than the smaller companies. And I think when it came to the margin recapture cycle, the big companies have led the way on that. And a lot of small companies are still saying that they’re working to recapture margins.

Joe (46:54):

Being able to compete on wages isn’t the only edge that larger companies have in the current environment. Many of them have also been able to refinance their debt. Contrast that with the smaller company, Carport Central, which told Tom that bank lending is becoming a constraint on its business.

Tracy (47:10):

That might be one reason, according to Tom, that economic growth has so far defied the gravity of higher interest rates. They just haven’t flowed through to some parts of the economy just yet.

Tom (47:20):

Well, so the data that I keep coming back to is interest payments as a percent of either personal disposable income or corporate revenue. And those numbers have only now finally gotten back to 2019 levels. And that’s because a lot of individuals paid down their credit cards and refinanced their mortgages, and a lot of companies paid down their debt and refinanced their debt.

And so the aggregate impact of having the Fed funds rate at five and a third versus where it was, basically at zero, hasn’t really flowed through the aggregate economy. Now it’s certainly flowed through to individual parts of the economy.

And the most surprising things to me, obviously, the residential market, where you’ve got the 3% mortgage holders who don’t want to trade into a 7% mortgage and are unwilling to sell their house. But behind that is that 92% of mortgages are fixed rate, okay? So that’s different than what the economy was 15 years ago.

In commercial real estate, multifamily, you hear about a set of people who really can’t develop anymore, want to turn in the keys, whatever version of it. And another set of people who are owners who are feeling actually just fine.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any companies mentioned. Holdings are subject to change at any time.

OUE’s Buybacks, SIA’s Latest Deal With Garuda Airlines, Tencent Suspends Dungeon & Fighter Mobile Shortly After Debut, Mismanagement at Red Lobster, & More

Earlier this week, on 21 May 2024, I was invited for a short interview on Money FM 89.3, Singapore’s first business and personal finance radio station, by Chua Tian Tian, the co-host of the station’s The Evening Runway show. We discussed a number of topics, including:

  • OUE’s announcement to buy back up to around 84 million shares in an off-market purchase (Hint: It looks like good capital allocation from OUE’s management on the surface because the company ended 2023 with a book value per share of S$4.31; the buybacks, which could be up to 10% of OUE’s outstanding shares, would be done at a price of S$1.25 per share, which equates to a price-to-book ratio of just 0.3; the buybacks would also not harm OUE’s balance sheet in any material way since they would cost S$105 million at most while the company ended 2023 with shareholders’ equity of S$3.6 billion)
  • Singapore Airlines’ agreement with Garuda Indonesia to explore revenue sharing arrangements for flights between Indonesia and Singapore, and to partner on their frequent flyer programmes (Hint: The latest agreement is unlikely to move the needle for Singapore Airlines because the entire SIA group serves more than 100 destinations in nearly 40 countries and Indonesia is just one of many key markets for the airline)
  • City Developments’ sales revenue in Singapore for the first quarter of 2024 and what it means for the property sector in Singapore (Hint: City Developments had a strong performance, but the company’s numbers cannot be seen as a broad read-through of Singapore’s property market)
  • Tencent’s suspension of its Dungeon & Fighter Mobile game within an hour of its Chinese debut and what it means for the company (Hint: For now, it seems that the game is enjoying really strong demand from gamers, and this bodes well for Tencent’s business)
  • The bankruptcy of US seafood restaurant chain Red Lobster (Hint: Red Lobster appears to have been badly mismanaged; court documents for its bankruptcy revealed that Red Lobster’s restaurant leases were “priced above market rates” and there were questionable aspects with its food ingredient procurement practices)
  • Nvidia’s earnings and what they mean for the company’s share price (Hint: How Nvidia’s share price will react in the short run is anybody’s guess, but over the long run, the company’s share price movement will be determined by its business performance; its business performance will, in turn – at least based on the current picture – be largely determined by the growth in demand for its AI GPUs in the years ahead)
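
As a quick sanity check on the OUE bullet above, the buyback cost and price-to-book ratio can be reproduced with a few lines of arithmetic. This is just a sketch using the figures quoted in the bullet (84 million shares, S$1.25 purchase price, S$4.31 book value per share), not a valuation model:

```python
# Reproducing the OUE buyback figures quoted in the bullet above.
shares = 84_000_000            # up to ~84 million shares bought back
price = 1.25                   # S$ per share, the off-market purchase price
book_value_per_share = 4.31    # S$ book value per share at end-2023

max_cost = shares * price                      # total outlay if fully taken up
price_to_book = price / book_value_per_share   # purchase price vs. book value

print(f"Maximum cost: S${max_cost / 1e6:.0f} million")  # S$105 million
print(f"Price-to-book: {price_to_book:.2f}")            # ~0.29, i.e. roughly 0.3
```

The S$105 million maximum outlay is small next to the S$3.6 billion of shareholders’ equity, which is why the buyback barely dents the balance sheet.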

You can check out the recording of our conversation below!


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Costco, Tencent, and TSMC. Holdings are subject to change at any time.