
Investing Like a Business Owner

We often forget that investing in stocks is investing in businesses. As such, we need to think like business owners to succeed.

Rob Vinall is one of the top-performing fund managers of the past decade and a half.

Vinall manages a fund named the Business Owner Fund. Since inception 15 years ago, the Business Owner Fund has returned 589%, or an annualised rate of 13.7%, in euro terms. One thing about Vinall that stands out to me is that as his fund’s name suggests, he strives to invest like a business owner.
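As a quick sanity check, the annualised figure follows directly from the cumulative one: a 589% return means the fund grew to 6.89 times its starting value, and the 15th root of that multiple works out to roughly 13.7% a year. A minimal sketch:

```python
# Convert a cumulative return over a period into an annualised
# (compound) growth rate: (1 + total_return) ** (1 / years) - 1.
cumulative_return = 5.89  # 589% total return in euro terms
years = 15

growth_multiple = 1 + cumulative_return          # 6.89x starting value
annualised = growth_multiple ** (1 / years) - 1  # compound annual rate

print(f"{annualised:.1%}")  # ~13.7% per year
```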

Too often, investors look at stocks as just prices that move up and down, and make investment decisions based on these prices. They forget that there are real businesses and cash flows behind these stock prices and tickers.

Step into the shoes of a business owner

Imagine you are starting a restaurant business. There are two big financial numbers you need to consider before you start: (1) how much you need to put into the business, and (2) how much you can get out of it over time.

For instance, let’s say the initial start-up cost is $1m, but you can take out $200k in dividends every year for the next 20 years. With these projections, you can decide whether the restaurant is worth starting: over twenty years, the $200k annual dividends add up to $4m, quadrupling your $1m outlay.
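The arithmetic behind that projection can be made explicit (the $1m cost and $200k annual dividend are the hypothetical figures from the example above):

```python
# Hypothetical restaurant projection: $1m in, $200k out per year for 20 years.
initial_investment = 1_000_000  # start-up cost
annual_dividend = 200_000       # cash taken out each year
years = 20

total_dividends = annual_dividend * years        # $4,000,000 over 20 years
multiple = total_dividends / initial_investment  # 4.0x the money put in

print(total_dividends, multiple)
```

Note that this simple multiple ignores the time value of money; discounting each year's $200k would make the picture less rosy, but the decision framework is the same.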

Investing in stocks should involve the same thinking. How much can we get out of the stock over the lifespan of the business? In other words, how much in dividends per share can we collect over the lifespan of the business, and will that exceed what we paid for the shares?

But what about selling the stock?

A business owner who owns her own restaurant may not have an opportunity to sell it. As such, the only way for her to receive any return is from the profits of the business. This means the business owner naturally places emphasis on ensuring that the profits the business generates exceed what she put in.

On the other hand, when we invest in stocks, we can sell the stock. This is both a blessing and a curse in my opinion. It’s good because it provides us with liquidity if we need the cash. But it’s bad because investors then tend to focus on the stock price and not the business fundamentals.

Like a business owner, stock investors should focus on the cash flow of the business rather than its share price. This means looking at future cash flow per share, and ultimately how much in dividends they can receive over the lifespan of the business.

While a company may not be paying dividends yet, its earnings and cash flows allow it to eventually dish out dividends, which over the long term should more than offset the amount you paid for your investment.
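One way to formalise this idea is a simple dividend discount calculation: sum each expected dividend per share, discounted back to today, and compare the total to the current share price. A minimal sketch with purely hypothetical numbers (the dividend stream, growth rate, and discount rate below are illustrative assumptions, not figures from this article):

```python
# Present value of a stream of expected dividends per share
# (a bare-bones dividend discount sketch).
def present_value_of_dividends(dividends, discount_rate):
    # The dividend received in year t is discounted by (1 + r) ** t.
    return sum(
        d / (1 + discount_rate) ** t
        for t, d in enumerate(dividends, start=1)
    )

# Hypothetical: $1.00 per share next year, growing 5% a year for 10 years.
dividends = [1.00 * 1.05 ** t for t in range(10)]
value = present_value_of_dividends(dividends, discount_rate=0.10)

print(round(value, 2))
```

If that present value comfortably exceeds today's share price, the stock can pay for itself through dividends alone, which is exactly the business-owner test described above.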

Final words

Investing in the stock market should be similar to being a business owner. We should focus on how much profit a company can return to us instead of the price we can sell the stock for at a future date.

The quoted stock price can fluctuate wildly and depends greatly on external factors such as the risk-free rate or how Wall Street views the company. This can distract us from what is truly important and from why we invested in the company in the first place.

By focusing on the cash flows of the business, we can predict our returns more reliably instead of being beholden to external conditions that may affect our sale price.

Ultimately, just like a business owner, we should focus on the returns from our dividends instead of wasting energy hoping that the share price goes up. The share price is largely outside our control: if it rises, great, but if it doesn't, it shouldn't matter, because the cash flows alone should be enough to give us a positive return.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 03 December 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the readership of The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 03 December 2023:

1. Charlie Munger, Warren Buffett’s Partner and ‘Abominable No-Man,’ Dies at 99 – Jason Zweig and Justin Baer

No equal business partner has ever played second fiddle better than Charlie Munger.

Warren Buffett’s closest friend and consigliere for six decades, the billionaire vice chairman of Berkshire Hathaway died Tuesday at age 99 in a California hospital. A news release from Berkshire confirmed his death.

In public, especially in front of the tens of thousands of attendees at Berkshire’s annual meetings, Munger deferred to Buffett, letting the company’s chairman hog the microphone and the limelight. Munger routinely cracked up the crowd by croaking, “I have nothing to add.”

In private, Buffett, who is 93, often deferred to Munger. In 1971, Munger talked him into buying See’s Candy Shops for a price equivalent to three times the chocolate stores’ net worth—a “fancy price,” Buffett later recalled, far higher than he was accustomed to paying for businesses.

See’s would go on to generate some $2 billion in cumulative earnings for Berkshire over the coming decades…

…Buffett nicknamed Munger the “abominable no-man” for his ferocity in rejecting potential investments, including some that Buffett might otherwise have made. But Munger, who was fascinated by engineering and technology, also pushed the tech-phobic Buffett into big bets on BYD, a Chinese battery and electric vehicle maker, and Iscar, an Israeli machine-tool manufacturer.

Munger was a brilliant investor in his own right. He began managing investment partnerships in 1962. From then through 1969, the S&P 500 gained an average of 5.6% annually. Buffett’s partnerships returned an average of 24.3% annually. Munger’s did even better, averaging annualized gains of 24.4%.

In 1975, shortly before he joined Berkshire as vice chairman, Munger shut down his partnerships. Over their 14-year history, his portfolios gained an average of 19.8% annually; the S&P 500 grew at only a 5.2% rate…

…“I have been shaped tremendously by Charlie,” Buffett said in 1988. “Boy, if I had listened only to Ben [Graham], would I ever be a lot poorer.”

In 2015, Buffett wrote that Munger taught him: “Forget what you know about buying fair businesses at wonderful prices; instead, buy wonderful businesses at fair prices.”

Berkshire “has been built to Charlie’s blueprint,” Buffett added…

…Munger also confronted tragedy: In 1955, his son Teddy died of leukemia at age 9. Munger later recalled pacing the streets of Pasadena in tears at “losing a child inch by inch.” More than six decades later he would still choke up at the memory of his son’s suffering.

In 1978, a surgeon bungled a cataract surgery, leaving Munger blind in one eye, which later had to be surgically removed. The investor refused to blame the doctor, noting that complications occurred in 5% of such procedures. For him, as always, it was about the numbers.

Munger taught himself Braille, then realized he could still see well enough to read. He ended up driving his own car, often to the consternation of friends and family, until his early 90s…

…At Berkshire’s annual meeting in 2000, a shareholder asked how the speculation in Internet stocks would affect the economy. Buffett answered with nearly 550 words. Munger growled, “if you mix raisins with turds, they’re still turds.”

When a shareholder asked at the 2004 meeting how Berkshire sets pay for executives, Buffett spoke for more than five minutes. Munger drawled, “Well, I would rather throw a viper down my shirtfront than hire a compensation consultant.”…

…Munger never stopped preaching old-fashioned virtues. Two of his favorite words were assiduity and equanimity.

He liked the first, he said in a speech in 2007, because “it means sit down on your ass until you do it.” He often said that the key to investing success was doing nothing for years, even decades, waiting to buy with “aggression” when bargains finally materialized.

He liked the second because it reflected his philosophy of investing and of life. Every investor, Munger said frequently, should be able to react with equanimity to a 50% loss in the stock market every few decades.

Munger retained his sense of humor into his 90s, even though he was nearly blind, could barely walk, and his beloved wife, Nancy, had died years earlier. Around 2016, an acquaintance asked which person, in a long life, he felt most grateful to.

“My second wife’s first husband,” Munger said instantly. “I had the ungrudging love of this magnificent woman for 60 years simply by being a somewhat less awful husband than he was.”

2. How Geopolitical Risks Are Impacting Iranian Stocks – Tracy Alloway, Joe Weisenthal, Aashna Shah, and Maciej Wojtal

Maciej (04:03):

But what’s interesting and why we are doing this is that you mentioned that you were surprised how big Iran’s economy is and I would say that no, it’s actually very small compared to how big it could get because Iran, you know, it’s around 90 million people, the largest combined oil and gas reserves in the world, and a properly developed and diversified economy. Well, thanks to decades of sanctions, they didn’t have a choice. They had to develop all different parts of the economy.

And all this — in terms of GDP — is around, depending how you calculate it, but it’s around $200 billion. Now when you look at Turkey, which is a similar size country in terms of population and geographical size, but no natural resources, Turkey is around $800–$900 billion. If you look at Saudi Arabia, which has no other sectors except for oil and some petrochemicals, the GDP over there is around $1 trillion. So in some super optimistic, very positive scenario, if everything went well for Iran, Iran could become basically the combination of the two, which is anywhere between $1.8 and $2 trillion.

So the upside for the economy is eight times from where it is right now. So this is the potential, this is the optionality that is in the market. On top of that, once the country starts to open up, obviously there is a long list of things that would have to come in place, then we expect to see a lot of capital flowing into the market and right now it’s only domestic capital and us, which means that because there is not enough capital the local assets are valued at very low levels.

So what we are seeing in the market is that we are buying stocks at four to five times forward net earnings and those earnings are growing, they are paying dividends. The average or the median dividend yield for the top 100 companies is probably close to 15%. So strong double-digit dividend yields, and valuations at such levels that they cannot really fall further as long as those earnings are growing.

So investment risks are pretty small, pretty limited. You have different sorts of risks. You have geopolitics, exactly as you mentioned. I mean those equities basically are priced for war and obviously there is a reason, there might be a reason for that. It’s because it’s the Middle East and it’s amazing how the narrative, you know, the region reminded everyone that the situation and the perception of the region can make a u-turn overnight. Because a month ago, it was not only what you mentioned in the introduction that there was some sort of arrangement between Iran and the US which led to the prisoner exchange, which was very important because historically prisoner exchange was usually the first step to something bigger. Then on top of that, Iran is selling a lot of oil so obviously sanctions are probably not enforced very strictly and so on.

But the bigger story a month ago was in the whole Middle East where Iran basically signed what you can call a peace treaty with Saudi Arabia after many years of not having diplomatic relations. Then what followed were discussions and restoration of diplomatic ties with Iran and Egypt, Bahrain, all Saudi allies and so on…

Maciej (12:22):

So on the seventh of October, I believe that it was the case for the whole region that the local currencies sold off and local stock markets went down. What happened was that initially everything went down. For the first three weeks, the local equity index measured in dollar terms was going down with the lowest point around 10% in terms of the correction.

Since then it started bouncing back. In local currency terms, the equity index is actually at the level from the seventh of October so it made up for all the losses. The currency is still down. So for a foreign investor who is measuring the P&L in dollar terms, you are still roughly 3% down. So it’s actually not that bad given the circumstances, given the risk for local markets and especially Iran which is involved in everything that is going on.

The worst case scenario is that potentially there is a military conflict war, and I don’t know, Iranian refineries or petrochemical plants are military targets and so on. And people were quite scared. We could see this. Some of the sectors went down in the meantime by about 20%, bounced back since then, but mainly that was happening due to very low liquidity.

So what was the biggest impact? Actually, the biggest impact we could see was on liquidity. Normal liquidity is around $150 million per day, and it went to as low as $30–40 million. So what was going down the most was actually the most illiquid stocks or illiquid industries. So when I look at the sectors that were hit the most, it’s textile producers, confectionaries, so things that are not related to war or geopolitics at all, but they are basically illiquid.

And, oh. One thing important to remember, so the stock market is driven by retail investors. 90% of daily trading is done by retail. So, it’s very emotional, it’s very short term momentum, I would say. So they are selling or buying depending on the, you know, recent price action. So they were driving the share price direction basically…

Maciej (15:36):

Yes, and the thing that is most volatile in Iran is the currency. So the stock market is much less volatile in the local currency than when measured in dollar terms. The local stock market is actually well hedged against currency depreciations because the majority of the biggest companies are actually exporters so they benefit from currency depreciation, but share prices react with a lag…

Maciej (19:47):

There are two interesting facts about the performance of the market. So first of all, when I looked at the last 15 years and big geopolitical events for example, like previous conflicts with Hamas in Gaza, or there was a situation between Iran and the US where people were saying that this was close to a military conflict when Iranian general Soleimani was killed and then Iran retaliated by firing some missiles at an American base in Iraq. When I looked at the performance of the market, it never went down more than 10% in dollar terms, actually.

So what happened right now, I think the bottom was around almost 11% was pretty much in line with those historical geopolitical events that also presented a big risk for the local market. But another way of looking at the Iranian market is the historical performance. And this is very interesting because if you look at the performance of the benchmark equity index, it’s called TedPix Index, total return.

For the last 15 years, so since the inception in 2018, the annualized return in dollars is around 11%, which I think is quite amazing because it’s pretty much the same as for S&P 500, maybe 12% for S&P 500, so it’s in the same ballpark and the environment was completely different. I mean, couldn’t be more different because over the last 15 years in the US you had a technology revolution, those mega caps appearing on the market, interest rates initially going to zero, top of the cycle valuations and in Iran, you had two episodes of currency depreciation of more than 75%. You had some crazy presidents and you had US sanctions, UN sanctions and still, at the end of the day, when you compare performance over the last 15 years, it’s pretty much the same, obviously with much bigger volatility because in Iran, the volatility was probably around 40% or something.

But that shows you that when you’re buying assets at very, very low valuations, and I’m say talking about this four times net earnings, let’s say, and the economy and those companies are actually naturally hedged against the currency volatility or big depreciation, then even in those countries where things are going really bad you can still make money. But what is more important is that if in bad times you are still averaging 11% per year, just think what you can make, what you can expect, when things finally go the right way for Iran and the country opens up and so on? That’s the potential that we are obviously hoping for…

Maciej (30:25):

There are several asset classes in Iran for retail investors. So real estate is the big one, the biggest one, but it’s a high ticket item so not everyone can trade in and out of apartments. It’s a well understood asset class as everywhere. That’s why it’s a bit less interesting for us. So if Iranians have any spare cash, they will buy real estate. From what I heard, 30% of apartments in Tehran are actually empty because they are basically used as a store of value, just somewhere to park assets and savings, and they’re not even rented out, they’re just empty.

And also just bear in mind that in Tehran in the best places, the best neighborhoods of Tehran prices are quite expensive. So in the north of Tehran, if you want to buy an apartment, you have to pay around $10,000 per square meter. So a 100 square meter apartment, I don’t know three bedrooms will cost you a million dollars or something, in Iran, which is a poor country. So this is real estate. Real estate is the number one asset class.

Then a very important asset class are used cars. So people trade used cars because they are, again, a hedge against inflation against the currency depreciation, because car manufacturers will always adjust prices based on inflation. Some of the components have to be imported, which is not easy. They produce more than one million cars, or actually closer probably to 1.5 million cars per year, but this is not enough. So the demand is much higher.

So they’re trading used cars and there are platforms that help you trade used cars. It’s a proper asset class, and yes, every Iranian is actually a currency trader, because the currency has been so volatile historically. It’s very important that you know what’s happening to the dollar or the local currency against the dollar. So everyone is tracking the exchange rate and it’s not easy to buy and sell dollars. There are quotas for individual Iranians due to capital controls. So that’s why, instead of buying dollars or to get a bigger position, they go to those proxy asset classes, like used cars or real estate. There are also interest rates, so you can buy and sell Treasury bills of up to two years’ maturity. They pay around 25% yield to maturity, maybe a bit more right now, so interest rates are high.

When you look at Iran, there is not enough capital there. There’s basically not enough money, credit doesn’t exist. I mean, you cannot get a mortgage at 25%, right? I mean, you cannot finance anything at 25%. And because of very volatile macro people also tend to postpone investment decisions, whether these are individuals or more importantly companies, right?

Everyone is looking like six months ahead, maybe 12 months ahead, right? And they are managing a crisis, because there is always some sort of a crisis, right? So when you think about it, for example, I don’t know, every company is running big inventories just in case, just so that they have enough material to manufacture their products. So they’re not optimized, organized in this very efficient, lean way. They are organized just to survive, basically, war conflict, currency depreciation, sanctions, trade disruptions, whatever…

Maciej (35:41):

So when sanctions were reintroduced in 2018, they haven’t hurt manufacturing, they haven’t hurt exports, companies that much, to be honest. I mean, because people find a way. I mean, companies that export in the region, they’re not really affected by sanctions, big exporters that used to send products to Japan and so on, yes, they were affected, but they found other routes and manufacturers.

Sanctions caused one thing. I mean, sanctions caused currency volatility so the big depreciations of Rial and manufacturers who have costs in Rial, but they either sell in hard currency or at prices linked to some regional benchmarks that are in hard currency, their margins actually expanded.

Look, it’s an interesting thing that the highest earnings growth that we’ve seen over the last couple of years was one year after the 2018 sanctions. This is crazy because this is not intended, I would assume. And, who got hurt by sanctions? Well, households, because they are price takers. So when inflation shot up because of the currency depreciation, their spending power went down massively, right? And they were able to survive and it was actually quite interesting that they were holding up quite well. And this is because of those savings, right?

Because of the savings that Iranian households had. I’m not sure what’s the situation right now, because they’ve been, I think, on a net basis, those savings have been decreasing over the last couple of years because they had just had to spend them. But yes, that’s what helped them survive the inflation basically.

3. Frugal vs. Independent – Morgan Housel

Frugal, by my definition, means depriving yourself of something you want and could afford.

Not wanting something to begin with because you get your pleasure and identity from sources that can’t be purchased is something entirely different. The best word for it is probably independent…

…The world tells you – even by a mere whisper – that everyone should want the same things: A big house, a nice car, advanced degrees, credentials, social clubs, etc.

I like most of those things. But you have to realize how much of their appeal is an attraction to status, which can be completely different from happiness.

There’s a recent example of someone understanding the difference in real time that I think is more fascinating than Holt or Read’s story.

Chuck Feeney, who founded Duty Free stores, died last month.

The well-known part of Feeney’s story is that he gave away 99.99% of his $8 billion fortune years ago, before he died. He and his wife kept $2 million, lived in a small apartment, flew coach, and gave the rest to charity.

The less well-known part of Feeney’s story is that he once gave the High Life an honest try. The Washington Post wrote of his life in the 1980s, when he was newly rich:

He had luxury apartments in New York, London and Paris and posh getaways in Aspen and the French Riviera. He hobnobbed with the other mega-rich on yachts and private jets. If he wanted it, he could afford it.

He quickly realized it wasn’t for him. Society told him he should want those things. But it wasn’t what actually made him happy.

Giving money away was.

“I’m happy when what I’m doing is helping people and unhappy when what I’m doing isn’t helping people,” Feeney said…

…He didn’t follow a typical path of what other people told him to like or how to live.

He found what made him happy.

He may have looked frugal, but he was actually the freest, most independent person you’ll ever hear of.

4. Value Investing with Legends: Nicolai Tangen – Decision-Making and Intuition in Investing (transcript here) – Michael Mauboussin, Tano Santos, and Nicolai Tangen

Mauboussin: What motivated you to do that? And a slightly odd question. Do you see parallels between the investing and the art worlds at all?

Tangen: So I had been very well paid at Egerton and so could afford to take a break. I wanted to do something which was very different. And so I studied German Expressionist Woodcuts, pretty black and white. And it’s wonderful to study with people who think differently and who really want to dig down. And of course, you get your attention span back up from like 2 seconds to 2 hours when you have to write a dissertation and so on. So that was good.

Are there any similarities between art and investing? Well, I don’t think so. Some people claim there is. It’s not for me. I love art because it’s very different from what I do on a daily basis. But perhaps it’s good for creativity. It’s certainly good for the soul. It’s fun, it’s beautiful, interesting.

Santos: You know, when I was telling that we had this point of connection, it’s because I came very close to studying art history when I was a young man. I became completely obsessed with art history. And I spent every summer during my teenage years travelling around France and Italy, trying to absorb as much as I could, you know. And at some point I learned that I also like teaching, so that’s when I decided to go in a different direction. But you’re absolutely right, it’s something that sustains you throughout life.

Tangen: A big difference is you study art, it’s something dead, it’s on the wall. Finance, it’s alive, it’s incredible. I just think finance is just an amazing thing to study because it’s everything that you eat, wear, drive, consume, all these kind of things. It’s about the people, it’s about the psychology, it’s about corporate culture, it’s about – in the market, greed and fear, it’s related to macro. Security, wars, geopolitics and it changes all the time. All the time. And if you’re good at it, you make money.  And so it is just the most interesting thing you can ever spend your time doing…

…Tangen: Now, we started off as a mid cap firm and then gradually went a bit larger cap because we thought we could add value also there. Also gradually we gravitated towards the higher quality spectrum of stocks and now that’s all I care about. It’s the high quality end. It’s companies which can grow earnings, high return on capital and solid moats. A lot of these things we look for. The rest of it is basically a waste of time. The fewer decisions you can make, the better they become. So if you can just sit there and compound, I just think it’s such a wonderful idea. Is it easy? No, it’s super tough. It’s super tough.

And why is that? Well, I kind of think, you come home from work, your husband or wife asks you, “What have you been doing today?” “Well, I’ve done nothing.” Next day, Tuesday, “What have you been doing today?” Nothing. Wednesday, nothing. Thursday nothing, Friday nothing. You just feel like a failure. So therefore you feel you have to trade a bit, but it’s mostly not very profitable…

…Mauboussin: Do you guys know this book came out this year called How Big Things Get Done? Do you know this, Nicolai? Bent Flyvbjerg and Dan Gardner?

Tangen: Yeah, I read it. It’s very good.

Mauboussin: But I think Chapter One is called Think Slow, Act Fast. And I really like that because the “think slow” part is, a lot of it is contemplation and from time to time you do have to act quickly. But for the most part it’s just sitting around and thinking and trying to line things up. You mentioned that finance is wondrous. I clearly agree with that. But I do want to come back to one of the educational items on your CV and that’s a Master’s in social psychology from the London School of Economics. And I believe you’ve suggested that social psychology is something that everyone should study. I think we spend a little bit of time on it in our finance curriculum, but probably not as much as we should. So tell us a little bit about your takeaways from studying behavior and how that applies to markets, both in good times and in challenging times.

Tangen: I think everybody should study it. And I saw that increasingly everything I read was within the social psychology area and I did it actually part time when I still ran AKO. I did my dissertation on looking at gut feel versus analysis and I interviewed the 15, who I thought were the best performing fund managers in Europe, and analyzed how were they actually going about making decisions. And it’s quite interesting because psychologists don’t typically have access to these well paid hedge fund managers and so on. So it was kind of gold dust kind of sample that I had there.

And what you see is that people, if you call it gut feel, nobody believes in it. If you call it pattern recognition, everybody believes in it, even though it’s the same thing. You don’t believe in anybody else’s gut feel, only your own. And you can mainly use it if you are quite senior in the firm, because you can’t come and say, listen, hey, I’m 22 years old, I really believe I have a gut feel that this and that. Now you’re 55, you’re the boss, everybody listens to you and you have more data points and more experience. So your gut feel is basically better or pattern recognition. That’s interesting. Then you use it when you have very little time, when things are urgent, and you use it when the problems are really complex and difficult to analyze.

My impression was that the best ones go from one to the other, so it depends on the situation. But that was really interesting…

…Tangen: I also spent time on people’s risk appetite. Now it’s very, very important when you run an asset management company, is to understand people’s risk appetite. Risk appetite is linked to different things, such as gender. So women take less risk than men. And you only look at the drowning statistics from Norway. Nine out of 10 people who drown are men, so they take more risks. You see it in traffic accidents and so on. Has to do with age, has to do with geographies, introvert, extrovert. So introverts take less risk. And you need to know that, because if an introvert woman aged 50 comes to you and want to take risk, that means something different from an extrovert guy, 22, from America. You need to dial it up and down. The noise level is really, really important.

The last thing I spent time on in social psychology was just to how to unbias your decisions. Extremely important that you’re able to question your own decision making and change your mind when the facts change. So really interesting, everybody, you just have to study it. It’s just the best thing to study.

Mauboussin: So, Nicolai, on that last one, are there a couple guides you would give to folks to debias as they go through their process, or there are tools that you would pull out?

Tangen: Well, the biggest bias people have is the fact that you don’t think you’re biased. Adam Grant’s book, Think Again, the whole mindset there of confident humility, that’s where you need to be as an investor. You need to be confident and you need to be really stubborn, because where you make money is, of course, where you do the opposite of everybody else. But when things change, you just need to change your mind. So that combination of being stubborn and agile is rare. But those are the guys who make the most money. I mean, look at Stan Druckenmiller, who is very confident about his decisions, but then, bang, something changes and he changes tack…

…Tangen: I sail. And at that stage, I sailed quite a bit competitively, with some spectacularly good sailors, some Olympic people. And I asked, why are you so good? Two things were key to their success. One was bounce-back ability – how you get back on your horse after a loss, which, of course, you also need in investing. The other was the debrief process, which was really important. And so we started to work with sports psychologists on how to improve these kinds of things.

And one of the important things when you look at high achievers in sport is that they focus on the process rather than the results. If your process is right, the results will come. And of course, in investing this is more important than anywhere else, because in investing, in the short term, there is just no correlation between process and outcome, whilst in the long term, that’s what it’s all about. So you need to judge your process. We split the investment process into different categories and then graded each analyst on each part of that process at regular intervals. And that’s a really good thing to do, because if you go through a period of underperformance, as long as you see that your process is improving, you shouldn’t be too depressed about it…

…Tangen: I probably spend more time now on corporate culture than I did in the past. It’s so unbelievably important. You can have two companies which from the outside look exactly the same – they have pretty much the same product and so on – and one of them does extremely well while the other one just doesn’t. Look at the banking sector. On my podcast, I interviewed James Gorman, the 14-year CEO of Morgan Stanley, about how Morgan Stanley has done so well compared to other banks. It’s just intriguing how important corporate culture is. And that is also something that CEOs are very keen to talk about but analysts generally are not, because the results of corporate culture work show up only in the long term, and most analysts are very short term.

And another interesting thing is that when you are young, you’re 25 years old, you are so in a hurry, despite having your whole life ahead of you. Now, when you are like 57, like me, and about to die, you suddenly get this long-term time horizon. It makes no sense. But I think that’s just interesting. And I just met this 85-year-old Spanish guy the other day who was planting some pistachio trees, and he couldn’t wait. I can’t remember how long it took for them to bear fruit, but it was certainly – I mean, he would probably not be alive then. He was really excited about it. I thought that was so cool…

…Tangen: Norway found oil in ’69, on the very last attempt, the very last well they were drilling. If they hadn’t found oil in that last one, they would have just packed up the toys and gone home. So pretty amazing. Now, this was told to the Norwegian people the day before Christmas Eve, ’69, and wow, what a Christmas gift.

But the thing was: was it really? Because in a lot of other countries oil had been a curse, leading to corruption, crowding-out effects, and so on. And then some very clever politicians decided, you know what, let’s put the money into a fund. So they did, 27 years ago, starting with a deposit of 2 billion Norwegian kroner, and that has now grown to more than 15,000 billion. So it’s been just an unbelievable success…

…Tangen: Now we are also generally pretty vocal on ESG because we think it’s very, very important. We do think the link between climate and finance is strong and getting stronger. Climate is driving food inflation through bad harvests and food price increases. It is also driving it through lower productivity. So that link is strong and established…

…Mauboussin: So, Nicolai, when you finally end up hanging up your cleats, how will you define success for the fund? I’m sure returns are obviously very important, but what other factors do you think will be important in judging your success as CEO of Norges?

Tangen: We have a clear goal in our strategy document: we want to be the best large investment fund in the world. How do you define that? Well, one thing is performance, but it also has to do with process, reputation, and risks. And I would also judge it by how happy the people working there are. Are they using their full potential? Are they thriving? Do they have a good life? Very, very important. And are they having fun? Fun is completely underestimated. People who have fun are more creative. It’s a great leveller. It’s, in a way, the goal of everything we do: you want to have fun and you want to be fulfilled…

…Tangen: I do think a lot about productivity and the lack of productivity growth. And in particular, I’m thinking about Europe versus the US. Because in the US, there is more innovation, there is more speed, and Europe is pretty slow.

And what is it with Europe? Why don’t we have great technology companies? Why is growth pretty pedestrian? It’s just a combination of so many things. It’s a mindset thing: in Europe, we think 2% growth is fine. Well, perhaps it should be five, perhaps it should be 10. We have very few hairy goals. You read the Elon Musk book and you understand what a hairy goal is. The speed, the speed by which you move. It just struck me here – I’m in New York now – just the speed by which they pack a sandwich. It’s like five times quicker than they do it in Europe. Then there is the depth of capital markets: Europe lacks depth in the corporate bond market, while here you’ve got more risky capital, or risk-seeking capital, and more venture capital. You can fail. In Europe, it’s not good to fail. A much bigger public sector probably slows things down too. I really think unions are a great thing, but they do something to the structure of businesses. So you have a whole range of things which make Europe slower than America, and that worries me. But I’m doing more work here. This is my next thinking project. Really interesting.

5. Parallel Bets, Microsoft, and AI Strategies – Matthew Ball

Parallel bets strategies are best suited to (1) cash-rich companies . . . that are (2) pursuing “must-win” categories . . . in which (3) their assets and strategies are a good fit . . . but (4) may not be configured correctly . . . and (5) there is a high rate of change . . . and (6) many uncertainties . . . and (7) many players . . . with (8) progress often occurring out of sight. Deployed correctly, a company can cover all of the bases while also neutralizing the existential threat of a new competitor. Parallel bets are therefore likely the right strategy for “Big Tech” generally during this phase of AI, in which there are many unresolved and interconnected hypotheses.

  • Will closed or open models be more capable? If closed models are technically superior, will open models nevertheless be considered “superior” on a cost-adjusted basis? What is the trade-off between the quality of a generative AI response and its cost? How does this vary by vertical?
  • How many of the potential uses of generative AI will result in new companies/applications, rather than new or improved functionality in the products of existing market leaders? Put another way, is the technology or distribution more important? Is there a hybrid model in which users access existing applications, such as PhotoShop or Microsoft Office, but while logging into a third-party AI service, such as OpenAI?
  • Which AI products or integrations will warrant additional revenue from the user, rather than just be baked into the core product as a new table-stakes feature?
  • To what extent are the answers to these questions path-dependent, that is, subject to specific decisions by specific companies and the quality of their specific products (as was the case with Meta open-sourcing its Llama 2 LLM)? And how, again, do the answers differ by vertical?

Eventually, though, it will be necessary for parallel bets to be winnowed; all strategy is eventually about execution. Note how quickly Microsoft focused its OS strategy on Windows after the success of Windows 3.0 in 1990 (the company was later accused of following an “Embrace, Extend, Extinguish” model where one-time partners would be crushed once emerging markets stabilized). The questions here, of course, are “When,” “How Much,” and “How do you know?”

Microsoft never halted its investments in applications, productivity tools, or Internet services, and is better off as a result. Sometimes parallel bets lead to growth in new adjacent markets rather than displacing a current one (to that end, Microsoft’s more direct OS bets were eventually pared). It’s possible that Amazon’s Alexa device footprint may yet enable the company to regain market leadership. Indeed, OpenAI’s CEO, Sam Altman, has confirmed reports that it is considering its own foray into consumer hardware (led, according to rumours, by Apple’s long-time design chief, Jony Ive).

And sitting alongside all of the above considerations is the biggest question: how might the focus on current AI architectures and opportunities distract from the development of artificial general intelligence? John Carmack, who is considered the “father of 3D graphics” for his pioneering work at Id Software (which he co-founded in 1991) and who joined Oculus VR as its first CTO in 2013, founded his own AI start-up, Keen Technologies, in 2022; it is exclusively focused on developing artificial general intelligence. According to Carmack, the number of [contemporary] “billion-dollar off-ramps” for AI technologies has become a de facto obstacle to achieving true AGI. “There are extremely powerful things possible now in the narrow machine-learning stuff,” Carmack told Dallas Innovates in his first major interview after founding Keen, “[but] it’s not clear those are the necessary steps to get all the way to artificial general intelligence.”


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Microsoft. Holdings are subject to change at any time.

Lessons From The Immortal Charlie Munger

Neuroscientist David Eagleman once wrote: “There are three deaths: the first is when the body ceases to function. The second is when the body is consigned to the grave. The third is that moment, sometime in the future, when your name is spoken for the last time.”

Along Eagleman’s line of reasoning, Charlie Munger, who passed away peacefully last night, would be immortal since he would never experience the third death – his accomplishments, and the wisdom he has shared throughout his life, would see to it. 

Munger is one of my investing heroes. In remembrance of his life, I would like to share my favourite lessons from him.

On the importance of thinking in reverse, or inverting

“Another idea that I discovered was encapsulated by that story Dean McCaffery recounted earlier about the rustic who wanted to know where he was going to die, so he wouldn’t go there. The rustic who had that ridiculous sounding idea had a profound truth in his possession. The way complex adaptive systems work, and the way mental constructs work, problems frequently become easier to solve through inversion. If you turn problems around into reverse, you often think better. For instance, if you want to help India, the question you should consider asking is not: How can I help India? Instead, you should ask: How can I hurt India? You find what will do the worst damage, and then try to avoid it. Perhaps the two approaches seem logically the same thing. But those who have mastered algebra know that inversion will often and easily solve problems that otherwise resist solution. And in life, just as in algebra, inversion will help you solve problems that you can’t otherwise handle.”

On the importance of being equanimous when investing

“If you’re not willing to react with equanimity to a market price decline of 50% two or three times a century you’re not fit to be a common shareholder and you deserve the mediocre result you’re going to get compared to people who do have the temperament, who can be more philosophical about these market fluctuations.”

On the importance of incentives

“From all business, my favourite case on incentives is Federal Express. The heart and soul of their system – which creates the integrity of the product – is having all their airplanes come to one place in the middle of the night and shift all the packages from plane to plane. If there are delays, the whole operation can’t deliver a product full of integrity to Federal Express customers. And it was always screwed up. They could never get it done on time. They tried everything – moral suasion, threats, you name it. And nothing worked. Finally, somebody got the idea to pay all these people not so much an hour, but so much a shift – and when it’s all done, they can go home. Well, their problems cleared up overnight.”

On great career advice

“Three rules for a career: (1) Don’t sell anything you wouldn’t buy yourself; (2) Don’t work for anyone you don’t respect and admire; and (3) Work only with people you enjoy.”

On the importance of admitting mistakes

“There’s no way that you can live an adequate life without many mistakes. In fact, one trick in life is to get so you can handle mistakes. Failure to handle psychological denial is a common way for people to go broke.”

On the importance of not letting rare events completely shape how you approach life

“Ben Graham had a lot to learn as an investor. His ideas of how to value companies were all shaped by how the Great Crash and the Depression almost destroyed him… It left him with an aftermath of fear for the rest of his life, and all his methods were designed to keep that at bay.”

On the importance of handling problems from many different angles

“Most people are trained in one model – economics, for example – and try to solve all problems in one way. You know the saying: “To the man with a hammer, the world looks like a nail.” This is a dumb way of handling problems.”

On the importance of getting a little wiser each day

“I constantly see people rise in life who are not the smartest, sometimes not even the most diligent, but they are learning machines. They go to bed every night a little wiser than they were when they got up, and boy, does that help, particularly when you have a long run ahead of you.”

On how to invest

“Over the long term, it’s hard for a stock to earn a much better return than the business which underlies it earns. If the business earns 6% on capital over 40 years and you hold it for that 40 years, you’re not going to make much different than a 6% return—even if you originally buy it at a huge discount. Conversely, if a business earns 18% on capital over 20 or 30 years, even if you pay an expensive looking price, you’ll end up with a fine result. So the trick is getting into better businesses. And that involves all of these advantages of scale that you could consider momentum effects.”
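Munger’s arithmetic can be checked with a short sketch. It is my own illustration, not his method, and rests on two simplifying assumptions: intrinsic value compounds at the business’s return on capital, and the stock is bought at a discount (or premium) to fair value but eventually sold at fair value. The function name is illustrative.

```python
def annualised_return(roc: float, years: int, discount: float) -> float:
    """Annualised return from buying at (1 - discount) of fair value,
    holding for `years` years, and selling at fair value, assuming
    intrinsic value compounds at the return on capital (roc)."""
    end_value = (1 + roc) ** years   # fair value after compounding at roc
    start_price = 1 - discount       # pay a fraction (or multiple) of fair value
    return (end_value / start_price) ** (1 / years) - 1

# A 6% business bought at a 30% discount and held 40 years: roughly 6.9% a year
low = annualised_return(0.06, 40, 0.30)

# An 18% business bought at a 30% premium and held 30 years: roughly 17% a year
high = annualised_return(0.18, 30, -0.30)
```

Even a large purchase discount adds less than a percentage point a year over 40 years, while an expensive-looking entry price barely dents the 18% compounder – which is exactly Munger’s point.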

On how to get others to agree with you

“Well, you’ll end up agreeing with me because you’re smart and I’m right.”

On the secret to a happy life

“I always say the same thing: realistic expectations, which is low expectations. If you have unreasonable demands on life, you’re like a bird that’s trying to destroy himself by bashing his wings on the edge of the cage. And you really can’t get out of the cage. It’s stupid. You want to have reasonable expectations and take life’s results good and bad as they happen with a certain amount of stoicism.”

On courage and perseverance

I saved the most poignant lesson I’ve learned from Munger for the last. Not many may know this, but the first decade-plus of Munger’s adulthood was tragic. 

Munger got married when he was 21, but the marriage ended when he was 29. He “lost everything in the divorce”, according to his daughter Molly Munger. Shortly after the divorce, Munger’s son, Teddy Munger, was diagnosed with leukaemia. “In those days, there was no medical insurance – I just paid all the expenses,” Munger once said. But more importantly, there was absolutely nothing doctors back then could do for leukaemia. When Munger was 31, Teddy passed on. Munger recounted the heart-wrenching episode: “I can’t imagine any experience in life worse than losing a child inch by inch. By the time he died, my weight was down 10 to 15 pounds from normal.” One of Munger’s friends, Rick Guerin, said that “when his [Munger’s] son was in the bed and slowly dying, he’d go in and hold him for a while, then go out walking the streets of Pasadena crying.”

So by the time Munger was 31, he had already gone through a divorce, experienced the painful death of his son from an incurable disease, and was broke. 

But when Munger left the world last night, he was a billionaire, and was widely revered around the world for his wit, wisdom, and character. He taught me that with courage and perseverance, we can eventually build a better life for ourselves. “You should never, when faced with one unbelievable tragedy, let one tragedy increase into two or three because of a failure of will,” he admonished. 

See you on the other side, Mr Munger.

Having a Margin of Safety

How do we buy stocks with a margin of safety and how wide of a margin do we need?

Warren Buffett once said that we should invest only when the price provides us with a margin of safety. But what does a margin of safety really mean? Let’s break it down.

Accounting for shortfalls in forecasts

Investing is a game of probability. 

It is impossible to forecast the exact cash flows and dividends that a company will pay in the future. This is where the concept of a margin of safety comes in. Morgan Housel once wrote:

“Margin of safety is simply the distance between your predictions coming true and needing those predictions to come true. You can still try to predict the future, but a margin of safety gives you room for error to be wrong.”

For instance, we may forecast that a company will pay us $1 per share in dividends for 10 years and then close down after those 10 years are over.

Using a dividend discount model and a 10% required rate of return, we can calculate that the value of the shares should be $6.14 each. In other words, if we pay $6.14, it will give us a 10% annual return based on the expected dividends we can receive over time.

But what if our forecast falls short? Say the company ends up paying a dividend of only $0.80 per share each year. In this case, paying $6.14 for the company’s shares will not get us our desired return of 10% per year.

To account for this potential 20% shortfall in dividends per share, we should have a margin of safety. We can calculate that we should only buy the stock if the stock price is $4.92 so that we have a “margin of safety” in case our forecast falls short.
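The two prices in this example come from a plain dividend discount calculation. Here is a minimal sketch of the arithmetic (the function name is mine):

```python
def present_value(dividend: float, rate: float, years: int) -> float:
    """Present value of a level dividend stream (an ordinary annuity),
    discounted at the required rate of return."""
    return sum(dividend / (1 + rate) ** t for t in range(1, years + 1))

fair_price = present_value(1.00, 0.10, 10)       # ~$6.14 for $1/share over 10 years
shortfall_price = present_value(0.80, 0.10, 10)  # ~$4.92 if dividends fall 20% short
```

Paying $4.92 instead of $6.14 means we still earn our required 10% even if the dividends come in 20% light.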

Accounting for different discount rates

But a margin of safety does not only mean that we should account for the company’s actual results deviating from our forecasts. There is another crucial factor that comes into play.

If we intend to sell the stock, we need to factor in the sale price, which will depend on the buyer’s required rate of return, or discount rate.

For instance, suppose we want to buy the same company as above but, instead of buying and holding for the full 10 years, we intend to sell the shares after just 5 years.

If we are buying the stock for the full 10 years, we can pay $6.14 per share, knowing that we will get a 10% return simply by collecting the dividend and reinvesting the dividend at a 10% rate.

But if we intend to sell the shares after 5 years, another factor comes into play – the sale price of the shares at the 5-year mark. Obviously, if we can’t get a good price during the sale, our returns will be subpar.

If the person buying the stock from us at the 5-year mark also requires a 10% rate of return, we can sell the stock at “his price” ($3.79) and still receive a 10% annualised return.

However, if the person that we are selling the stock to requires a 12% rate of return, he will only be willing to pay us $3.60 for the shares. In this case, we will receive less than a 10% annual return over our 5-year holding period.

So instead of paying $6.14 per share, we should only pay $5.82 per share to provide us with a margin of safety in case the required rate of return of the buyer goes up to 12% at our point of sale.
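The sale prices quoted above follow from the same annuity arithmetic: at the 5-year mark, the buyer pays the present value, at his own required rate, of the 5 years of dividends that remain. A sketch (function name mine):

```python
def annuity_pv(dividend: float, rate: float, years: int) -> float:
    """Present value of a level dividend stream at the given discount rate."""
    return sum(dividend / (1 + rate) ** t for t in range(1, years + 1))

# Value of the remaining 5 years of $1 dividends at the buyer's required rate
sale_at_10pct = annuity_pv(1.00, 0.10, 5)  # ~$3.79 if the buyer requires 10%
sale_at_12pct = annuity_pv(1.00, 0.12, 5)  # ~$3.60 if the buyer requires 12%
```

The roughly $0.19 gap between the two sale prices is the risk our margin of safety on the purchase price is meant to absorb.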

Margin for upside

Factoring in a margin of safety provides us comfort that we can achieve our desired rate of return. In addition, if things go smoothly, there is the potential to earn even more than our required rate of return.

But while the concept seems straightforward, applying it is more challenging. It requires a keen understanding of the business and a valuation that provides a sufficient margin of safety.

It also requires some judgement on our part. How much of a margin of safety is enough? For companies with very stable and predictable dividend streams, our margin of safety can be narrower. But for companies with less predictable dividend streams, we may want to factor in a larger margin of safety.

I also prefer to demand a relatively high rate of return so that it is unlikely that the required rate of return by the buyer at the point of sale will negatively impact my return.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 26 November 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 26 November 2023:

1. 11 Signs to Avoid Management Meltdowns – Todd Wenning

Pressure to maintain those numbers

Anyone who’s made it to the C-suite understands that missing Wall Street estimates can result in a stock price drop. There’s natural pressure to satisfy investors, particularly when the stock price drives a big part of employee compensation.

Some of that pressure can be good, but it can also lead to unethical decisions when a company can’t achieve those numbers in the ordinary course of business. A company may, for example, stuff a channel with inventory to pull forward demand. That can work for a while, but eventually, all the customers’ warehouses are full.

Companies might also make an acquisition, alter segment reporting, or take some type of restructuring initiative to reset investor expectations. These moves should be viewed with skepticism…

Young’uns and bigger-than-life CEOs

An iconic CEO who surrounds themselves with young, ambitious employees can be a warning sign. Jennings argues that’s because:

“These young’uns don’t have enough experience or wisdom to challenge the CEO, and the CEO has roped them in with executive success. They are hooked on the cash and its trappings and cannot speak up about obvious ethical and legal issues because they might lose the homes, the boats, the cars, and, yes, the prestige that comes with astronomical financial success at a young age.”

In contrast, when a CEO has an experienced team who – critically – have financial and professional options other than working at the company, it’s far less likely (though not impossible) for misbehavior to persist for long…

Innovation like no other

As technological advances accelerate, we’re more frequently dazzled by their potential impacts. Jennings warns that companies behind these technologies may consider themselves “as being above the fray, below the radar, and generally not subject to the laws of either economics or gravity.”

Founders and executives of tremendously successful companies often receive accolades from the business and financial media, as well as their local communities. In turn, this feedback can create an inflated sense of self-importance.

To illustrate, here’s a clip from a December 2000 press release announcing that Fortune magazine named Enron one of the 100 best companies to work for in America.

Enron adds the “100 Best Companies to Work For in America” distinction to its “Most Innovative Company in America” accolade, which it has received from Fortune magazine for the past five years. The magazine also has named Enron the top company for “Quality of Management” and the second best company for “Employee Talent.”

When a company gets this type of public reinforcement, it can provide mental cover for justifying other actions.

As an antidote to this red flag, Jennings suggests being on the lookout for how management responds to external questions about the company, its performance, or its tremendous growth. If rather than thoughtfully respond to a tough question, management launches an ad hominem attack against the questioner, be on your guard…

Obsession with short sellers: If the company has been the target of a well-distributed short thesis, there are two appropriate responses for the types of companies we want to own. One is to ignore it and focus on the business. In 1992, when Fastenal founder Bob Kierlin was asked about the huge short interest in his stock, he replied: “I’ve got nothing against short sellers…They have a role in the market place, too. My own portfolio has a couple of short positions. In the long run, the truth will always come out.” The second is to calmly and thoughtfully respond to short seller concerns, as Netflix’s Reed Hastings did in reply to Whitney Tilson. Any other type of response – particularly when it’s driven by emotion – is a warning sign.

2. Waking up science’s sleeping beauties – Ulkar Aghayeva

Some scientific papers receive very little attention after their publication – some, indeed, receive no attention whatsoever. Others, though, can languish with few citations for years or decades, but are eventually rediscovered and become highly cited. These are the so-called ‘sleeping beauties’ of science.

The reasons for their hibernation vary. Sometimes it is because contemporaneous scientists lack the tools or practical technology to test the idea. Other times, the scientific community does not understand or appreciate what has been discovered, perhaps because of a lack of theory. Yet other times it’s a more sublunary reason: the paper is simply published somewhere obscure and it never makes its way to the right readers…

…The term sleeping beauties was coined by Anthony van Raan, a researcher in quantitative studies of science, in 2004. In his study, he identified sleeping beauties between 1980 and 2000 based on three criteria: first, the length of their ‘sleep’ during which they received few if any citations. Second, the depth of that sleep – the average number of citations during the sleeping period. And third, the intensity of their awakening – the number of citations that came in the four years after the sleeping period ended. Equipped with (somewhat arbitrarily chosen) thresholds for these criteria, van Raan identified sleeping beauties at a rate of about 0.01 percent of all published papers in a given year.
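Van Raan’s three criteria amount to a simple screen over a paper’s yearly citation counts. The sketch below uses made-up thresholds (as the text notes, van Raan’s own cut-offs were somewhat arbitrary), with a 10-year sleep and the 4-year awakening window he used:

```python
def is_sleeping_beauty(citations_per_year, sleep_years=10,
                       max_avg_sleep_citations=1.0, min_awakening_citations=20):
    """Screen a paper's yearly citation counts against van Raan's three
    criteria: length of sleep, depth of sleep, intensity of awakening.
    Thresholds here are illustrative, not van Raan's actual values."""
    if len(citations_per_year) < sleep_years + 4:
        return False  # not enough history to judge the awakening
    sleep = citations_per_year[:sleep_years]
    awakening = citations_per_year[sleep_years:sleep_years + 4]
    deep_sleep = sum(sleep) / sleep_years <= max_avg_sleep_citations
    intense_awakening = sum(awakening) >= min_awakening_citations
    return deep_sleep and intense_awakening

# Ignored for a decade, then suddenly cited: a sleeping beauty
dormant = is_sleeping_beauty([0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 8, 12, 15, 20])
```

In practice, the choice of thresholds drives the estimated prevalence, which is why later studies with broader criteria found far more sleeping beauties than van Raan did.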

Later studies hinted that sleeping beauties are even more common than that. A systematic study in 2015, using data from 384,649 papers published in American Physical Society journals, along with 22,379,244 papers from the search engine Web of Science, found a wide, continuous range of delayed recognition of papers in all scientific fields. This increases the estimated percentage of sleeping beauties at least 100-fold compared to van Raan’s.

Many of those papers became highly influential many decades after their publication – far longer than the typical time windows for measuring citation impact. For example, Herbert Freundlich’s paper ‘Concerning Adsorption in Solutions’ (though its original title is in German) was published in 1907, but began being regularly cited in the early 2000s due to its relevance to new water purification technologies. William Hummers and Richard Offeman’s ‘Preparation of Graphitic Oxide’, published in 1958, also didn’t ‘awaken’ until the 2000s: in this case because it was very relevant to the creation of the soon-to-be Nobel Prize–winning material graphene.

Both of these examples are from ‘hard’ sciences – and interestingly, in physics, chemistry, and mathematics, sleeping beauties seem to occur at higher rates than in other scientific fields.

Indeed, one of the most famous physics papers, Albert Einstein, Boris Podolsky, and Nathan Rosen (EPR)’s ‘Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?’ (1935) is a classic example of a sleeping beauty. It’s number 14 on one list that quantifies sleeping beauties by how long they slept and how many citations they suddenly accrued…

…The EPR paper wasn’t hidden in a third-tier journal, unread by the scientific community. Indeed, it generated intense debate, even a New York Times headline. But in terms of its citations, it was a sleeper: it received many fewer citations than one would expect because it needed testing, but that testing wasn’t feasible for a long time afterward…

…In some cases, a sleeping beauty comes without the kind of great mystery attached to the EPR paper. In some cases, scientists understand something well enough – but just don’t know what to do with it.

The first report of the green fluorescent protein (GFP) – a crucial ingredient in many modern biological experiments because of its ability to glow brightly under ultraviolet light, and thus act as a clear indicator of cellular processes like gene expression and protein dynamics – was published in 1962 in the Journal of Cellular and Comparative Physiology. GFP had been discovered in the jellyfish Aequorea victoria in research led by the marine biologist Osamu Shimomura.

Over the summers of the following 19 years, 85,000 A. victoria jellyfish were caught off Friday Harbor in Washington state in attempts to isolate amounts of GFP sufficient for a more thorough characterization. This resulted in a series of papers between 1974 and 1979. But as Shimomura admitted in an interview many years later, ‘I didn’t know any use of . . . that fluorescent protein, at that time.’

In 1992, things changed. The protein was cloned, and the relevant genetic information was passed on to the biologist Martin Chalfie. Chalfie was first to come up with the idea of expressing GFP transgenically in E. coli bacteria and C. elegans worms. He demonstrated that GFP could be used as a fluorescent marker in living organisms, opening up new worlds of experimentation. GFP is now a routinely used tool across swathes of cell biology…

…With that caveat on the record, we can look at a final example of a true sleeping beauty – one that perhaps has the most to teach us about how to awaken dormant knowledge in science.

In 1911, the pathologist Francis Peyton Rous published a paper in which he reported that when he injected a healthy chicken with a filtered tumor extract from a cancerous chicken, the healthy chicken developed a sarcoma (a type of cancer affecting connective tissue). The extract had been carefully filtered to remove any host cells and bacteria, which might be expected to cause cancer, so another factor must have been at play to explain the contagious cancer.

It turned out that the cause of the tumor in the injected chicken was a virus – but Rous wasn’t able to isolate it at the time.

The importance of his study, and the paper reporting it, wasn’t recognized until after 1951, when a murine leukemia virus was isolated. This opened the door to the era of tumor virology – and to many citations for Rous’s initial paper. The virus Rous had unknowingly discovered in his 1911 paper became known as the Rous sarcoma virus (RSV), and Rous was awarded the Nobel Prize in Medicine in 1966, 55 years after publishing…

…Another lesson is related to collaboration. It could be that the techniques and knowledge required to fully exploit a discovery in one field lie, partly or wholly, in an entirely different one. A study from 2022 showed empirically how the ‘distance’ between biomedical findings – whether they were from similar subfields or ones that generally never cite each other – determines whether they tend to be combined to form new knowledge.

‘Biomedical scientists’, as the paper’s author, data scientist Raul Rodriguez-Esteban, put it, ‘appear to have a wide set of facts available, from which they only end up publishing discoveries about a small subset’. Perhaps understandably, they tend to ‘reach more often for facts that are closer’. Encouraging interdisciplinary collaboration, and encouraging scientists to keep an open mind about who they might work with, could help extend that reach.

That, of course, is easier said than done. Perhaps the most modern tools we have available – namely, powerful AI systems – could help us. It is possible to train an AI to escape the disciplinary lines of universities, instead generating ‘alien’, yet scientifically plausible, hypotheses from across the entire scientific literature.

These might be based, for example, on the identification of unstudied pairs of scientific concepts, unlikely to be imagined by human scientists in the near future. It’s already been shown in research on natural language processing that a purely textual analysis of published studies could potentially glean gene-disease associations or drug targets years before a human, or a human-led analysis, would discover them.

3. An Interview with Intel CEO Pat Gelsinger About Intel’s Progress Towards Process Leadership – Ben Thompson and Pat Gelsinger

Well, tell me more about that. Why is advanced packaging the future? I know this has been a big focus for Intel, it’s something you want to talk about, and from everything I know you have good reason to want to talk about it, your technology is leading the way. Why is that so important in addition to the traditional Moore’s Law shrinking transistors, et cetera? Why do we need to start stacking these chiplets?

PG: Well, there’s about ten good reasons here, Ben.

Give me the top ones.

PG: I’ll give you the top few. One is, obviously, one of your last pieces talked about the economics of Moore’s Law on the leading edge node, well now you’re able to take the performance-sensitive transistors and move them to the leading edge node, but leverage some other technologies for other things, power delivery, graphics, IP sensitive, I/O sensitive, so you get to mix-and-match technologies more effectively this way.

Second, we can actually compose the chiplets to more appropriate die sizes to maximize yield against defect density as well, and particularly as we get to some of the bigger server chips, if you have a monster server die, well, you’re going to be dictated to be n-2, n-3, just because of the monster server die size. Now I get to carve up that server chip and leverage it more effectively on a 3D construct. So I get to move the advanced nodes for computing more rapidly and not be subject to some of the issues, defect density, early in the life of a new process technology. Additionally, we’re starting to run into some different scaling aspects of Moore’s Law as well.

Right.

PG: SRAMs in particular, SRAM scaling will become a bigger and bigger issue going forward. So I actually don’t get benefit by moving a lot of my cache to the next generation node like I do for logic, power, and performance. I actually want to have a 3D construct where I have lots of cache in a base die, and put the advanced computing on top of it into a 3D sandwich, and now you get the best of a cache architecture and the best of the next generation of Moore’s law so it actually creates a much more effective architectural model in the future. Additionally, generally, you’re struggling with the power performance and speed of light between chips.

Right. So how do you solve that with the chiplet when they’re no longer on the same die?

PG: Well, all of a sudden, in the chiplet construct, we’re going to be able to put tens of thousands of bond connections between different chiplets inside of an advanced package. So you’re going to be able to have very high bandwidth, low latency, low power consumption interfaces between chiplets. Racks become systems, systems become chips in this architecture, so it becomes actually a very natural scaling element as we look forward, and it also becomes very economic for design cycles. Hey, I can design a chiplet with this I/O…

The yield advantages of doing smaller dedicated chiplets instead of these huge chips is super obvious, but are there increased yield challenges from putting in all these tens of thousands of bonds between the chips, or is it just a much simpler manufacturing problem that makes up for whatever challenges there might be otherwise?

PG: Clearly, there are challenges here, and that’s another area that Intel actually has some quite unique advantages. One of these, we can do singulated die testing. Literally, we can carve up and do testing at the individual chiplet level before you actually get to a package, so you’re able to take very high yielding chiplets into the rest of the manufacturing process. If you couldn’t do that, now you’re subject to the order of effects of being able to have defects across individual dies, so you need to be able to have very high yielding individual chiplets, you need to be able to test those at temperature as well, so you really can produce a good, and then you need a high yielding manufacturing process as you bring them into an advanced substrate.
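The yield argument Gelsinger is making follows the standard exponential (Poisson) defect model, in which the probability that a die is defect-free falls off exponentially with its area. Here is a minimal sketch with illustrative numbers – the defect density and die sizes below are made up for the example, not Intel’s actual figures:

```python
import math

def die_yield(defect_density, area_cm2):
    # Poisson yield model: probability that a die of the given area
    # contains zero defects.
    return math.exp(-defect_density * area_cm2)

D = 0.1         # defects per cm^2 (illustrative)
big_area = 6.0  # one monolithic server die, in cm^2

mono = die_yield(D, big_area)            # the whole big die must be flawless
chiplet = die_yield(D, big_area / 4)     # each of four smaller chiplets
all_four_untested = chiplet ** 4         # without testing, all four must be good

print(f"monolithic yield:    {mono:.3f}")               # ~0.549
print(f"per-chiplet yield:   {chiplet:.3f}")            # ~0.861
print(f"4 untested chiplets: {all_four_untested:.3f}")  # ~0.549, no better
```

Note that packaging four untested chiplets gives the same odds as the monolithic die; it is the singulated die testing Gelsinger describes – screening each chiplet before packaging so only known-good dies go in – that converts the higher per-chiplet yield into real savings.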

Why is Intel differentiated in this? What is your advantage that you think is sustainable? You’ve talked about it already being the initial driver of your Foundry Service.

PG: Yeah, it’s a variety of things. We’ve been building big dies and servers for quite a while, so we have a lot of big die expertise. Our advanced packaging with Foveros and EMIB (Embedded Multi-Die Interconnect Bridge) and now hybrid bonding and direct copper-to-copper interface, we see ourself as years ahead of other technology providers in the industry. Also, as an integrated IDM, we were developing many of these testing techniques already. So we have our unique testers that we have that allow us to do many of these things in high yield production today at scale, and we’re now building factories that allow us to do wafer level assembly, multi-die environment.

So it really brings together many of the things that Intel is doing as an IDM, now bringing it together in a heterogeneous environment where we’re taking TSMC dies. We’re going to be using other foundries in the industry, we’re standardizing that with UCIe. So I really see ourself as the front end of this multi-chip chiplet world doing so in the Intel way, standardizing it for the industry’s participation with UCIe, and then just winning a better technology…

And then PowerVia and RibbonFET. Number one, explain them to me and to my readers. Number two, why are they so important? And number three, which one is more important? Are they inextricably linked?

PG: Yeah, PowerVia, let’s start with the easier one first. Basically, when you look at a metal stack up and a modern process, leading edge technology might have fifteen to twenty metal layers. Metal one, metal two…

And the transistors all the way at the bottom.

PG: Right, and transistors down here. So it’s just an incredible skyscraper design. Well, the top level of metals is almost entirely used for power delivery, so now you have to take signals and weave them up through this lattice. And then you want big fat metals and why do you want them fat? So you get good RC characteristics, you don’t get inductance, you’re able to have low IR drop right across these big dies.

But then you get lots of interference.

PG: Yeah and then they’re screwing up your metal routing that you want for all of your signals. So the idea of taking them from the top and using wafer-level assembly and moving them to the bottom is magic, right? It really is one of those things, where the first time I saw this laid out, as a former chip designer, I was like, “Hallelujah!”, because now you’re not struggling with a lot of the overall topology, die planning considerations, and you’re going to get better metal characteristics because now I can make them really fat, really big and right where I want them at the transistor, so this is really pretty powerful. And as we’ve done a lot of layout designs now we get both layout efficiency because all of my signal routing gets better, I’m able to make my power delivery and clock networks far more effectively this way, and get better IR characteristics. So less voltage drop, less guard banding requirements, so it ends up being performance and area and design efficiency because the EDA tools —

Right. It just becomes much simpler.

PG: Everybody loves this. That’s PowerVia, and this is really an Intel innovation. The industry looked at this and said, “Wow, these guys are years ahead of anything else”, and now everybody else is racing to catch up, and so this is one where I say, “Man, we are ahead by years over the industry for backside power or PowerVia, and everybody’s racing to get their version up and running, and we’re already well underway and in our second and third generation of innovation here”.

On the transistor, that’s the gate-all-around (GAA) or we call it RibbonFET, our particular formulation for that. Samsung and TSMC have their variation of that, so I’ll say on PowerVia, well ahead, while everybody’s working on GAA and you can say, “Why is Intel better?”, well hey, when you’ve done every major transistor innovation for the last twenty-five years…

Just to step back and look back over your first two to three years, in our previous interview we talked about the importance of the Foundry business being separate from the product business. This is something that I was very anchored on looking at your announcement, and it’s why I was excited about Meteor Lake for example, because to me that was a forcing function for Intel to separate the manufacturing and design parts. At the same time, you are not actually unveiling a separate P&L for it until early next year. What took so long? Was that the area where maybe you actually were moving too slowly?

PG: Well, when you say something like separate the P&L, it’s sort of like Intel hasn’t done this in almost our 60-year history. The idea that we’re going to run fully separate operations with fully separate financials, with fully separate allocation systems at ERP and financial levels, I joke internally, Ben, that the ERP and finance systems of Intel were old when I left, that was thirteen years ago and we are rebuilding all of the corporate systems that we sedimented into IDM 1.0 over a five-decade period.

Tearing all of that apart into truly separate operational disciplines as a fabless company and as a foundry company, that’s a lot of work. Do I wish it could have gone faster? Of course I do, but I wasn’t naive to say, “Wow, I can go make this happen really fast.” It was five decades of operational processes, and by the time we publish the financials in Q1 of next year, we’ll have gone through multiple quarters of running those internally on a trial basis, and that’ll be the first time that we present it to the Street that way.

As we talk to Foundry customers we’re saying, “Come on in, let’s show you what we’re doing, test us.” And MediaTek, one of our early Foundry customers, “Hey, give us the feedback, give me the scorecard. How am I doing? What else do you need to see?”, start giving us the NPS scores for us as a Foundry customer, there’s a lot of work here. Yeah, I wish it would go faster but no, I’m not disappointed that it’s taken this long…

I mean, you’ve pushed vigorously, I would say, for the CHIPS Act and there was actually just a story in the Wall Street Journal, I saw it as I was driving in, that said Intel is the leading candidate for money for a national defense focused foundry, a secure enclave I think they called it, potentially in Arizona. But you mentioned the money aspect of being a foundry, and you have to be the first customer, but you’re the first customer with an also-threatened business that has— you talked about your earnings, you’re not filling your fabs currently as it is, and you don’t have trailing edge fabs spinning off cash to do this, you don’t have a customer base. Is this a situation where, “Look, if the US wants process leadership, we admit we screwed up, but we need help”?

PG: There are two things here. One is, hey, yeah, we realize that our business and our balance sheet, cash flows are not where they need to be. At the same time, there’s a fundamental economic disadvantage to building in the US or Europe versus the ecosystem that has emerged here (Taiwan), which is lower cost.

Right. Which TSMC could tell you.

PG: Right. And hey, you look at some of the press that’s come out around their choice of building in the US, there’s grave concerns on their part of some of those cost gaps. The CHIPS Act is designed to close those cost gaps and I’m not asking for handouts by any means, but I’m saying for me to economically build major manufacturing in US and Europe, those cost gaps must be closed, because if I’m going to plunk down $30 billion for a major new manufacturing facility and out of the gate, I’m at a 30%, 40% cost disadvantage —

Even without the customer acquisition challenges or whatever it might be.

PG: At that point, no shareholders should look at me and say, “Please build more in the US or Europe.” They should say, “Well, move to Asia where the ecosystem is more mature and it’s more cost-effective to build.” That’s what the CHIPS Act was about: if we want balanced, resilient supply chains, we must close that economic gap so that we can build in the US and Europe as we have been. And trust me, I am fixing our issues but otherwise, I should go build in Asia as well, and I don’t think that’s the right thing for the world. We need balanced supply chains that are resilient for the Americas and for Europe and in Asia to have this most important resource delivered through supply chains around the world. That’s what the CHIPS Act was about.

I am concerned though. My big concern, just to put my cards on the table, is the trailing edge, where it’s basically Taiwan and China, and obviously China has its own issues, but if Taiwan were taken off the map, suddenly, part of what motivated the CHIPS Act was we couldn’t get chips for cars. Those are not 18A chips, maybe those will go into self-driving cars, I don’t want to muddy the waters, but that’s an issue where there’s no economic case to build a trailing edge fab today. Isn’t that a better use of government resources?

PG: Well, I disagree with that being a better use of resource, but I also don’t think it’s a singular use of resource on leading edge. And let me tease that apart a little bit. The first thing would be how many 28 nanometer fabs should I be building new today?

Economically, zero.

PG: Right, yeah, and I should be building zero economically in Asia as well.

Right. But China is going to because at least they can.

PG: Exactly. The economics are being contorted by export policy, not because it’s a good economic investment as well.

Right. And that’s my big concern about this policy, which is if China actually approaches this problem rationally, they should flood the market like the Japanese did in memory 40 years ago.

PG: For older nodes.

For older nodes, that’s right.

PG: Yeah because that’s what they’re able to go do and that does concern me as well. At the same time, as we go forward, how many people are going to be designing major new designs on 28 nanometers? Well, no. They’re going to be looking at 12 nanometers and then they’re going to be looking at 7 nanometers and eventually they will be moving their designs forward, and since it takes seven years for one of these new facilities to both be built, come online, become fully operational in that scale, let’s not shoot behind the duck.

And so your sense is that you are going to keep all these 12, 14 nanometer fabs online, they’re going to be fully depreciated. Even if there was a time period where it felt like 20 nanometer was a tipping point as far as economics, a fully depreciated 14 nanometer fab—

PG: And I’m going to be capturing more of that because even our fab network, I have a whole lot of 10 nanometer capacity. I’m going to fill that with something, I promise you, and it’s going to be deals like we just did with Tower. We’re going to do other things to fill in those as well because the depreciated assets will be filled. I’m going to run those factories forever from my perspective, and I’ll find good technologies to fill them in.

Let’s talk about AI. I know we’re running short on time, but there’s the question. I feel like AI is a great thing for Intel, despite the fact everyone is thinking about it being GPU-centric. On one hand, Nvidia is supply constrained and so you’re getting wins. I mean, you said Gaudi is supply constrained, which is not necessarily as fast as an Nvidia chip, I think, is safe to say. But I think the bull case, and you articulated this in your earnings call, is AI moving to the edge. Tell me this case and why it’s a good thing for Intel.

PG: Well, first I do think AI moves to the edge and there are two reasons for that. One is how many people build weather models? How many people use weather models? That’s training versus inference, the game will be in inference. How do we use AI models over time? And that’ll be the case in the cloud, that’ll be the case in the data center, but we see the AI uses versus the AI training becoming the dominant workload as we go into next year and beyond. The excitement of building your own model versus, “Okay, now we build it. Now what do we do with it?”

And why does Intel win that as opposed to GPUs?

PG: For that then you say, in the data center, you say, “Hey, we’re going to add AI capabilities.” And now gen four, Sapphire Rapids is a really pretty good inferencing machine, you just saw that announced by Naver in Korea. The economics there, I don’t now have to port my application, you get good AI performance on the portion of the workload where you’re inferencing, but you have all the benefits of the software ecosystem for the whole application.

But importantly, I think edge and client AI is governed by the three laws. The laws of economics: it is cheaper to do it on the client versus in the cloud. The laws of physics: it is faster to do it on the client versus round tripping your data to the cloud. And the third is the laws of the land: do I have data privacy? So for those three reasons, I believe there’s all going to be this push to inferencing to the edge and to the client and that’s where I think the action comes. That’s why Meteor Lake and the AIPC is something— 

4. Slicing and Dicing: How Apollo is Creating a Deconstructed Bank – Marc Rubinstein

Securitisation as a technology changed finance. By allowing loans to be repackaged for resale, it paved the way for the disintegration of the traditional value chain that cleaved loan origination to funding sources.

The basic form of an asset-backed security goes back a long time, but the modern-day version was born 40 years ago when First Boston, Salomon Brothers and Freddie Mac divided up residential mortgage pools into different tranches that allowed bondholders to be paid at different rates. Investors could choose between buying more expensive, higher rated bonds backed by tranches with first claim on payment flows, or purchasing subordinated bonds that were less expensive, lower rated and riskier. This technique helped the mortgage-backed securities market grow from $30 billion in 1982 to $265 billion in 1986.

The market soon spread, moving beyond mortgages in the 1980s to include student loans, auto loans and credit card receivables. Eventually, issuers securitised more exotic revenue streams, creating, for example, Bowie bonds securitised by revenues from David Bowie’s back catalogue and even Bond bonds securitised by revenues from James Bond movies. Market growth was aided by a friendly regulatory environment, improvements in computing power and new information technologies. With increasing precision, the risks and revenues associated with debts could be identified, catalogued, isolated and sold.

The securitisation process involves a chain of participants. In a stylised version, a borrower sits at one end and takes out a loan from an originator. The originator then sells the loan into a special purpose entity which issues bonds against it, with the help of an underwriter. To provide originators with liquidity, banks offer warehouse facilities which act as a kind of institutional credit card, allowing them to finance pre-agreed eligible assets. The underwriter manages the sale of bonds to investors. To ensure payment flows continue uninterrupted, a servicer sits underneath the process, collecting cash from the borrower and passing it through to the investor…
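The tranche mechanics described above – senior bondholders paid in full first, subordinated bondholders absorbing losses first – can be illustrated with a toy payment waterfall. The tranche names and dollar amounts below are hypothetical, chosen only to show the priority of payments:

```python
def waterfall(collections, tranches):
    """Distribute cash collected from borrowers to tranches in order of seniority.

    tranches: list of (name, amount_due) pairs, most senior first.
    Senior tranches are paid in full before any cash reaches subordinated ones.
    """
    payments = {}
    remaining = collections
    for name, due in tranches:
        paid = min(due, remaining)
        payments[name] = paid
        remaining -= paid
    return payments

# A pool collects $80 against $100 of total claims (hypothetical numbers)
print(waterfall(80, [("senior", 60), ("mezzanine", 25), ("equity", 15)]))
# {'senior': 60, 'mezzanine': 20, 'equity': 0}
```

The senior tranche is made whole, the mezzanine tranche takes a partial loss, and the equity tranche is wiped out – which is why the senior bonds command higher ratings, higher prices and lower yields.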

…In its 13 years of experience investing in asset-backed securities, Apollo has deployed over $200 billion of capital. Its annualised loss rate: just 1.3 basis points. 

But Apollo reckons that it could deploy more if only it had access to more origination. “There is no shortage of capital,” said CEO Marc Rowan at an investor day two years ago. “What there is, is a shortage of assets.”

Hence, the firm has reversed back down the value chain into direct origination. And because it’s not necessarily able to invest in everything its origination platforms throw off, it has built up a capital solutions group as well, to distribute asset-backed loans to other market participants. Apollo also recently lifted a business from Credit Suisse which it has renamed Atlas SP that offers warehouse facilities, securitisation and syndication to other originators. So, like a deconstructed bank, it now operates right across the value chain in a fairly unique way.

Apollo currently operates 16 different origination engines. They operate as stand-alone companies focused on their particular niche, independently capitalised and with their own management and board of directors. In total, the firm has invested around $8 billion of equity capital into these businesses; they collectively manage $130 billion of assets and employ 3,900 staff. The companies are at different stages of maturity: Seven manage less than $2 billion of assets, including two that Apollo launched de-novo; six manage between $2 billion and $10 billion of assets; and three manage in excess of $20 billion…

…The problem with operating a range of origination platforms is that their track record – at least as public businesses – is not very good. Origination businesses need to manage two risks: liquidity risk and credit risk.

Historically, liquidity risk has brought many down. Their reliance on market funding sources entwines their fortunes with market sentiment, and markets can be skittish. Following the Russian debt crisis in 1998, market disruption led to a steep fall in demand among investors for risky assets, including subprime securitisations, even before a recession took hold three years later. Subprime originators saw their own borrowing costs skyrocket. In the two years following the crisis, eight of the top 10 subprime lenders declared bankruptcy, ceased operations or sold out to stronger firms…

…Apollo argues that its long-term insurance liabilities are a better match for asset financing than commercial paper, money markets or even a bank’s deposits. The firm may have a point. Deposit outflows at Silicon Valley Bank, Signature Bank and First Republic highlight that bank funding isn’t what it was and that its realised duration may be lower than anticipated.

The second risk is credit risk. Apollo reckons its diversification helps – across originators and across asset types. Its platforms operate over 30 product lines and each deploys a large funnel. Since being founded in 2008, MidCap has closed only 2,000 deals out of around 29,000 identified opportunities, on which it issued 6,800 term sheets. Overall, the group’s platforms target a conversion of between 5% and 10% of opportunities. Such a large funnel avoids adverse selection.
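As a quick check on those funnel numbers (using MidCap’s figures as quoted above):

```python
# MidCap's deal funnel since 2008, per the excerpt
opportunities, term_sheets, closed = 29_000, 6_800, 2_000

print(f"term sheets issued: {term_sheets / opportunities:.1%}")  # 23.4% of opportunities
print(f"deals closed:       {closed / opportunities:.1%}")       # 6.9%, within the 5%-10% target
```

So MidCap’s realised conversion sits comfortably inside the 5%–10% range the group targets.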

5. Everything You Can’t Predict – Jack Raines

You would be hard-pressed to find a technological development from the last 20 years that is more important than “the cloud.”…

…Interestingly, the first company to launch an enterprise cloud solution wasn’t Amazon, Microsoft, or Google.

It was IBM.

Yes, IBM, whose stock price appreciated by a whopping 2.39% between August 2000 and April 2023, was the first entrant to the cloud space.

So, what went wrong?

IBM was the face of the computer industry for most of the 20th century, and in July 2002, they unveiled a service called Linux Virtual Services, which offered customers a radical new payment structure.

Historically, IBM had locked customers into long-term, fixed-price contracts in exchange for access to different hardware and software products. In contrast, Linux Virtual Services would allow customers to run their own software applications through IBM mainframes and pay based on their usage.

According to IBM, this usage-based payment model would result in savings of 20% – 55%.

Linux Virtual Services should have kickstarted a proliferation of cloud-based services, but instead, it was shut down a few years later…

…In 2002, IBM, a $130B computing giant with unlimited resources and a multi-decade head start, launched an enterprise cloud offering aimed at commoditizing computing power.

In 2002, Amazon, a $10B e-commerce store, was solving an internal engineering bottleneck.

In 2006, IBM shut down its cloud storage service.

In 2006, Amazon launched its cloud storage service.

In 2023, IBM is still worth $130B.

In 2023, Amazon is worth 10 IBMs, largely due to the success of AWS.

What went wrong at IBM? No one really knows, but Corry Wang, a former equity researcher at Bernstein, speculates that IBM’s sales team may have had misaligned incentives. In 2002, sales teams would have earned larger commissions on higher-priced, fixed contracts than on cheaper, usage-based contracts, and the new offerings would have cannibalized current customers as well. Why, as a salesperson, would you sell a service that made you less money?

Meanwhile, Amazon realized, almost by accident, that their internal solutions to infrastructure bottlenecks could be exported and sold as services. And Amazon didn’t have current SaaS customers to worry about cannibalizing, so their salespeople were free to sell the service to anyone.

16 years later, Amazon is the market leader in cloud, and IBM is stuck in 2006…

…Because it shows that predicting the future is easy, but predicting who wins in that future is much, much more difficult. By 2001, plenty of tech experts could have told you that cloud computing was going to emerge as an important technological development in a decade.

But how many of those experts would have predicted that an online bookstore would dominate the cloud market?

Picking trends is easy. Picking winners is hard.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, Microsoft, Netflix, and TSMC. Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI (2023 Q3)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2023 Q3 earnings season.

The way I see it, artificial intelligence (or AI) really leapt into the zeitgeist in late 2022 or early 2023 with the public introduction of DALL-E 2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

Meanwhile, the latest earnings season for the US stock market – for the third quarter of 2023 – is coming to its tail-end. I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series; the older commentary can be found in the earlier editions of this series.

With that, here is the latest commentary, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb’s management sees generative AI as an opportunity to reimagine the company’s product and transform Airbnb into the ultimate travel agent

First, I think that we are thinking about generative AI as an opportunity to reimagine much of our product category and product catalog. So if you think about how you can sell a lot of different types of products and new offerings, generative AI could be really, really powerful.  It can match you in a way you’ve never seen before. So imagine Airbnb being almost like the ultimate travel agent as an app. We think this can unlock opportunities that we’ve never seen. 

Airbnb’s management believes that digital-first travel companies will benefit from AI faster than physical-first travel companies

So Airbnb and OTAs are probably going to benefit more quickly from AI than, say, a hotel will just because Airbnb and OTAs are more digital. And so the transformation will happen at the digital surface sooner.

Airbnb’s management believes that Airbnb’s customer service can improve significantly by placing an AI agent between a traveller and her foreign host

One of the areas that we’re specifically going to benefit is customer service. Right now, customer service in Airbnb is really, really hard, especially compared to hotels. The problem is, imagine you have a Japanese host booking with — hosting a German guest and there’s a problem, and you have these 2 people speaking different languages calling customer service, there’s a myriad of issues, there’s no front desk, we can’t go on-premise. We don’t understand the inventory, and we need to try to adjudicate an issue based on 70 different policies that can be up to 100 pages long. AI can literally start to solve these problems where agents can supervise a model that can, in seconds, come up with a better resolution and provide front desk level support in nearly every community in the world. 

Airbnb’s management believes that AI can lead to a fundamentally different search experience for travellers

But probably more importantly, Kevin, is what we can do by reimagining the search experience. Travel search has not really changed much in 25 years since really Expedia, Hotels.com, it’s pretty much the same as it’s been. And Airbnb, we fit that paradigm. There’s a search box, you enter a date location, you refine your results and you book something. And it really hasn’t changed much for a couple of decades. I think now with AI, there can be entirely different booking models. And I think this is like a Cambrian moment for like the Internet or mobile for travel where suddenly an app could actually learn more about you. They could ask you questions and they could offer you a significantly greater personalized service. Before the Internet, there were travel agents, and they actually used to learn about you. And then travel got unbundled, it became self-service and it became all about price. But we do think that there’s a way that travel could change and AI could lead the way with that. 

Airbnb’s management believes that all travel apps will eventually trend towards being an AI travel agent

And I generally think for sure, as Airbnb becomes a little more of a so-called like AI travel agent, which is what I think all travel apps will trend towards to some extent.

Alphabet (NASDAQ: GOOG)

Alphabet’s management has learnt a lot from trials of Search Generative Experience (SGE), and the company has added new capabilities (videos and images); Search Generative Experience has positive user feedback and strong adoption

This includes our work with the Search Generative Experience, which is our experiment to bring generative AI capabilities into Search. We have learned a lot from people trying it, and we have added new capabilities like incorporating videos and images into responses and generating imagery. We have also made it easier to understand and debug generated code. Direct user feedback has been positive with strong growth in adoption.

SGE allows Alphabet to serve a wider range of information needs and provide more links; ads will continue to be relevant in SGE and users actually find ads useful in SGE; Alphabet wants to experiment with SGE-native ad formats

With generative AI applied to Search, we can serve a wider range of information needs and answer new types of questions, including those that benefit from multiple perspectives. We are surfacing more links with SGE and linking to a wider range of sources on the results page, creating new opportunities for content to be discovered. Of course, ads will continue to play an important role in this new Search experience. People are finding ads helpful here as they provide useful options to take action and connect with businesses. We’ll experiment with new formats native to SGE that use generative AI to create relevant, high-quality ads customized to every step of the Search journey.

Alphabet’s management thinks SGE could be a subscription service; it’s still very early days in the roll-out of SGE and management wants to get the user experience correct (Alphabet has gone through similar transitions before, so management is confident about this)

And I do think over time, there will be newer paths, just like we have done on YouTube. I think with the AI work, there are subscription models as a possible path as well. And obviously, all of the AI investments we are doing applies across Cloud, too, and I’m pretty optimistic about what’s ahead there as well…

…On the first part about SGE, we are still in very, very early days in terms of how much we have rolled it out, but we have definitely gotten it out to enough people both geographically across user segments and enough to know that the product is working well, it improves the experience and — but there are areas to improve, which we are fine-tuning. Our true north here is getting at the right user experience we want to, and I’m pretty comfortable seeing the trajectory. And we’ve always worked through these transitions, be it from desktop to mobile or from now mobile to AI and then to experience. And so it’s nothing new. 

Alphabet is making it easier for people to identify AI-generated content through digital watermarks

One area we are focused on is making sure people can more easily identify when they are encountering AI-generated content online. Using new technology powered by Google DeepMind SynthID, images generated by Vertex AI can be watermarked in a way that is invisible to the human eye without reducing the image quality. Underlying all this work is the foundational research done by our teams at Google DeepMind and Google Research. 

Alphabet’s management is committed to changing Alphabet’s cost base to accommodate AI investments; Alphabet has, for a long time, driven its cost curves down spectacularly, and management is confident that it will be the same for the current build-out of AI infrastructure

As we expand access to our new AI services, we continue to make meaningful investments in support of our AI efforts. We remain committed to durably reengineering our cost base in order to help create capacity for these investments in support of long-term sustainable financial value. Across Alphabet, teams are looking at ways to operate as effectively as possible focused on their biggest priorities…

…When I look at the strength of the work we have done across our technical infrastructure as a company, at various stages, at a given moment in time when we adopted new generations of technology, we have looked at the cost of it. But then the efficiency curves we have driven on top of it have always been phenomenal to see. And I see the current moment as no different. Already through this year, we are driving significant efficiencies both in our models, in training costs and serving costs, and in our ability to adapt what’s needed to the right use case.

Alphabet has new tools (including those powered by AI) that make it easier for (1) creators to produce content for YouTube’s various formats, (2) creators to connect with advertisers, and (3) advertisers to drive higher ROI on advertising

At Made On YouTube in September, we announced new tools that make it easier to create engaging content. Dream Screen is an experimental feature that allows creators to add AI-generated video or image backgrounds to Shorts. And YouTube Create is a new mobile app with a suite of production tools for editing Shorts, longer videos or both…

…AI will do wonders for creation and storytelling. From Dream Screen and YouTube Create, which Sundar talked about, to features that auto-dub content in multiple languages, flip or trim existing assets, remix and clip videos and more, we’re just getting started. We’re also helping brands break through with speed and scale across the funnel to drive results. Spotlight Moments launched last week. It uses AI to identify trending content around major cultural moments for brand sponsorship opportunities. There’s video reach campaigns, which are expanding to in-feed and Shorts and will be generally available in November. AI is helping advertisers find as many people as possible in their ideal audience for the lowest possible price. Early tests are delivering 54% more reach at 42% lower cost. And then with video view campaigns, AI is serving skippable ads across in-stream, in-feed and Shorts and helping advertisers earn the maximum number of views at the lowest possible cost. So far, they’re driving 40% more views on average versus in-stream alone. Then for YouTube and other feed-based services, there’s our new demand gen campaign, which launched in April, rolled out worldwide last week and was designed for the needs of today’s social marketers to engage people as they stream, scroll and connect. It combines video and image ads in one campaign, with access to 3 billion users across YouTube and Google and the ability to optimize and measure across the funnel using Google AI. Demand gen is already driving success for brands like Samsung and Toyota.

Alphabet’s management believes that Google Cloud offers optimised infrastructure for AI training and inference, and more than half of all funded generative AI start-ups are Google Cloud customers; Alphabet’s TPUs (tensor processing units) are winning customers; Google Cloud’s Vertex AI platform offers more than 100 AI models and the number of active generative AI projects built on Vertex AI grew by seven times sequentially

We offer advanced AI optimized infrastructure to train and serve models at scale. And today, more than half of all funded generative AI start-ups are Google Cloud customers. This includes AI21 Labs, Contextual, Elemental Cognition, Writer and more. We continue to provide the widest choice of accelerator options. Our A3 VMs [virtual machines] powered by NVIDIA’s H100 GPU are generally available, and we are winning customers with Cloud TPU v5e, our most cost-efficient and versatile accelerator to date. On top of our infrastructure, our Vertex AI platform helps customers build, deploy and scale AI-powered applications. We offer more than 100 models, including popular third-party and open-source models, as well as tools to quickly build search and conversation use cases. From Q2 to Q3, the number of active generative AI projects on Vertex AI grew by 7x, including Highmark Health, which is creating more personalized member materials.

Duet AI, Alphabet’s AI assistant, is built on Google’s large foundation models and is used by large companies to boost developer productivity and smaller companies to help with data analytics; more than 1 million testers have used Duet AI in Google Workspace

Duet AI was created using Google’s leading large foundation models and is specially trained to help users to be more productive on Google Cloud. We continue expanding its capabilities and integrating it across a wide range of cloud products and services. With Duet AI, we are helping leading brands like PayPal and Deutsche Bank boost developer productivity, and we are enabling retailers like Aritzia and Gymshark to gain new insights for better and faster business results…

…In Workspace, thousands of companies and more than 1 million trusted testers have used Duet AI. They are writing and refining content in Gmail and Docs, creating original images from text within Slides, organizing data in Sheets and more.

Alphabet’s new consumer hardware products have an AI chip – Tensor G3 – built in them

Our portfolio of Pixel products are brought to life, thanks to our combination of foundational technologies AI, Android and Google Tensor. Google Tensor G3 is the third generation of our tailor-built chip. It’s designed to power transformative experiences by bringing the latest in Google AI research directly to our newest phones. 

Gemini is the foundation of the next-generation AI models that Google DeepMind will be releasing throughout 2024; Gemini will be multi-modal and will be used internally across all of Alphabet’s products as well as offered externally via Vertex

On Gemini, obviously, it’s effort from our combined Google DeepMind team. I’m very excited at the progress there as we’re working through getting the model ready. To me, more importantly, we are just really laying the foundation of what I think of as the next-generation series of models we’ll be launching throughout 2024. The pace of innovation is extraordinarily impressive to see. We are creating it from the ground up to be multimodal, highly efficient tool and API integrations and, more importantly, laying the platform to enable future innovations as well. And we are developing Gemini in a way that it is going to be available at various sizes and capabilities, and we’ll be using it immediately across all our products internally as well as bringing it out to both developers and cloud customers through Vertex. So I view it as a journey, and each generation is going to be better than the other. And we are definitely investing, and the early results are very promising.

Alphabet’s AI tools are very well received by advertisers and nearly 80% of advertisers use at least one AI-powered search ads product

Our AI tools are very well received. AI and gen AI are top of mind for everybody, really. There’s a ton of excitement, lots of questions about it. Many understand the value. Nearly 80% of our advertisers already use at least one AI-powered search ads product. And yes, we’re hearing a lot of good feedback on, number one, our Ads AI Essentials, which are really helping to unlock the power of AI and set up for durable ROI growth on the advertiser side — those are products like the foundation for data and measurement, things like Google tag, consent mode and so on; and obviously, Search and PMax, we talked about it; and then all the gen AI products, all those different ones. So there’s a whole lot of interest in those products, yes.

Amazon (NASDAQ: AMZN)

Anthropic, a high-profile AI startup, recently chose AWS as its primary cloud provider, and Anthropic will work with Amazon to further develop Amazon’s Trainium (for training AI models) and Inferentia (for AI inference work) chips; Amazon’s management believes the collaboration with Anthropic will help Amazon bring further price performance advantages to Trainium and Inferentia

Recently, we announced the leading LLM maker Anthropic chose AWS as its primary cloud provider and will use Trainium and Inferentia to build, train and deploy its future LLMs. As part of this partnership, AWS and Anthropic will collaborate on the future development of Trainium and Inferentia technology. We believe this collaboration will be helpful in continuing to accelerate the price performance advantages that Trainium and Inferentia deliver for customers.

Perplexity is another AI startup that chose to run their models with Trainium and Inferentia

We are also seeing success with generative AI start-ups like Perplexity AI, who chose to go all in with AWS, including running future models on Trainium and Inferentia.

Amazon’s management believes that Amazon’s Trainium and Inferentia chips are very attractive to people in the industry because they offer better price-performance characteristics and they can meet demand; Anthropic and Perplexity’s decisions to go with Trainium and Inferentia are statements to that effect

I would also say our chips, Trainium and Inferentia, as most people know, there’s a real shortage right now in the industry in chips, and it’s really hard to get the amount of GPUs that everybody wants. And so it’s just another reason why Trainium and Inferentia are so attractive to people. They have better price performance characteristics than the other options out there, but also the fact that you can get access to them. And we’ve done, I think, a pretty good job providing supply there and ordering meaningfully in advance as well. And so you’re seeing very large LLM providers make big bets on those chips. I think Anthropic deciding to train their future LLM model on Trainium and using Inferentia as well is really a statement. And then you look at the really hot start-up Perplexity AI, who also just made a decision to go all in on Trainium and Inferentia. So those are two examples.

Amazon recently announced the general availability of Amazon Bedrock (AWS’s LLMs-as-a-service), which gives access to a variety of 3rd-party large language models (LLMs) as well as Amazon’s own LLM called Titan; Meta’s Llama-2 LLM will also be on Bedrock, the first time it is available through a fully-managed service

In the middle layer, which we think of as large language models as a service, we recently introduced general availability for Amazon Bedrock, which offers customers access to leading LLMs from third-party providers like Anthropic, Stability AI, Cohere and AI21, as well as from Amazon’s own LLM called Titan. Customers can take those models, customize them using their own data (but without leaking that data back into the generalized LLM), and have access to the same security, access control and features that they run the rest of their applications with in AWS, all through a managed service. In the last couple of months, we’ve announced the imminent addition of Meta’s Llama 2 model to Bedrock, the first time it’s being made available through a fully managed service.

Amazon’s management believes that Bedrock helps customers experiment rapidly with different LLMs and is the easiest way to build and scale enterprise-ready generative AI applications; customer reaction to Bedrock has been very positive

Also through our expanded collaboration with Anthropic, customers will gain access to future Anthropic models through Bedrock, with exclusive early access to unique features, model customization and the ability to fine-tune the models. And Bedrock has added several new compelling features, including the ability to create agents which can be programmed to accomplish tasks like answering questions or automating workflows. In these early days of generative AI, companies are still learning which models they want to use, which models they use for what purposes and which model sizes they should use to get the latency and cost characteristics they desire. In our opinion, the only certainty is that there will continue to be a high rate of change. Bedrock helps customers with this fluidity, allowing them to rapidly experiment with and move between model types and sizes, and enabling them to pick the right tool for the right job. The customer reaction to Bedrock has been very positive, and general availability has buoyed that further. Bedrock is the easiest way to build and scale enterprise-ready generative AI applications and a real game changer for developers and companies trying to get value out of this new technology…

Bedrock’s ability to let customers conduct fast experiments is very useful because customers sometimes get surprised at the true costs of running certain AI models

Because what happens is you try a model, you test the model, you like the results of the model and then you plug it into your application, and what a lot of companies figure out quickly is that using the large models and the large sizes ends up often being more expensive than what they anticipated and what they want to spend on that application. And sometimes there’s too much latency in getting the answers as it shovels through the really large models. And so customers are experimenting with lots of different types of models and then different model sizes to get the cost and latency characteristics that they need for different use cases. It’s one of the things that I think is so useful about Bedrock: customers are trying so many variants right now that to have a service that not only lets you leverage lots of third-party as well as Amazon large language models, but also lots of different sizes, and then makes the transition of moving those workloads easy between them, is very advantageous.
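The experimentation Jassy describes is essentially a search for the smallest, cheapest model that still satisfies an application’s latency and quality constraints. Here is a minimal sketch of that selection logic in Python; the model names, prices, latencies and evaluation scores are all hypothetical placeholders, not real Bedrock figures:

```python
# Hypothetical model profiles: cost per 1K output tokens (USD), p95 latency
# (seconds) and a quality score from an internal eval. Illustrative numbers only.
MODELS = {
    "large-100b": {"cost_per_1k": 0.060, "p95_latency_s": 4.0, "eval_score": 0.92},
    "medium-30b": {"cost_per_1k": 0.015, "p95_latency_s": 1.5, "eval_score": 0.85},
    "small-7b":   {"cost_per_1k": 0.004, "p95_latency_s": 0.5, "eval_score": 0.70},
}

def monthly_cost(model: str, requests_per_day: int, tokens_per_request: int) -> float:
    """Projected monthly spend for one application on one model."""
    per_day = requests_per_day * (tokens_per_request / 1000) * MODELS[model]["cost_per_1k"]
    return per_day * 30

def pick_model(latency_budget_s: float, min_score: float) -> str:
    """Cheapest model that fits both the latency budget and the quality bar."""
    viable = [m for m, p in MODELS.items()
              if p["p95_latency_s"] <= latency_budget_s and p["eval_score"] >= min_score]
    if not viable:
        raise ValueError("no model meets the constraints")
    return min(viable, key=lambda m: MODELS[m]["cost_per_1k"])

# The sticker shock described above: the largest model at production volume...
print(round(monthly_cost("large-100b", 100_000, 500), 2))  # 90000.0
print(round(monthly_cost("small-7b", 100_000, 500), 2))    # 6000.0
# ...which is why teams trade down to the cheapest model that still clears the bar.
print(pick_model(2.0, 0.80))  # medium-30b: large is too slow, small misses the bar
```

In practice, this is the loop a managed service like Bedrock is meant to shorten: swap the model, rerun the evaluation, and compare cost and latency without rebuilding the application.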

Amazon CodeWhisperer, AWS’s coding companion, has a lot of early traction and has become more powerful recently by having the capability to be customised on a customer’s own code base (a first-of-its-kind feature)

Generative AI coding companion Amazon CodeWhisperer has gotten a lot of early traction and got a lot more powerful recently with the launch of its new customization capability. The #1 enterprise request for coding companions has been wanting these companions to be familiar with customers’ proprietary code bases, not just having code companions trained on open source code. Companies want the equivalent of a long-time senior engineer who knows their code base well. That’s what CodeWhisperer just launched, another first of its kind out there in its current form, and customers are excited about it.

Amazon’s management believes that customers want to bring AI models to their data, not the other way around – and this is an advantage for AWS as customers’ data resides within AWS

It’s also worth remembering that customers want to bring the models to their data, not the other way around. And much of that data resides in AWS as the clear market segment leader in cloud infrastructure. 

There are many companies that are building generative AI apps on AWS and this number is growing fast

The number of companies building generative AI apps on AWS is substantial and growing very quickly, including Adidas, Booking.com, Bridgewater, Clariant, GoDaddy, LexisNexis, Merck, Royal Philips and United Airlines, to name a few.

Generative AI’s growth rate within AWS is very fast – even faster than Amazon’s management expected – and management believes that the absolute amount of generative AI business within AWS compares very favourably with other cloud providers

You can see it also just in the growth rate for us in generative AI, which is very fast. Again, I have seen a lot of different numbers publicly. It’s real hard to measure apples-to-apples. But in our best estimation, the amount of growth we’re seeing in the absolute amount of generative AI business compares very favorably with anything else I’ve seen externally.

Generative AI is already a pretty significant business for AWS, but it’s still early days

What I would tell you is that we have been surprised at the pace of growth in generative AI. Our generative AI business is growing very, very quickly, as I mentioned earlier. And almost by any measure, it’s a pretty significant business for us already. And yet I would also say that companies are still in the relatively early stages.

All of Amazon’s significant businesses are working on generative AI applications, with examples including using generative AI to (1) help consumers discover products, (2) forecast inventory in various locations, (3) help 3rd-party sellers create new product pages, (4) help advertisers with image generation for ads, and (5) improve Alexa

Beyond AWS, all of our significant businesses are working on generative AI applications to transform their customer experiences. There are too many for me to name on this call, but a few examples include: in our stores business, we’re using generative AI to help people better discover products they want and to more easily access the information needed to make decisions. We use generative AI models to forecast inventory we need in our various locations and to derive optimal last-mile transportation routes for drivers to employ. We’re also making it much easier for our third-party sellers to create new product pages by entering much less information and getting the models to do the rest. In advertising, we just launched a generative AI image generation tool, where all brands need to do is upload a product photo and description to quickly create unique lifestyle images that will help customers discover products they love. And in Alexa, we built a much more expansive LLM and previewed the early version of this. Apart from being a more intelligent version of herself, Alexa’s new conversational AI capabilities include the ability to make multiple requests at once as well as more natural and conversational requests without having to use specific phrases.

Amazon’s management still believes in the importance of building the world’s best personal assistant and they think Alexa could be one of these assistants

We continue to be convicted that the vision of being the world’s best personal assistant is a compelling and viable one and that Alexa has a good chance to be one of the long-term winners in this arena. 

While Amazon’s management is pulling back Amazon’s capital expenditure on other areas, they are increasing capital expenditure for AI-related infrastructure

For the full year 2023, we expect capital investments to be approximately $50 billion compared to $59 billion in 2022. We expect fulfillment and transportation CapEx to be down year-over-year, partially offset by increased infrastructure CapEx to support growth of our AWS business, including additional investments related to generative AI and large language model efforts.

Apple (NASDAQ: AAPL)

Apple’s management sees AI and machine learning as fundamental technologies to the company and they’re integrated in virtually every product that Apple ships

If you kind of zoom out and look at what we’ve done on AI and machine learning and how we’ve used it, we view AI and machine learning as fundamental technologies, and they’re integral to virtually every product that we ship. 

Apple’s AI-powered features include Personal Voice in iOS17, and fall detection, crash detection, and ECG on the Apple Watch; Apple’s management does not want to label Apple’s AI-powered features with “AI” – instead the features are labelled as consumer benefits

And so just recently, when we shipped iOS 17, it had features like Personal Voice and Live Voicemail. AI is at the heart of these features. And then you can go all the way to then life-saving features on the Watch and the phone like fall detection, crash detection, ECG on the watch. These would not be possible without AI. And so we don’t label them as such, if you will. We label them as to what their consumer benefit is, but the fundamental technology behind it is AI and machine learning.

Apple is investing in generative AI but management has no details to share yet

In terms of generative AI, we have — obviously, we have work going on. I’m not going to get into details about what it is because as you know, we really don’t do that. But you can bet that we’re investing, we’re investing quite a bit. We are going to do it responsibly. And it will — you will see product advancements over time where those technologies are at the heart of them.

Arista Networks (NYSE: ANET)

From the vantage point of Arista Networks’ management, Oracle has become an important AI data centre company

Our historic classification of our Cloud Titan customers has been based on an industry definition of customers with, or likely to attain, greater than 1 million installed compute servers. Looking ahead, we will combine cloud and AI customer spend into one category called the Cloud and AI Titan sector. As a result of this combination, Oracle OCI becomes a new member of the sector, while Apple shifts to cloud specialty providers…

…So I think OCI has become a meaningful top-tier cloud customer and they belong in the Cloud Titan category, in addition to their AI investments as well. So for reasons of classification and definition, the change is very warranted. And yes, they happen to be a good customer of Arista; that’s nice as well.

Arista Networks’ management has observed that its large customers have different needs when it comes to AI and non-AI networking technologies 

During the past year, our Cloud Titan customers have been planning a different mix of AI networking and classic cloud networking for their compute and storage clusters.

Arista Networks’ management believes that the company’s recent deal with a public sector organisation to provide Ethernet networking technology for the organisation’s AI initiative is an example of why Ethernet is important in AI

Our next [ one ] showcases our expansion of Arista in the public sector with their AI initiative. This grant-funded project utilizes Arista’s simplified operational models with CloudVision. New AI workloads require high scale, high radix, high bandwidth and low latency, as well as a need for granular visibility. This build-out of a single EVPN-VXLAN based 400-gig fabric is based on deep buffer spines and underscores the importance of a lossless architecture for AI networking.

Arista Networks’ management is seeing its customers prioritise AI in their data centre spending right now, but demand for other forms of data centre-related spending will follow

We’ve always looked at the cloud network as a front end and a back end. And as we said last year, many of our cloud customers are favoring spending more on the back end with AI, which doesn’t mean they stop spending on the front end, but they’ve clearly prioritized and doubled down on AI this year. My guess is, as we look at the next few years, they’ll continue to double down on AI. But you cannot build an AI back-end cluster without thinking of the front end. So we’ll see a full cycle here: while today the focus is greatly on AI and the back end of the network, in the future we expect to see more investments in the front end as well.

Arista Networks’ management sees AI networking as being dominated by InfiniBand today, with some room for a combination of InfiniBand and Ethernet, but they still believe that AI networking will trend toward Ethernet over time, with 2025 being a potential inflection point

Today, if I look at the 5 major designs for AI networking, one of them is still very InfiniBand dominated; all the others we’re looking at are adopting a dual strategy of both Ethernet and InfiniBand. So I think AI networking is going to become more and more favorable to Ethernet, particularly with the Ultra Ethernet Consortium and the work they’re doing to define a spec. You’re going to see more products based on UEC. You’re going to see more of a connection between the back end and the front end using IP as a singular protocol. And so we’re feeling very encouraged that, especially in 2025, there will be a lot of production rollout of back end and, of course, front end based on Ethernet.

Arista Networks’ management sees networking spend as contributing to 10%-15% of the total cost of an AI data centre 

Coming back to this networking spend versus the rest of the GPUs, et cetera: I would say it started to get higher and higher with 100-gig, 400-gig and 800-gig, where the optics and the switches are more than 10%, perhaps even 15% in some cases, or 20%; a lot of it is governed by the cables and optics too. But the percentage hasn’t changed a lot in high-speed networking. In other words, it’s not too different between 10, 100, 200, 400 and 800. So you’ll continue to see that 10% to 15% range.
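That 10% to 15% share is easy to sanity-check with back-of-envelope arithmetic. The sketch below uses entirely illustrative cluster costs I have assumed for the example, not figures from the call:

```python
def network_share(total_cluster_cost: float, network_cost: float) -> float:
    """Fraction of an AI cluster budget going to switches, optics and cables."""
    return network_cost / total_cluster_cost

# Illustrative build (assumed numbers): 1,000 GPUs at $30k each,
# plus servers/storage/power, plus the high-speed networking layer.
gpu_cost = 1_000 * 30_000       # $30M of accelerators
other_infra = 10_000_000        # servers, storage, power distribution
network = 6_000_000             # switches, optics, cables
total = gpu_cost + other_infra + network

share = network_share(total, network)
print(f"networking is {share:.0%} of the cluster")  # 13%, inside the cited range
```

Because both the GPU bill and the networking bill scale with cluster size and link speed, the ratio stays roughly stable across generations, which is the point being made in the quote.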

Arista Networks’ management sees diversified activity when it comes to the development of AI data centres

[Question]  And just what you’re seeing in terms of other people kind of building out some of these AI clusters, if you classify some of those customers as largely focused on back end today, and those represent opportunities going forward? Or just kind of what the discussion is outside of the Cloud Titans amongst some of these other guys that are building very large networks?

[Answer] The Tier 2 cloud providers are doing exactly what the Tier 1s are doing, just at a smaller scale. So the activity is out there. Many companies are trying to build these clusters, maybe not hundreds of thousands of GPUs but thousands of GPUs together in their real estate, if they can get them. But the designs that we’re working on with them, the sort of features, the fine-tuning, are actually very, very similar to the cloud, just at a smaller scale. So we’re very happy with that activity, and this is across the board. It’s very positive to see this in the ecosystem, that it’s not limited to just 4 or 5 customers.

Arista Networks’ management is observing that data centre companies are facing a shortage of GPUs (graphics processing units) and they are trying to develop AI with smaller GPU clusters

I think they’re also waiting for GPUs like everyone else is. So there’s that common problem; we’re not the only one with lead time issues. But to clarify the comment on scale, Anshul and I are also seeing some very interesting enterprise projects at smaller scale. So a lot of customers are trying AI with small clusters, not too different from what we saw with HPC clusters back in the day.

Arista Networks’ management believes that good networking technology for AI requires not just good silicon, but the right software, so they are not concerned about Arista Networks’ suppliers moving up the stack

It’s not just the merchant silicon but how you can enable the merchant silicon with the right software and drivers, and this is an area where Arista really excels, and if you just have chips, you can’t build the system. But our system-wide features, whether it’s dynamic load balancing, or latency analyzer to really improve the job completion time and deal with that frequent communication in generative AI, is also fundamentally important…

… [Question] So I think there was a mention on merchant silicon earlier in the Q&A. And one of your merchant silicon partners has actually moved up the stack towards the service provider routing. I’m just curious if there’s any intention on going after that piece if that chip is made available to you?

[Answer] I believe you are referring to the latest announcement from Broadcom on their 25.6T Jericho chip that was announced recently.

[Question] Yes, the Qumran3D.

[Answer] Qumran3D, exactly. So it’s the same family, same features. And as you know, we’ve been a great partner of Broadcom for a long time, and we will continue to build new products. This is not a new entry, so to speak. We’ve been building these products that can be used on switches or routers for a while, and that bandwidth just doubled going to now 25.6. So you can expect some products from us in the future with those variants as well. But really — nothing really changed…

…And the investment we have made in our routing stack over the last 10 years, I want to say, has just gotten better and stronger. Powering the Internet, powering the cloud, powering AI, these are hard problems. And they require thousands of engineers of investment to build the right VXLAN, BGP routing, EVPN, et cetera. So it’s not just a chip. It’s how we enable the chip to do these complicated routing algorithms.

AI is becoming a really important component of Arista Networks’ customers

We’re simply seeing AI is going to become such an important component of all our cloud titans that it is now a combined vertical.

Datadog (NASDAQ: DDOG)

Datadog’s management is excited about generative AI and large language models and they believe that the adoption of AI will lead to additional growth in cloud workloads

Finally, we continue to be excited about the opportunity in generative AI and Large Language Models. First, we believe adopting NextGen AI will require the use of cloud and other modern technologies and drive additional growth in cloud workloads.

Datadog is building LLM observability products

So we are continuing to invest by integrating with more components at every layer of the new AI stack and by developing our own LLM observability products. 

Datadog’s management is seeing adoption of AI across many of its customers, but the activity is concentrated in AI-native customers

And while we see signs of AI adoption across large parts of our customer base, in the near term, we continue to see AI-related usage manifest itself most acutely with next-gen AI native customers who contributed about 2.5% of our ARR this quarter.

Datadog is adding value to its own platform using AI with one example being Bits AI, Datadog’s test-and-analysis tool

Besides observing the AI stack, we also expect to keep adding value to our own platform using AI. Datadog’s unified platform and purely SaaS model, combined with strong multiproduct adoption by our customers, generates a large amount of deep and precise observability data. We believe combining AI capabilities with this broad data set will allow us to deliver differentiated value to customers. And we are working to productise differentiated value through recently announced capabilities such as our Bits AI assistant, AI-generated synthetic tests and AI-led error analysis and resolution, and we expect to deliver many more related innovations to customers over time.

Datadog’s management is seeing that AI-native customers are using Amazon’s AWS whereas the larger enterprises that are using AI are using Microsoft’s Azure

Interestingly enough, when we look at our cohort of customers that we consider to be AI-native and built largely on AI, including the AI providers, they tend to be on different clouds. What we see is that the majority of those companies actually have a lot of their usage on AWS. Today, though, the larger of these customers are on Azure. So we see really several different adoption trends there that I think are interesting to the broader market.

Datadog’s management is seeing broad usage of AI across Datadog’s customers, but the customers are adopting AI only at low volumes

Whereas we see broad usage of AI functionality across the customer base, but at low volumes, and it corresponds to the fact that for most customers or most enterprises really, they’re still in the early stages of developing and shipping applications. So for now, the usage is concentrated among the model providers.

Datadog’s management sees a lot of opportunity for Datadog as AI usage proliferates – for example, management believes that the widespread use of AI will result in the creation of a lot of code and this code will need to be monitored

So on the DevSecOps side, I think it’s too early to tell how much revenue opportunity there is in the tooling specifically there. When you think of the whole spectrum of tools, the closer you get to the developer side, the harder it is to monetize, and the further you get towards operations and infrastructure, the easier it is to monetize. You can ship things that are very useful and very accretive to our platform because they get you a lot of users, a lot of attention and a lot of stickiness that are harder to monetize. So we’ll see where on the spectrum that is. What we know, though, is that broader Generative AI up and down the stack from the components themselves, the GPUs all the way up to the models and the various things that are used to orchestrate them and store the data and move the data around, all of that is going to generate a lot of opportunity for us. We said right now, it’s concentrated among the AI natives, largely model providers. But we see that it’s going to broaden and concern a lot more of our customers down the road…

…So in general, the more complexity there is, the more useful observability is, and the more you see the value from writing code to actually understanding it and observing it. So, to caricature, if you spend a whole year writing 5 lines of code that are really very deep, you actually know those 5 lines pretty well; maybe you don’t observe them because you understand exactly how they work and what’s going on with them. On the other hand, if, thanks to all the major advances of technology and all of these AI tools, you can just very quickly generate thousands of lines of code, ship them and start operating them, you actually have no idea how these work and what they do. And you need a lot of tooling and observability to actually understand that and keep driving that and secure it and do everything you need to do with it over time. So we think that overall, these increases in productivity are going to favor observability.

Datadog’s management is also trying to guess how transformative AI will be, but there are signs that AI’s impact will be truly huge

In terms of the future growth of AI, look, I think like everyone, we’re trying to guess how transformative it’s going to be. It looks like it’s going to be pretty big, if you judge from just internally, how much of that technology we are adopting and how much of a productivity impact it seems to be having.

AI-related use cases are still just a small fraction of the overall usage of Datadog’s products, but Datadog’s management thinks that AI will drive a lot of the company’s growth in the future 

So again, today, we only see a tiny bit of it, which is early adoption by model providers and a lot of companies that are trying to scale up and experiment and figure out how it applies to their businesses and what they can ship to use the technology. But we think it’s going to drive a lot of growth in the years to come.

Datadog’s management can’t tell when Datadog’s broader customer base will start ramping up AI workloads but they are experimenting; most of the innovation happening right now is concentrated among the model providers

[Question] Olivier, you called out the 2.5 points from AI native customers a few times, but you’ve also said that the broader customer base should start adding AI workloads to our platform over time. When do you think that actually takes place and the broader customer base starts to impact that AI growth in more earnest?

[Answer] We don’t know. And I think it’s too early to tell. For one part, there’s some uncertainty in terms of — these customers are beginning to figure out what it is they are going to ship to their own customers. I think everybody is trying to learn that right now and experimenting. But the other part is also that right now, the innovation is largely concentrated among the model providers. And so it’s rational right now for most customers to rely on those instead of deploying their own infrastructure. Again, we think it’s likely going to change. We see a lot of demand and interest in other ways to host models and run models by customers and all those things like that. But today, these are the trends of the market, basically.

Etsy (NASDAQ: ETSY)

Etsy’s management is improving the company’s search function by combining humans and machine learning technology to better identify the quality of each product listing on the Etsy platform

We’re moving beyond relevance to the next frontier of search focused on better identifying the quality of each Etsy listing, utilizing humans and ML technology so that from a highly relevant result set, we bring the very best of Etsy to the top, personalized to what we understand of your tastes and preferences. For example, from the start of the year, we’re tracking to a ninefold increase in the number of human-curated listings on Etsy to over 1.5 million listings by year-end. We’re also utilizing ML models designed to determine the visual appeal of items and incorporating that information into our search algorithms. 
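The idea of re-ranking an already-relevant result set by quality signals can be sketched in a few lines. This is an illustrative toy, not Etsy's actual system; the weights, field names, and `visual_appeal` score are all hypothetical stand-ins for what an ML model and human curation might produce.

```python
# Illustrative sketch: re-rank an already-relevant result set by blending
# relevance with quality signals (an ML model's visual-appeal estimate plus
# a human-curation flag). All weights and field names are hypothetical.

def final_score(listing, w_relevance=0.6, w_quality=0.3, w_curated=0.1):
    """Blend relevance with quality signals; all inputs are in [0, 1]."""
    return (w_relevance * listing["relevance"]
            + w_quality * listing["visual_appeal"]
            + w_curated * (1.0 if listing["human_curated"] else 0.0))

def rerank(listings):
    """Sort a relevant result set so the highest-quality items surface first."""
    return sorted(listings, key=final_score, reverse=True)

results = [
    {"id": "a", "relevance": 0.9, "visual_appeal": 0.2, "human_curated": False},
    {"id": "b", "relevance": 0.8, "visual_appeal": 0.9, "human_curated": True},
]
print([x["id"] for x in rerank(results)])  # the curated, visually appealing "b" wins
```

The point of the blend is exactly what the quote describes: starting from a highly relevant set, quality signals decide what rises to the top.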

Etsy’s management is using generative AI to improve the Etsy search experience when buyers enter open-ended queries, which helps build purchase frequency

There’s also a huge opportunity to evolve the Etsy experience so that we show buyers a more diverse set of options when they search for open-ended head query items such as back-to-school. On the left of this slide, you can see an example of how a search for back-to-school items looks on Etsy. We generally show multiple very similar versions of customized pencils, stickers, lawn signs and so on, all mixed together. This is suboptimal as it offers buyers only a few main ideas on the first page of search and requires a ton of cognitive load to distinguish between virtually identical items. We’ve recently launched a variety of experiments with the help of Gen AI to evolve these types of head query searches. As we move into 2024, when a buyer searches for broad queries, we expect to be able to show a far more diverse and compelling set of ideas, all beautifully curated by organizing search results into a number of ideas for you that are truly different and helping to elevate the very best items within each of these ideas, we can take a lot of the hard work out of finding exactly the perfect item. And help build frequency as we highlight the wide range of merchandise available on Etsy.
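The diversification described above, grouping near-identical items into distinct "ideas" and elevating the best of each, can be sketched as a simple interleaving. This is a minimal illustration under assumed inputs: the `idea` cluster label is given here, whereas in practice it would come from a model.

```python
# Illustrative sketch of diversifying a head-query result page: group
# near-duplicate listings into "ideas" (here a precomputed cluster label),
# then interleave the best item from each idea rather than showing several
# virtually identical items in a row.
from collections import defaultdict
from itertools import zip_longest

def diversify(listings):
    groups = defaultdict(list)
    for item in sorted(listings, key=lambda x: x["score"], reverse=True):
        groups[item["idea"]].append(item)   # each group stays score-sorted
    # round-robin across ideas: best pencil, best sign, best sticker, ...
    tiers = zip_longest(*groups.values())
    return [item for tier in tiers for item in tier if item is not None]

page = [
    {"id": 1, "idea": "pencils", "score": 0.9},
    {"id": 2, "idea": "pencils", "score": 0.8},
    {"id": 3, "idea": "lawn signs", "score": 0.7},
    {"id": 4, "idea": "stickers", "score": 0.6},
]
print([x["id"] for x in diversify(page)])  # [1, 3, 4, 2]: one idea per slot first
```

The first page now shows one strong example of each idea before any duplicates, which is the "less cognitive load, more ideas" effect the quote is after.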

Etsy’s management is using machine learning to identify product listings that do not conform to the company’s product policies, and listing takedowns are already up 140% year-on-year

We’ve hired a lot of people, and we also have been investing a lot in machine learning and machine learning is really helping us to be able to identify among the 120 million listings on Etsy, those that may not conform with our policy. Takedowns are up 140% year-over-year. 

Fiverr (NYSE: FVRR)

Fiverr’s management has developed Fiverr Neo, a generative AI tool that helps customers scope their projects better and match them with suitable freelance talent, just like a human recruiter would, just better; management believes that Fiverr Neo will help save customers time when they are looking for freelance talent

The vision for Fiverr Neo is quite wild – we imagine Neo will serve as a personalized recruiting expert that can help our customers more accurately scope their projects and get matched with freelance talent, just like a human recruiter, only with more data and more brain power. What we have done so far is leverage the existing LLM engines to allow customers to express their project needs in natural language, which Neo will synthesize and define the scope before matching the client with a short list of choices pulled from the entire Fiverr freelancer database. It’s a substantial step forward from the existing experience and streamlines the time the customer needs to make an informed decision.

Fiverr’s management used a combination of Fiverr’s own software and LLMs from other companies to build Fiverr Neo

So there’s a lot of learning as we build this product. And what we’re doing is really a hybrid of technologies. Some of them are being developed by us. Some are off the shelf from most of the leading companies that are developing LLMs, which have partnered with us. And we’re putting this to the maximum. I think a lot of these systems are not yet optimized for large scale and high performance, but we find our own ways of developing a lot of this technology to provide a very smooth experience to our customers.

Fiverr Neo is still new, but users are already experiencing more accurate matches

In terms of Fiverr Neo, we’re very pleased with the rollout. Obviously, a very, very young product, but we’re seeing over 100,000 users that are trying the product. And what we’re seeing from their experience is that we’re able to provide more accurate matches, which is basically what we wanted to do, and higher engagement and satisfaction levels, which we’re very happy with, and the beginnings of repeat usage of the product.

Fiverr’s management thinks that AI has a positive impact on the product categories that Fiverr can introduce to its marketplace and management is ensuring that Fiverr’s catalog will contain any new skills that the AI-age will require; management thinks that a lot of AI hype at the beginning of the year has died down and the world is looking for killer AI applications

So I did address this also in how we think about next year and the fact that AI both impacts the efficiency of how we work and allows us to do pretty incredible things in our product. It also has a positive impact on the categories that we can introduce. So again, we’re not getting into specific category breakdown. But what we’re seeing on the buyer side, I think we’ve introduced these categories, and these categories continue growing. I think that a lot of the hype that surrounded AI in the beginning of the year subsided and right now, it’s really looking for the killer applications that could be developed with AI, and we’re developing some of them and our customers are as well. So these are definitely areas where we continue seeing growth, but not just that, but we continue investing in the catalog side to ensure that the new types of skills that pop up are going to be addressed on the Fiverr marketplace.

Mastercard (NYSE: MA)

Mastercard’s management is using AI to improve the company’s fraud-related solutions and has signed agreements in Argentina, Saudi Arabia, and Nigeria in this area

AI also continues to play a critical role powering our products and fueling our network intelligence. We’re scaling our AI-powered transaction fraud monitoring solution, which delivers real-time predictive scores based on a unique blend of customer and network level insights. This powerful solution gives our customers the ability to take preventive action before the transaction is authorized. This quarter alone, we signed agreements in Argentina, Saudi Arabia and Nigeria with financial institutions and fintechs who will benefit from early fraud detection and with merchants who will experience less friction and higher approval rates.

MercadoLibre (NASDAQ: MELI)

MercadoLibre’s management is very excited about AI and how it can help MercadoLibre improve the user experience and its business operations

As you know, we don’t guide, but there are many exciting things going on, particularly, obviously, AI. That hopefully will enable us to provide our users a better experience, enable us to launch innovative ideas, and also scale and gain efficiencies, whether it is in customer service, or whether it is in fraud prevention or whether it is in the way our developers, 15,000 developers, go about developing and performing quality control, et cetera. So obviously, looking forward for the next 3 years, I think that’s a key thing to look into.

MercadoLibre’s management is working on using AI to improve the company’s product search function and they are happy with the progress so far

Last question in terms of AI and search, we are working on that. I mean we are putting a lot of effort into building solutions around AI. I think we don’t have much to disclose as of now, but search, reviews, questions and answers, buy box and products, as Marcos was saying, copilot for our developer. We’re looking at the broad range of AI uses for MercadoLibre to boost consumer demand and efficiency. And we’re happy with the progress that we have so far, but not much to be said yet.

MercadoLibre’s management has been using AI for many years in fraud prevention and credit scoring for the company’s services

We have been using AI for a long time now, for many, many years, both in terms of fraud prevention and credit scoring. Both instances are pretty much use cases which are ideal for AI, because we have, in the case of fraud prevention, millions of transactions every day and with a clear outcome, either fraud or not fraud. So with the right variables, we can build a very strong model that is highly predictive and delivers really best-in-class fraud prevention. And with that knowledge, and given the experience we have been building on credits, we have also built our credit scoring models leveraging AI.
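The setup described, millions of transactions each with a clear fraud/not-fraud label, is the textbook supervised-learning case. The toy below trains a tiny logistic regression in plain Python to make the idea concrete; the two features, the data, and the weights are invented for illustration and stand in for vastly larger production systems.

```python
# Toy sketch of fraud scoring as supervised learning: labeled transactions
# train a logistic regression, which then scores new transactions in [0, 1].
# Features, data, and hyperparameters are illustrative only.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """Fit weights and bias by stochastic gradient descent on log loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def fraud_score(x, w, b):
    """Probability-like score that a transaction is fraudulent."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# features: [amount (normalised), is_new_device]; label: 1 = known fraud
X = [[0.1, 0], [0.2, 0], [0.9, 1], [0.8, 1]]
y = [0, 0, 1, 1]
w, b = train(X, y)
print(fraud_score([0.85, 1], w, b) > 0.5)  # high-risk transaction is flagged
```

The same pattern, labeled historical outcomes feeding a scoring model, is what management says carries over from fraud prevention to credit scoring.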

Meta Platforms (NASDAQ: META)

The next-generation Ray-Ban Meta smart glasses has embedded AI

The next generation of Ray-Ban Meta smart glasses, which are the first smart glasses with our Meta AI built in.

Meta Platforms’ management thinks glasses are an ideal form-factor for an AI device as it can see exactly what you see and hear what you hear

And in many ways, glasses are the ideal form factor for an AI device because they enable your AI assistant to see what you see and hear what you hear. 

Llama 2 is now the leading open source AI model with >30 million downloads last month

We’re also building foundation models like Llama 2, which we believe is now the leading open source model with more than 30 million Llama downloads last month.

Beyond generative AI, Meta Platforms’ management is using recommendation AI systems for the company’s Feeds, Reels, ads, and integrity systems and these AI systems are very important to the company; AI feed recommendations led to increases in time spent on Facebook (7%) and Instagram (6%)

Beyond that, there is also a different set of sophisticated recommendation AI systems that power our Feeds, Reels, ads and integrity systems. And this technology has less hype right now than generative AI but it is also very important and improving very quickly. AI-driven feed recommendations continue to grow their impact on incremental engagement. This year alone, we’ve seen a 7% increase in time spent on Facebook and a 6% increase on Instagram as a result of recommendation improvements.

Meta Platforms’ AI tools for advertisers has helped drive its Advantage+ advertising product to reach a US$10 billion revenue run-rate, with more than 50% of the company’s advertisers using Advantage+ creative tools

Our AI tools for advertisers are also driving results, with Advantage+ shopping campaigns reaching a $10 billion run rate and more than half of our advertisers using our Advantage+ creative tools to optimize images and text in their ad creative.

AI-recommended content has become increasingly incremental to engagement on Meta Platforms’ properties

AI-recommended content from unconnected accounts and feed continues to become increasingly incremental to engagement, including in the U.S. and Canada. These gains are being driven by improvements to our recommendation systems, and we see additional opportunities to advance our systems even further in the future as we deploy more advanced models.

Meta Platforms’ management believes that the company’s Business AIs can easily help businesses set up AIs to communicate with consumers at very low cost, which is important in developed economies where cost of labour is high (businesses in developing economies tend to hire humans to communicate with consumers)

Now I think that this is going to be a really big opportunity for our new Business AIs that I talked about earlier that we hope will enable any business to easily set up an AI that people can message to help with commerce and support. Today, most commerce and messaging is in countries where the cost of labor is low enough that it makes sense for businesses to have people corresponding with customers over text. And in those countries like Thailand or Vietnam, there’s a huge amount of commerce that happens in this way. But in lots of parts of the world, the cost of labor is too expensive for this to be viable. But with business AIs, we have the opportunity to bring down that cost and expand commerce and messaging into larger economies across the world. So making business AIs work for more businesses is going to be an important focus for us into 2024.

Meta Platforms’ management has started testing the company’s AI capabilities with a few partners in business messaging

We’ve recently started testing AI capabilities with a few partners and we’ll take our time to get the experience right, but we believe this will be a big unlock for business messaging in the future.

Meta Platforms’ management still believes in the benefits of open-sourcing Meta’s AI models: It increases adoption (which benefits the company as the security features and cost-efficiency of the models improves) and talent is more attracted to Meta Platforms

We have a pretty long history of open sourcing parts of our infrastructure that are not kind of the direct product code. And a lot of the reason why we do this is because it increases adoption and creates a standard around the industry, which often drives forward innovation faster so we benefit and our products benefit as well as there’s more scrutiny on kind of security and safety-related things so we think that there’s a benefit there.

And sometimes, more companies running models or infrastructure can make it run more efficiently, which helps reduce our costs as well, which is something that we’ve seen with open compute. So I think that there’s a good chance that, that happens here over time. And obviously, our CapEx expenses are a big driver of our costs, so any aid in innovating on efficiency is sort of a big thing there.

The other piece is just that over time with our AI efforts, we’ve tried to distinguish ourselves as being a place that does work that will be shared with the industry and that attracts a lot of the best people to come work here. So a lot of people want to go to the place to work where their work is going to touch most people. One way to do that is by building products that billions of people use. But if you’re really a focused engineer or researcher in this area, you also want to build the thing that’s going to be the standard for the industry. So that’s pretty exciting and it helps us do leading work.

Meta Platforms’ management thinks the AI characters that the company introduced recently could lead to a new kind of medium and art form and ultimately drive increasing engagement for users of the company’s social apps

We’re designing these to make it so that they can help facilitate and encourage interactions between people and make things more fun by making it so you can drop in some of these AIs into group chats and things like that just to make the experiences more engaging. So this should be incremental and create additional engagement. The AIs also have profiles in Instagram and Facebook and can produce content, and over time, going to be able to interact with each other. And I think that’s going to be an interesting dynamic and an interesting, almost a new kind of medium and art form. So I think that will be an interesting vector for increasing engagement and entertainment as well.

Meta Platforms’ management thinks that generative AI is a really exciting technology and that it changes everything and although it’s hard to predict what generative AI’s impact is going to be on how individuals use Meta’s services, they still think it’s worth investing in it

In terms of how big this is going to be, it’s hard to predict because I don’t think that anyone has built what we’re building here. I mean, there are some analogies, like what OpenAI is doing with ChatGPT, but that’s pretty different from what we’re trying to do. Maybe the Meta AI part of what we’re doing overlaps with the type of work that they’re doing, but the AI characters piece, there’s a consumer part of that, there’s a business part, there’s a creators part. I’m just not sure that anyone else is doing this. And when we’re working on things like Stories and Reels, there were some market precedents before that. Here, there’s technology which is extremely exciting. But I think part of what leading in an area and developing a new thing means is you don’t quite know how big it’s going to be. But what I predict is that I do think that the fundamental technology around generative AI is going to transform meaningfully how people use each of the different apps that we build…

…So I think you’re basically seeing that there are going to be — this is a very broad and exciting technology. And frankly, I think that this is partially why working in the technology industry is so awesome, right, is that every once in a while, something comes along like this, that like changes everything and just makes everything a lot better and your ability to just be creative and kind of rethink the things that you’re doing to be better for all the people you serve…

…But yes, it’s hard sitting here now to be able to predict like the metrics are going to be around, like what’s the balance of messaging between AIs and people or what the balance and Feeds between AI content and people content or anything like that. But I mean, I’m highly confident that this is going to be a thing and I think it’s worth investing in.

Meta Platforms’ management believes that generative AI will have a big impact on the digital advertising industry

It’s going to change advertising in a big way. It’s going to make it so much easier to run ads. Businesses that basically before would have had to create their own creative or images now won’t have to do that. They’ll be able to test more versions of creative, whether it’s images or eventually video or text. That’s really exciting, especially when paired with the recommendation AI.

Microsoft (NASDAQ: MSFT)

Microsoft’s management is making AI real for everyone through the introduction of Copilots

With Copilots, we are making the age of AI real for people and businesses everywhere. We are rapidly infusing AI across every layer of the tech stack and for every role and business process to drive productivity gains for our customers.

Microsoft’s management believes that Azure has the best AI infrastructure for both training and inference

We have the most comprehensive cloud footprint with more than 60 data center regions worldwide as well as the best AI infrastructure for both training and inference. And we also have our AI services deployed in more regions than any other cloud provider.

Azure AI provides access to models from OpenAI and open-sourced models (including Meta’s) and 18,000 organisations now use Azure OpenAI

Azure AI provides access to best-in-class frontier models from OpenAI and open-source models, including our own as well as from Meta and Hugging Face, which customers can use to build their own AI apps while meeting specific cost, latency and performance needs. Because of our overall differentiation, more than 18,000 organizations now use Azure OpenAI service, including new to Azure customers.

GitHub Copilot increases developer productivity by up to 55%; there are more than 1 million paid Copilot users and more than 37,000 organisations that subscribe to Copilot for business (up 40% sequentially)

With GitHub Copilot, we are increasing developer productivity by up to 55% while helping them stay in the flow and bringing the joy back to coding. We have over 1 million paid Copilot users and more than 37,000 organizations that subscribe to Copilot for business, up 40% quarter-over-quarter, with significant traction outside the United States.

Microsoft’s management is using AI to improve the healthcare industry: Dragon Ambient Experience (from the Nuance acquisition) has been used in more than 10 million patient interactions to-date to automatically document the interactions, and DAX Copilot can draft clinical notes in seconds, saving 40 minutes of documentation time daily for physicians

In health care, our Dragon Ambient Experience solution helps clinicians automatically document patient interactions at the point of care. It’s been used across more than 10 million interactions to date. And with DAX Copilot, we are applying generative models to draft high-quality clinical notes in seconds, increasing physician productivity and reducing burnout. For example, Atrium Health, a leading provider in the Southeastern United States, credits DAX Copilot with helping its physicians each save up to 40 minutes per day in documentation time.

Microsoft’s management has infused Copilot across Microsoft’s work-productivity products and tens of thousands of users are already using Copilot in early access

Copilot is your everyday AI assistant, helping you be more creative in Word, more analytical in Excel, more expressive in PowerPoint, more productive in Outlook and more collaborative in Teams. Tens of thousands of employees at customers like Bayer, KPMG, Mayo Clinic, Suncorp and Visa, including 40% of the Fortune 100, are using Copilot as part of our early access program.

Users find Copilot amazing and have enjoyed similar productivity gains as developers did with Github Copilot

Customers tell us that once they use Copilot, they can’t imagine work without it, and we are excited to make it generally available for enterprise customers next week. This quarter, we also introduced a new hero experience in Copilot, helping employees tap into their entire universe of work, data and knowledge using chat. And the new Copilot Lab helps employees build their own work habits for this era of AI by helping them turn good prompts into great ones…

…And in fact, the interesting thing is it’s not any one tool, right? The feedback is very clear that it’s the all up. You just keep hitting the Copilot button across every surface, right, whether it’s in Word to create documents, in Excel to do analysis or PowerPoint or Outlook or Teams. Like clearly, the Teams Meeting recap, which is an intelligent recap, right? It’s not just a dumb transcript. It’s like having a knowledge base of all your meetings that you can query and add to, essentially, the knowledge base of your enterprise. And so we are seeing broad usage across, and the interesting thing is by different functions, whether it’s in finance or in sales, by roles. We have seen productivity gains like we saw with developers in GitHub Copilot.

At the end of the day, Microsoft management is still grounded about the rate of adoption of Copilot in Office, since it is an enterprise product

And of course, this is an enterprise product. I mean at the end of the day, we are grounded on enterprise cycle times in terms of adoption and ramp. And it’s incrementally priced. So therefore, that all will apply still. But at least for something completely new, to have this level of usage already and this level of excitement is something we’re very, very pleased with.

Microsoft’s management recently introduced Security Copilot, the world’s first generative AI cybersecurity product, and it is seeing high demand

 We see high demand for Security Copilot, the industry’s first and most advanced generative AI product, which is now seamlessly integrated with Microsoft 365 Defender. Dozens of organizations, including Bridgewater, Fidelity National Financial and Government of Alberta, have been using Copilot in preview and early feedback has been positive.

Bing users have engaged in over 1.9 billion chats, and Bing has a new personalised answers feature and support for DALL-E 3 (more than 1.8 billion images have been created to date)

Bing users have engaged in more than 1.9 billion chats, and Microsoft Edge has now gained share for 10 consecutive quarters. This quarter, we introduced new personalized answers as well as support for DALL-E 3, helping people get more relevant answers and to create incredibly realistic images. More than 1.8 billion images have been created to date.

Bing is now incorporated into Meta’s AI chat experience

We’re also expanding to new end points, bringing Bing to Meta’s AI chat experience in order to provide more up-to-date answers as well as access to real-time search information. 

Azure saw higher-than-expected AI consumption

In Azure, as expected, the optimization trends were similar to Q4. Higher-than-expected AI consumption contributed to revenue growth in Azure.

Microsoft’s management is seeing new AI project starts in Azure, and these bring other cloud projects

Given our leadership position, we are seeing complete new project starts, which are AI projects. And as you know, AI projects are not just about AI meters. They have lots of other cloud meters as well. So that sort of gives you one side of what’s happening in terms of enterprise.

Microsoft’s management believes the company has very high operating leverage with AI, since the company is using one model across its entire stack of products, and this operating leverage goes down to the silicon level

Yes, it is true that we have — the approach we have taken is a full-stack approach all the way from whether it’s ChatGPT or Bing chat or all our Copilots all share the same model. So in some sense, one of the things that we do have is very, very high leverage of the one model that we used, which we trained, and then the one model that we are doing inferencing at scale. And that advantage sort of trickles down all the way to both utilization internally, utilization of third parties. And also over time, you can see that sort of stack optimization all the way to the silicon because the abstraction layer to which the developers are writing is much higher up than low-level kernels, if you will. So therefore, I think there is a fundamental approach we took, which was a technical approach of saying we’ll have Copilots and Copilot stack all available. That doesn’t mean we don’t have people doing training for open-source models or proprietary models. We also have a bunch of open-source models. We have a bunch of fine-tuning happening, a bunch of RLHF happening. So there’s all kinds of ways people use it, but the thing is we have scale leverage of one large model that was trained and one large model that’s been used for inference across all our first-party SaaS apps as well as our API in our Azure AI service…

…In addition, what Satya mentioned earlier in a question, and I just want to take every chance to reiterate it, if you have a consistent infrastructure from the platform all the way up through its layers that every capital dollar we spend, if we optimize revenue against it, we will have great leverage because wherever demand shows up in the layers, whether it’s at the SaaS layer, whether it’s at the infrastructure layer, whether it’s for training workloads, we’re able to quickly put our infrastructure to work generating revenue on our BEAM workloads. I mean I should have mentioned all the consumer workloads use the same frame.

Microsoft’s management believes that having the discipline to concentrate Microsoft’s tech stack and capital spend is important because the costs of developing and using AI can run up really quickly

I think, is very important for us to be very disciplined on both I’ll call it our tech stack as well as our capital spend all to be concentrated. The lesson learned from the cloud side is this, we’re not running a conglomerate of different businesses. It’s all one tech stack up and down Microsoft’s portfolio. And that I think is going to be very important because that discipline, given what the spend like — it will look like for this AI transition, any business that’s not disciplined about their capital spend accruing across all their businesses could run into trouble.

Nvidia (NASDAQ: NVDA)

Nvidia’s management believes that its chips, together with the Infiniband networking technology, are the reference architecture for AI

NVIDIA HGX with InfiniBand together are essentially the reference architecture for AI supercomputers and data center infrastructures.

Inferencing is now a major workload for Nvidia chips

Inferencing is now a major workload for NVIDIA AI compute.

Nvidia’s management is seeing major consumer internet companies ramping up generative AI deployment, while enterprise software companies are starting to do the same

Most major consumer Internet companies are racing to ramp up generative AI deployment. The enterprise wave of AI adoption is now beginning. Enterprise software companies such as Adobe, Databricks, Snowflake and ServiceNow are adding AI copilots and assistants to their platforms.

Recent US export controls have affected Nvidia’s chip exports to China, Vietnam, and parts of the Middle East

Toward the end of the quarter, the U.S. government announced a new set of export control regulations for China and other markets, including Vietnam and certain countries in the Middle East. These regulations require licenses for the export of a number of our products, including our Hopper and Ampere 100 and 800 series and several others. Our sales to China and other affected destinations derived from products that are now subject to licensing requirements have consistently contributed approximately 20% to 25% of data center revenue over the past few quarters. We expect that our sales to these destinations will decline significantly in the fourth quarter, though we believe they will be more than offset by strong growth in other regions.

Many countries are keen to invest in sovereign AI infrastructure, and Nvidia’s management is helping them do so, as serving this market is a multi-billion-dollar economic opportunity

Many countries are awakening to the need to invest in sovereign AI infrastructure to support economic growth and industrial innovation. With investments in domestic compute capacity, nations can use their own data to train LLMs and support their local generative AI ecosystem. For example, we are working with India’s government and its largest tech companies, including Infosys, Reliance and Tata, to boost their sovereign AI infrastructure. And French private cloud provider Scaleway is building a regional AI cloud based on NVIDIA H100, InfiniBand and NVIDIA AI enterprise software to fuel advancement across France and Europe. National investment in compute capacity is a new economic imperative and serving the sovereign AI infrastructure market represents a multibillion-dollar opportunity over the next few years…

…The U.K. government announced it will build one of the world’s fastest AI supercomputers, called Isambard-AI, with almost 5,500 Grace Hopper Superchips. Germany’s Jülich Supercomputing Centre also announced that it will build its next-generation AI supercomputer with close to 24,000 Grace Hopper Superchips and Quantum-2 InfiniBand, making it the world’s most powerful AI supercomputer with over 90 exaflops of AI performance…

…You’re seeing sovereign AI infrastructures. Countries now recognize that they have to utilize their own data, keep their own data, keep their own culture, process that data and develop their own AI.

Nvidia has a new chip with inference speeds that are 2x faster than the company’s flagship H100 GPUs (graphics processing units)

We also announced the latest member of the Hopper family, H200, which will be the first GPU to offer HBM3E, faster, larger memory to further accelerate generative AI and LLMs. It moves inference speed up to 2x compared to H100 GPUs for running LLMs like [indiscernible].

Major cloud computing services providers will soon begin to offer instances for Nvidia’s next-generation GPU, the H200  

Compared to the H100, H200 delivers an 18x performance increase for inference on models like GPT-3, allowing customers to move to larger models with no increase in latency. Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud will be among the first CSPs to offer H200-based instances starting next year.

Nvidia’s management is seeing very strong demand for Infiniband; management believes that Infiniband is critical in the deployment of LLMs (large language models); management believes that the vast majority of large-scale AI factories had standardised on Infiniband because of Infiniband’s vastly superior value proposition compared to Ethernet (data-traffic patterns are very different for AI and for typical hyperscale cloud environments)

Networking now exceeds a $10 billion annualized revenue run rate. Strong growth was driven by exceptional demand for InfiniBand, which grew fivefold year-on-year. InfiniBand is critical to gain the scale and performance needed for training LLMs. Microsoft made this very point last week highlighting that Azure uses over 29,000 miles of InfiniBand cabling, enough to circle the globe…

…The vast majority of the dedicated large-scale AI factories standardized on InfiniBand. And the reason for that is really because of its data rate and not only just the latency, but the way that it moves traffic around the network is really important. The way that you process AI in a multi-tenant hyperscale Ethernet environment, the traffic pattern is just radically different. And with InfiniBand and with software-defined networks, we could do congestion control, adaptive routing, performance isolation and noise isolation, not to mention, of course, the data rate and the low latency and the very low overhead that are a natural part of InfiniBand.

And so InfiniBand is not so much just a network. It’s also a computing fabric. We put a lot of software-defined capabilities into the fabric, including computation. We do floating point calculations and computation right on the switch and right in the fabric itself. And so that’s the reason why the difference between InfiniBand and Ethernet for AI factories is so dramatic. And the difference is profound and the reason for that is because you’ve just invested in a $2 billion infrastructure for AI factories, and a 20%, 25%, 30% difference in overall effectiveness, especially as you scale up, is measured in hundreds of millions of dollars of value. And if you were renting that infrastructure over the course of 4 or 5 years, it really adds up. And so InfiniBand’s value proposition is undeniable for AI factories.
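Huang’s arithmetic here is simple enough to sketch. The figures below only restate his example (a $2 billion AI factory, a 20% to 30% effectiveness gap between fabrics); the function itself is our illustration, not Nvidia’s model:

```python
# Back-of-envelope sketch of Huang's point: a modest effectiveness gap on a
# large AI factory is worth hundreds of millions of dollars. The extra
# useful compute is valued at the infrastructure's own cost basis.
def effectiveness_value(infra_cost_usd: float, effectiveness_gap: float) -> float:
    """Dollar value of the extra useful compute a more effective
    network fabric extracts from the same infrastructure spend."""
    return infra_cost_usd * effectiveness_gap

# A 25% effectiveness gap on a $2 billion AI factory:
print(f"${effectiveness_value(2e9, 0.25):,.0f}")  # $500,000,000
```

Amortised over a 4-to-5-year rental period, that half-billion-dollar gap is what makes the fabric choice, in Huang’s words, "undeniable".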

Nvidia’s management is expanding the company into Ethernet and Nvidia’s Ethernet technology performs better than traditional offerings; management’s go-to-market strategy for Nvidia’s new Ethernet technology is to collaborate with the company’s large enterprise partners

We are expanding NVIDIA networking into the Ethernet space. Our new Spectrum-X end-to-end Ethernet offering with technologies purpose-built for AI will be available in Q1 next year. We have support from leading OEMs, including Dell, HP and Lenovo. Spectrum-X can achieve 1.6x higher networking performance for AI communication compared to traditional Ethernet offerings…

…And our company’s AI — the AI for all of our employees — doesn’t have to be as high performance as the AI factories we use to train the models. And so we would like the AI to be able to run in an Ethernet environment. And so what we’ve done is we invented this new platform that extends Ethernet. It doesn’t replace Ethernet, it’s 100% compliant with Ethernet, and it’s optimized for east-west traffic, which is where the computing fabric is. It adds to Ethernet an end-to-end solution with BlueField as well as our Spectrum switch that allows us to perform some of the capabilities that we have in InfiniBand, not all but some, and we achieved excellent results.

And the way we go to market is we go to market with our large enterprise partners who already offer our computing solution. And so Dell, HP and Lenovo have the NVIDIA AI stack, the NVIDIA enterprise software stack. And now they integrate with BlueField as well as bundle and take to market their Spectrum switch, and they’ll be able to offer enterprise customers all over the world, with their vast sales force and vast network of resellers, a fully integrated, if you will, fully optimized end-to-end AI solution. And so that’s basically bringing AI to Ethernet for the world’s enterprise.

Nvidia’s management believes that a new class of data centres is emerging, which they call “AI factories”; these AI factories are being built all across the world

This is the traditional data centers that you were just talking about, where we represent about 1/3 of that. But there’s a new class of data centers. And this new class of data centers, unlike the data centers of the past where you have a lot of applications running, used by a great many people that are different tenants that are using the same infrastructure, and the data center stores a lot of files. These new data centers run very few applications, if not one application, used by basically one tenant. And it processes data. It trains models and it generates tokens, it generates AI. And we call these new data centers AI factories. We’re seeing AI factories being built out everywhere in just about every country.

Nvidia’s management is seeing the appearance of CSPs (cloud services providers) that specialise only in GPUs and processing AI

You’re seeing GPU-specialized CSPs cropping up all over the world, and they’re dedicated to doing really one thing, which is processing AI.

Nvidia’s management is seeing an AI adoption-wave moving from startups and CSPs to consumer internet companies, and then to enterprise software companies, and then to industrial companies

And so we’re just — we’re seeing the waves of generative AI starting from the start-ups and CSPs, moving to consumer Internet companies, moving to enterprise software platforms, moving to enterprise companies. And then — and ultimately, one of the areas that you guys have seen us spend a lot of energy on has to do with industrial generative AI. This is where NVIDIA AI and NVIDIA Omniverse come together. And that is really, really exciting work. And so I think we’re at the beginning of a basically across-the-board industrial transition to generative AI, to accelerated computing. This is going to affect every company, every industry, every country.

Nvidia’s management believes that Nvidia’s AI Enterprise service – where the company helps its customers develop custom AI models that the customers are then free to monetise in whatever manner they deem fit – will become a very large business for Nvidia

Our monetization model is that with each one of our partners, they rent a sandbox on DGX Cloud where we work together. They bring their data. They bring their domain expertise. We’ve got our researchers and engineers. We help them build their custom AI. We help them make that custom AI incredible. Then that custom AI becomes theirs, and they deploy it on a runtime that is enterprise grade, enterprise optimized and performance optimized, runs across everything NVIDIA. We have a giant installed base in the cloud, on-prem, anywhere. And it’s secure, securely patched, constantly patched and optimized and supported. And we call that NVIDIA AI Enterprise.

NVIDIA AI Enterprise is $4,500 per GPU per year. That’s our business model. Our business model is basically a license. Our customers then, with that basic license, can build their monetization model on top. In a lot of ways we’re wholesale, they become retail. They could have a per-subscription license base. They could do per instance or they could do per usage. There’s a lot of different ways that they could take to create their own business model, but ours is basically like a software license, like an operating system. And so our business model is: help you create your custom models, you run those custom models on NVIDIA AI Enterprise. And it’s off to a great start. NVIDIA AI Enterprise is going to be a very large business for us.
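The wholesale/retail split Huang describes can be sketched in a few lines. Only the $4,500 per-GPU-per-year license fee comes from the call; every customer-side number below is hypothetical:

```python
LICENSE_PER_GPU_PER_YEAR = 4_500  # NVIDIA AI Enterprise fee stated on the call

def partner_gross_profit(gpus: int, subscribers: int,
                         price_per_sub_per_year: float) -> float:
    """Gross profit for a partner that licenses NVIDIA AI Enterprise at
    wholesale and resells access under its own subscription pricing."""
    wholesale_cost = gpus * LICENSE_PER_GPU_PER_YEAR
    retail_revenue = subscribers * price_per_sub_per_year
    return retail_revenue - wholesale_cost

# Hypothetical partner: 1,000 licensed GPUs serving 100,000 subscribers
# at $120 per year each.
print(partner_gross_profit(1_000, 100_000, 120))  # 7500000
```

The same function works for per-instance or per-usage pricing; only the retail-revenue line changes, which is exactly the flexibility Huang is pointing at.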

PayPal (NASDAQ: PYPL)

PayPal’s management wants to use AI and the data collected from the company’s Rewards program to drive a shopping recommendation engine

For example, our PayPal Cashback Mastercard provides 3% cash back on PayPal purchases as well as cash back on all other purchases. Customers with this card make, on average, 56 more purchases with PayPal in the year after they adopt the product than they did the year before. Over 25 million consumers have used PayPal Rewards in the past 12 months, and we’ve put more than $200 million back in our customers’ pockets with cashback and savings during that time. But even more interesting, through our Rewards product, we have an active database of over 300 million SKUs of inventory from our merchant partners. These data points can help us use AI to power a robust shopping recommendation engine, to provide more relevant rewards and savings back to our customers.

PayPal’s management believes that machine learning and generative AI can be applied to the company’s data to improve fraud protection and better connect merchants and consumers

 Our machine learning capabilities combine hundreds of risk and fraud models with dozens of real-time analytics engines and petabytes of payments data to generate insights by learning users’ behaviors, relationships, interests and spending habits. This scale gives us a very unique advantage in the market. Our ability to create meaningful profiles with the help of AI is exceptionally promising. You will see us using our data and the advances in generative AI in responsible ways to further connect our merchants and consumers together in a tight flywheel.

Shopify (NASDAQ: SHOP)

Shopify’s management has integrated Shopify Magic – the company’s suite of free AI features – across its products

At Shopify, we believe AI is for everyone, and its capabilities should be captured and embedded across the entirety of a business. We’ve integrated Shopify Magic, our suite of free AI-enabled features, across our products and workflows.

Shopify Magic can help merchants craft personalised pages and content, and is designed specifically for commerce

Shopify Magic takes the power of Shopify and a merchant’s own data to make it work better for them, whether it’s enabling unique personalized page and content generation like instantly crafting an About Us page in your brand voice and tone or building a custom page to showcase all the sizes available in your latest product collection…

…Now unlike other AI products, the difference with Shopify Magic is it’s designed specifically for commerce. And it’s not necessarily just one feature or one product. It’s really embedded across Shopify to make these workflows in our products just easier to use. It makes it easier for merchants to run and scale their businesses. And of course, we think it’s going to unlock a ton of possibilities for not just small merchants, but merchants of all sizes. And we’re going to continue to work on that over time. It’s just going to get better and better.

Shopify’s management is using AI internally so that the company can make better decisions and improve its customer support

We ourselves are using AI inside of Shopify to make better decisions, but also for things like — things like our support team using it so that questions like domain reconfiguration, or a new password, or I don’t know what my password is. Those things should not necessarily require high-touch communication. What that does is it means that our support team are able to have much higher-quality conversations and act as business coaches for the merchants on Shopify. 

Shopify’s management believes that Shopify is uniquely positioned to harness the power of AI because commerce and the company represent the intersection of humans and technology, and that is the domain of AI

If you kind of think about commerce and Shopify, we kind of interact at the intersection of humans and technology, and that’s exactly what AI is really, really good at. So we think we’re uniquely positioned to harness the power of AI, and the ultimate result of it will be these capabilities for our merchants to grow their businesses.

Shopify has AI-powered language translations for merchants within its software products

This includes things like launching shipping guidance for merchants, navigating them through streamlined privacy guidance, initiating localization experiments across various marketing channels and bringing localization tools and AI-backed language translations to the Shopify App Store.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s management sees strong AI-related demand for its chips, but it’s not enough to offset cyclicality in its business 

Moving into fourth quarter 2023. While AI-related demand continues to be strong, it is not enough to offset the overall cyclicality of our business. We expect our business in the fourth quarter to be supported by the continued strong ramp of our 3-nanometer technology, partially offset by customers’ continued inventory adjustment.

TSMC’s management is seeing strong customer interest in its N2 technology node because the surge in AI-related demand leads to demand for energy-efficient computing, and TSMC’s technology platform goes beyond geometry-shrink (making transistors smaller), helping with power efficiency

The recent surge in AI-related demand supports our already strong conviction that demand for energy-efficient computing will accelerate in an intelligent and connected world. The value of our technology platform is expanding beyond the scope of geometry shrink alone and increasing toward greater power efficiency. In addition, as process technology complexity increases, the lead time and engagement with customers also start much earlier. As a result, we are observing a strong level of customer interest and engagement at our N2 similar to or higher than N3 at a similar stage from both HPC and smartphone applications.

TSMC’s management is seeing its customers add AI capabilities into smartphones and PCs and expects more of this phenomenon over time

We do see some activities from customers who add AI capability in end devices such as smartphones and PCs, [ so new growth ] engine in AI and PC, whatever. And we certainly hope that this will add to the course, help TSMC further strengthen our AI business…

…It has started right now, and we expect that more and more customers will put AI capability into the end devices, into their products.

TSMC’s management is seeing AI-related demand growing stronger and stronger and TSMC has to grow its manufacturing capacity to support this

The AI demand continues to grow stronger and stronger. So from TSMC’s point of view, now we have about — we have a capacity limitation to support them — to support the demand. We are working hard to increase the capacity to meet their demand, that’s for one thing.

TSMC’s management believes that any kind of AI-related chip will require leading edge chip technology and this is where TSMC excels

Whether customers develop CPUs, GPUs, AI accelerators or ASICs for all types of AI applications, the commonality is that they all require usage of leading-edge technology with stable yield delivery to support larger die sizes and a strong foundry design ecosystem. All of those are TSMC’s strengths. So we are able to address and capture a major portion of the market in terms of the semiconductor component in AI.

Tencent (NASDAQ: TCEHY)

Tencent’s management is increasing the company’s investments in its AI models and management wants to use AI for the company’s own benefit as well as that of society and its customers

We are increasing investment in our AI models, providing new features to our products and enhancing our targeting capabilities for both content and advertising. We aspire to position our leading AI capability, not only as a growth multiplier for ourselves, but also as a value provider to our enterprise customers and the society at large.

Tencent’s management recently upgraded the size and capabilities of the company’s foundational model – Tencent Hunyuan – which is now available to customers on a limited basis and deployed in some of Tencent’s cloud services

For cloud, we upgraded the size and capabilities of our proprietary foundation model, Tencent Hunyuan. We are making Hunyuan available on a limited basis to the public and to customers and deploying it in Tencent Meeting and Tencent Docs…

…We have upgraded our proprietary foundation model, Tencent Hunyuan. We have made Tencent Hunyuan bot initially available to a smaller but expanding number of users via a mini program. Hunyuan is also now powering meeting summarization in Tencent Meeting and content generation in Tencent Docs. And externally, we’re enabling enterprise customers to utilize our large language model via APIs or model as a Service solutions in our cloud in functions such as coding, data analysis and customer service automation.

Tencent’s management believes that Tencent is one of China’s AI leaders with the development of Hunyuan

In terms of Hunyuan and the overall AI strategy, I would say we have been pretty far along in terms of building up Hunyuan, and we feel that we are one of the leaders within China, and we are also continuously increasing the size of the model and preparing for the next generation of our Hunyuan model, which is going to be a mixture-of-experts architecture, which we believe will further improve the performance of our Hunyuan model. And by building up Hunyuan, we have actually really built up our capability in general AI across the board. Because Hunyuan, the transformer-based model, involves the handling of a large amount of training data, a large computing cluster and a very dedicated fine-tuning process in terms of improving the AI performance.

Tencent’s management is using AI to improve the company’s advertising offerings, in areas such as ad targeting, attribution accuracy, and the generation of advertising visuals – management sees this as evidence that Tencent’s AI investments are already generating tangible results

We have expanded our AI models with more parameters to increase their ad targeting and attribution accuracy, contributing to our ad revenue growth. We’re also starting to provide generative AI tools to advertiser partners, which enable them to dynamically generate ad visuals based on text prompts and to optimize ad sizes for different inventories, which should help advertisers create more appealing advertisements with higher click-through rates, boosting their transactions and our revenue…

…And the general AI capability is actually helping us quite a bit in terms of the targeting technology related to advertising and our content provisioning service. So in short video, by improving our AI capability, we can actually ramp up our video accounts at a faster clip. And in terms of the advertising business, by increasing the targeting capability, we are actually increasing our ad revenue by delivering better results to our customers. So our AI capabilities are generating tangible results at this point in time.

Tencent’s management wants to build an AI-powered consumer-facing smart agent down the road, but they are wary about the costs of inference

And we also feel that further in the future, when there’s actually a consumer-facing product that is more like a smart agent for people right now, that is further down the road, but it actually carries quite a bit of room for imagination…

…Now in terms of Hunyuan and, in the future, the potential of an AI assistant, I think it’s fair to say it’s still in a very, very early stage of concept design. So definitely not at the stage of product design yet and definitely not at the stage of thinking about monetization yet. But of course, right, if you look at any of these generative AI technologies at this point in time, inference cost is a real variable cost, which needs to be considered in the entire equation. And that, to some extent, adds to the challenge of the product design, too. So I would say, at this point in time, it’s actually very early stage. There is promise and room for imagination for the future.

Tencent’s management believes that the company has sufficient amount of chips for the company’s AI-related development work for a couple more generations; the US’s recent semiconductor bans will not affect the development of Tencent’s AI models, but it could affect Tencent’s ability to rent out these chips through Tencent Cloud

Now in terms of the chip situation, right now, we actually have one of the largest inventories of AI chips in China among all the players. And one of the key things that we have done was actually we were the first to put in an order for H800, and that allowed us to have a pretty good inventory of H800 chips. So we have enough chips to continue our development of Hunyuan for at least a couple more generations. And the ban does not really affect the development of Hunyuan and our AI capability in the near future. Going forward, we feel that the ban does actually affect our ability to resell these AI chips through our cloud services. So that’s one area that may be impacted.

Tencent’s management wants to explore the use of lower-performance chips for AI inference purposes and they are also exploring domestic suppliers of chips

Going forward, we feel that the ban does actually affect our ability to resell these AI chips through our cloud services. So that’s one area that may be impacted. And going forward, we will have to figure out ways to make our usage of AI chips more efficient. We’ll try to see whether we can offload a lot of the inference capability to lower-performance chips so that we can retain the majority of our high-performance AI chips for training purposes. And we will also try to look for domestic sources for these training chips.

Tencent’s management believes that AI can bring significant improvement to a digital ad’s current average click-through rate of 1%

Today, a typical click-through rate might be around 1%. As you deploy large language models, then you can make more use of the thousands of discrete data points that we have potentially for targeting and bring them to bear and turn them into reality. And you can get pretty substantial uplifts in click-through rate and therefore, in revenue, which is what the big U.S. social networks are now starting to see.
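To make the uplift concrete, here is a hedged sketch of the arithmetic. Only the 1% baseline comes from the quote; the uplift and revenue figures are hypothetical, and the sketch assumes ad revenue scales linearly with click-through rate:

```python
def ctr_revenue_uplift(baseline_ctr: float, improved_ctr: float,
                       baseline_revenue: float) -> float:
    """Extra ad revenue from a CTR improvement, holding impressions and
    price-per-click constant so revenue scales linearly with CTR."""
    return baseline_revenue * (improved_ctr / baseline_ctr - 1)

# Hypothetical: lifting CTR from 1.0% to 1.3% on $100M of ad revenue.
print(round(ctr_revenue_uplift(0.01, 0.013, 100e6)))  # 30000000
```

A 30-basis-point lift in CTR adds $30m on that base, which is why even "pretty substantial" single-digit improvements in targeting flow straight to the revenue line.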

Tesla (NASDAQ: TSLA)

Tesla vehicles have now driven over 0.5 billion miles with FSD (Full Self Driving) Beta and the mileage is growing

Regarding Autopilot and AI, our vehicle has now driven over 0.5 billion miles with FSD Beta, full self-driving beta, and that number is growing rapidly.

Tesla’s management sees significant promise with FSD v.12

We’re also seeing significant promise with FSD version 12. This is the end-to-end AI where it’s a photon count in, controls out or really you can think of it as there’s just a large bit stream coming in and a tiny bit stream going out, compressing reality into a very small set of outputs, which is actually kind of how humans work. The vast majority of human data input is optics, from our eyes. And so we are, like the car, photons in, controls out with neural nets, just neural nets, in the middle. It’s really interesting to think about that.

Tesla recently completed building a 10,000-GPU cluster of Nvidia H100 chips and has brought the cluster into operation faster than anyone has done before (the H100s will help with the development of Tesla’s full self-driving efforts)

We recently completed a 10,000th GPU cluster of H100s. We think probably bringing it into operation faster than anyone’s ever brought that much compute per unit time into production since training is the fundamental limiting factor on progress with full self-driving and vehicle autonomy.

Tesla’s management believes that AI is a game changer and wants the company to continue to invest in AI 

We will continue to invest significantly in AI development as this is really the massive game changer, and I mean, success in this regard in the long term, I think has the potential to make Tesla the most valuable company in the world by far.

Tesla’s management believes that the company’s AI team is the best in the world

The Tesla AI team is, I think, one of the world’s best, and I think it is actually by far the world’s best when it comes to real-world AI. But I’ll say that again: Tesla has the best real-world AI team on earth, period, and it’s getting better.

Tesla’s management is very excited about the company’s progress with autonomous driving and it is already driving them around with no human intervention

I guess, I am very excited about our progress with autonomy. The end-to-end, nothing but net, self-driving software is amazing. I — drives me around Austin with no interventions. So it’s clearly the right move. So it’s really pretty amazing. 

Tesla’s management believes that the company’s work in developing autonomous driving can also be applied to Optimus (the company’s autonomous robots)

And obviously, that same software and approach will enable Optimus to do useful things and enable Optimus to learn how to do things simply by looking. So extremely exciting in the long term.

Tesla’s management believes that Optimus will have a huge positive economic impact on the world and that Tesla is at the forefront of developing autonomous robots; Tesla’s management is aware of the potential dangers to humankind that an autonomous robot such as Optimus can pose, so they are designing the robot carefully

As I’ve mentioned before, given that the economic output is the number of people times productivity, if you no longer have a constraint on people, effectively, you’ve got a humanoid robot that can do as much as you’d like, your economy is twice the infinite or infinite for all intents and purposes. So I don’t think anyone is going to do it better than Tesla, not by a long shot. Boston Dynamics is impressive, but their robot lacks the brain. They’re like the Wizard of Oz or whatever. Yes, lacks the brain. And then you also need to be able to design the humanoid robot in such a way that it can be mass manufactured. And then at some point, the robots will manufacture the robots.

And obviously, we need to make sure that it’s a good place for humans in that future. We do not create some variance of the Terminator outcome. So we’re going to put a lot of effort into localized control of the humanoid robot. So basically, anyone will be able to shut it off locally, and you can’t change that even if you put — like a software update, you can’t change that. It has to be hard-coded.

Tesla’s management believes that Mercedes can easily accept legal liability for any FSD failures because Mercedes’ system is very limited, whereas Tesla’s FSD has far fewer limitations

[Question] Mercedes is accepting legal liability for when it’s Level 3 autonomous driving system drive pilot is active. Is Tesla planning to accept legal liability for FSD? And if so, when?

[Answer] I mean I think it’s important to remember for everyone that Mercedes’ system is limited to roads in Nevada and some certain cities in California, doesn’t work in the snow or the fog. It must have a [indiscernible] car in plains, only 40 miles per hour. Our system is meant to be holistic and drive in any conditions, so we obviously have a much more capable approach. But with those kind of limitations, it’s really not very useful.

Tesla’s management believes that technological progress building on technological progress is what will eventually lead to full self driving

I would characterize our progress in real world AI as a series of stacked log curves. I think that’s also true in other parts of AI, like [ LOMs ] and whatnot, a series of stacked log curves. Each log curve gets higher than the last one. So if we keep stacking them, we keep stacking logs, eventually, we get to FSD.

The Trade Desk (NASDAQ: TTD)

The Trade Desk’s management believes that AI will change the world, but not everyone working on AI is delivering meaningful impact

AI has immense promise. It will change the world again. But not everyone talking about AI is delivering something real or impactful.

The Trade Desk’s management is not focusing the company’s AI-related investments on LLMs (large language models) – instead, they are investing in deep-learning models to improve bidding, pricing, value, and ad relevance for Trade Desk’s services

Large Language Models (the basis of ChatGPT) aren’t the highest priority places for us to make our investments in AI right now. Deep learning models pointed at bidding, pricing, value, and ad relevance are perfect places for us to concentrate our investments in AI—all four categories have private betas and some of the best engineers in the world pointed at these opportunities.

The Trade Desk’s management believes that there are many areas where AI can be infused into the digital advertising dataset that the company holds

Second is the innovation coming from AI and the many, many opportunities we have ahead of us to find places to inject AI into what may be the most rich and underappreciated data asset on the Internet, which we have here at The Trade Desk.

The Trade Desk’s management believes that traders in the digital advertising industry will not lose their jobs to AI, but they might lose their jobs to traders who know how to work with AI

Traders know that their jobs are not going to be taken away by AI. But instead, they have to compete with each other. So their job could be taken away from a trader who knows how to use AI really well until all of them are looking at ways to use the tools that are fueled by AI that were provided, where AI is essentially doing 1 or 2 things. It’s either doing the math for them, if you will, of course, with very advanced learning models or, in other cases, it’s actually their copilot.

Old Navy achieved a 70% reduction in cost to reach each unique household using The Trade Desk’s AI, Koa

A great example of an advertiser pioneering new approaches to TV advertising with a focus on live sports is Old Navy…  But as Old Navy quickly found out, programmatic guaranteed has limitations. Programmatic guaranteed, or PG, does not allow Old Navy to get the full value of programmatic such as frequency management, audience targeting and the ability to layer on their first-party data. So they took the next step in the form of decision biddable buying within the private marketplace and focused on live sports inventory. CTV live sports advertising was appealing because it offered an opportunity to expose their brand against very high premium content that might be more restrictive and expensive in a traditional linear environment. They were able to use Koa, The Trade Desk’s AI, to optimize pacing and frequency management across the highest-performing inventory. As a result, they saw a 70% reduction in the cost to reach each unique household versus their programmatic guaranteed performance. 
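The headline metric here is simply campaign spend divided by deduplicated reach; a toy comparison with hypothetical figures shows what a 70% reduction looks like:

```python
# Toy illustration with hypothetical numbers: "cost to reach each unique
# household" is campaign spend divided by the number of distinct households
# reached. Better frequency capping stretches the same budget across more
# unique households, lowering the per-household cost.

def cost_per_unique_household(spend: float, unique_households: int) -> float:
    return spend / unique_households

budget = 100_000  # dollars (hypothetical)

pg_cost = cost_per_unique_household(budget, 200_000)   # $0.50 per household
ai_cost = cost_per_unique_household(budget, 666_667)   # ~$0.15 per household

reduction = 1 - ai_cost / pg_cost
print(f"Cost-per-household reduction: {reduction:.0%}")  # prints "Cost-per-household reduction: 70%"
```

The household counts above are invented purely to reproduce the 70% figure; the call does not disclose Old Navy’s actual spend or reach.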

Wix (NASDAQ: WIX)

Users of Wix’s Wix Studio product are enjoying its AI features

Users particularly [indiscernible] Studio responsive AI technology that simplify high-touch and time-sensitive tasks such as ensuring consistent design across web pages on different screen sizes. They are also enjoying the AI code assistant inside the new Wix IDE [integrated development environment], which allowed them to write clinic code and detect errors easily.

Wix recently released new AI products: (1) an SEO tool powered by AI called AI Meta Tags Creator, and (2) AI Chat Experience for Business, which allows new users to chat with an AI who will walk them through the Wix onboarding process; AI Chat Experience for Business is in its early days, but it has already driven a positive impact on Wix’s conversion and revenue

Earlier this week, we released our latest AI products. The first was AI Meta Tags Creator, a groundbreaking SEO tool powered by AI and our first AI-powered feature within our collection of SEO tools. Both self creators looking to generate SEO-friendly tags for each of their pages and professionals looking to enhance their efficiency and make real-time adjustments will benefit from this product. The second was our Conversational AI Chat Experience for Business. This feature, which is now live, paves the way to accelerate onboarding using AI in order to get businesses online more quickly and efficiently. These new tools continue to demonstrate our leadership in utilizing AI to help users of all types to succeed online… 

…Avishai spoke about the AI chat experience for business and its early weeks — and in its early weeks, we have already seen its positive impact on conversion and revenue.

Wix’s management expects Wix’s AI products to drive higher conversion, monetisation, and retention in the company’s Self Creators business

Compounding Partners growth is complemented by re-accelerating growth in our stable and profitable Self Creators business, which we saw once again this quarter. We expect our market-leading product innovation as well as our powerful AI products and technology to drive higher conversion, monetization and retention as we maintain our leadership position in the website building space.

Wix’s management believes that Wix’s AI products are helping to improve conversion because the new AI tools help to generate content for users, which reduces the inertia to create a website

I believe your second question was in regards to what kind of effect we are seeing from different AI products that we are launching, and mostly in regards to improvement in conversion. And we do actually see an improvement in conversion, which is probably the most important KPI by which we measure our success in deploying new products. The reason for that is that with AI, we are able to ask the user better questions and to understand in a smarter way, why is that the user is trying to achieve. From that, we are able to generate a better starting point for their business on top of Wix. And that is not just the skeleton, we are also able to fill in a lot of information, a lot of the content that the user would normally have to fill in manually. The result is that the amount of effort and knowledge that you need to create a website and for your business on Wix is dramatically reduced. And from that, we are able to see very good results in terms of improvement of conversion.

The use of AI tools internally has helped to improve Wix’s margins

So we saw this year a tremendous improvement in margins — in gross margin. And it came mostly from 2 places. The first one is a lot of improvements and savings that we have with our infrastructure, most of you know the hosting activity. So we had a lot of savings over there, but also about our core organization, for example, benefiting from all kind of AI tools that enable us to be more efficient.

Wix’s management believes that the company’s AI features help users with website-creation when it would normally take specialists to do so

And then because of the power of the AI tools, you can create very strong, very professional websites because the AI will continue and finish for you the thing that would normally require to specialize in different variations of web designs.

Zoom Video Communications (NASDAQ: ZM)

Zoom AI Companion, which helps create call summaries, is included in Zoom’s paid plans at no additional cost to customers, and more than 220,000 accounts have enabled it, with 2.8 million meeting summaries created to date

We also showcased newly-released innovations like Zoom AI Companion, as well as Zoom AI Expert Assist and a Quality Management for the Contact Center. Zoom AI Companion is especially noteworthy for being included at no additional cost to our paid plans, and has fared tremendously well with over 220,000 accounts enabling it and 2.8 million meeting summaries created as of today.

Zoom’s management believes that Zoom AI Companion’s meeting-summary feature is really accurate and really fast; management attributes the good performance to the company’s use of multiple AI models within Zoom AI Companion

I think we are very, very proud of our team’s progress since it launched the Zoom AI Companion, as I mentioned earlier, right, a lot of accounts enabled that. Remember, this is no additional cost to [ outpay ] the customer. A lot of features. One feature of that is like take a meeting summary, for example. Amazingly, it’s very accurate and it really save the meeting host a lot of time. And also, our federated AI approach really contributed to that success because we do not count on a single AI model, and in terms of latency, accuracy, and also the response, the speed and so on and so forth, I think, it really helped our AI Companion.

Free users of Zoom are unable to access Zoom AI Companion

For sure, for free users, they do not — they cannot enjoy this AI Companion, for sure, it’s a [ data health ] for those who free to approve for online upgrade. So anyway, so we keep innovating on AI Companion. We have high confidence. That’s a true differentiation compared to any other AI features, functionalities offered by some of our competitors.

Zoom’s management thinks that Zoom’s AI features for customers will be a key differentiator and a retention tool

But I think what Eric was just mentioning about AI is probably really going to be a key differentiator and a retention — retention tool in the future, because as a reminder, all of the AI Companion features come included for our free — sorry, for our paid users. So we’re seeing it not only help with conversion, but we really believe that for the long term, it will help with retention as well.

Zoom’s management believes that Zoom’s AI features will help to reaccelerate Zoom’s net dollar expansion rate for enterprise customers

[Question] You’re showing stabilization here on some of the major metrics, the Enterprise expansion metric took a step down to 105%. And so just wondering what it takes for that metric to similarly show stabilization as given like in Q1 renewal cohort and kind of walking through that. Anything on the product side for us to consider or just any other commentary there is helpful.

[Answer] Well, as a reminder, it’s a trailing 12-month metric. So as we’ve worsely seen our growth rates come down this year that’s following behind it. But absolutely, we believe that AI Companion in general as well as the success that we are seeing in Zoom Phone, in Zoom Contact Center, Zoom Virtual Agent, all of those will be key contributors to seeing that metric start to reaccelerate again as we see our growth rate starting to reaccelerate as well.
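The lag that the answer describes can be shown with a toy calculation (hypothetical quarterly figures; the real net-dollar-expansion computation weights cohorts by revenue, but a simple average is enough to show the effect):

```python
# Toy illustration with hypothetical numbers: a trailing-12-month metric
# blends the past four quarters, so the reported figure recovers more
# slowly than the underlying quarterly trend.

def trailing_12m(quarterly_rates: list[float]) -> float:
    """Simple average of the most recent four quarterly expansion rates."""
    last_four = quarterly_rates[-4:]
    return sum(last_four) / len(last_four)

# Quarterly expansion has already recovered from 103% to 111%...
rates = [1.03, 1.05, 1.08, 1.11]

# ...but the trailing metric still reports roughly 106.8%, because the
# weaker early quarters remain inside the 12-month window.
print(f"Trailing 12-month rate: {trailing_12m(rates):.2%}")
```

This is why management frames reacceleration as something that shows up in the metric only after the underlying growth rate has already turned.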

Zoom’s management thinks that Zoom’s gross margin could decline – but only slightly – due to the AI features in Zoom’s products being given away for free at the moment

[Question] As I look at gross margins, how sustainable is it keeping at these levels? I know AI Companion is being given away from as part of the package, I guess, prepaid users. But if you think about the cost to run these models, the margin profile of Contact Center and Phone. How durable is it to kind of sustain these levels?

[Answer] But we do expect there’s going to be some impact on gross margins. I mean we — I don’t think it’s going to be significant because the team will continue to operate in the very efficient manner that they do and run our co-los [co-locateds] that way, but we do expect there’s going to be some impact to our gross margin as we move forward.

Zoom’s management wants to leverage AI Companion across the entire Zoom platform

So again, it’s a lot of other features as well. And like for me, I also use our — the client, [indiscernible] client, connect and other services you can, right? You can have you compose e-mail as well, right? It’s a lot of features, right? And down the road awareness for the Whiteboard with AI Companion as well. Almost every service entire platform, we’re going to lever the AI Companion. So and a lot of features and the AI Companion.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet, Amazon, Apple, Datadog, Etsy, Fiverr, Mastercard, MercadoLibre, Meta Platforms, Microsoft, PayPal, Shopify, TSMC, Tencent, Tesla, The Trade Desk, Wix, and Zoom. Holdings are subject to change at any time.

What We’re Reading (Week Ending 19 November 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 19 November 2023:

1. Strange Ways AI Disrupts Business Models, What’s Next For Creativity & Marketing, Some Provocative Data – Scott Belsky

Increasing perversion of certain business models that are liable to be gamed or constrained by AI: We’re shifting from a world where data analysis required long cycles (analysts need lots of time to run queries, analyze, and then present findings in a way that people understand) to a new world of real-time optimization and insights (AI will mine the data to surface insights and make optimization decisions in real-time). But when businesses start optimizing themselves, all sorts of crazy things might start happening (or at least be suggested by the AI). What wild examples can we think of here? For dating apps, where the perfect match of two people increases churn, will Tinder or Bumble constrain the efficiency of AI so the product doesn’t become too “unsustainably effective”? Or in the world of music streaming: Since Spotify pays artists per song, will Spotify automatically optimize its algorithms to favor longer songs, taking into account the number of minutes each customer listens per day? As AI gets really good at optimization, some industries and business models will need to change…

AI will threaten subjectivity in purchase decisions, and with it the sway of brand and marketing. As we gain trust in the guidance of agent-assisted experiences, will the impact of brand, referral, and relationships in purchase decisions be diminished? Whether we’re buying batteries, sneakers, potato chips, or kitchen appliances, we are often influenced more than we care to admit by brand perceptions as opposed to factual comparisons. However, as your “AI Agent” gets to know you better – infused by every personal preference and previous purchase as well as every online review and consumer reports determination – you may start trusting the guidance of your agent more than any other signal. Perhaps the stakes are even more pronounced in the enterprise, where a procurement process tainted by human emotions, laziness, and previous relationships is the persistent fear of any CFO. How many purchase decisions are made for the wrong reasons – like relationships strengthened by football games and steak dinners with salespeople as opposed to the value and quality of a solution? Companies like Globality (in my portfolio, tackling enterprise procurement) and many others are leveraging AI to radically transform every function of a company. And if you look at this wave of companies overall, they are tackling the tremendous costs of subjectivity in decision making and are designed to yield better and more cost-effective solutions. Ultimately, elevating product meritocracy solves problems in both the worlds of consumer and enterprise purchasing. AI threatens subjective decision making tainted by human error and bias and will usher in an era where the best product at the best value may in fact win. This is a win for buyers, but may be quite disruptive to sellers who fail to innovate and endlessly optimize.

2. 15-Year Anniversary of the Business Owner Fund – Robert Vinall

If the wrong mental model or a lack of intellectual acumen is not why people fail to beat the market, what is? In my view, it is the sheer difficulty of remaining rational – i.e., buying businesses for less than they are worth and selling them for more than they are worth – when being constantly bombarded with market gyrations, news flow, social media, expert opinions, and any number of other influences. This is not something that I or, likely, anyone is immune to. Over the last five years, I likely became too optimistic in some of my assumptions as the bull market approached its peak in 2021.

That the emotions interfere with rational thought is not an original idea. The definitive book on psychological biases has already been written – “Thinking, Fast and Slow,” by Israeli-American Nobel Prize winner Daniel Kahneman…

…In practice, there are two main failings resulting from emotional biases that an investor needs to eliminate – turning overly pessimistic when markets or investments are down and turning overly optimistic when they are up. Perhaps it could even be argued that turning overly pessimistic when markets are down is the key risk to be cognizant of. Assuming markets trend upwards over time – albeit in a lumpy fashion – the single most important attribute in an investor is the ability to remain steadfast when the outlook temporarily looks bleak. However, given how damaging to wealth it can be to be caught up in a bubble, I will leave the list at two…

…I cannot overstate the importance of being independent. It is far easier to reach a rational conclusion about any topic if your thought process is unclouded by the opinions of others.

In the 1950s, psychologist Solomon Asch conducted a series of psychological experiments known as the Asch conformity experiments. A group of eight participants engaged in a simple perceptual task, whereby all but one of the participants were actors. Each participant was shown a card with one line on it followed by another card with three lines of differing lengths on it… The task was to state which of the three lines matched the length of the line on the first card, a simple task that everyone got right when left to their own devices. The catch is that in some trials, the actors gave the wrong response. When this happened, the study’s subject was far more likely to give the wrong response as well. The research suggests people are more likely to conform than they might expect.

Furthermore, the study found that people are more likely to conform when a) more people are present; b) the task is more difficult; and c) the other members of the group are of higher social status. The study found they are less likely to conform when able to respond privately.

This suggests to me four ways to increase your chances of thinking independently: avoid large groups, stick to simple investment opportunities, avoid experts and gurus, and make investment decisions in private, i.e., not in a committee. It is fascinating to me that three of these four measures suggest working alone is better than in a team, and the fourth (keeping it simple) is independent of team size. All the evidence suggests that the smaller the team size, the better the decision-making…

…Given that a good investment idea should be obvious, but the analysis before investing needs to be detailed, it stands to reason that the optimal approach is to analyse many ideas superficially and a handful (the ones considered for investment) in depth. In other words, it is necessary to kiss lots of frogs until you find a prince.

The investment blog, value and opportunity, occasionally takes an entire stock market and analyses every company in it. I am a big fan of the idea. The blogger’s reasons for rejecting an investment sometimes amount to just a single sentence, so these entries will not win any prizes for their intricacy. However, to the end of creating as many free options as possible, it is great…

…If the name of the game is to keep an even keel despite what is happening around you, I cannot for the life of me imagine how having a prominent social media presence can improve investment returns. Social media tends to amplify whatever emotions are prevalent at the time. When returns are good, you are likely to be celebrated as the next Warren Buffett. When they are bad, you will be pummelled mercilessly. An ideal environment is the exact opposite – one where you receive gentle encouragement when things are going badly and a reality check when things are going well. I realise that building a prominent social media presence is an effective way to raise capital fast. But if you live by the sword, expect to die by the sword…

…The single best way I have found to evaluate whether something is important or not is to consider whether I will care about it or even remember it in 10 years’ time. Changes in interest rates, disappointing quarters and even recessions are things that people will likely not care about in ten years’ time. A customer exodus, the breach of debt covenants, and a disruptive new entrant potentially are. Looking through the lens of how things will appear in the future helps to separate signal from the noise. Note though, it is only possible to practice long-term thinking if you have capital providers who take a similarly long-term perspective.

The final point is the importance of humility. Markets occasionally do unexpected things and the best investment strategies ultimately fail as competitors imitate them. Anyone who thinks they have everything figured out is riding for a fall. It is important to be humble.

You have read on two occasions in this memo that it is not clear to me whether I will beat the market in the long-term. This is not false humility. I believe it – not just because of the sobering experience of the last five years, but for the simple reason that so few investors do beat the market over the long-term. Why should I be the chosen one? I hope this is not too disconcerting to my investors. It should not be. Even Buffett is circumspect about Berkshire’s ability to continue to beat the market given its enormous scale, though he would no doubt fancy his chances with a smaller capital base. Anyone who is certain they are going to beat the market belongs in the marketing department, not the investment department.

3. Attenuating Innovation (AI) – Ben Thompson

On Monday the Biden administration released an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This Executive Order goes far beyond setting up a commission or study about AI, a field that is obviously still under rapid development; instead it goes straight to proscription…

…To rewind just a bit, last January I wrote AI and the Big Five, which posited that the initial wave of generative AI would largely benefit the dominant tech companies. Apple’s strategy was unclear, but it controlled the devices via which AI would be accessed, and had the potential to benefit even more if AI could be run locally. Amazon had AWS, which held much of the data over which companies might wish to apply AI, but also lacked its own foundational models. Google likely had the greatest capabilities, but also the greatest business model challenges. Meta controlled the apps through which consumers might be most likely to encounter AI generated content. Microsoft, meanwhile, thanks to its partnership with OpenAI, was the best placed to ride the initial wave generated by ChatGPT.

Nine months later and the Article holds up well: Apple is releasing ever more powerful devices, but still lacks a clear strategy; Amazon spent its last earnings call trying to convince investors that AI applications would come to their data, and talking up its partnership with Anthropic, OpenAI’s biggest competitor; Google has demonstrated great technology but has been slow to ship; Meta is pushing ahead with generative AI in its apps; and Microsoft is actually registering meaningful financial impact from its OpenAI partnership.

With this as context, it’s interesting to consider who signed that letter Allen referred to, which stated:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

There are 30 signatories from OpenAI, including the aforementioned CEO Sam Altman. There are 15 signatories from Anthropic, including CEO Dario Amodei. There are seven signatories from Microsoft, including CTO Kevin Scott. There are 81 signatories from Google, including Google DeepMind CEO Demis Hassabis. There are none from Apple or Amazon, and two low-level employees from Meta.

What is striking about this tally is the extent to which the totals and prominence align to the relative companies’ current position in the market. OpenAI has the lead, at least in terms of consumer and developer mindshare, and the company is deriving real revenue from ChatGPT; Anthropic is second, and has signed deals with both Google and Amazon. Google has great products and an internal paralysis around shipping them for business model reasons; urging caution is very much in their interest. Microsoft is in the middle: it is making money from AI, but it doesn’t control its own models; Apple and Amazon are both waiting for the market to come to them.

In this ultra-cynical analysis the biggest surprise is probably Meta: the company has its own models, but no one of prominence has signed. These models, though, have been gradually open-sourced: Meta is betting on distributed innovation to generate value that will best be captured via the consumer touchpoints the company controls.

The point is this: if you accept the premise that regulation locks in incumbents, then it sure is notable that the early AI winners seem the most invested in generating alarm in Washington, D.C. about AI. This despite the fact that their concern is apparently not sufficiently high to, you know, stop their work. No, they are the responsible ones, the ones who care enough to call for regulation; all the better if concerns about imagined harms kneecap inevitable competitors…

…I wrote at the time in an Update:

In 1991 — assuming that the “dawn of the Internet” was the launch of the World Wide Web — the following were the biggest companies by market cap:

$88 billion — General Electric
$80 billion — Exxon Mobil
$62 billion — Walmart
$54 billion — Coca-Cola
$42 billion — Merck

The only tech company in the top 10 was IBM, with a $31 billion market cap. Imagine proposing a bill then targeting companies with greater than $550 billion market caps, knowing that it is nothing but tech companies!

What doesn’t occur to Senator Klobuchar is the possibility that the relationship between the massive increase in wealth, and even greater gain in consumer welfare, produced by tech companies since the “dawn of the Internet” may in fact be related to the fact that there hasn’t been any major regulation (the most important piece of regulation, Section 230, protected the Internet from lawsuits; this legislation invites them). I’m not saying that the lack of regulation is causal, but I am exceptionally skeptical that we would have had more growth with more regulation.

More broadly, tech sure seems like the only area where innovation and building is happening anywhere in the West. This isn’t to deny that the big tech companies aren’t sometimes bad actors, and that platforms in particular do, at least in theory, need regulation. But given the sclerosis present everywhere but tech it sure seems like it would be prudent to be exceptionally skeptical about the prospect of new regulation; I definitely wouldn’t be celebrating it as if it were some sort of overdue accomplishment.

Unfortunately this week’s Executive Order takes the exact opposite approach to AI that we took to technology previously…

…I fully endorse Sinofsky’s conclusion:

This approach to regulation is not about innovation despite all the verbiage proclaiming it to be. This Order is about stifling innovation and turning the next platform over to incumbents in the US and far more likely new companies in other countries that did not see it as a priority to halt innovation before it even happens.

I am by no means certain if AI is the next technology platform the likes of which will make the smartphone revolution that has literally benefitted every human on earth look small. I don’t know sitting here today if the AI products just in market less than a year are the next biggest thing ever. They may turn out to be a way stop on the trajectory of innovation. They may turn out to be ingredients that everyone incorporates into existing products. There are so many things that we do not yet know.

What we do know is that we are at the very earliest stages. We simply have no in-market products, and that means no in-market problems, upon which to base such concerns of fear and need to “govern” regulation. Alarmists or “existentialists” say they have enough evidence. If that’s the case then so be it, but then the only way to truly make that case is to embark on the legislative process and use democracy to validate those concerns. I just know that we have plenty of past evidence that every technology has come with its alarmists and concerns and somehow optimism prevailed. Why should the pessimists prevail now?

They should not. We should accelerate innovation, not attenuate it. Innovation — technology, broadly speaking — is the only way to grow the pie, and to solve the problems we face that actually exist in any sort of knowable way, from climate change to China, from pandemics to poverty, and from diseases to demographics. To attack the solution is denialism at best, outright sabotage at worst. Indeed, the shoggoth to fear is our societal sclerosis seeking to drag the most exciting new technology in years into an innovation anti-pattern. 

4. Bad Office – Marc Rubinstein

On the very same avenue, there have been multiple examples of borrowers handing back keys to lenders on office properties:

  • Between 55th and 56th Streets, Blackstone gave up on a $308 million loan on 26-storey 1740 Broadway after tenants L Brands and law firm Davis & Gilbert relocated. 
  • Between 49th and 50th, Brookfield surrendered the deeds to the 11-storey Brill Building at 1619 Broadway in a transfer valued at $216 million, six years after buying it.
  • At Times Square, CIM Group and Australian pension fund QSuper handed back the keys on 1440 Broadway after being unable to pay a $399 million loan. They had bought the building in 2017 and had it valued at $540 million when they took the loan out two years ago.

Such stress in the office market stems from a combination of lower occupancy and higher interest rates. Over three years on from the start of the pandemic, occupancy in major US office markets remains depressed. Latest turnstile swipe data from Kastle indicates that office occupancy stands at 49.8% of pre-pandemic levels, a figure which has remained stable for eighteen months. And data from XY Sense, which uses sensor data to measure physical office presence, points to office utilisation of around 30% compared with around 60% pre-pandemic, consistent with a roughly 50% drop in occupancy. Whatever incentives companies deploy to get employees back into the office, they’re not working.

For a team of New York-based academics, this doesn’t bode well. In a recent paper, they established a clear connection between companies’ remote work policies and their actual reductions in leased office space. Because only a third of pre-pandemic office leases have come up for renewal, the impact has yet to fully flow through even with vacancy rates at 30-year highs (22.1% in Manhattan). They estimate that on the basis of current working behaviours, New York office stock is worth 42% less than it was in December 2019. At that level, even 60% loan-to-value financing deals are at risk.

In the meantime, higher interest rates are also squeezing borrowers. A lot of debt in the commercial real estate market is floating rate. When times were good, floating rate loans gave borrowers the flexibility to prepay early and sell assets. A typical structure is a five year loan, with the rate hedged for the first two years. With rates up, borrowers have to decide whether to buy new rate protection or alternatively, what to do with the asset. To finance its downtown Los Angeles portfolio, Brookfield took on a lot of floating-rate debt and when an interest-rate hedge expired on one of its properties last year, it opted not to get a new one, leading to it defaulting on a $319 million loan.

Not all of the office debt is with banks. For commercial real estate overall – encompassing hotels, retail, industrial and multifamily as well as office – banks sit on around 50% of debt outstanding. The rest is in commercial mortgage-backed securities structures (16%), government and government sponsored enterprise pools (13%), insurance companies (13%) and Real Estate Investment Trusts (5%).

In some of these segments, stress has been evident for some time. Office REIT stock prices are down 60% since the end of 2019 and commercial mortgage-backed securities spreads have widened. For banks, though, the pressures have been slower to build.

This week, Bawag, another Europe-based commercial real estate lender, took a €20 million provision against a specific US office exposure as part of its third quarter earnings. The bank underwrote the loan in 2019 based on a rent roll that failed to materialise and has now written down the value of the collateral (to “an 8.5% cap rate, which I think is quite conservative, if you kind of benchmark that to where other people are valuing assets,” said the CEO on his earnings call). The write-down reflects a concern the European Central Bank recently expressed, that bank property valuers may be slow to update estimates. “Despite its importance, collateral valuation is a blind spot for many of the banks reviewed,” ECB supervisors warned last year.

Other banks are trying to get ahead of the curve. Among the largest US banks, PNC has the highest exposure to office (2.7% of total loans). Although it has barely seen any losses in its portfolio, it has begun to classify many of the loans as non-performing. In total, it views 23% of its office portfolio as “criticised” and in the third-quarter, it shifted $373 million of those onto non-performing status. “I think they’re actually all still accruing. We just kind of get there because we don’t think they’re refinanceable in the current market,” explained CEO Bill Demchak on his earnings call. “The move to non-performing from already being criticised comes about as you just watch cap rates creeping higher and adjust the underlying value of the properties accordingly.”

PNC has now set aside reserves to cover 8.5% of its total office loans and within that 12.5% of multi-tenant office loans. Other banks have provisioned at similar levels. Wells Fargo is at 10.8% in the large corporate office segment (although only at 2.2% for small offices); US Bancorp is at 10%.

For many, the crunch will come when borrowers look to refinance. Bank of America has disclosed that around half of its office loans mature before the end of 2024. As long as they are still paying, many banks may be tempted to roll out the maturity rather than force a loss. Willy Walker, CEO of commercial real estate finance company Walker & Dunlop, calls it “extend and pretend” (emphasis added):

“What the regulator is allowing the banks to do is to take provisions for loan losses and then go and renegotiate those loans and extend and pretend. And given that the banking system is so overcapitalised right now, very much unlike the great financial crisis, a lot of this paper is just going to get rolled. Because the banks are sitting there going: I don’t want to foreclose on this; there’s no sale market for me to get rid of it. Do I want 60 cents on the dollar today or hope that maybe I get 80 or 90 cents on the dollar if I allow them to hold the asset, or I get worked out? The only place where this obviously causes some problems is in the CMBS world where you don’t have the flexibility because you have a special servicer who has a fiduciary responsibility to the bondholders to seize the asset.”

5. RWH034: The High Road To Riches w/ Peter Keefe – William Green and Peter Keefe

[00:17:43] Peter Keefe: And I think if you opened up the minds of a lot of people in this business, you’d discover that their motivations may not be exactly what they think they are. And I think money is an incredibly powerful motivator and people may not be willing to admit just how powerful of a motivator it is. And I think it was Henry Kissinger who said money’s the ultimate aphrodisiac and it just can accomplish all kinds of things.

[00:18:11] Peter Keefe: And I think we all know that subconsciously. And of course, am I interested in the rewards, the financial rewards, of this business? Absolutely. I don’t know anybody in this business who isn’t, and I’d worry about you a little bit if you said that you weren’t. But having said that, this business is a calling, and when I’m talking to people about why they want to be in this business, or when I’m mentoring younger investors, I make this sort of ominous statement: we all think that we’re in service to others.

[00:18:50] Peter Keefe: But sometimes you’re serving yourself. So, I sort of ask this question gently: you’re serving someone, but who are you serving? Make sure you understand who you’re serving.

[00:19:07] William Green: I wonder if it changes as we get older, because I often find when I interview great investors, it seems like early in their lives there’s a sort of, I have no factual basis for this, it’s more impressionistic, but I have this sense that there’s a real hunger, often, for money, a kind of ill-defined hunger for money, whether it’s to get out of straitened circumstances, if you’re someone like Bill Miller or Mario Gabelli who grew up with nothing, or a desire to sort of impress people and be noticed, which I think, if you were someone like Bill Ackman, who came from a very successful family, you know, you needed to make your mark. And then at a certain point, it shifts, maybe, at least for a lot of people. I don’t know.

[00:19:54] William Green: And then also there’s a sense of just loving the game, right? I remember one thing that I heard you would ask the people you were interviewing for jobs: you would say to them, would you do this on a teacher’s salary for five years? And I think that’s a really important issue as well. Like, you know, actually having to enjoy the game enough, the actual craft. Sorry, I’m going on. Well, do you have any thoughts?

[00:20:17] Peter Keefe: You’ve got to enjoy the game, but you’ve really got to appreciate the craft. The reason I asked that question, would you do this on a teacher’s salary, is serious, but it’s also a trick question, because anybody who’s good at this is not going to have to live on a teacher’s salary for very long.

[00:20:34] Peter Keefe: I don’t want to be involved professionally with people who are doing this solely for the money. You are serving someone, and you should be serving those who need your skills. If you are good at this business, then you have an obligation to give those skills to those who need it. And they’re desperately needed.

[00:20:57] Peter Keefe: They’re desperately needed by hospitals, schools, retirees, poor people, wealthy people who simply don’t care about investing. So the need is enormous, and I think it’s important to approach this business from a standpoint of service, and if you’re any good at it, you know, the money is going to rain down upon you, more money than you ever imagined and more money than you’re ever going to need.

[00:21:23] Peter Keefe: So, you need to take the money out of the equation. Cause if you’re any good, you’re going to make a lot of it. If you’re not, you still might make a lot of it, but I think the principal motivation has to be to serve. Now, when you’re a young man, young woman, you know, we’ve all been there. You just want to go out and slay the world.

[00:21:41] Peter Keefe: I think that’s just part of the natural deal of you know, being young and moving to a new city like I did and wanting to do something. I mean, I tell people I had no idea what I wanted to do, but I wanted to do something, and I wanted to make an impact on people’s lives, a favorable impact on people’s lives.

[00:21:59] Peter Keefe: I’m not sure I even cared about making a favorable impact. I think I wanted to be noticed and I wanted my life to amount to something. You know, there was just a sort of ill-defined desire, this yearning to make some kind of mark. It was very ego filled, definitely. I think we’re saying exactly the same thing.

[00:22:18] Peter Keefe: I mean, I think that over time, my objectives evolved. Yeah. You know, on day one, you’ve got to pay the rent. You know, on day two, you know, you’re thinking about building a family. Day three, you’re thinking about the legacy. So, your way you approach your life evolves over time. And but yeah, I mean, I just, like I said, I got here in DC and I thought I was going to be a lawyer and then decided that was a really bad idea and I just wanted to do something, you know, I had a lot of energy and I was curious about everything, you know, which can be a problem because if you’re curious about everything, it’s hard to focus on one thing…

…[00:31:49] William Green: Yeah, I wanted to dwell for a moment on that idea of the biggest mistakes you’ve made, because you said recently that your biggest mistakes have been early sales of great businesses, and you described that as the silent killer, where you sell compounders too soon. And while we’re dwelling on your mistakes, can I cause you heart palpitations by asking you about Pool Corp, which is a pretty good example of this?

[00:32:14] Peter Keefe: Sure. Yeah. I think Pool Corporation’s the biggest mistake I’ve ever made in my investment career. And like you said, you know, selling early is the high blood pressure of the investment business. It’s a silent killer. And you know, people will always talk about the business they bought that went to zero, or the one that went down 50% or 75%.

[00:32:32] Peter Keefe: Yes, that’s bad. You want to avoid that, but the business that you sold too early, that went on to compound tenfold or 20-fold after that, has in my career been a real killer. I bought Pool Corp at the right price. There’s an interesting story I could tell you about going down to visit management in Covington, Louisiana, which is not exactly a business mecca, but in any event, you know, it checked all our boxes.

[00:32:58] Peter Keefe: Manny Perez de la Mesa, who I think is still the chairman of the board, was the CEO then. He’d been at GE, he had a great background, he understood capital allocation beautifully, and Pool Corp was a terrific business, and for those of your listeners who aren’t familiar with it, I’ll tell them in a breath.

[00:33:18] Peter Keefe: It’s a distributor of pool supplies and equipment, and it gets margins in the mid-single digits; most distributors get margins in the low single digits. Pool Corp has this unique franchise that permits it to get these ridiculously high returns for a distributor. And those ridiculously high returns, multiplied by frequent inventory turnover, mean they get huge returns on capital and do everything that you want a compounder to do.

[00:33:50] Peter Keefe: Well, I bought it right, and sold it after it appreciated four or five times in our portfolio, because I allowed some erroneous thinking about valuation, and probably about the economy, to creep into my thinking; pool construction is somewhat tied to the construction cycle. So I probably let all these things influence my thinking and wound up selling the position in its entirety, having made, like I said, four or five times our capital on the business. Well, I think it’s appreciated about tenfold since then. So, what that taught me was that you either have to be an investor or an economist. Not many people can be both, and I don’t know any wealthy economists, so I’d rather be an investor.

[00:34:38] Peter Keefe: So I just try to tune out some of this economic stuff that can infect your thinking, usually negatively.

[00:34:46] William Green: Yeah, I was reading your letters to shareholders yesterday, and there was one from, I guess, August 2020, so in the middle of the COVID pandemic, and you wrote, We have no predictions about the direction of the economy or markets, and certainly not the virus.

[00:35:01] William Green: The trajectory of the virus and its ultimate duration and impact on the economy are unknowable. What is knowable is that on occasion the unthinkable happens, unforeseen acts of terrorism occur, real estate bubbles burst, or a pandemic emerges. This means we must own businesses with both bulletproof balance sheets and outstanding and durable business models that can withstand unthinkable economic hardship, which are run by ethical managers whom we can trust to act in our best interests.

[00:35:26] William Green: And that strikes me as such an important insight. I mean, in a way it gets back to stoicism, right? It’s like controlling what we can control and letting go of what we can’t control, and just recognizing the fact that the direction of the economy and the market and viruses and stuff is just not really knowable, unless maybe you’re Soros or Druckenmiller or someone like that, I don’t know. And so just that recognition, having the humility to recognize that’s not knowable, strikes me as a really important first step in long-term investment success.

[00:35:58] Peter Keefe: Yeah, one of the things I tell young investors when I’m mentoring them is: make a choice, you’re either an economist or an investor, unless you’re one of those five people you mentioned.

[00:36:10] Peter Keefe: And I just rhetorically say there’s probably five people on the planet who can consistently tell you what the dollar is going to do, what the price of gypsum is going to be, what oil might be, or the price of money. I know none of those things, and I simply don’t have enough mental bandwidth to be able to allocate any room at all to these things.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, Apple, Meta Platforms, and Microsoft. Holdings are subject to change at any time.

How to Get 10% Returns a Year

Investors are bombarded with many different methods to value a company. Ultimately, it all comes down to how much cash will be returned to shareholders.

How much should you pay for a stock to get a 10% return? In this article I explore a valuation method that helps me find just that.

The dividend discount model

First, we need to understand that the core concept of investing is that we are investing to earn a stream of future cash flows. Ideally, the amount we receive in the future should exceed the amount we invest today.

In this exercise, I’ll make one assumption: We are long-term “buy and never sell” investors who make our money through the cash a company returns to shareholders as dividends.

Using this assumption, we should use the dividend discount model to value a stock. A dividend discount model discounts all future dividend income back to the present. It also assumes that you can reinvest the dividend at a rate similar to the discount rate.

Achieving a 10% return

So how do you get a 10% return? Let’s start with a simple example. Suppose a company will pay $1 per share in dividends for 10 years. At the end of the 10 years, it closes down with no liquidation value.

In total, you will receive $10 per share in dividends.

Using a dividend discount model, and discounting all the dividends to the present day at a 10% discount rate, we can calculate that the price to pay for the stock is $6.14. You can find the calculation in this spreadsheet.
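If you prefer to check the arithmetic without the spreadsheet, the same sum can be written in a few lines of Python (a sketch for illustration; the function name is my own):

```python
# Price that delivers a 10% annualised return from $1 annual dividends
# paid for 10 years, with nothing returned when the company closes down.
def dividend_pv(dividend, years, rate):
    # Discount each year's dividend back to today, then sum them up.
    return sum(dividend / (1 + rate) ** t for t in range(1, years + 1))

print(round(dividend_pv(dividend=1.0, years=10, rate=0.10), 2))  # 6.14
```

At that price, the first year’s $1 dividend works out to a yield of roughly 1 divided by 6.14, or about 16.3%.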

Looking at the price alone, it may seem strange that the dividend yield you are getting is more than 10%.

At $6.14 per share, you will be earning a dividend yield of 16.3%, but your annual return is still 10%. This is because the company closes down after 10 years and your initial capital will not be returned to you. To make up for that, your annual dividend yield needs to exceed 10% just for you to make a 10% annualised return.

More durable companies

In the above scenario, the company is not durable and is only able to pay you a dividend for 10 years. But for more durable companies, you can afford to pay more to achieve the same return.

For instance, suppose a more durable company pays $1 per share in dividends for 20 years before closing down. In this scenario, you can pay $8.51 per share to earn a 10% return.

The more durable the company, the more you can pay. If a company can pay you $1 per share in dividends for eternity, you can pay $10 per share to earn a 10% yield and a 10% return.
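To see how durability raises the price you can pay for the same 10% return, the present-value sum can be run for the three lifespans discussed above (again a sketch; the function name is mine):

```python
# Price you can pay for $1 annual dividends at a 10% discount rate,
# for companies of increasing durability.
def price_for_10pct(years, rate=0.10):
    return sum(1.0 / (1 + rate) ** t for t in range(1, years + 1))

print(round(price_for_10pct(10), 2))  # 6.14 -> company lasts 10 years
print(round(price_for_10pct(20), 2))  # 8.51 -> company lasts 20 years
print(round(1.0 / 0.10, 2))           # 10.0 -> perpetuity: dividend / rate
```

As the lifespan stretches to eternity, the sum converges to the simple perpetuity formula, dividend divided by discount rate.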

Vicom – a no-growth company

An example of a steady but no-growth company is Vicom, which provides car inspection services in Singapore. It is a stable business as Singapore’s law requires vehicles to undergo regular inspections for road-worthiness. 

Vicom, with its longstanding history, is also trusted by Singapore’s authorities to provide these inspection services, making it difficult for competitors to encroach into the space. But there is limited opportunity for Vicom to grow, as the authorities regulate the number of vehicles that are entitled to be owned and driven in Singapore, resulting in zero vehicle growth in Singapore for many years. In addition, inspection fees are also likely regulated by the government, ensuring that consumers are protected from price gouging.

As a result, Vicom’s annual net income has hovered around S$25 million for years. The company also pays out around 100% of its net profit to shareholders.

Given all of this, and assuming that Vicom’s business can be sustained for a long period of time and that we want a 10% annualised return from owning Vicom’s shares, Vicom’s value should be S$250 million, representing a dividend yield of around 10%. Vicom’s current market cap is around S$450 million, which means that shareholders buying at this price will earn less than a 10% rate of return.
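These figures follow from treating Vicom’s payout as a flat perpetuity. Here is the back-of-envelope arithmetic, using the round estimates above:

```python
# Vicom pays out roughly all of its ~S$25m annual net income as dividends,
# so the dividend stream is approximately a flat perpetuity.
net_income = 25_000_000       # approximate annual dividend, in S$
required_return = 0.10

# Value of a flat perpetuity: dividend / required return.
fair_value = net_income / required_return
print(fair_value)             # 250000000.0, i.e. S$250m

# Implied return if bought at the current ~S$450m market cap instead:
market_cap = 450_000_000
print(round(net_income / market_cap, 3))  # 0.056, i.e. roughly 5.6% a year
```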

Amphenol – a growth stock

Amphenol Corporation, a company based in the USA, designs and manufactures electronic connectors and sensors. Unlike Vicom, Amphenol has a track record of growing revenue and earnings per share while paying a growing dividend.

Since 2011, Amphenol’s revenue and earnings per share have compounded at 10.5% and 13% per year, respectively. In addition, the company’s dividend per share has grown from US$0.02 in 2011 to US$0.83 in 2022, for an annualised growth rate of 40%. Amphenol’s business can likely continue to grow at a steady rate if the company continues to acquire other companies for growth.

How much should you pay for its stock? Let’s assume Amphenol will pay US$1 per share in dividends in 2023 and grow that dividend at 9% per year for a long period of time.

In this case, using a dividend discount model, we can calculate that to earn a 10% return on investment (assuming no dividend withholding tax), we will need to pay around US$109 per share. My calculation can be found here.
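The US$109 figure is consistent with the standard Gordon growth formula, price = next year's dividend / (required return - growth rate), assuming the US$1 dividend in 2023 then grows at 9% a year forever:

```python
# Gordon growth model: price = D1 / (r - g), where D1 is next year's
# dividend, r the required return, and g the perpetual growth rate.
d0 = 1.00   # dividend assumed paid in 2023 (US$ per share)
g = 0.09    # assumed long-run dividend growth rate
r = 0.10    # required annual return

price = d0 * (1 + g) / (r - g)
print(round(price))  # 109
```

The starting yield is only about 1/109, or roughly 0.9%, yet the growing stream still delivers the full 10% a year, exactly as noted above.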

You may notice that the dividend yield based on the price we are willing to pay is only 0.9%. Yet, we can still make 10% a year because the dividend that we will collect in future years grows over time. 

Using the model 

This model can be applied to any company whose dividend stream you can predict. However, the model only works if you hold the shares for the full duration of the company’s lifespan. If you intend to sell the shares to someone else, the price you can sell them at depends on the buyer’s own required rate of return.

This can be influenced by a range of factors, such as the risk-free rate at the time of the sale, or the state of the economy. 

The model also assumes that you can predict with strong certainty the timing and amount of dividends. In practice, this may be hard to predict for companies without a history of dividend payments.

Nevertheless, this framework provides me with a clear way of thinking about valuation and gives me a sense of how I should approach valuing companies.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

More Thoughts on Artificial Intelligence

How is artificial intelligence reshaping the world?

I published Thoughts on Artificial Intelligence on 19 July 2023. Since then, developments in AI have continued at breathtaking speed. Here, I want to share some new thoughts on AI, as well as provide updates on some of the initial discussions.

Let’s start with the new thoughts, in no particular order (note that the caution from Thoughts on Artificial Intelligence, that my thinking on AI is fragile, still applies):

  • AI could be a long-term tailwind for the development of biotechnology drugs. AlphaFold is an AI model from Alphabet’s subsidiary, Google DeepMind, that is capable of predicting the structure of nearly every protein discovered by scientists thus far – this amounts to more than 200 million structures. And Alphabet is providing this data for free. Proteins are molecules that direct all cellular function in a living organism, including, of course, humans. A protein’s structure matters because it is what allows the protein to perform its job within an organism. In fact, diseases in humans can be caused by mis-structured proteins. Understanding the structure of a protein thus means knowing how it could affect the human body. Biotechnology drugs can be composed of proteins, and they tend to manipulate proteins, or the production of proteins, within the human body. According to an Economist article published in September this year, AlphaFold has been used by over 1.2 million researchers to-date. Elsewhere, researchers from biotechnology giant Amgen noted in a recent paper that with the help of AI, the company has reduced, by 60% compared to five years ago, the time it needs to develop a candidate drug up to the clinical-trial stage. But the researchers also shared that AI could do more to help biotechnology companies make the development process for protein-based drugs faster and cheaper. An issue confronting biotechnology companies today is a lack of sufficient in-house data to build reliable models to predict the effects of protein-based drugs. The researchers proposed methods for biotechnology companies to share data to build more powerful predictive AI models in a way that protects their intellectual property. As AI technology improves over time, I’m excited to observe the advances in the protein-drug creation process that are likely to occur alongside.
  • It now looks even more possible to us that generative AI will have a substantial positive impact on the productivity of technology companies. For example, during Oracle’s earnings conference call that was held in September, management shared that the company is using generative AI to produce the code needed to improve all the features in Cerner’s system (Oracle acquired Cerner, a healthcare technology company, in June 2022), instead of its usual way of writing code in the Java programming language. Oracle’s management also said that even if AI code generators make mistakes, “once you fix the mistake, you fix it everywhere.” In another instance, MongoDB announced in late-September this year that it’s introducing generative AI into its MongoDB Relational Migrator service, which helps reduce friction for companies that are migrating from SQL to NoSQL databases. When companies embark on such a migration, software code needs to be written. With generative AI, MongoDB is able to help users automatically generate the necessary code during the migration process.
  • The use of AI requires massive amounts of data to be transferred within a data centre. There are currently two competing data switching technologies to do so, namely, Ethernet and Infiniband, and they each have their supporters. Arista Networks builds high-speed Ethernet data switches. During the company’s July 2023 earnings conference call, management shared their view that Ethernet is the right long-term technology for data centres where AI models are run. In the other camp, there’s Nvidia, which acquired Mellanox, a company that manufactures Infiniband data switches, in 2020. Nvidia’s leaders commented in the company’s latest earnings conference call (held in late-August this year) that “Infiniband delivers more than double the performance of traditional Ethernet for AI.” It’s also possible that better ways to move data around a data centre for AI workloads could be developed. In Arista Networks’ aforementioned earnings conference call, management also said that “neither technology… were perfectly designed for AI; Infiniband was more focused on HPC [high-performance computing] and Ethernet was more focused on general purpose networking.” We’re watching to see which technology (existing or new) would eventually have the edge here, as the market opportunity for AI-related data switches is likely to be huge. For perspective, Arista Networks estimates the total data centre Ethernet switch market to be over US$30 billion in 2027, up from around US$20 billion in 2022. 

Coming to the updates, in Thoughts on Artificial Intelligence, I discussed how AI software, especially generative AI, requires vector databases but that NoSQL databases will remain relevant. During MongoDB’s latest earnings conference call, held in August this year, management shared their view that the ability to perform vector searches (which is what vector databases do) will ultimately be just a feature that’s built into all databases. This is because standalone vector databases are point products that still need to be used with other types of databases in order for developers to build applications. I am on the same side as MongoDB’s management because of two things they shared during the company’s aforementioned earnings conference call. Firstly, they see developers preferring to work with multi-functional databases rather than bolting a separate vector solution onto other databases. Secondly, Atlas Vector Search – MongoDB’s vector search feature within its database service – is already being used by customers in production even though it’s currently just a preview product; to me, this signifies high customer demand for MongoDB’s database services within the AI community. 
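For readers unfamiliar with vector search, the core operation is simple: rank stored items by how close their embedding vectors are to a query’s embedding. A minimal sketch with made-up three-dimensional embeddings follows (real embeddings come from a machine-learning model and have hundreds of dimensions, and this is my own illustration, not MongoDB’s Atlas Vector Search API):

```python
import math

# Minimal sketch of what a vector search does: rank stored items by
# cosine similarity between their embedding and a query embedding.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

documents = {
    "doc_cats":   [0.9, 0.1, 0.0],
    "doc_dogs":   [0.8, 0.3, 0.1],
    "doc_stocks": [0.0, 0.1, 0.9],
}

def vector_search(query, docs, top_k=2):
    """Return the top_k document names most similar to the query vector."""
    ranked = sorted(docs, key=lambda name: cosine(query, docs[name]), reverse=True)
    return ranked[:top_k]

query = [1.0, 0.0, 0.1]  # a query embedding "about cats", roughly
print(vector_search(query, documents))  # → ['doc_cats', 'doc_dogs']
```

MongoDB’s argument is that this capability sits most naturally inside the database that already holds the documents, rather than in a separate, bolted-on system.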

I also touched upon the phenomenon of emergence in AI in Thoughts on Artificial Intelligence. I am even more confident now that emergence is present in AI systems. Sam Altman, the CEO of OpenAI, the company behind ChatGPT, was recently interviewed by Salesforce co-founder and CEO Marc Benioff. During their conversation, Altman said (emphases are mine):

“I think the current GPT paradigm, we know how to keep improving and we can make some predictions about – we can predict with confidence it’s gonna get more capable. But exactly how is a little bit hard. Like when, you know, why a new capability emerges at this scale and not that one. We don’t yet understand that as scientifically as we do about saying it’s gonna perform like this on this benchmark.”

In other words, even OpenAI cannot predict what new capabilities would spring forth from the AI models it has developed as their number of parameters and the amount of data they are trained on increase. The unpredictable formation of sophisticated outcomes is an important feature of emergence. It is also why I continue to approach the future of AI with incredible excitement as well as some fear. As AI models train on an ever-increasing corpus of data, they are highly likely to develop new abilities. But it’s unknown if these abilities will be a boon or a bane for society. We’ll see!


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have a vested interest in Alphabet and MongoDB. Holdings are subject to change at any time.

What We’re Reading (Week Ending 05 November 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We regularly share a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 05 November 2023:

1. Lessons from Charlie Munger’s podcast interview – Thomas Chua

2. Why algo-driven trading firms like Renaissance Technologies are taking excessive risk

“The easiest trade is to front run what you know, what the average is, what the index funds have to buy, and you know what it is. Exactly. They all know that. And the way they get their returns year after year is taking the leverage, the midday leverage, up higher and higher and higher and higher.

So they’re making smaller and smaller profits on more and more volume, which gives them this big peak leverage risk, which I would not run myself. And that’s the only way they make these big returns, is to have this huge leverage that would make you crazy if you were already rich.”
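The mechanics Munger describes, small per-trade edges scaled up by leverage, can be put in numbers. The edge and leverage figures below are illustrative assumptions of mine, not Renaissance’s actual figures:

```python
# Why thin edges need big leverage: a small, reliable edge per unit of
# capital only becomes a large fund return when multiplied by leverage.
# Both numbers below are assumed for illustration.

edge_per_year = 0.005   # 0.5% profit on capital deployed (assumed)
leverage = 12           # peak leverage multiple (assumed)

unlevered = edge_per_year
levered = edge_per_year * leverage
print(f"unlevered: {unlevered:.1%}, levered {leverage}x: {levered:.1%}")
```

The return scales linearly with leverage, but so does the loss in a bad stretch, which is exactly the "peak leverage risk" Munger says he would not run himself.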

3. How Warren and Charlie changed their mind quickly with Diversified Retailing after they realized it was too competitive

(and how they made a ton of money after changing their mind)

Some context: On January 30, 1966, Buffett, Munger, and Gottesman formed a holding company, Diversified Retailing Company, Inc., to “acquire diversified businesses, especially in the retail field.”

Buffett and Munger then went to the Maryland National Bank and asked for a loan to make the purchase. The lending officer looked at them goggle-eyed and exclaimed, “Six million dollars for little old Hochschild-Kohn?”  Even after hearing this, Buffett and Munger—characteristically—did not question their own judgment and run screaming out the door.

“We thought we were buying a second-class department store at a third-class price” is how Buffett describes little old Hochschild-Kohn.

“We made nothing but money at Diversified. We didn’t exactly make it in retailing, but we made a lot of money.

What happened was very simple. We bought this little department store chain in Baltimore. Big mistake. Too competitive.

As the ink dried on the closing papers we realized we’d made a terrible mistake. So we decided just to reverse it and take the hits to look foolish rather than go broke. You just told us how to get us out of this. By that time we’d already financed half of it on covenant free debt and so forth. And they had all this extra cash and our own stocks got down to selling an enormous (discounts).

In the middle of one of those recessions, we just bought, bought and bought and bought and all that money went right into those stocks and of course we tripled it.”…

...11. Why Warren’s investment in Japan was a no-brainer

“If you’re as smart as Warren Buffett, maybe two, three times a century, you get an idea like that. The interest rates in Japan were half a percent per year for ten years. And these trading companies were really entrenched old companies, and they had all these cheap copper mines and rubber plantations, and so you could borrow for ten years ahead, all the money, and you could buy the stocks, and the stocks paid 5% dividends.

So there’s a huge flow of cash with no investment, no thought, no anything. How often do you do that? You’ll be lucky if you get one or two a century. We could do that [because of Berkshire credit]. Nobody else could.”…
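The arithmetic behind the Japan trade is a classic carry spread. Using only the figures in the quote (borrow at roughly 0.5% a year, collect roughly 5% a year in dividends), and ignoring taxes, currency moves, and share-price changes:

```python
# Back-of-envelope on the carry trade Munger describes: borrow in yen
# at ~0.5% a year, buy trading-company shares yielding ~5% in dividends.
# The notional is arbitrary since the position is fully debt-financed.

borrowed = 1_000_000       # notional, fully financed with borrowed yen
borrow_rate = 0.005        # 0.5% a year (from the quote)
dividend_yield = 0.05      # 5% a year (from the quote)

annual_carry = borrowed * (dividend_yield - borrow_rate)
print(f"Net carry per year: {annual_carry:,.0f} "
      f"({dividend_yield - borrow_rate:.1%} of notional)")
```

A 4.5% annual spread on money that is entirely borrowed is, as Munger says, "a huge flow of cash with no investment."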

14. His view on China

“Well, my position in China has been that the Chinese economy has better future prospects over the next 20 years than almost any other big economy.

That’s number one. Number two, the leading companies of China are stronger and better than practically any other leading companies anywhere, and they’re available at a much cheaper price. So naturally I’m willing to have some China risk in the Munger portfolio.

How much China risk? Well, that’s not a scientific subject. But I don’t mind. Whatever it is, 18% or something.”…

15. What about BYD that captivated Munger?

“Guy (Wang Chuanfu) was a genius. He had a PhD in engineering and he could look at somebody’s part and he could make that part; look at it in the morning and he could make it in the afternoon. I’d never seen anybody like that. He could do anything. He is a natural engineer and a get-it-done type of production executive.

And that’s a big thing. It’s a big lot of talent to have in one place. It’s very useful. They’ve solved all these problems on these electric cars and the motors and the acceleration, braking, and so on.”

Comparing Elon with Wang Chuanfu

“Well, he’s a fanatic that knows how to actually make things with his hands, so he’s closer to ground zero. In other words, the guy at BYD is better at actually making things than Elon is.”

2. The Crash Callers Won’t Save You – Ben Carlson

Here’s something Henry Blodget wrote about notorious stock market bear John Hussman:

Every historical indicator Hussman is looking at is suggesting that the stock market is wildly overvalued and headed for a period of lousy returns. How lousy? John Hussman thinks there’s a good chance the stock market will soon crash 40-50 percent.

And even if the market doesn’t crash, Hussman thinks stocks are priced to produce returns of only a couple of percentage points per year over the next decade–far below the 7 percent inflation-adjusted long-term return that everyone is used to and the double-digit returns of the last few years. If you want to feel comfortable and happy, go ahead and ridicule John Hussman with everyone else. If you want to prepare yourself for what seems like a likely possible stock-market future, however, read on.

Sounds scary, right?…

…Here’s the problem with Blodget and Hussman’s predictions — this piece was written in the summer of 2013!… In the 10 years following Hussman’s prediction of a 40-50% crash or lousy returns for a decade, the S&P 500 was up more than 230% in total or 12.8% on an annual basis:…

…Did Hussman relent from his crash-calling ways? No. He’s still out there calling for a crash, only this time it’s going to be even bigger!…

…When Hussman called for a 40-50% crash in August 2013, he said the Dow could fall somewhere in the 7,500-8,500 range. From current levels at around 32,800 the Dow would need to fall 55% just to get back to the point where Hussman made his initial prediction in 2013 and then another 50% from there to hit that target range…
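Drawdowns compound multiplicatively rather than additively: a 55% fall followed by a further 50% fall is a 77.5% fall from the starting point, not 105%. A quick check of Carlson’s numbers:

```python
# Checking the article's arithmetic: from ~32,800 the Dow would need to
# fall ~55% to revisit its August 2013 level, and then ~50% more from
# there to hit Hussman's 7,500-8,500 target range.

start = 32_800
after_first_leg = start * (1 - 0.55)          # back to roughly the 2013 level
after_second_leg = after_first_leg * (1 - 0.50)

total_drawdown = 1 - after_second_leg / start
print(f"{after_first_leg:,.0f} -> {after_second_leg:,.0f}, "
      f"a total fall of {total_drawdown:.1%}")
```

The two legs land at about 7,400, inside (just below) Hussman’s range, and amount to a total decline of roughly 77.5%, far larger than anything in the post-1950 record discussed below.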

…I looked at the daily returns on the S&P 500 going back to 1950 to see how often the market was in a state of drawdown at different levels of losses… We’ve had 40% and 50% crashes but it’s pretty rare. You don’t spend all that much time there as an investor. Sometimes you’re going to get your face ripped off in the markets and learn to live with it but you shouldn’t expect it to happen all the time…

…I looked at the rolling 10 year returns for the S&P 500 going back to 1950 to find the distribution of annual returns at various levels… More than 3% of the time returns have been negative over 10 year time frames. Annual returns have been 5% or worse 14.1% of the time. That’s not great. However, annual returns have been 10% or higher 55% of the time. Annual returns of 8% or more have occurred in nearly 70% of all rolling 10 year windows since 1950…
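For clarity on how these rolling 10-year figures are computed: for each starting point, take the ending-to-starting price ratio over the next decade and annualize it. The index levels below are synthetic (a fake index compounding at exactly 8% a year), purely to show the mechanics:

```python
# How a rolling 10-year annualized return is computed: for each start
# year, take the price ratio over the following 10 years and annualize.
# The index levels are made up (8%/yr growth) for illustration only.

prices = {1950 + i: 100 * 1.08 ** i for i in range(21)}  # fake index, 1950-1970

def rolling_10y_cagr(prices, start_year):
    ratio = prices[start_year + 10] / prices[start_year]
    return ratio ** (1 / 10) - 1

for year in (1950, 1955, 1960):
    print(year, f"{rolling_10y_cagr(prices, year):.2%}")
```

Carlson’s distribution is simply this calculation repeated for every trading-day starting point since 1950, then bucketed by the resulting annualized return.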

…The people who predict a crash every single year will be “right” eventually. The same is true for those who are constantly forecasting a recession. But they will be wrong the majority of the time. The stock market has been up roughly 75% of the time over one year periods and nearly 97% of the time over 10 year time frames over the past 70+ years.

3. The Long, Long View of Interest Rates – Byrne Hobart

The single most important variable in economics is the risk-free interest rate, i.e. the price of money. Over time, the available data indicate that money has gotten much, much cheaper…

…First things first: a loan to the Duke of Burgundy in the 15th century and a loan to the US treasury in 2023 are completely different things. In the latter case, it’s a loan to a global hegemon that issues the world’s most-accepted reserve currency, a currency in which that loan will be repaid. The Duke, by contrast, is a person, not a country. He collects taxes, but perhaps intermittently, and some fraction of Burgundy consists of his personal property. His big source of uncertain expenses is military campaigns, which are also one of the few legible investments that can produce some theoretical return. But they’re usually a bad deal…

…So for the tiny set of people (like the Duke) who regularly borrowed large sums, their big lumpy expenses were probably ransoms, which means ransoms were potentially a big lumpy source of income as well. In other words, they took part in a physical contest which would lead to an unpredictable positive or negative payoff. 12% is expensive for a sovereign credit, but pretty cheap for a loan to an athlete whose primary source of income is wagering on the outcome of their own matches.

The process of replacing personal relationships with institutional ones has been gradual. Over time, though, it’s created more low-risk or even essentially risk-free lending opportunities. It’s hard to draw a dividing line here, especially because some countries move in and out of risk-free status depending on the market’s general fears and the specifics of their political situation (the spread between German and Italian rates, for example, is a good proxy for the market’s view of how stable Europe is). In the case of the US, we really do have a stable, low-risk borrower—but even the US likes to periodically stress-test the market with debt ceiling fights.

There were lower-risk bonds in the early period of the chart; the paper mentions people earning 5% lending to the governments of Florence and Venice, for example. But in practice, the only risk-free investment at the time was taking hard currency and literally burying it; a risk-free asset with a positive yield is a 19th-century innovation.

Adjust for that, and you’d end up with a new chart: the rate of return on a risk-free investment was on average slightly negative (the inconvenience of hiding it, the risk of losing it, and the possibility, for some currencies, of devaluation). Then it jumped some time in the 19th century (despite the fact that governments still needed to borrow, they were now larger and better at tax collection than their predecessors, but their bonds now competed with high-return private sector opportunities). And then we see a decline, or a reversion to the mean, starting in the 1980s: growth declined, rates declined, and the real rate of return on risk-free assets dropped. That’s recently been disrupted again, with the resurgence in inflation since 2021.

But the other big driver of that secular-decline chart is still in place, and it pushes the equilibrium interest rate relentlessly lower. The biggest factor is the existence of retirement…

…The key difference in modernity is that we took a luxury previously available only to the elite, i.e. the ability to live well entirely off the labor of others, and made it an option for anyone who chose to sock away enough money in their 401(k). (The “labor of others” is now some miniscule share of the profits from every company in whatever index the retiree in question has invested in, and their capital comes from forgone consumption when they themselves had a job, but the fundamentals are unchanged.) Longer lifespans that mostly lead to longer retirement rather than more working years will necessarily increase global savings; whether this old-age saving is mediated through private sector investments or through public pensions like social security, it creates an implicit asset on the retiree’s (or future retiree’s) economic balance sheet, a corresponding liability on the part of whoever is offering that income, and thus a demand for income-producing assets that match that liability…

…Technology is also a driver of rates. But the direction is noisy. The more sophisticated the financial system, the more likely it is that deploying new technology will be inflationary. There are two forces at work: in the long run, new technology is deflationary over time, since we’re getting more from less—the number of labor-hours required for illuminating a room for an hour, traveling across the country, or getting a nutritious meal has continuously declined. But when technology is being deployed, it’s inflationary, because there’s more demand for investment and labor. So asking whether the impact of a given technological development is net inflationary or deflationary over, say, the next decade, amounts to asking: how quickly is it getting deployed? If we developed some radically transformative new technology, like a way to generate low-cost, low-emissions energy from trivial amounts of a fairly abundant natural resource, taking advantage of this would require spending money on construction labor, equipment, and raw materials, but would lead to energy abundance over time…

…Last big feature in the real rates model is the existence of a reserve currency. Early in the time series, there were reserve-like currencies; some kinds of money were good for transacting or paying taxes in a specific place, but a ducat or florin was useful just about anywhere, because Venetian and Florentine merchants were almost everywhere, and people who did business with them were everywhere else. But these were small, open economies, of the sort that can’t absorb significant inflows. They’re closer to the Swiss franc than to the dollar: everyone knew they were safe, but it wasn’t possible for everyone in the world to denominate savings in the same currency.

What the dollar as a reserve currency does is to create demand for dollar-denominated savings from exporters, who a) want to keep their currency from appreciating too quickly, and b) want to have local dollar liquidity to ensure that they don’t have a ruinous financial crisis if their exports slow down. The relevant exporters, and the policy consequences, have varied over time; sometimes the petrodollar is the dominant form, and sometimes it’s manufacturing economies. But the direction persists, and as long as the dollar has such strong network effects, there will be foreign demand for dollar-denominated savings with minimal interest rate sensitivity.

Extremely long-term trends are important, because they’re the closest thing we have to true economic fundamentals. If something was true under feudalism and democracy, in wartime and peacetime, in an agrarian economy, a manufacturing economy, and a services-based one, it’s probably just a fact of economic life. The decline in real rates is noisy in the chart and noisier still in reality, but it’s something we should accustom ourselves to: if people live longer than they work, and provide for their old age by saving money; if technological advances are deflationary over time and haven’t been happening as often as they did at the peak; and if countries still grudgingly rely on the dollar; then the long-term set point for rates will decline over time. 

4. How Does the World’s Largest Hedge Fund Really Make Its Money? – Rob Copeland

Since founding Bridgewater in his Manhattan apartment in 1975, Mr. Dalio has been said to have developed prodigious skill at spotting, and making money from, big-picture global economic or political changes, such as when a country raises its interest rates or cuts taxes. That made both a lot of sense and none at all; what was it about Bridgewater that made it so much better at predictions than any other investor in the world trying to do the exact same thing?

Bridgewater earned worldwide fame for navigating the 2008 financial crisis, when the firm’s main fund rose 9 percent while stocks dropped 37 percent, making Mr. Dalio a sought-after adviser for the White House and Federal Reserve and attracting new deep-pocketed clients to his fund. Yet the hedge fund’s overall descriptions of its investment approach could be maddeningly vague. Mr. Dalio often said he relied on Bridgewater’s “investment engine,” a collection of hundreds of “signals,” or quantitative indicators that a market was due to rise or fall. Bridgewater rarely revealed any details of these signals, citing competitive pressure, but if they pointed to trouble ahead or even to uncertainty, Bridgewater said it would buy or sell assets accordingly — even if Mr. Dalio’s own gut might have told him otherwise…

…What confused rivals, investors and onlookers alike was that the world’s biggest hedge fund didn’t seem to be much of a Wall Street player at all. Much smaller hedge funds could move the markets just by rumors of one trade or another. Bridgewater’s heft should have made it the ultimate whale, sending waves rolling every time it adjusted a position. Instead, the firm’s footprint was more like that of a minnow.

What if the secret was that there was no secret?…

…In early 2015, Bill Ackman, the endlessly opinionated hedge fund manager, took the first whack. The billionaire founder of Pershing Square Capital had long found Mr. Dalio’s public pronouncements about his quantitative investment style to be generic and even nonsensical. At a charity event in February that year, Mr. Ackman grilled Mr. Dalio during an onstage interview about how Bridgewater handled the assets it managed.

Mr. Dalio responded: “Well, first of all, I think it’s because I could be long and short anything in the world. I’m basically long in liquid stuff. And I can be short or long anything in the world, and I’m short or long practically everything.” He also noted that some 99 percent of Bridgewater trading was automated, based on longtime, unspecified rules. “They’re my criteria, so I’m very comfortable,” Mr. Dalio said.

Mr. Ackman tried another tack. He gave Mr. Dalio a layup, the sort of question asked six times an hour on business television. “Let’s say you were to buy one asset, or one stock, or one market, or one currency. Where would you put your money?” There was a pause, then Mr. Dalio said, “I don’t do that.” He went on to lay out how Bridgewater’s hundreds of investment staff members spent their days, describing a data-driven approach.

Onstage, Mr. Ackman would remark that it was “one of the most interesting conversations I’ve ever had.” But he walked away shaking his head.

“What was he even talking about?” he vented afterward…

…This all piqued the interest of a Boston financial investigator, Harry Markopolos, who had been a no-name analyst in the late 1990s when his boss asked him to reproduce a rival’s trading strategy that seemed to pay off handsomely. Mr. Markopolos couldn’t, but he figured out enough that he began chatting with the Securities and Exchange Commission. Six years later, when his warnings about Bernie Madoff proved right, Mr. Markopolos earned national fame.

To Mr. Markopolos, what was happening in Westport, Conn., where Bridgewater has its headquarters, raised serious questions, according to people who worked with him. Here lay another giant hedge fund famed for an investment approach that no competitors seemed to understand. He got his hands on Bridgewater’s marketing documents, including a summary of the firm’s investment strategy and a detailed chart of fund performance. Bridgewater described itself as a global asset manager, yet these documents didn’t name a single specific asset that had made or lost the firm money. An investment-performance chart indicated the firm seldom had a down year — even when Mr. Dalio’s public predictions proved off, Bridgewater’s main fund, Pure Alpha, consistently seemed to end the year around flat.

As he looked over the documents, Mr. Markopolos felt a familiar flutter in his heart…

…Mr. Markopolos also went to see David Einhorn of Greenlight Capital, the hedge fund billionaire famed for spotting frauds. Mr. Einhorn welcomed Mr. Markopolos into his Manhattan office, and they sat down with a team of Greenlight analysts who Mr. Einhorn said were interested in investigating Bridgewater themselves, two people present recalled.

After hearing Mr. Markopolos’s talk, Mr. Einhorn said it tracked with his suspicions, too. That was all the encouragement Mr. Markopolos needed. Bridgewater, he wrote to the S.E.C., was a Ponzi scheme.

Bridgewater was not a Ponzi scheme. Which is not to say that all was as Mr. Dalio so often described it.

The S.E.C. and other regulators dutifully took meetings with Mr. Markopolos and his team. The whistle-blowers’ report was passed through the organization, and a team at the agency looked into it. (The S.E.C. declined to comment.)

According to a person briefed on the investigation, what they concluded, in part, was that the world’s biggest hedge fund used a complicated sequence of financial machinations — including relatively hard-to-track trading instruments — to make otherwise straightforward-seeming investments. It made sense to the S.E.C. that rivals couldn’t track them…

…As it turned out, by the time the S.E.C. received Mr. Markopolos’s submission, the regulators had already looked into Bridgewater. In the wake of the Madoff fraud, and never having really dug into the world’s biggest hedge fund, S.E.C. staff spent a stretch in Westport, deeply studying the firm’s operations. The S.E.C. did not much bother with how Bridgewater made money, just that it did indeed invest its clients’ accounts…

…Of Bridgewater’s roughly 2,000 employees at its peak — and hundreds more temporary contractors — fewer than 20 percent were assigned to investments or related research. (The rest worked on operations tasks, including the expansion of Mr. Dalio’s “Principles.”) And of those investment staff members, many held responsibilities no more complicated than those of the average college student. They worked on economic history research projects and produced papers that Mr. Dalio would review and edit. As for whether those insights made it into Bridgewater’s trading, most research employees knew not to ask, current and former investment employees said.

Only a tiny group at Bridgewater, no more than about 10 people, enjoyed a different view. Mr. Dalio and his longtime deputy, Greg Jensen, plucked the members from the crew of Bridgewater investment associates and offered them entry to the inner sanctum. In exchange for signing a lifetime contract — and swearing never to work at another trading firm — they would see Bridgewater’s inner secrets…

…There were two versions of how Bridgewater invested hundreds of billions of dollars in the markets. One version, Mr. Dalio told the public and clients about. The other version, current and former investment employees said, happened behind closed doors.

In the first version, Bridgewater’s hedge funds were a meritocracy of ideas. Every investment staff member or researcher could suggest an investment notion, and the Bridgewater team would debate the merits of the thesis dispassionately, incorporating a broad study of history. Ideas from investment employees with a record of accurate predictions would over time carry more weight and earn backing with more client money. Investors flocked to the approach, assured that Bridgewater — unlike other hedge funds — would not rise or fall off a single trade or prediction from the firm founder. It was the Wall Street equivalent of Darwinism, with a thick wallet…

…The bottom line: Mr. Dalio was Bridgewater and Mr. Dalio decided Bridgewater’s investments. True, there was the so-called Circle of Trust. But though more than one person may have weighed in, functionally only one investment opinion mattered at the firm’s flagship fund, employees said. There was no grand system, no artificial intelligence of any substance, no holy grail. There was just Mr. Dalio, in person, over the phone, from his yacht, or for a few weeks many summers from his villa in Spain, calling the shots.

Lawyers for Mr. Dalio and Bridgewater said the hedge fund “is not a place where one man rules because the system makes the decision 98 percent of the time.” They said that “the notion that Mr. Dalio ‘call[ed] the shots’ on Bridgewater’s investments is false.”…

…On Wall Street, the phrase “information advantage” often carries an unseemly implication, suggesting that one is engaged in insider trading. Mr. Dalio’s information advantage, however, was as legal as it was vast.

Bridgewater’s target was information about entire nations. According to employees involved with the effort, Mr. Dalio heavily courted well-connected government officials from whom he might divine how they planned to intervene in their economies — and Bridgewater used these insights to make money in its funds.

Anywhere seemed fair game, even Kazakhstan. The Central Asian nation was not on the first page in any Wall Street manual. Ruled by an authoritarian government, it is the globe’s largest landlocked country yet sparsely populated. In 2013, Kazakhstan began developing what was then the most expensive oil project — a giant field in the Caspian Sea — helping it grow a $77 billion sovereign wealth fund. That money would have to be invested somewhere, and Bridgewater’s client services squad put a meeting on Mr. Dalio’s calendar with Berik Otemurat, the fund’s chief, a bureaucrat who had begun his career barely 10 years earlier…

…Inside Bridgewater, a relationship meant access. The country’s new oil field had taken more than a decade to develop, with near-constant delays. Anyone who knew how the project was proceeding could adjust bets on oil accordingly. Bridgewater’s representatives told the delegation that their firm would be happy to offer free investing advice, and Bridgewater’s team would likewise appreciate the opportunity to ask questions about industries of local expertise…

…The longest-term project for Mr. Dalio was in China, where he made frequent trips. Mr. Dalio hired China Investment Corporation’s former chairman to a cushy job as head of a Dalio charity in China, and he became close with Wang Qishan, who would later become China’s vice premier and widely considered the second most powerful person in the country. Mr. Dalio would occasionally tell Chinese government representatives that when they invested with Bridgewater, their fees were not merely being sent back to America. “Whatever fees you pay, I will donate back to China personally,” he said in one meeting, according to a person present.

In media interviews, Mr. Dalio stuck to a fixed, laudatory line about the country’s leadership. It was “very capable,” he said, over and again, sometimes repeating the phrase more than once in an interview. Those same leaders, he would also say inside Bridgewater, were quick to ask him for advice.

To any reasonable observer — and even to the Chinese themselves — Mr. Dalio was the paradigm of a China booster. But there was also an advantage that could be played. He asked the Circle of Trust to help create a way for Bridgewater’s funds to place bets against Chinese assets, in an offshore way that China’s government couldn’t track. That way, when Bridgewater took the wrong side of China, no one would know…

…With the hope of turning around the firm’s investment performance, members of the Circle of Trust put together a study of Mr. Dalio’s trades. They trawled deep into the Bridgewater archives for a history of Mr. Dalio’s individual investment ideas. The team ran the numbers once, then again, and again. The data had to be perfect. Then they sat down with Mr. Dalio, according to current and former employees who were present. (Lawyers for Mr. Dalio and Bridgewater said that no study was commissioned of Mr. Dalio’s trades and that no meeting took place to discuss them.)

One young employee, hands shaking, handed over the results: The study showed that Mr. Dalio had been wrong as much as he had been right. Trading on his ideas lately was often akin to a coin flip.

5. Palmer Luckey – Inventing The Future Of Defense – Patrick O’Shaughnessy and Palmer Luckey

Patrick: [00:01:42] Palmer, I always like starting somewhere of recent passion. You started to give me some amazing materials, so we stopped and we restarted the recording here. And maybe we’ll just begin with this idea that you were telling me about. I’m always interested in major changes that might happen in the world that nobody is really talking about.

And until you said the words "synthetic long-chain hydrocarbon fuel" to me five minutes ago, I'd never heard that combination of words before. So maybe you can start there and explain why that topic is of interest to you today.

Palmer: [00:02:16] Well, it's of interest because there's a lot of money being bet by companies, but also governments, on a handful of specific technological paths for electrifying vehicles: battery electric vehicles, hydrogen electric vehicles.

Synthetic long-chain hydrocarbon fuels are, in other words, synthetic gasoline, synthetic diesel, synthetic jet fuel, made using carbon from the atmosphere in particular. There are a lot of ways to do it. Boiling it down, one of them: you take water, crack it into hydrogen and oxygen using some kind of energy source like a nuclear power plant, then bond the hydrogen with carbon to make hydrocarbons, and you've got artificial gasoline coming out the other end.

If someone can figure out how to do that cheaply enough: first of all, it's an incredible carbon capture mechanism. Second, if you can do it cheaply enough, let's say $1 per gallon, then all of these trillions of dollars of investment into battery electric vehicles and hydrogen electric vehicles become really a waste of money and a waste of time. There are, of course, some advantages to battery electric vehicles and hydrogen electric vehicles where this wouldn't apply.
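As a rough sanity check on that $1-per-gallon target, here is a back-of-envelope sketch in Python. The energy content of gasoline (about 33.7 kWh per gallon) is a standard figure; the electricity price and the end-to-end conversion efficiency are purely illustrative assumptions, not numbers from the conversation:

```python
# Back-of-envelope: electricity cost per gallon of synthetic gasoline.
# Gasoline holds roughly 33.7 kWh of chemical energy per gallon.
GASOLINE_KWH_PER_GALLON = 33.7

def electricity_cost_per_gallon(price_per_kwh, efficiency):
    """Electricity cost to synthesize one gallon, given an end-to-end
    conversion efficiency (electrolysis plus fuel-synthesis losses)."""
    kwh_needed = GASOLINE_KWH_PER_GALLON / efficiency
    return price_per_kwh * kwh_needed

# Assumed numbers, purely illustrative: very cheap nuclear or renewable
# power at $0.02/kWh, and 50% end-to-end conversion efficiency.
cost = electricity_cost_per_gallon(price_per_kwh=0.02, efficiency=0.5)
print(f"~${cost:.2f} per gallon in electricity alone")
```

Under those assumed inputs the electricity alone comes to roughly $1.35 per gallon, which shows why the economics hinge on both very cheap power and a high conversion efficiency.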

But for the most part, especially on the aviation side, there's the ability to make fuels that just plug into existing fully known, fully optimized, fully understood, and even fully certified systems, rather than new systems that cost hundreds of billions of dollars to develop and still aren't as good. Electric planes spend most of their energy hauling around their energy storage, not people or payload, which of course means you need to put more energy into them in the first place than even synthetic fuels with a pretty low conversion efficiency.
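The energy-hauling point can be illustrated with specific-energy figures. The sketch below assumes ballpark numbers (jet fuel around 12 kWh/kg of chemical energy, lithium-ion packs around 0.25 kWh/kg, a jet engine converting roughly 35% of fuel energy to propulsion versus about 90% for an electric drivetrain); all four numbers are assumptions for illustration, not figures from the interview:

```python
# Mass of energy storage needed to deliver the same useful propulsive energy.
JET_FUEL_KWH_PER_KG = 12.0    # chemical energy, ballpark
BATTERY_KWH_PER_KG = 0.25     # pack-level Li-ion, ballpark
JET_ENGINE_EFFICIENCY = 0.35  # fuel energy -> propulsion, assumed
ELECTRIC_EFFICIENCY = 0.90    # battery energy -> propulsion, assumed

def storage_mass_kg(useful_kwh, kwh_per_kg, efficiency):
    """Kilograms of storage needed to deliver `useful_kwh` of propulsion."""
    return useful_kwh / (kwh_per_kg * efficiency)

useful = 10_000  # kWh of propulsive energy for an illustrative flight
fuel_mass = storage_mass_kg(useful, JET_FUEL_KWH_PER_KG, JET_ENGINE_EFFICIENCY)
batt_mass = storage_mass_kg(useful, BATTERY_KWH_PER_KG, ELECTRIC_EFFICIENCY)
print(f"fuel: ~{fuel_mass:,.0f} kg, battery: ~{batt_mass:,.0f} kg "
      f"({batt_mass / fuel_mass:.0f}x heavier)")
```

Even granting the electric drivetrain's higher efficiency, under these assumptions the battery comes out more than an order of magnitude heavier for the same delivered energy, which is the "hauling around their energy storage" problem in numbers.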

The reason it's so interesting to me is that the bets seem so misapportioned. You have so much money going into battery electric vehicles and the electrification of electrical infrastructure, which is not moving, and almost nobody betting that you can build systems that make dollar-per-gallon hydrocarbon fuels, using either biological processes like algae farms or mechanical processes, however you're making the synthetic fuels.

And of course, if someone figures it out, they're going to really knock a whole bunch of stuff sideways. We talked about this before, but it's especially interesting because lots of companies make poor technical decisions and decide to go down a product path that doesn't make sense.

I personally feel like this is a case where dozens of governments around the world have decided to commit to a particular product path that isn't optimal. It's not the optimal end state or the optimal near term, and they're dumping hundreds of billions, maybe trillions, of dollars into that bet. It's something that I'm worried about.

Patrick: [00:04:38] When you find an idea like this... First of all, I'm curious how you found this one in particular. But the world has just gone through the superconductor craze, which for a week there was like: well, if this is real, it changes everything. And then we very quickly realized, oh, it's not real, and it seems like, again, a remote possibility and not terribly likely. So maybe there won't be loads of dollars trying to create superconductors. With something like this, how do you weigh the potential against the odds of us being able to do it, with your own time and investigation?

Palmer: [00:05:05] Well, in this case, it's 1950s-era Department of Energy documents regarding potential energy futures for the United States, accounting for what they assumed would be a nuclear future. I always find it interesting, when I go into an area that I don't understand, to try and understand it better.

If I want to understand what's going on in the modern day, I want to go back to the future and ask: what were people saying back then? What are the ideas that people aren't even discussing right now? I don't want to be too pessimistic about the present, but if you look through a lot of the academic literature and government literature today on energy solutions for the United States, it's really, really narrow-minded.

It's really, really politically driven; it's all about what is aligned with the current debates going on between political parties. The people in these agencies are largely tied to the things that have already been deemed important. If you go back, on the other hand, to, let's say, post-World War II America, we were really thinking from first principles: what do we want the world to look like? What do we want the United States to look like?

And what are all of the ways we could get there? They were thinking very expansively. And so there was this idea of extremely cheap, synthetically manufactured biofuels that would get rid of strategic dependence on a limited oil supply, or allow us to sell off our oil supply to make money in the near term while still having a robust renewable base of energy to power our industrial machine, our war machine, you name it.

That was an idea that was of interest to people in the '40s, the '50s, the '60s. I think mostly all of this fell apart when it became clear that we were not going to be a nuclear economy, mostly for political reasons, not practical or technological reasons. So this was the case, right?

I didn't actually have to be a big thinker. I just had to go ask: what were people thinking when they were allowed to think whatever they wanted, when they could think really, really big things? And it's not even just on fuel. There are other interesting things they were thinking about back then. Today, if you ask what's the best way to help the environment in the United States, the answer is actually very calcified.

There's very little consideration for things that are better than what currently exists. You treat preserving the status quo as the ultimate good. There's very little consideration for what would be better for people, what would be better for more animals. And if you look back again at the earlier parts of United States history, there were serious proposals by the Department of the Interior asking: what should the United States ecosystem look like if we could make it whatever we wanted?

What animals would we have? Would we have hippos? Would we have rhinos? Why not? Why don't we put hippos in Louisiana? There was just this endless possibility, big thinking. But what's crazy is that it's not even big thinking in the way we would think of it today. When people think of big thinking, they immediately jump to really hard ideas: fusion power, or what if we could bioengineer ourselves? The ideas they were having would have had a big impact, but they're actually easy ideas.

Just, what if we brought some hippos and put them over there in that swamp? The big idea is: what could we do economically? Would that be a good meat source? Could we use that as a better protein source, one less damaging to the environments we're trying to preserve than what we're currently doing with cows? Those types of ideas are not taken seriously today. People treat you like a crank if you step outside the orthodoxy…

Patrick: [00:22:27] When you approach those problems... I have heard you talk about those other two, and it's totally fascinating how you approach things. Maybe even before I ask this question, I'm just curious what your method of invention is. So you find an interesting problem, either one that you want to work on or, in this case, one you don't necessarily want to work on. Is your method iterative? Is it more theoretical? Describe the way you start to invent in a field when you approach something for the first time.

Palmer: [00:22:50] Well, it depends on if it’s a field that I know a lot about or don’t know a lot about. If you know a lot about something, it’s easier to get right into the iterative side of things and know that you’re probably on a pretty reasonable path. In that case, iteration is a valuable tool to move very quickly, find out what works, find out what doesn’t and then continuously make it better.

The risk with going with a strongly iterative approach, in areas that you maybe don't understand (and you might even think you do, but let's say you truly don't), is that there might be much better approaches that you should have started iterating on, or that you should have examined before you committed to one particular path. I talked about this earlier, but it's really about going back to the future. I love to go and see what everyone else who solved this problem thought about it, and not in recent times.

I don’t want to know what my competition looks like because when I started Oculus, I wasn’t looking at what existing companies were doing in VR because clearly, they were all doing it wrong. Whatever they were doing was not working. I was not going to look around at the handful of VR companies that existed in that time and learn anything except how to fail. So I wanted to look into the past, what were people thinking when they were thinking bigger, when they were willing to look at wackier paths.

When they were willing to consider things that had been eliminated, often because the technologies just weren't ready. There are a lot of technologies that have been discarded because they weren't practical at the time, and nobody ever revisited them and said, "Hey, I actually think the time has come." A good example is with the Rift. Doing real-time distortion correction is not a new idea. It existed in the 1980s and 1990s in the virtual reality community. It had been discarded, even by NASA.

There's a fascinating NASA paper where they talk about doing real-time geometric distortion correction on a virtual reality headset, which made the optics lower-distortion and therefore allowed them to use wider-field-of-view, lighter-weight optics than would otherwise have been required to get an optically perfect image. And the conclusion was: yes, this is a really good way to save money and to save weight, but it's too computationally expensive. We're using most of our processing power to warp the image in real time rather than render this wireframe image.

And so, they concluded, this is not a good approach; you should just do it optically, and then you don't have to have a more expensive computer. But back then, compute was the expensive part, and the optical transform used up a lot of your compute. Nobody reexamined this idea even as computers got better. Nobody went back until me and said, "Wait a second, you can do a real-time transform on a modern graphics card for like 1% or 2% of your render horsepower." And also, your graphics card doesn't cost $100,000 anymore. It costs a few hundred dollars.

And so if you're worried about that 1% or 2% impact, just buy a graphics card that costs a few dollars more, so you can save hundreds or thousands of dollars on the VR headset itself by using optics that have geometric distortion or chromatic aberration, for example. That was an idea that had been discarded, and nobody ever came back to it.
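The pre-warp Luckey describes amounts to applying the inverse of the lens's distortion to the rendered frame before it reaches the display. A minimal Python sketch of the coordinate math follows; the polynomial model is the standard radial-distortion form, but the coefficients here are made up for illustration, not a real lens calibration:

```python
def prewarp_coords(x, y, k1=0.22, k2=0.24):
    """Map an output-pixel coordinate (normalized, centered at 0,0) to the
    texture coordinate to sample in the rendered frame.
    k1 and k2 are illustrative radial-distortion coefficients only."""
    r2 = x * x + y * y                    # squared distance from lens center
    scale = 1.0 + k1 * r2 + k2 * r2 * r2  # radial polynomial model
    return x * scale, y * scale

# Sampling at scaled-out coordinates squeezes the image toward the center
# (a barrel warp), which the lens's pincushion distortion then cancels.
print(prewarp_coords(0.01, 0.0))   # near center: almost unchanged
print(prewarp_coords(0.7, 0.7))    # near edge: noticeably pushed outward
```

In a real headset this per-pixel lookup runs in a fragment shader, which is why it costs only a small slice of render horsepower on a modern GPU.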

And most of the things that made the Rift successful were ideas like that. There are a few others where I was just going back to the future and realizing, "Wait a sec, these ideas were actually pretty good. They were just a little too early."…

Patrick: [00:35:55] I'm just trying to think about a framework to discuss the state of weapons technology, or the history of weapons technology, with you. And the cleanest I could come up with is a stupid consultant 2×2 where, on one axis, you have offensive versus defensive technologies dominating, and on the other, you have democratic versus very non-democratic. Muskets are on one end, where everyone has the same amount of power, and nuclear weapons are on the other, where one person has a gazillion times as much power as the musket guy, or something. Is that a good way to think about where we might pop through history and weapons technology? Is there some other way you would approach thinking about an era in weapons technology?

Palmer: [00:36:29] Offensive and defensive is definitely the right scale. Distributed or not distributed, I'm not sure. That, I actually think, matters less by weapon system and more by the power dynamic of the nation. There's a question here: are the people aligned with the government or are they opposed to it, and to what degree? If they are neutral parties that accept each other's existence, that's one thing.

Let's say you have a country like Ukraine, where, implausibly to the Russians, they had formed a strong national identity, and there were people who were willing to die in large numbers for their country. That's a case where people are very much aligned with the broader goals of their nation. On the other hand, if you look at a lot of African nations, even a lot of Middle Eastern nations, you have a huge mismatch between what the political class wants and what the everyman wants.

And so I think a better axis is actually more like democratic versus autocratic technologies, in that there are a lot of technologies that are much more useful for controlling your own population than for preserving their rights against hostile actors. There are a lot of countries whose military is effectively an internal peacekeeping force to crush dissent. That's actually what it's for.

And there are a lot of tools made by companies like SenseTime in China that are fundamentally like that: they are not useful for going to other countries and preserving anyone's rights, and they're not useful for defending yourself from an invader. They are only useful for controlling people in your own nation. This is one of the reasons China is exporting these technologies, in the same way that the Soviet Union exported AK-47s, which you could say are on the distributed and offensive side, if you were to look at it that way.

But the reason they were actually doing that is they wanted to arm nations with the tools that they needed to keep their civilian population in check and keep them in power and they wanted to threaten them and say, “Hey, if you ever get out of line, we’re going to stop providing you with these arms and systems you need, and then you’re going to immediately get a violent revolution and you’re probably going to get killed.” It was a great motivation for people to stay stuck to the Soviet Union. This idea that, “We are the thing that allows you to keep your people in check, and without us, that is over.”

China is pursuing a similar strategy. They're going to African nations and saying, "Hey, we're going to help build out this infrastructure in your ports, on your roads, in your police force, in your military. We're going to build AI camera systems that track dissidents for you. They track where they're shopping, where they're going, where they're riding trains. We're going to allow you to monitor all their communications on the telecommunications side. So let us sell you telecommunications gear, and you get all these back doors that allow you to control people who are trying to come after you."

But the bargain with the devil that they're making is, "Oh, and by the way, if you ever do anything that's counter to Chinese interests, we're going to pull all of this, and you're going to lose all your tools for controlling your population, and you're going to be dead inside of a week." And so SenseTime is on the autocratic, authoritarian side of that scale, because it has almost no application in preserving freedom or in deterring an invasion. It's only for controlling your own people…

Patrick: [00:48:53] So you built Lattice. Now the world is catching up. Everyone wants to build an AI platform system. What have you learned about building one? What are the components of it? What do you think about AI at large, the cost associated with compute and AI? All of these big things you’ve been working in for a while, and now it matters to everybody. So what would you contribute as the lessons that you’ve learned there so far?

Palmer: [00:49:11] It’s been a double-edged sword, I think. We’ve been working on AI for defense since literally day one. That was the whole pitch. The second page of our pitch deck was a quote from Vladimir Putin. He was talking about artificial intelligence, and he said, “The country that wins in this sphere will become the ruler of the entire world,” which I love. It’s a very James Bond villain quote.

On the one hand, it sounded crazier at the time because AI wasn't hot seven years ago. There were people who were interested in it, but it was obviously getting not even a hundredth of the attention that's being dedicated to it today. The flip side is that now that everyone is saying they're doing AI, all of a sudden our message is getting diluted: "Hey, we've actually been doing AI for defense for almost seven years now." And now everyone is changing to say, "Yes, our systems are all powered by AI. It's all AI-driven." Some of that's true, some of it's not.

Now our message is actually less differentiated than it was. You have to get very clear about what a real, usable, fieldable AI system looks like versus strapping ChatGPT together with whatever your quadcopter thing is and saying it's going to change the world. What has been helpful to us, though, with members of Congress, people in the Pentagon, even investors, has been the explosion of firsthand understanding of how powerful AI can be, driven by these large language models.

Obviously, the things that we are building, which fuse data from thermal vision and radar and signals intelligence processors and then calculate the optimal weapons pairing against a target, are a very different use of AI than a thing you tell to write a poem about your car.

But the fact that every member of Congress has been able to use ChatGPT, the fact that all these people in the Pentagon have seen and used systems doing things they never believed a computer could do, has, I think, expanded people's minds in general toward the possibility that maybe people can be replaced by AI in certain use cases, that maybe there are areas where computers really can do a job as well as or better than a person. A lot of skeptics, I think, have changed their minds because they typed something into ChatGPT, it did something for them, and they said, "Wow, computers sure are amazing these days."

Patrick: [00:51:20] What has AI most unlocked?

Palmer: [00:51:22] The most important thing it has unlocked for us is the ability to scale. People focus on use cases where AI can do better than a person, or even better than a team of 100 or 1,000 people, at some one specific task. I think a lot of the more interesting use cases are where AI can do as good a job as a person, but without a person having to do it.

Let's use an example. Let's say I'm going to deploy 1,000 autonomous cruise missiles. Those are going to be a lot more impactful than they would be if I had to have 1,000 people trying to remotely pilot 1,000 systems, telling them what to do and how to do it, with all those data links active. It's using AI to do things that would be impossible to do otherwise, either for real technical reasons like bandwidth, or just for practical reasons: we don't have thousands of pilots we could dedicate to such a task. For me, that's the biggest thing AI enables.

I'm less focused on the superhuman, superintelligent side of things and more on, "Hey, this AI that's running my autonomous helicopter, is it about as good as a pretty good helicopter pilot?" Okay, that means I can have one soldier managing a fleet of 25 airframes himself and just telling them, "Hey, I need you to clear this area. I need you to find this target I'm looking for. I need you to fly ahead of my convoy and watch for anything." Now he doesn't need 25 pilots and 25 helicopters to do that. That's what I'm most excited about.

And it's really important in a world where I think quantity is going to have a quality all of its own in these types of weapon systems. The best way to defeat a lot of our adversaries' defenses is not to build a small number of exquisite systems, but to build quantities so large they can't possibly stop them.

And it’s especially important in a world where militaries are struggling to recruit. They’re trying to be more cost effective. They’re trying to put less money into salaries and disability payments and more into systems that are going to be fighting the adversary directly, robotic systems.

For example, the United Kingdom has said that over the next few years they want to reduce the size of their Navy by 30%, a 30% personnel reduction. And because of that, they are doing things like dedicating one of their two aircraft carriers to being an autonomous aircraft launch system.

In other words, they want one of their carriers to only launch autonomous systems, so they don't have to have huge numbers of people to run and maintain traditional manned systems. If that's the world we're going to live in, where we need to ramp up the number of systems but ramp down the number of people, the only thing that can fill the gap is automation.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.